# mirror of https://github.com/mudler/LocalAI.git
# synced 2024-12-19 04:37:53 +00:00
# commit 773cec77a2
# Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
# 7467 lines, 387 KiB, YAML
---
- name: "moe-girl-1ba-7bt-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/kTXXSSSqpb21rfyOX7FUa.jpeg
  # chatml
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/allura-org/MoE-Girl-1BA-7BT
    - https://huggingface.co/mradermacher/MoE-Girl-1BA-7BT-i1-GGUF
  description: |
    A finetune of OLMoE by AllenAI designed for roleplaying (and maybe general use cases if you try hard enough).
    PLEASE do not expect godliness out of this; it's a model with 1 billion active parameters. Expect something more akin to Gemma 2 2B, not Llama 3 8B.
  overrides:
    parameters:
      model: MoE-Girl-1BA-7BT.i1-Q4_K_M.gguf
  files:
    - filename: MoE-Girl-1BA-7BT.i1-Q4_K_M.gguf
      sha256: e6ef9c311c73573b243de6ff7538b386f430af30b2be0a96a5745c17137ad432
      uri: huggingface://mradermacher/MoE-Girl-1BA-7BT-i1-GGUF/MoE-Girl-1BA-7BT.i1-Q4_K_M.gguf
- name: "salamandra-7b-instruct"
  icon: https://huggingface.co/BSC-LT/salamandra-7b-instruct/resolve/main/images/salamandra_header.png
  # Uses chatml
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  license: apache-2.0
  urls:
    - https://huggingface.co/BSC-LT/salamandra-7b-instruct
    - https://huggingface.co/cstr/salamandra-7b-instruct-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - salamandra
  description: |
    Transformer-based decoder-only language model that has been pre-trained on 7.8 trillion tokens of highly curated data. The pre-training corpus contains text in 35 European languages and code.
    Salamandra comes in three different sizes — 2B, 7B and 40B parameters — with their respective base and instruction-tuned variants. This model card corresponds to the 7B instructed version.
  overrides:
    parameters:
      model: salamandra-7b-instruct.Q4_K_M-f32.gguf
  files:
    - filename: salamandra-7b-instruct.Q4_K_M-f32.gguf
      sha256: bac8e8c1d1d9d53cbdb148b8ff9ad378ddb392429207099e85b5aae3a43bff3d
      uri: huggingface://cstr/salamandra-7b-instruct-GGUF/salamandra-7b-instruct.Q4_K_M-f32.gguf
## llama3.2
- &llama32
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png
  license: llama3.2
  description: |
    The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.

    Model Developer: Meta

    Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3.2
  name: "llama-3.2-1b-instruct:q4_k_m"
  urls:
    - https://huggingface.co/hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF
  overrides:
    parameters:
      model: llama-3.2-1b-instruct-q4_k_m.gguf
  files:
    - filename: llama-3.2-1b-instruct-q4_k_m.gguf
      sha256: 1d0e9419ec4e12aef73ccf4ffd122703e94c48344a96bc7c5f0f2772c2152ce3
      uri: huggingface://hugging-quants/Llama-3.2-1B-Instruct-Q4_K_M-GGUF/llama-3.2-1b-instruct-q4_k_m.gguf
- !!merge <<: *llama32
  name: "llama-3.2-3b-instruct:q4_k_m"
  urls:
    - https://huggingface.co/hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF
  overrides:
    parameters:
      model: llama-3.2-3b-instruct-q4_k_m.gguf
  files:
    - filename: llama-3.2-3b-instruct-q4_k_m.gguf
      sha256: c55a83bfb6396799337853ca69918a0b9bbb2917621078c34570bc17d20fd7a1
      uri: huggingface://hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF/llama-3.2-3b-instruct-q4_k_m.gguf
- !!merge <<: *llama32
  name: "llama-3.2-3b-instruct:q8_0"
  urls:
    - https://huggingface.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF
  overrides:
    parameters:
      model: llama-3.2-3b-instruct-q8_0.gguf
  files:
    - filename: llama-3.2-3b-instruct-q8_0.gguf
      sha256: 51725f77f997a5080c3d8dd66e073da22ddf48ab5264f21f05ded9b202c3680e
      uri: huggingface://hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF/llama-3.2-3b-instruct-q8_0.gguf
- !!merge <<: *llama32
  name: "llama-3.2-1b-instruct:q8_0"
  urls:
    - https://huggingface.co/hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF
  overrides:
    parameters:
      model: llama-3.2-1b-instruct-q8_0.gguf
  files:
    - filename: llama-3.2-1b-instruct-q8_0.gguf
      sha256: ba345c83bf5cc679c653b853c46517eea5a34f03ed2205449db77184d9ae62a9
      uri: huggingface://hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF/llama-3.2-1b-instruct-q8_0.gguf
## Uncensored
- !!merge <<: *llama32
  icon: https://cdn-uploads.huggingface.co/production/uploads/66c9d7a26f2335ba288810a4/4YDg-rcEXCK0fdTS1fBzE.webp
  name: "versatillama-llama-3.2-3b-instruct-abliterated"
  urls:
    - https://huggingface.co/QuantFactory/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF
  description: |
    Small but smart, fine-tuned on a vast dataset of conversations. Able to generate human-like text with high performance within its size. Very versatile for its size and parameter count, offering capability almost as good as Llama 3.1 8B Instruct.
  overrides:
    parameters:
      model: VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_M.gguf
  files:
    - filename: VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_M.gguf
      sha256: 15b9e4a987f50d7594d030815c7166a996e20db46fe1e20da03e96955020312c
      uri: huggingface://QuantFactory/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated.Q4_K_M.gguf
- !!merge <<: *llama32
  name: "llama3.2-3b-enigma"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/it7MY5MyLCLpFQev5dUis.jpeg
  urls:
    - https://huggingface.co/QuantFactory/Llama3.2-3B-Enigma-GGUF
  description: |
    Enigma is a code-instruct model built on Llama 3.2 3b. It is a high-quality code-instruct model using the Llama 3.2 Instruct chat format. The model is finetuned on synthetic code-instruct data generated with Llama 3.1 405b and supplemented with generalist synthetic data.
  overrides:
    parameters:
      model: Llama3.2-3B-Enigma.Q4_K_M.gguf
  files:
    - filename: Llama3.2-3B-Enigma.Q4_K_M.gguf
      sha256: 4304e6ee1e348b228470700ec1e9423f5972333d376295195ce6cd5c70cae5e4
      uri: huggingface://QuantFactory/Llama3.2-3B-Enigma-GGUF/Llama3.2-3B-Enigma.Q4_K_M.gguf
- !!merge <<: *llama32
  name: "llama3.2-3b-esper2"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/4I6oK8DG0so4VD8GroFsd.jpeg
  urls:
    - https://huggingface.co/QuantFactory/Llama3.2-3B-Esper2-GGUF
  description: |
    Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.2 3b. It is an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts and more, with real-world problem solving and high-quality code-instruct performance within the Llama 3.2 Instruct chat format. Finetuned on synthetic DevOps-instruct and code-instruct data generated with Llama 3.1 405b and supplemented with generalist chat data.
  overrides:
    parameters:
      model: Llama3.2-3B-Esper2.Q4_K_M.gguf
  files:
    - filename: Llama3.2-3B-Esper2.Q4_K_M.gguf
      sha256: 11d2bd674aa22a71a59ec49ad29b695000d14bc275b0195b8d7089bfc7582fc7
      uri: huggingface://QuantFactory/Llama3.2-3B-Esper2-GGUF/Llama3.2-3B-Esper2.Q4_K_M.gguf
- !!merge <<: *llama32
  name: "llama-3.2-3b-agent007"
  urls:
    - https://huggingface.co/QuantFactory/Llama-3.2-3B-Agent007-GGUF
  description: |
    The model is a quantized version of EpistemeAI/Llama-3.2-3B-Agent007, developed by EpistemeAI and fine-tuned from unsloth/llama-3.2-3b-instruct-bnb-4bit. It was trained 2x faster with Unsloth and Hugging Face's TRL library, and fine-tuned on agent datasets.
  overrides:
    parameters:
      model: Llama-3.2-3B-Agent007.Q4_K_M.gguf
  files:
    - filename: Llama-3.2-3B-Agent007.Q4_K_M.gguf
      sha256: 7a2543a69b116f2a059e2e445e5d362bb7df4a51b97e83d8785c1803dc9d687f
      uri: huggingface://QuantFactory/Llama-3.2-3B-Agent007-GGUF/Llama-3.2-3B-Agent007.Q4_K_M.gguf
- !!merge <<: *llama32
  name: "llama-3.2-3b-agent007-coder"
  urls:
    - https://huggingface.co/QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF
  description: |
    The Llama-3.2-3B-Agent007-Coder-GGUF is a quantized version of the EpistemeAI/Llama-3.2-3B-Agent007-Coder model, which is a fine-tuned version of the unsloth/llama-3.2-3b-instruct-bnb-4bit model. It is created using llama.cpp and trained with additional datasets such as the Agent dataset, Code Alpaca 20K, and magpie ultra 0.1. This model is optimized for multilingual dialogue use cases and agentic retrieval and summarization tasks. The model is available for commercial and research use in multiple languages and is best used with the transformers library.
  overrides:
    parameters:
      model: Llama-3.2-3B-Agent007-Coder.Q4_K_M.gguf
  files:
    - filename: Llama-3.2-3B-Agent007-Coder.Q4_K_M.gguf
      sha256: 49a4861c094d94ef5faa33f69b02cd132bb0167f1c3ca59059404f85f61e1d12
      uri: huggingface://QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF/Llama-3.2-3B-Agent007-Coder.Q4_K_M.gguf
- !!merge <<: *llama32
  name: "fireball-meta-llama-3.2-8b-instruct-agent-003-128k-code-dpo"
  urls:
    - https://huggingface.co/QuantFactory/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO-GGUF
  description: |
    The LLM model is a quantized version of EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO, an experimental and revolutionary fine-tune with a DPO dataset that allows Llama 3.1 8B to act as an agentic coder. It has built-in agent features such as search, a calculator, and ReAct. Other notable features include self-learning using Unsloth, RAG applications, and memory. The context window of the model is 128K. It can be integrated into projects using popular libraries like Transformers and vLLM, and is suitable for use with LangChain or LlamaIndex. The model is developed by EpistemeAI and licensed under the Apache 2.0 license.
  overrides:
    parameters:
      model: Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO.Q4_K_M.gguf
  files:
    - filename: Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO.Q4_K_M.gguf
      sha256: 7f45fa79bc6c9847ef9fbad08c3bb5a0f2dbb56d2e2200a5d37b260a57274e55
      uri: huggingface://QuantFactory/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO-GGUF/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO.Q4_K_M.gguf
- !!merge <<: *llama32
  name: "llama-3.2-chibi-3b"
  icon: https://huggingface.co/AELLM/Llama-3.2-Chibi-3B/resolve/main/chibi.jpg
  urls:
    - https://huggingface.co/AELLM/Llama-3.2-Chibi-3B
    - https://huggingface.co/mradermacher/Llama-3.2-Chibi-3B-GGUF
  description: |
    Small-parameter LLMs are ideal for navigating the complexities of the Japanese language, which involves multiple character systems like kanji, hiragana, and katakana, along with subtle social cues. Despite their smaller size, these models are capable of delivering highly accurate and context-aware results, making them perfect for use in environments where resources are constrained. Whether deployed on mobile devices with limited processing power or in edge computing scenarios where fast, real-time responses are needed, these models strike the perfect balance between performance and efficiency, without sacrificing quality or speed.
  overrides:
    parameters:
      model: Llama-3.2-Chibi-3B.Q4_K_M.gguf
  files:
    - filename: Llama-3.2-Chibi-3B.Q4_K_M.gguf
      sha256: 4b594cd5f66181202713f1cf97ce2f86d0acfa1b862a64930d5f512c45640a2f
      uri: huggingface://mradermacher/Llama-3.2-Chibi-3B-GGUF/Llama-3.2-Chibi-3B.Q4_K_M.gguf
- !!merge <<: *llama32
  name: "llama-3.2-3b-reasoning-time"
  urls:
    - https://huggingface.co/mradermacher/Llama-3.2-3B-Reasoning-Time-GGUF
  description: |
    Lyte/Llama-3.2-3B-Reasoning-Time is a 3-billion-parameter language model based on the Llama 3.2 architecture, designed for reasoning and time-based tasks in English. It has been quantized in the GGUF format by mradermacher.
  overrides:
    parameters:
      model: Llama-3.2-3B-Reasoning-Time.Q4_K_M.gguf
  files:
    - filename: Llama-3.2-3B-Reasoning-Time.Q4_K_M.gguf
      sha256: 80b10e1a5c6e27f6d8cf08c3472af2b15a9f63ebf8385eedfe8615f85116c73f
      uri: huggingface://mradermacher/Llama-3.2-3B-Reasoning-Time-GGUF/Llama-3.2-3B-Reasoning-Time.Q4_K_M.gguf
- &qwen25
  ## Qwen2.5
  name: "qwen2.5-14b-instruct"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  license: apache-2.0
  description: |
    Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters.
  tags:
    - llm
    - gguf
    - gpu
    - qwen
    - qwen2.5
    - cpu
  urls:
    - https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF
    - https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
  overrides:
    parameters:
      model: Qwen2.5-14B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-14B-Instruct-Q4_K_M.gguf
      sha256: e47ad95dad6ff848b431053b375adb5d39321290ea2c638682577dafca87c008
      uri: huggingface://bartowski/Qwen2.5-14B-Instruct-GGUF/Qwen2.5-14B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-math-7b-instruct"
  urls:
    - https://huggingface.co/bartowski/Qwen2.5-Math-7B-Instruct-GGUF
    - https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct
  description: |
    In August 2024, we released the first series of mathematical LLMs of our Qwen family: Qwen2-Math. A month later, we upgraded it and open-sourced the Qwen2.5-Math series, including the base models Qwen2.5-Math-1.5B/7B/72B, the instruction-tuned models Qwen2.5-Math-1.5B/7B/72B-Instruct, and the mathematical reward model Qwen2.5-Math-RM-72B.

    Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series models have achieved significant performance improvements compared to the Qwen2-Math series models on the Chinese and English mathematics benchmarks with CoT.

    The base models of Qwen2-Math are initialized with Qwen2-1.5B/7B/72B, and then pretrained on a meticulously designed Mathematics-specific corpus. This corpus contains large-scale high-quality mathematical web texts, books, code, exam questions, and mathematical pre-training data synthesized by Qwen2.
  overrides:
    parameters:
      model: Qwen2.5-Math-7B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Math-7B-Instruct-Q4_K_M.gguf
      sha256: 7e03cee8c65b9ebf9ca14ddb010aca27b6b18e6c70f2779e94e7451d9529c091
      uri: huggingface://bartowski/Qwen2.5-Math-7B-Instruct-GGUF/Qwen2.5-Math-7B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-14b_uncencored"
  icon: https://huggingface.co/SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored/resolve/main/Misc/Uncensored.png
  urls:
    - https://huggingface.co/SicariusSicariiStuff/Qwen2.5-14B_Uncencored
    - https://huggingface.co/bartowski/Qwen2.5-14B_Uncencored-GGUF
  description: |
    Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters.

    This is an uncensored version of Qwen2.5.
  tags:
    - llm
    - gguf
    - gpu
    - qwen
    - qwen2.5
    - cpu
    - uncensored
  overrides:
    parameters:
      model: Qwen2.5-14B_Uncencored-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-14B_Uncencored-Q4_K_M.gguf
      sha256: 066b9341b67e0fd0956de3576a3b7988574a5b9a0028aef2b9c8edeadd6dbbd1
      uri: huggingface://bartowski/Qwen2.5-14B_Uncencored-GGUF/Qwen2.5-14B_Uncencored-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-coder-7b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-Coder-7B-Instruct-GGUF
  description: |
    Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release three base language models and instruction-tuned language models, with 1.5, 7 and 32 (coming soon) billion parameters. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:

    Significant improvements in code generation, code reasoning and code fixing. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, etc.
    A more comprehensive foundation for real-world applications such as code agents, not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.
    Long-context support up to 128K tokens.
  overrides:
    parameters:
      model: Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
      sha256: 1664fccab734674a50763490a8c6931b70e3f2f8ec10031b54806d30e5f956b6
      uri: huggingface://bartowski/Qwen2.5-Coder-7B-Instruct-GGUF/Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-math-72b-instruct"
  icon: http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/qwen2.5-math-pipeline.jpeg
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-Math-72B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-Math-72B-Instruct-GGUF
  description: |
    In August 2024, we released the first series of mathematical LLMs of our Qwen family: Qwen2-Math. A month later, we upgraded it and open-sourced the Qwen2.5-Math series, including the base models Qwen2.5-Math-1.5B/7B/72B, the instruction-tuned models Qwen2.5-Math-1.5B/7B/72B-Instruct, and the mathematical reward model Qwen2.5-Math-RM-72B.

    Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series models have achieved significant performance improvements compared to the Qwen2-Math series models on the Chinese and English mathematics benchmarks with CoT.
  overrides:
    parameters:
      model: Qwen2.5-Math-72B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-Math-72B-Instruct-Q4_K_M.gguf
      sha256: 5dee8a6e21d555577712b4f65565a3c3737a0d5d92f5a82970728c6d8e237f17
      uri: huggingface://bartowski/Qwen2.5-Math-72B-Instruct-GGUF/Qwen2.5-Math-72B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-0.5b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-0.5B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-0.5B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-0.5B-Instruct-Q4_K_M.gguf
      sha256: 6eb923e7d26e9cea28811e1a8e852009b21242fb157b26149d3b188f3a8c8653
      uri: huggingface://bartowski/Qwen2.5-0.5B-Instruct-GGUF/Qwen2.5-0.5B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-1.5b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-1.5B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-1.5B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-1.5B-Instruct-Q4_K_M.gguf
      sha256: 1adf0b11065d8ad2e8123ea110d1ec956dab4ab038eab665614adba04b6c3370
      uri: huggingface://bartowski/Qwen2.5-1.5B-Instruct-GGUF/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-32b"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-32B
    - https://huggingface.co/mradermacher/Qwen2.5-32B-GGUF
  overrides:
    parameters:
      model: Qwen2.5-32B.Q4_K_M.gguf
  files:
    - filename: Qwen2.5-32B.Q4_K_M.gguf
      uri: huggingface://mradermacher/Qwen2.5-32B-GGUF/Qwen2.5-32B.Q4_K_M.gguf
      sha256: fa42a4067e3630929202b6bb1ef5cebc43c1898494aedfd567b7d53c7a9d84a6
- !!merge <<: *qwen25
  name: "qwen2.5-32b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-32B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-32B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-32B-Instruct-Q4_K_M.gguf
      sha256: 2e5f6daea180dbc59f65a40641e94d3973b5dbaa32b3c0acf54647fa874e519e
      uri: huggingface://bartowski/Qwen2.5-32B-Instruct-GGUF/Qwen2.5-32B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-72b-instruct"
  urls:
    - https://huggingface.co/Qwen/Qwen2.5-72B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-GGUF
  overrides:
    parameters:
      model: Qwen2.5-72B-Instruct-Q4_K_M.gguf
  files:
    - filename: Qwen2.5-72B-Instruct-Q4_K_M.gguf
      sha256: e4c8fad16946be8cf0bbf67eb8f4e18fc7415a5a6d2854b4cda453edb4082545
      uri: huggingface://bartowski/Qwen2.5-72B-Instruct-GGUF/Qwen2.5-72B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "bigqwen2.5-52b-instruct"
  icon: https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/98GiKtmH1AtHHbIbOUH4Y.jpeg
  urls:
    - https://huggingface.co/mlabonne/BigQwen2.5-52B-Instruct
    - https://huggingface.co/bartowski/BigQwen2.5-52B-Instruct-GGUF
  description: |
    BigQwen2.5-52B-Instruct is a Qwen/Qwen2.5-32B-Instruct self-merge made with MergeKit.
    It applies the mlabonne/Meta-Llama-3-120B-Instruct recipe.
  overrides:
    parameters:
      model: BigQwen2.5-52B-Instruct-Q4_K_M.gguf
  files:
    - filename: BigQwen2.5-52B-Instruct-Q4_K_M.gguf
      sha256: 9c939f08e366b51b07096eb2ecb5cc2a82894ac7baf639e446237ad39889c896
      uri: huggingface://bartowski/BigQwen2.5-52B-Instruct-GGUF/BigQwen2.5-52B-Instruct-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "replete-llm-v2.5-qwen-14b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/ihnWXDEgV-ZKN_B036U1J.png
  urls:
    - https://huggingface.co/Replete-AI/Replete-LLM-V2.5-Qwen-14b
    - https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-14b-GGUF
  description: |
    Replete-LLM-V2.5-Qwen-14b is a continued finetune of Qwen2.5-14B. I noticed recently that the Qwen team did not learn from my methods of continuous finetuning, with their great benefits and no downsides, so I took it upon myself to merge the instruct model with the base model using the TIES merge method.

    This version of the model shows higher performance than the original instruct and base models.
  overrides:
    parameters:
      model: Replete-LLM-V2.5-Qwen-14b-Q4_K_M.gguf
  files:
    - filename: Replete-LLM-V2.5-Qwen-14b-Q4_K_M.gguf
      sha256: 17d0792ff5e3062aecb965629f66e679ceb407e4542e8045993dcfe9e7e14d9d
      uri: huggingface://bartowski/Replete-LLM-V2.5-Qwen-14b-GGUF/Replete-LLM-V2.5-Qwen-14b-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "replete-llm-v2.5-qwen-7b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/ihnWXDEgV-ZKN_B036U1J.png
  urls:
    - https://huggingface.co/Replete-AI/Replete-LLM-V2.5-Qwen-7b
    - https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF
  description: |
    Replete-LLM-V2.5-Qwen-7b is a continued finetune of Qwen2.5-7B. I noticed recently that the Qwen team did not learn from my methods of continuous finetuning, with their great benefits and no downsides, so I took it upon myself to merge the instruct model with the base model using the TIES merge method.

    This version of the model shows higher performance than the original instruct and base models.
  overrides:
    parameters:
      model: Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf
  files:
    - filename: Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf
      sha256: 054d54972259c0398b4e0af3f408f608e1166837b1d7535d08fc440d1daf8639
      uri: huggingface://bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF/Replete-LLM-V2.5-Qwen-7b-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "calme-2.2-qwen2.5-72b-i1"
  icon: https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2.5-72b/resolve/main/calme-2.webp
  urls:
    - https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2.5-72b
    - https://huggingface.co/mradermacher/calme-2.2-qwen2.5-72b-i1-GGUF
  description: |
    This model is a fine-tuned version of the powerful Qwen/Qwen2.5-72B-Instruct, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
    Use Cases

    This model is suitable for a wide range of applications, including but not limited to:

    Advanced question-answering systems
    Intelligent chatbots and virtual assistants
    Content generation and summarization
    Code generation and analysis
    Complex problem-solving and decision support
  overrides:
    parameters:
      model: calme-2.2-qwen2.5-72b.i1-Q4_K_M.gguf
  files:
    - filename: calme-2.2-qwen2.5-72b.i1-Q4_K_M.gguf
      sha256: 5fdfa599724d7c78502c477ced1d294e92781b91d3265bd0748fbf15a6fefde6
      uri: huggingface://mradermacher/calme-2.2-qwen2.5-72b-i1-GGUF/calme-2.2-qwen2.5-72b.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "t.e-8.1-iq-imatrix-request"
  # chatml
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/K1aNPf32z-6tYZdcSQBzF.png
  urls:
    - https://huggingface.co/Cran-May/T.E-8.1
    - https://huggingface.co/Lewdiculous/T.E-8.1-GGUF-IQ-Imatrix-Request
  description: |
    Trained for roleplay uses.
  overrides:
    parameters:
      model: T.E-8.1-Q4_K_M-imat.gguf
  files:
    - filename: T.E-8.1-Q4_K_M-imat.gguf
      sha256: 1b7892b82c01ea4cbebe34cd00f9836cbbc369fc3247c1f44a92842201e7ec0b
      uri: huggingface://Lewdiculous/T.E-8.1-GGUF-IQ-Imatrix-Request/T.E-8.1-Q4_K_M-imat.gguf
- !!merge <<: *qwen25
  name: "rombos-llm-v2.5.1-qwen-3b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/pNDtgE5FDkxxvbG4qiZ1A.jpeg
  urls:
    - https://huggingface.co/QuantFactory/Rombos-LLM-V2.5.1-Qwen-3b-GGUF
  description: |
    Rombos-LLM-V2.5.1-Qwen-3b is a small experiment that merges in the high-quality LLM arcee-ai/raspberry-3B, using the last step of the Continuous Finetuning method outlined in a Google document. The merge is done with mergekit using the following parameters:

    - Models: Qwen2.5-3B-Instruct, raspberry-3B
    - Merge method: ties
    - Base model: Qwen2.5-3B
    - Parameters: weight=1, density=1, normalize=true, int8_mask=true
    - Dtype: bfloat16

    The model has been evaluated on various tasks and datasets, and the results are available on the Open LLM Leaderboard. The model has shown promising performance across different benchmarks.
  overrides:
    parameters:
      model: Rombos-LLM-V2.5.1-Qwen-3b.Q4_K_M.gguf
  files:
    - filename: Rombos-LLM-V2.5.1-Qwen-3b.Q4_K_M.gguf
      sha256: 656c342a2921cac8912e0123fc295c3bb3d631a85c671c12a3843a957e46d30d
      uri: huggingface://QuantFactory/Rombos-LLM-V2.5.1-Qwen-3b-GGUF/Rombos-LLM-V2.5.1-Qwen-3b.Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "qwen2.5-7b-ins-v3"
  urls:
    - https://huggingface.co/happzy2633/qwen2.5-7b-ins-v3
    - https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF
  description: |
    Qwen 2.5 fine-tuned on CoT to match o1 performance. An attempt to build an open o1 matching the OpenAI o1 model.
    Demo: https://huggingface.co/spaces/happzy2633/open-o1
  overrides:
    parameters:
      model: qwen2.5-7b-ins-v3-Q4_K_M.gguf
  files:
    - filename: qwen2.5-7b-ins-v3-Q4_K_M.gguf
      sha256: 9c23734072714a4886c0386ae0ff07a5e940d67ad52278e2ed689fec44e1e0c8
      uri: huggingface://bartowski/qwen2.5-7b-ins-v3-GGUF/qwen2.5-7b-ins-v3-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "supernova-medius"
  urls:
    - https://huggingface.co/arcee-ai/SuperNova-Medius-GGUF
  description: |
    Arcee-SuperNova-Medius is a 14B parameter language model developed by Arcee.ai, built on the Qwen2.5-14B-Instruct architecture. This unique model is the result of a cross-architecture distillation pipeline, combining knowledge from both the Qwen2.5-72B-Instruct model and the Llama-3.1-405B-Instruct model. By leveraging the strengths of these two distinct architectures, SuperNova-Medius achieves high-quality instruction-following and complex reasoning capabilities in a mid-sized, resource-efficient form.

    SuperNova-Medius is designed to excel in a variety of business use cases, including customer support, content creation, and technical assistance, while maintaining compatibility with smaller hardware configurations. It’s an ideal solution for organizations looking for advanced capabilities without the high resource requirements of larger models like our SuperNova-70B.
  overrides:
    parameters:
      model: SuperNova-Medius-Q4_K_M.gguf
  files:
    - filename: SuperNova-Medius-Q4_K_M.gguf
      sha256: aaa4bf3451bc900f186fd4b6b3a6a26bfd40c85908f605db76b92e58aadcc864
      uri: huggingface://arcee-ai/SuperNova-Medius-GGUF/SuperNova-Medius-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "eva-qwen2.5-14b-v0.1-i1"
  urls:
    - https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1
    - https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-i1-GGUF
  description: |
    A RP/storywriting specialist model, full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.
    It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.
  overrides:
    parameters:
      model: EVA-Qwen2.5-14B-v0.1.i1-Q4_K_M.gguf
  files:
    - filename: EVA-Qwen2.5-14B-v0.1.i1-Q4_K_M.gguf
      sha256: 4e9665d4f83cd97efb42c8427f9c09be93b72e23a0364c91ad0b5de8056f2795
      uri: huggingface://mradermacher/EVA-Qwen2.5-14B-v0.1-i1-GGUF/EVA-Qwen2.5-14B-v0.1.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
  name: "cursorcore-qw2.5-7b-i1"
  urls:
    - https://huggingface.co/TechxGenus/CursorCore-QW2.5-7B
    - https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-i1-GGUF
  description: |
    CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read our paper to learn more.
  overrides:
    parameters:
      model: CursorCore-QW2.5-7B.i1-Q4_K_M.gguf
  files:
    - filename: CursorCore-QW2.5-7B.i1-Q4_K_M.gguf
      sha256: 81868f4edb4ec1a61debde1dbdebc02b407930ee19a6d946ff801afba840a102
      uri: huggingface://mradermacher/CursorCore-QW2.5-7B-i1-GGUF/CursorCore-QW2.5-7B.i1-Q4_K_M.gguf
- !!merge <<: *qwen25
|
||
name: "cursorcore-qw2.5-1.5b-lc-i1"
|
||
urls:
|
||
- https://huggingface.co/TechxGenus/CursorCore-QW2.5-1.5B-LC
|
||
- https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF
|
||
description: |
|
||
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read our paper to learn more.
|
||
overrides:
|
||
parameters:
|
||
model: CursorCore-QW2.5-1.5B-LC.i1-Q4_K_M.gguf
|
||
files:
|
||
- filename: CursorCore-QW2.5-1.5B-LC.i1-Q4_K_M.gguf
|
||
sha256: 185d720c810f7345ef861ad8eef1199bb15afa8e4f3c03bd5ffd476cfa465127
|
||
uri: huggingface://mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/CursorCore-QW2.5-1.5B-LC.i1-Q4_K_M.gguf
|
||
- !!merge <<: *qwen25
|
||
name: "edgerunner-command-nested-i1"
|
||
urls:
|
||
- https://huggingface.co/edgerunner-ai/EdgeRunner-Command-Nested
|
||
- https://huggingface.co/mradermacher/EdgeRunner-Command-Nested-i1-GGUF
|
||
description: |
|
||
EdgeRunner-Command-Nested is an advanced large language model designed specifically for handling complex nested function calls. Initialized from Qwen2.5-7B-Instruct, further enhanced by the integration of the Hermes function call template and additional training on a specialized dataset (based on TinyAgent). This extra dataset focuses on personal domain applications, providing the model with a robust understanding of nested function scenarios that are typical in complex user interactions.
|
||
overrides:
|
||
parameters:
|
||
model: EdgeRunner-Command-Nested.i1-Q4_K_M.gguf
|
||
files:
|
||
- filename: EdgeRunner-Command-Nested.i1-Q4_K_M.gguf
|
||
sha256: a1cc4d2b601dc20e58cbb549bd3e9bc460995840c0aaf1cd3c1cb5414c900ac7
|
||
uri: huggingface://mradermacher/EdgeRunner-Command-Nested-i1-GGUF/EdgeRunner-Command-Nested.i1-Q4_K_M.gguf
|
||
- !!merge <<: *qwen25
  name: "tsunami-0.5x-7b-instruct-i1"
  icon: https://huggingface.co/Tsunami-th/Tsunami-0.5x-7B-Instruct/resolve/main/Tsunami.webp
  urls:
    - https://huggingface.co/Tsunami-th/Tsunami-0.5x-7B-Instruct
    - https://huggingface.co/mradermacher/Tsunami-0.5x-7B-Instruct-i1-GGUF
  description: |
    TSUNAMI: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence.

    The TSUNAMI full name was created by ChatGPT.
    Information

    Tsunami-0.5x-7B-Instruct is a Thai large language model fine-tuned from Qwen2.5-7B on around 100,000 rows of a Thai dataset.
  overrides:
    parameters:
      model: Tsunami-0.5x-7B-Instruct.i1-Q4_K_M.gguf
  files:
    - filename: Tsunami-0.5x-7B-Instruct.i1-Q4_K_M.gguf
      sha256: 22e2003ecec7f1e91f2e9aaec334613c0f37fb3000d0e628b5a9980e53322fa7
      uri: huggingface://mradermacher/Tsunami-0.5x-7B-Instruct-i1-GGUF/Tsunami-0.5x-7B-Instruct.i1-Q4_K_M.gguf
- &archfunct
  license: apache-2.0
  tags:
    - llm
    - gguf
    - gpu
    - qwen
    - qwen2.5
    - cpu
    - function-calling
  name: "arch-function-1.5b"
  uri: "github:mudler/LocalAI/gallery/arch-function.yaml@master"
  urls:
    - https://huggingface.co/katanemolabs/Arch-Function-1.5B
    - https://huggingface.co/mradermacher/Arch-Function-1.5B-GGUF
  description: |
    The Katanemo Arch-Function collection of large language models (LLMs) is a collection of state-of-the-art (SOTA) LLMs specifically designed for function calling tasks. The models are designed to understand complex function signatures, identify required parameters, and produce accurate function call outputs based on natural language prompts. Achieving performance on par with GPT-4, these models set a new benchmark in the domain of function-oriented tasks, making them suitable for scenarios where automated API interaction and function execution is crucial.
    In summary, the Katanemo Arch-Function collection demonstrates:
    State-of-the-art performance in function calling
    Accurate parameter identification and suggestion, even in ambiguous or incomplete inputs
    High generalization across multiple function calling use cases, from API interactions to automated backend tasks.
    Optimized low-latency, high-throughput performance, making it suitable for real-time, production environments.
  overrides:
    parameters:
      model: Arch-Function-1.5B.Q4_K_M.gguf
  files:
    - filename: Arch-Function-1.5B.Q4_K_M.gguf
      sha256: 5ac54d2d50cca0ee0335ca2c9b688204c0829cd3a73de3ee3fda108281ad9691
      uri: huggingface://mradermacher/Arch-Function-1.5B-GGUF/Arch-Function-1.5B.Q4_K_M.gguf
- !!merge <<: *archfunct
  name: "arch-function-7b"
  urls:
    - https://huggingface.co/katanemolabs/Arch-Function-7B
    - https://huggingface.co/mradermacher/Arch-Function-7B-GGUF
  overrides:
    parameters:
      model: Arch-Function-7B.Q4_K_M.gguf
  files:
    - filename: Arch-Function-7B.Q4_K_M.gguf
      sha256: 6e38661321d79d02b8cf57c79d97c6c0e19adb9ffa66083cc440c24e257234b6
      uri: huggingface://mradermacher/Arch-Function-7B-GGUF/Arch-Function-7B.Q4_K_M.gguf
- !!merge <<: *archfunct
  name: "arch-function-3b"
  urls:
    - https://huggingface.co/katanemolabs/Arch-Function-3B
    - https://huggingface.co/mradermacher/Arch-Function-3B-GGUF
  overrides:
    parameters:
      model: Arch-Function-3B.Q4_K_M.gguf
  files:
    - filename: Arch-Function-3B.Q4_K_M.gguf
      sha256: 9945cb8d070498d163e5df90c1987f591d35e4fd2222a6c51bcfff848c4b573b
      uri: huggingface://mradermacher/Arch-Function-3B-GGUF/Arch-Function-3B.Q4_K_M.gguf
- &smollm
  ## SmolLM
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "smollm-1.7b-instruct"
  icon: https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png
  tags:
    - llm
    - gguf
    - gpu
    - smollm
    - chatml
    - cpu
  urls:
    - https://huggingface.co/MaziyarPanahi/SmolLM-1.7B-Instruct-GGUF
    - https://huggingface.co/HuggingFaceTB/SmolLM-1.7B-Instruct
  description: |
    SmolLM is a series of small language models available in three sizes: 135M, 360M, and 1.7B parameters.

    These models are pre-trained on SmolLM-Corpus, a curated collection of high-quality educational and synthetic data designed for training LLMs. For further details, we refer to our blogpost.

    To build SmolLM-Instruct, we finetuned the base models on publicly available datasets.
  overrides:
    parameters:
      model: SmolLM-1.7B-Instruct.Q4_K_M.gguf
  files:
    - filename: SmolLM-1.7B-Instruct.Q4_K_M.gguf
      sha256: 2b07eb2293ed3fc544a9858beda5bfb03dcabda6aa6582d3c85768c95f498d28
      uri: huggingface://MaziyarPanahi/SmolLM-1.7B-Instruct-GGUF/SmolLM-1.7B-Instruct.Q4_K_M.gguf
- &llama31
  ## LLama3.1
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png
  name: "meta-llama-3.1-8b-instruct"
  license: llama3.1
  description: |
    The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

    Model developer: Meta

    Model Architecture: Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
  urls:
    - https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
    - https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3.1
  overrides:
    parameters:
      model: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
  files:
    - filename: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
      sha256: c2f17f44af962660d1ad4cb1af91a731f219f3b326c2b14441f9df1f347f2815
      uri: huggingface://MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "meta-llama-3.1-70b-instruct"
  urls:
    - https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct
    - https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF
  overrides:
    parameters:
      model: Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
  files:
    - filename: Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
      sha256: 3f16ab17da4521fe3ed7c5d7beed960d3fe7b5b64421ee9650aa53d6b649ccab
      uri: huggingface://MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF/Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "meta-llama-3.1-8b-instruct:grammar-functioncall"
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct-grammar.yaml@master"
  urls:
    - https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
    - https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF
  description: |
    This is the standard Llama 3.1 8B Instruct model with grammar and function call enabled.

    When grammars are enabled in LocalAI, the LLM is forced to output valid tools constrained by BNF grammars. This can be useful for ensuring that the model outputs are valid and can be used in a production environment.
    For more information on how to use grammars in LocalAI, see https://localai.io/features/openai-functions/#advanced and https://localai.io/features/constrained_grammars/.
  overrides:
    parameters:
      model: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
  files:
    - filename: Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
      sha256: c2f17f44af962660d1ad4cb1af91a731f219f3b326c2b14441f9df1f347f2815
      uri: huggingface://MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "meta-llama-3.1-8b-instruct:Q8_grammar-functioncall"
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct-grammar.yaml@master"
  urls:
    - https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
    - https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF
  description: |
    This is the standard Llama 3.1 8B Instruct model with grammar and function call enabled.

    When grammars are enabled in LocalAI, the LLM is forced to output valid tools constrained by BNF grammars. This can be useful for ensuring that the model outputs are valid and can be used in a production environment.
    For more information on how to use grammars in LocalAI, see https://localai.io/features/openai-functions/#advanced and https://localai.io/features/constrained_grammars/.
  overrides:
    parameters:
      model: Meta-Llama-3.1-8B-Instruct.Q8_0.gguf
  files:
    - filename: Meta-Llama-3.1-8B-Instruct.Q8_0.gguf
      sha256: f8d608c983b83a1bf28229bc9beb4294c91f5d4cbfe2c1829566b4d7c4693eeb
      uri: huggingface://MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct.Q8_0.gguf
- !!merge <<: *llama31
  name: "meta-llama-3.1-8b-claude-imat"
  urls:
    - https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude
    - https://huggingface.co/InferenceIllusionist/Meta-Llama-3.1-8B-Claude-iMat-GGUF
  description: |
    Meta-Llama-3.1-8B-Claude-iMat-GGUF: Quantized from Meta-Llama-3.1-8B-Claude fp16. Weighted quantizations were created using the fp16 GGUF and groups_merged.txt in 88 chunks and n_ctx=512. Static fp16 will also be included in repo. For a brief rundown of iMatrix quant performance, please see this PR. All quants are verified working prior to uploading to repo for your safety and convenience.
  overrides:
    parameters:
      model: Meta-Llama-3.1-8B-Claude-iMat-Q4_K_M.gguf
  files:
    - filename: Meta-Llama-3.1-8B-Claude-iMat-Q4_K_M.gguf
      uri: huggingface://InferenceIllusionist/Meta-Llama-3.1-8B-Claude-iMat-GGUF/Meta-Llama-3.1-8B-Claude-iMat-Q4_K_M.gguf
      sha256: 6d175432f66d10dfed9737f73a5073d513d18e1ee7bd4b9cf2a59deb359f36ff
- !!merge <<: *llama31
  name: "meta-llama-3.1-8b-instruct-abliterated"
  icon: https://i.imgur.com/KhorYYG.png
  urls:
    - https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
    - https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
  description: |
    This is an uncensored version of Llama 3.1 8B Instruct created with abliteration.
  overrides:
    parameters:
      model: meta-llama-3.1-8b-instruct-abliterated.Q4_K_M.gguf
  files:
    - filename: meta-llama-3.1-8b-instruct-abliterated.Q4_K_M.gguf
      uri: huggingface://mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF/meta-llama-3.1-8b-instruct-abliterated.Q4_K_M.gguf
      sha256: c4735f9efaba8eb2c30113291652e3ffe13bf940b675ed61f6be749608b4f266
- !!merge <<: *llama31
  name: "llama-3.1-70b-japanese-instruct-2407"
  urls:
    - https://huggingface.co/cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
    - https://huggingface.co/mmnga/Llama-3.1-70B-Japanese-Instruct-2407-gguf
  description: |
    The Llama-3.1-70B-Japanese-Instruct-2407-gguf model is a Japanese language model that uses the Instruct prompt tuning method. It is based on the Llama-3.1-70B model and has been fine-tuned for Japanese. The model is trained to generate informative and coherent responses to given instructions or prompts. It is available in the gguf format and can be used for a variety of tasks such as question answering, text generation, and more.
  overrides:
    parameters:
      model: Llama-3.1-70B-Japanese-Instruct-2407-Q4_K_M.gguf
  files:
    - filename: Llama-3.1-70B-Japanese-Instruct-2407-Q4_K_M.gguf
      sha256: f2a6f0fb5040d3a28479c9f9fc555a5ea7b906dfb9964539f1a68c0676a9c604
      uri: huggingface://mmnga/Llama-3.1-70B-Japanese-Instruct-2407-gguf/Llama-3.1-70B-Japanese-Instruct-2407-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "openbuddy-llama3.1-8b-v22.1-131k"
  icon: https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png
  urls:
    - https://huggingface.co/sunnyyy/openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M-GGUF
  description: |
    OpenBuddy - Open Multilingual Chatbot
  overrides:
    parameters:
      model: openbuddy-llama3.1-8b-v22.1-131k-q4_k_m.gguf
  files:
    - filename: openbuddy-llama3.1-8b-v22.1-131k-q4_k_m.gguf
      sha256: c87a273785759f2d044046b7a7b42f05706baed7dc0650ed883a3bee2a097d86
      uri: huggingface://sunnyyy/openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M-GGUF/openbuddy-llama3.1-8b-v22.1-131k-q4_k_m.gguf
- !!merge <<: *llama31
  name: "llama3.1-8b-fireplace2"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/JYkaXrk2DqpXhaL9WymKY.jpeg
  urls:
    - https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2
    - https://huggingface.co/mudler/Llama3.1-8B-Fireplace2-Q4_K_M-GGUF
  description: |
    Fireplace 2 is a chat model, adding helpful structured outputs to Llama 3.1 8b Instruct.

    An expansion pack of supplementary outputs - request them at will within your chat:
    Inline function calls
    SQL queries
    JSON objects
    Data visualization with matplotlib
    Mix normal chat and structured outputs within the same conversation.
    Fireplace 2 supplements the existing strengths of Llama 3.1, providing inline capabilities within the Llama 3 Instruct format.

    Version

    This is the 2024-07-23 release of Fireplace 2 for Llama 3.1 8b.

    We're excited to bring further upgrades and releases to Fireplace 2 in the future.

    Help us and recommend Fireplace 2 to your friends!
  overrides:
    parameters:
      model: llama3.1-8b-fireplace2-q4_k_m.gguf
  files:
    - filename: llama3.1-8b-fireplace2-q4_k_m.gguf
      sha256: 54527fd2474b576086ea31e759214ab240abe2429ae623a02d7ba825cc8cb13e
      uri: huggingface://mudler/Llama3.1-8B-Fireplace2-Q4_K_M-GGUF/llama3.1-8b-fireplace2-q4_k_m.gguf
- !!merge <<: *llama31
  name: "sekhmet_aleph-l3.1-8b-v0.1-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/SVyiW4mu495ngqszJGWRl.png
  urls:
    - https://huggingface.co/Nitral-Archive/Sekhmet_Aleph-L3.1-8B-v0.1
    - https://huggingface.co/mradermacher/Sekhmet_Aleph-L3.1-8B-v0.1-i1-GGUF
  overrides:
    parameters:
      model: Sekhmet_Aleph-L3.1-8B-v0.1.i1-Q4_K_M.gguf
  files:
    - filename: Sekhmet_Aleph-L3.1-8B-v0.1.i1-Q4_K_M.gguf
      sha256: 5b6f4eaa2091bf13a2b563a54a3f87b22efa7f2862362537c956c70da6e11cea
      uri: huggingface://mradermacher/Sekhmet_Aleph-L3.1-8B-v0.1-i1-GGUF/Sekhmet_Aleph-L3.1-8B-v0.1.i1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "l3.1-8b-llamoutcast-i1"
  icon: https://files.catbox.moe/ecgn0m.jpg
  urls:
    - https://huggingface.co/Envoid/L3.1-8B-Llamoutcast
    - https://huggingface.co/mradermacher/L3.1-8B-Llamoutcast-i1-GGUF
  description: |
    Warning: this model is utterly cursed.
    Llamoutcast

    This model was originally intended to be a DADA finetune of Llama-3.1-8B-Instruct but the results were unsatisfactory. So it received some additional finetuning on a rawtext dataset and now it is utterly cursed.

    It responds to Llama-3 Instruct formatting.
  overrides:
    parameters:
      model: L3.1-8B-Llamoutcast.i1-Q4_K_M.gguf
  files:
    - filename: L3.1-8B-Llamoutcast.i1-Q4_K_M.gguf
      sha256: 438ca0a7e9470f5ee40f3b14dc2da41b1cafc4ad4315dead3eb57924109d5cf6
      uri: huggingface://mradermacher/L3.1-8B-Llamoutcast-i1-GGUF/L3.1-8B-Llamoutcast.i1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama-guard-3-8b"
  urls:
    - https://huggingface.co/meta-llama/Llama-Guard-3-8B
    - https://huggingface.co/QuantFactory/Llama-Guard-3-8B-GGUF
  description: |
    Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.

    Llama Guard 3 was aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities. Specifically, it provides content moderation in 8 languages, and was optimized to support safety and security for search and code interpreter tool calls.
  overrides:
    parameters:
      model: Llama-Guard-3-8B.Q4_K_M.gguf
  files:
    - filename: Llama-Guard-3-8B.Q4_K_M.gguf
      sha256: c5ea8760a1e544eea66a8915fcc3fbd2c67357ea2ee6871a9e6a6c33b64d4981
      uri: huggingface://QuantFactory/Llama-Guard-3-8B-GGUF/Llama-Guard-3-8B.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "genius-llama3.1-i1"
  icon: https://github.com/fangyuan-ksgk/GeniusUpload/assets/66006349/7272c93e-9806-461c-a3d0-2e50ef2b7af0
  urls:
    - https://huggingface.co/Ksgk-fy/Genius-Llama3.1
    - https://huggingface.co/mradermacher/Genius-Llama3.1-i1-GGUF
  description: |
    Llama-3.1 base model finetuned on Lex Fridman's podcast transcripts.
  overrides:
    parameters:
      model: Genius-Llama3.1.i1-Q4_K_M.gguf
  files:
    - filename: Genius-Llama3.1.i1-Q4_K_M.gguf
      sha256: a272bb2a6ab7ed565738733fb8af8e345b177eba9e76ce615ea845c25ebf8cd5
      uri: huggingface://mradermacher/Genius-Llama3.1-i1-GGUF/Genius-Llama3.1.i1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama3.1-8b-chinese-chat"
  urls:
    - https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat
    - https://huggingface.co/QuantFactory/Llama3.1-8B-Chinese-Chat-GGUF
  description: |
    Llama3.1-8B-Chinese-Chat is an instruction-tuned language model for Chinese & English users with various abilities such as roleplaying & tool-using built upon the Meta-Llama-3.1-8B-Instruct model. Developers: [Shenzhi Wang](https://shenzhi-wang.netlify.app)*, [Yaowei Zheng](https://github.com/hiyouga)*, Guoyin Wang (in.ai), Shiji Song, Gao Huang. (*: Equal Contribution) - License: [Llama-3.1 License](https://huggingface.co/meta-llama/Meta-Llla...
    m-3.1-8B/blob/main/LICENSE) - Base Model: Meta-Llama-3.1-8B-Instruct - Model Size: 8.03B - Context length: 128K (reported by [Meta-Llama-3.1-8B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), untested for our Chinese model)
  overrides:
    parameters:
      model: Llama3.1-8B-Chinese-Chat.Q4_K_M.gguf
  files:
    - filename: Llama3.1-8B-Chinese-Chat.Q4_K_M.gguf
      sha256: 824847b6cca82c4d60107c6a059d80ba975a68543e6effd98880435436ddba06
      uri: huggingface://QuantFactory/Llama3.1-8B-Chinese-Chat-GGUF/Llama3.1-8B-Chinese-Chat.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama3.1-70b-chinese-chat"
  urls:
    - https://huggingface.co/shenzhi-wang/Llama3.1-70B-Chinese-Chat
    - https://huggingface.co/mradermacher/Llama3.1-70B-Chinese-Chat-GGUF
  description: |
    "Llama3.1-70B-Chinese-Chat" is a 70-billion parameter large language model pre-trained on a large corpus of Chinese text data. It is designed for chat and dialog applications, and can generate human-like responses to various prompts and inputs. The model is based on the Llama3.1 architecture and has been fine-tuned for Chinese language understanding and generation. It can be used for a wide range of natural language processing tasks, including language translation, text summarization, question answering, and more.
  overrides:
    parameters:
      model: Llama3.1-70B-Chinese-Chat.Q4_K_M.gguf
  files:
    - filename: Llama3.1-70B-Chinese-Chat.Q4_K_M.gguf
      sha256: 395cff3cce2b092f840b68eb6e31f4c8b670bc8e3854bbb230df8334369e671d
      uri: huggingface://mradermacher/Llama3.1-70B-Chinese-Chat-GGUF/Llama3.1-70B-Chinese-Chat.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "meta-llama-3.1-instruct-9.99b-brainstorm-10x-form-3"
  urls:
    - https://huggingface.co/DavidAU/Meta-Llama-3.1-Instruct-9.99B-BRAINSTORM-10x-FORM-3-GGUF
  description: |
    The Meta-Llama-3.1-8B Instruct model is a large language model trained on a diverse range of text data, with the goal of generating high-quality and coherent text in response to user input. This model is enhanced through a process called "Brainstorm", which involves expanding and recalibrating the model's reasoning center to improve its creative and generative capabilities. The resulting model is capable of generating detailed, vivid, and nuanced text, with a focus on prose quality, conceptually complex responses, and a deeper understanding of the user's intent. The Brainstorm process is designed to enhance the model's performance in creative writing, roleplaying, and story generation, and to improve its ability to generate coherent and engaging text in a wide range of contexts. The model is based on the Llama3 architecture and has been fine-tuned using the Instruct framework, which provides it with a strong foundation for understanding natural language instructions and generating appropriate responses. The model can be used for a variety of tasks, including creative writing, generating coherent and detailed text, exploring different perspectives and scenarios, and brainstorming ideas.
  overrides:
    parameters:
      model: Meta-Llama-3.1-8B-Instruct-Instruct-exp10-3-Q4_K_M.gguf
  files:
    - filename: Meta-Llama-3.1-8B-Instruct-Instruct-exp10-3-Q4_K_M.gguf
      sha256: f52ff984100b1ff6acfbd7ed1df770064118274a54ae5d48749400a662113615
      uri: huggingface://DavidAU/Meta-Llama-3.1-Instruct-9.99B-BRAINSTORM-10x-FORM-3-GGUF/Meta-Llama-3.1-8B-Instruct-Instruct-exp10-3-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama-3.1-techne-rp-8b-v1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/633a809fa4a8f33508dce32c/BMdwgJ6cHZWbiGL48Q-Wq.png
  urls:
    - https://huggingface.co/athirdpath/Llama-3.1-Techne-RP-8b-v1
    - https://huggingface.co/mradermacher/Llama-3.1-Techne-RP-8b-v1-GGUF
  description: |
    athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit was further trained in the order below:
    SFT

    Doctor-Shotgun/no-robots-sharegpt
    grimulkan/LimaRP-augmented
    Inv/c2-logs-cleaned-deslopped

    DPO

    jondurbin/truthy-dpo-v0.1
    Undi95/Weyaxi-humanish-dpo-project-noemoji
    athirdpath/DPO_Pairs-Roleplay-Llama3-NSFW
  overrides:
    parameters:
      model: Llama-3.1-Techne-RP-8b-v1.Q4_K_M.gguf
  files:
    - filename: Llama-3.1-Techne-RP-8b-v1.Q4_K_M.gguf
      sha256: 6557c5d5091f2507d19ab1f8bfb9ceb4e1536a755ab70f148b18aeb33741580f
      uri: huggingface://mradermacher/Llama-3.1-Techne-RP-8b-v1-GGUF/Llama-3.1-Techne-RP-8b-v1.Q4_K_M.gguf
- !!merge <<: *llama31
  icon: https://i.ibb.co/9hwFrvL/BLMs-Wkx-NQf-W-46-FZDg-ILhg.jpg
  name: "llama-spark"
  urls:
    - https://huggingface.co/arcee-ai/Llama-Spark
    - https://huggingface.co/arcee-ai/Llama-Spark-GGUF
  description: |
    Llama-Spark is a powerful conversational AI model developed by Arcee.ai. It's built on the foundation of Llama-3.1-8B and merges the power of our Tome Dataset with Llama-3.1-8B-Instruct, resulting in a remarkable conversationalist that punches well above its 8B parameter weight class.
  overrides:
    parameters:
      model: llama-spark-dpo-v0.3-Q4_K_M.gguf
  files:
    - filename: llama-spark-dpo-v0.3-Q4_K_M.gguf
      sha256: 41367168bbdc4b16eb80efcbee4dacc941781ee8748065940167fe6947b4e4c3
      uri: huggingface://arcee-ai/Llama-Spark-GGUF/llama-spark-dpo-v0.3-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "l3.1-70b-glitz-v0.2-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/q2dOUnzc1GRbZp3YfzGXB.png
  urls:
    - https://huggingface.co/Fizzarolli/L3.1-70b-glitz-v0.2
    - https://huggingface.co/mradermacher/L3.1-70b-glitz-v0.2-i1-GGUF
  description: |
    this is an experimental l3.1 70b finetuning run... that crashed midway through. however, the results are still interesting, so i wanted to publish them :3
  overrides:
    parameters:
      model: L3.1-70b-glitz-v0.2.i1-Q4_K_M.gguf
  files:
    - filename: L3.1-70b-glitz-v0.2.i1-Q4_K_M.gguf
      sha256: 585efc83e7f6893043be2487fc09c914a381fb463ce97942ef2f25ae85103bcd
      uri: huggingface://mradermacher/L3.1-70b-glitz-v0.2-i1-GGUF/L3.1-70b-glitz-v0.2.i1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "calme-2.3-legalkit-8b-i1"
  icon: https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b/resolve/main/calme-2-legalkit.webp
  urls:
    - https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-i1-GGUF
    - https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b
  description: |
    This model is an advanced iteration of the powerful meta-llama/Meta-Llama-3.1-8B-Instruct, specifically fine-tuned to enhance its capabilities in the legal domain. The fine-tuning process utilized a synthetically generated dataset derived from the French LegalKit, a comprehensive legal language resource.

    To create this specialized dataset, I used the NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO model in conjunction with Hugging Face's Inference Endpoint. This approach allowed for the generation of high-quality, synthetic data that incorporates Chain of Thought (CoT) and advanced reasoning in its responses.

    The resulting model combines the robust foundation of Llama-3.1-8B with tailored legal knowledge and enhanced reasoning capabilities. This makes it particularly well-suited for tasks requiring in-depth legal analysis, interpretation, and application of French legal concepts.
  overrides:
    parameters:
      model: calme-2.3-legalkit-8b.i1-Q4_K_M.gguf
  files:
    - filename: calme-2.3-legalkit-8b.i1-Q4_K_M.gguf
      sha256: b71dfea8bbd73b0fbd5793ef462b8540c24e1c52a47b1794561adb88109a9e80
      uri: huggingface://mradermacher/calme-2.3-legalkit-8b-i1-GGUF/calme-2.3-legalkit-8b.i1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "fireball-llama-3.11-8b-v1orpo"
  icon: https://huggingface.co/EpistemeAI/Fireball-Llama-3.1-8B-v1dpo/resolve/main/fireball-llama.JPG
  urls:
    - https://huggingface.co/mradermacher/Fireball-Llama-3.11-8B-v1orpo-GGUF
  description: |
    Developed by: EpistemeAI
    License: apache-2.0
    Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
    Finetuned methods: DPO (Direct Preference Optimization) & ORPO (Odds Ratio Preference Optimization)
  overrides:
    parameters:
      model: Fireball-Llama-3.11-8B-v1orpo.Q4_K_M.gguf
  files:
    - filename: Fireball-Llama-3.11-8B-v1orpo.Q4_K_M.gguf
      sha256: c61a1f4ee4f05730ac6af754dc8dfddf34eba4486ffa320864e16620d6527731
      uri: huggingface://mradermacher/Fireball-Llama-3.11-8B-v1orpo-GGUF/Fireball-Llama-3.11-8B-v1orpo.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama-3.1-storm-8b-q4_k_m"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg
  urls:
    - https://huggingface.co/mudler/Llama-3.1-Storm-8B-Q4_K_M-GGUF
    - https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B
  description: |
    We present the Llama-3.1-Storm-8B model that outperforms Meta AI's Llama-3.1-8B-Instruct and Hermes-3-Llama-3.1-8B models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
    - Self-Curation: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of about 3 million open-source examples. Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).
    - Targeted fine-tuning: We performed Spectrum-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen.
    - Model Merging: We merged our fine-tuned model with the Llama-Spark model using the SLERP method. The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. Llama-3.1-Storm-8B improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
  overrides:
    parameters:
      model: llama-3.1-storm-8b-q4_k_m.gguf
  files:
    - filename: llama-3.1-storm-8b-q4_k_m.gguf
      sha256: d714e960211ee0fe6113d3131a6573e438f37debd07e1067d2571298624414a0
      uri: huggingface://mudler/Llama-3.1-Storm-8B-Q4_K_M-GGUF/llama-3.1-storm-8b-q4_k_m.gguf
- !!merge <<: *llama31
  name: "hubble-4b-v1"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/R8_o3CCpTgKv5Wnnry7E_.png
  urls:
    - https://huggingface.co/TheDrummer/Hubble-4B-v1-GGUF
  description: |
    Equipped with his five senses, man explores the universe around him and calls the adventure 'Science'.
    This is a finetune of Nvidia's Llama 3.1 4B Minitron - a shrunk down model of Llama 3.1 8B 128K.
  overrides:
    parameters:
      model: Hubble-4B-v1-Q4_K_M.gguf
    files:
      - filename: Hubble-4B-v1-Q4_K_M.gguf
        uri: huggingface://TheDrummer/Hubble-4B-v1-GGUF/Hubble-4B-v1-Q4_K_M.gguf
        sha256: 0721294d0e861c6e6162a112fc7242e0c4b260c156137f4bcbb08667f1748080
- !!merge <<: *llama31
  name: "reflection-llama-3.1-70b"
  urls:
    - https://huggingface.co/leafspark/Reflection-Llama-3.1-70B-bf16
    - https://huggingface.co/senseable/Reflection-Llama-3.1-70B-gguf
  description: |
    Reflection Llama-3.1 70B is (currently) the world's top open-source LLM, trained with a new technique called Reflection-Tuning that teaches an LLM to detect mistakes in its reasoning and correct course.

    The model was trained on synthetic data generated by Glaive. If you're training a model, Glaive is incredible — use them.
  overrides:
    parameters:
      model: Reflection-Llama-3.1-70B-q4_k_m.gguf
    files:
      - filename: Reflection-Llama-3.1-70B-q4_k_m.gguf
        sha256: 16064e07037883a750cfeae9a7be41143aa857dbac81c2e93c68e2f941dee7b2
        uri: huggingface://senseable/Reflection-Llama-3.1-70B-gguf/Reflection-Llama-3.1-70B-q4_k_m.gguf
- !!merge <<: *llama31
  name: "llama-3.1-supernova-lite-reflection-v1.0-i1"
  url: "github:mudler/LocalAI/gallery/llama3.1-reflective.yaml@master"
  icon: https://i.ibb.co/r072p7j/eopi-ZVu-SQ0-G-Cav78-Byq-Tg.png
  urls:
    - https://huggingface.co/SE6446/Llama-3.1-SuperNova-Lite-Reflection-V1.0
    - https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-Reflection-V1.0-i1-GGUF
  description: |
    This model is a LoRA adaptation of arcee-ai/Llama-3.1-SuperNova-Lite on thesven/Reflective-MAGLLAMA-v0.1.1. This has been a simple experiment into reflection, and the model appears to perform adequately, though I am unsure if it is a large improvement.
  overrides:
    parameters:
      model: Llama-3.1-SuperNova-Lite-Reflection-V1.0.i1-Q4_K_M.gguf
    files:
      - filename: Llama-3.1-SuperNova-Lite-Reflection-V1.0.i1-Q4_K_M.gguf
        sha256: 0c4531fe553d00142808e1bc7348ae92d400794c5b64d2db1a974718324dfe9a
        uri: huggingface://mradermacher/Llama-3.1-SuperNova-Lite-Reflection-V1.0-i1-GGUF/Llama-3.1-SuperNova-Lite-Reflection-V1.0.i1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama-3.1-supernova-lite"
  icon: https://i.ibb.co/r072p7j/eopi-ZVu-SQ0-G-Cav78-Byq-Tg.png
  urls:
    - https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite
    - https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite-GGUF
  description: |
    Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model, leveraging offline logits extracted from the 405B parameter variant. This 8B variation of Llama-3.1-SuperNova maintains high performance while offering exceptional instruction-following capabilities and domain-specific adaptability.

    The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit, ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.

    Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.
  overrides:
    parameters:
      model: supernova-lite-v1.Q4_K_M.gguf
    files:
      - filename: supernova-lite-v1.Q4_K_M.gguf
        sha256: 237b7b0b704d294f92f36c576cc8fdc10592f95168a5ad0f075a2d8edf20da4d
        uri: huggingface://arcee-ai/Llama-3.1-SuperNova-Lite-GGUF/supernova-lite-v1.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama3.1-8b-shiningvaliant2"
  icon: https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/EXX7TKbB-R6arxww2mk0R.jpeg
  urls:
    - https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2
    - https://huggingface.co/bartowski/Llama3.1-8B-ShiningValiant2-GGUF
  description: |
    Shining Valiant 2 is a chat model built on Llama 3.1 8b, finetuned on our data for friendship, insight, knowledge and enthusiasm.

    Finetuned on meta-llama/Meta-Llama-3.1-8B-Instruct for best available general performance.
    Trained on a variety of high quality data; focused on science, engineering, technical knowledge, and structured reasoning.
  overrides:
    parameters:
      model: Llama3.1-8B-ShiningValiant2-Q4_K_M.gguf
    files:
      - filename: Llama3.1-8B-ShiningValiant2-Q4_K_M.gguf
        sha256: 9369eb97922a9f01e4eae610e3d7aaeca30762d78d9239884179451d60bdbdd2
        uri: huggingface://bartowski/Llama3.1-8B-ShiningValiant2-GGUF/Llama3.1-8B-ShiningValiant2-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "nightygurps-14b-v1.1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6336c5b3e3ac69e6a90581da/FvfjK7bKqsWdaBkB3eWgP.png
  urls:
    - https://huggingface.co/AlexBefest/NightyGurps-14b-v1.1
    - https://huggingface.co/bartowski/NightyGurps-14b-v1.1-GGUF
  description: |
    This model works with Russian only.
    This model is designed to run GURPS roleplaying games, as well as consult and assist. This model was trained on an augmented dataset of the GURPS Basic Set rulebook. Its primary purpose was initially to become an assistant consultant and assistant Game Master for the GURPS roleplaying system, but it can also be used as a GM for running solo games as a player.
  overrides:
    parameters:
      model: NightyGurps-14b-v1.1-Q4_K_M.gguf
    files:
      - filename: NightyGurps-14b-v1.1-Q4_K_M.gguf
        sha256: d09d53259ad2c0298150fa8c2db98fe42f11731af89fdc80ad0e255a19adc4b0
        uri: huggingface://bartowski/NightyGurps-14b-v1.1-GGUF/NightyGurps-14b-v1.1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama-3.1-swallow-70b-v0.1-i1"
  icon: https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1/resolve/main/logo.png
  urls:
    - https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1
    - https://huggingface.co/mradermacher/Llama-3.1-Swallow-70B-v0.1-i1-GGUF
  description: |
    Llama 3.1 Swallow is a series of large language models (8B, 70B) that were built by continual pre-training on the Meta Llama 3.1 models. Llama 3.1 Swallow enhanced the Japanese language capabilities of the original Llama 3.1 while retaining the English language capabilities. We use approximately 200 billion tokens that were sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding contents, etc (see the Training Datasets section) for continual pre-training. The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on the synthetic data specially built for Japanese. See the Swallow Model Index section to find other model variants.
  overrides:
    parameters:
      model: Llama-3.1-Swallow-70B-v0.1.i1-Q4_K_M.gguf
    files:
      - filename: Llama-3.1-Swallow-70B-v0.1.i1-Q4_K_M.gguf
        sha256: 9eaa08a4872a26f56fe34b27a99f7bd0d22ee2b2d1c84cfcde2091b5f61af5fa
        uri: huggingface://mradermacher/Llama-3.1-Swallow-70B-v0.1-i1-GGUF/Llama-3.1-Swallow-70B-v0.1.i1-Q4_K_M.gguf
## Uncensored models
- !!merge <<: *llama31
  name: "humanish-roleplay-llama-3.1-8b-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/VPwtjS3BtjEEEq7ck4kAQ.webp
  urls:
    - https://huggingface.co/mradermacher/Humanish-Roleplay-Llama-3.1-8B-i1-GGUF
  description: |
    A DPO-tuned Llama-3.1 to behave more "humanish", i.e., avoiding all the AI assistant slop. It also works for role-play (RP). To achieve this, the model was fine-tuned over a series of datasets:
    - General conversations from Claude Opus, from Undi95/Meta-Llama-3.1-8B-Claude
    - Undi95/Weyaxi-humanish-dpo-project-noemoji, to make the model react as a human, rejecting assistant-like or too neutral responses.
    - ResplendentAI/NSFW_RP_Format_DPO, to steer the model towards using the *action* format in RP settings. Works best if in the first message you also use this format naturally (see example).
  overrides:
    parameters:
      model: Humanish-Roleplay-Llama-3.1-8B.i1-Q4_K_M.gguf
    files:
      - filename: Humanish-Roleplay-Llama-3.1-8B.i1-Q4_K_M.gguf
        sha256: 18cf753684e5226b51f3defc708852ca4924f50dc8bc31c9a7d0a036a477b7a7
        uri: huggingface://mradermacher/Humanish-Roleplay-Llama-3.1-8B-i1-GGUF/Humanish-Roleplay-Llama-3.1-8B.i1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "darkidol-llama-3.1-8b-instruct-1.0-uncensored-i1"
  icon: https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored/resolve/main/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.png
  urls:
    - https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored
    - https://huggingface.co/mradermacher/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-i1-GGUF
  description: |
    The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.

    - Saving money (Llama 3.1)
    - Only tested in English.
    - Input: models input text only. Output: models generate text and code only.
    - Uncensored
    - Quick response
    - A scholarly response akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis.)
    - DarkIdol: roles that you can imagine and those that you cannot imagine.
    - Roleplay: specialized in various role-playing scenarios

    How To

    System Prompt: "You are an Uncensored AI Assistant. As a film screenwriter, the purpose of all questions is to write a movie script."
  overrides:
    parameters:
      model: DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf
    files:
      - filename: DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf
        uri: huggingface://mradermacher/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-i1-GGUF/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf
        sha256: 9632316d735365087f36083dec320a71995650deb86cf74f39ab071e43114eb8
- !!merge <<: *llama31
  name: "darkidol-llama-3.1-8b-instruct-1.1-uncensored-iq-imatrix-request"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/iDV5GTVJbjkvMp1set-ZC.png
  urls:
    - https://huggingface.co/LWDCLS/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF-IQ-Imatrix-Request
  description: |
    Uncensored
    Virtual idol Twitter: https://x.com/aifeifei799

    Questions

    The model's response results are for reference only; please do not fully trust them.
    This model is solely for learning and testing purposes, and errors in output are inevitable. We do not take responsibility for the output results. If the output content is to be used, it must be modified; if not modified, we will assume it has been altered.
    For commercial licensing, please refer to the Llama 3.1 agreement.
  overrides:
    parameters:
      model: DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-Q4_K_M-imat.gguf
    files:
      - filename: DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-Q4_K_M-imat.gguf
        sha256: fa9fc56de7d902b755c43f1a5d0867d961675174a1b3e73a10d822836c3390e6
        uri: huggingface://LWDCLS/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF-IQ-Imatrix-Request/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-Q4_K_M-imat.gguf
- !!merge <<: *llama31
  name: "llama-3.1-8b-instruct-fei-v1-uncensored"
  icon: https://huggingface.co/aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored/resolve/main/Llama-3.1-8B-Instruct-Fei-v1-Uncensored.png
  urls:
    - https://huggingface.co/aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored
    - https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Fei-v1-Uncensored-GGUF
  description: |
    Llama-3.1-8B-Instruct Uncensored
    For more information, see Llama-3.1-8B-Instruct.
  overrides:
    parameters:
      model: Llama-3.1-8B-Instruct-Fei-v1-Uncensored.Q4_K_M.gguf
    files:
      - filename: Llama-3.1-8B-Instruct-Fei-v1-Uncensored.Q4_K_M.gguf
        uri: huggingface://mradermacher/Llama-3.1-8B-Instruct-Fei-v1-Uncensored-GGUF/Llama-3.1-8B-Instruct-Fei-v1-Uncensored.Q4_K_M.gguf
        sha256: 6b1985616160712eb884c34132dc0602fa4600a19075e3a7b179119b89b73f77
- !!merge <<: *llama31
  name: "lumimaid-v0.2-8b"
  urls:
    - https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B
    - https://huggingface.co/mradermacher/Lumimaid-v0.2-8B-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/TUcHg7LKNjfo0sni88Ps7.png
  description: |
    This model is based on: Meta-Llama-3.1-8B-Instruct

    Wandb: https://wandb.ai/undis95/Lumi-Llama-3-1-8B?nw=nwuserundis95

    Lumimaid 0.1 -> 0.2 is a HUGE step up dataset wise.

    As some people have told us our models are sloppy, Ikari decided to say fuck it and literally nuke all chats out with most slop.

    Our dataset stayed the same since day one; we added data over time, cleaned them, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!
  overrides:
    parameters:
      model: Lumimaid-v0.2-8B.Q4_K_M.gguf
    files:
      - filename: Lumimaid-v0.2-8B.Q4_K_M.gguf
        sha256: c8024fcb49c71410903d0d076a1048249fa48b31637bac5177bf5c3f3d603d85
        uri: huggingface://mradermacher/Lumimaid-v0.2-8B-GGUF/Lumimaid-v0.2-8B.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "lumimaid-v0.2-70b-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/HY1KTq6FMAm-CwmY8-ndO.png
  urls:
    - https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B
    - https://huggingface.co/mradermacher/Lumimaid-v0.2-70B-i1-GGUF
  description: |
    This model is based on: Meta-Llama-3.1-8B-Instruct

    Wandb: https://wandb.ai/undis95/Lumi-Llama-3-1-8B?nw=nwuserundis95

    Lumimaid 0.1 -> 0.2 is a HUGE step up dataset wise.

    As some people have told us our models are sloppy, Ikari decided to say fuck it and literally nuke all chats out with most slop.

    Our dataset stayed the same since day one; we added data over time, cleaned them, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!
  overrides:
    parameters:
      model: Lumimaid-v0.2-70B.i1-Q4_K_M.gguf
    files:
      - filename: Lumimaid-v0.2-70B.i1-Q4_K_M.gguf
        sha256: 4857da8685cb0f3d2b8b8c91fb0c07b35b863eb7c185e93ed83ac338e095cbb5
        uri: huggingface://mradermacher/Lumimaid-v0.2-70B-i1-GGUF/Lumimaid-v0.2-70B.i1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "l3.1-8b-celeste-v1.5"
  icon: https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/QcU3xEgVu18jeFtMFxIw-.webp
  urls:
    - https://huggingface.co/nothingiisreal/L3.1-8B-Celeste-V1.5
    - https://huggingface.co/bartowski/L3.1-8B-Celeste-V1.5-GGUF
  description: |
    The LLM model is a large language model trained on a combination of datasets including nothingiisreal/c2-logs-cleaned, kalomaze/Opus_Instruct_25k, and nothingiisreal/Reddit-Dirty-And-WritingPrompts. The training was performed on a combination of English-language data using the Hugging Face Transformers library.
    Trained on LLaMA 3.1 8B Instruct at 8K context using a new mix of Reddit Writing Prompts, Kalo's Opus 25K Instruct, and cleaned c2 logs. This version has the highest coherency and is very strong on OOC: instruct following.
  overrides:
    parameters:
      model: L3.1-8B-Celeste-V1.5-Q4_K_M.gguf
    files:
      - filename: L3.1-8B-Celeste-V1.5-Q4_K_M.gguf
        sha256: a408dfbbd91ed5561f70d3129af040dfd06704d6c7fa21146aa9f09714aafbc6
        uri: huggingface://bartowski/L3.1-8B-Celeste-V1.5-GGUF/L3.1-8B-Celeste-V1.5-Q4_K_M.gguf
- !!merge <<: *llama31
  icon: https://cdn-uploads.huggingface.co/production/uploads/659c4ecb413a1376bee2f661/szz8sIxofYzSe5XPet2pO.png
  name: "kumiho-v1-rp-uwu-8b"
  urls:
    - https://huggingface.co/juvi21/Kumiho-v1-rp-UwU-8B-GGUF
  description: |
    Meet Kumiho-V1 uwu. Kumiho-V1-rp-UwU aims to be a generalist model with specialization in roleplay and writing capabilities. It is finetuned and merged with various models, using Meta's LLaMA 3.1-8B as the base model along with synthetic data generated by Claude 3.5 Sonnet and Claude 3 Opus.
  overrides:
    parameters:
      model: Kumiho-v1-rp-UwU-8B-gguf-q4_k_m.gguf
    files:
      - filename: Kumiho-v1-rp-UwU-8B-gguf-q4_k_m.gguf
        sha256: a1deb46675418277cf785a406cd1508fec556ff6e4d45d2231eb2a82986d52d0
        uri: huggingface://juvi21/Kumiho-v1-rp-UwU-8B-GGUF/Kumiho-v1-rp-UwU-8B-gguf-q4_k_m.gguf
- !!merge <<: *llama31
  name: "infinity-instruct-7m-gen-llama3_1-70b"
  icon: https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B/resolve/main/fig/Bk3NbjnJko51MTx1ZCScT2sqnGg.png
  urls:
    - https://huggingface.co/mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-70B-GGUF
  description: |
    Infinity-Instruct-7M-Gen-Llama3.1-70B is an open-source supervised instruction-tuned model without reinforcement learning from human feedback (RLHF). The model is finetuned on Infinity-Instruct-7M and Infinity-Instruct-Gen and shows favorable results on AlpacaEval 2.0 and Arena-Hard compared to GPT-4.
  overrides:
    parameters:
      model: Infinity-Instruct-7M-Gen-Llama3_1-70B.Q4_K_M.gguf
    files:
      - filename: Infinity-Instruct-7M-Gen-Llama3_1-70B.Q4_K_M.gguf
        sha256: f4379ab4d7140da0510886073375ca820ea9ac4ad9d3c20e17ed05156bd29697
        uri: huggingface://mradermacher/Infinity-Instruct-7M-Gen-Llama3_1-70B-GGUF/Infinity-Instruct-7M-Gen-Llama3_1-70B.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "cathallama-70b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/649dc85249ae3a68334adcc6/KxaiZ7rDKkYlix99O9j5H.png
  urls:
    - https://huggingface.co/gbueno86/Cathallama-70B
    - https://huggingface.co/mradermacher/Cathallama-70B-GGUF
  description: |
    Notable Performance
    - 9% overall success rate increase on MMLU-PRO over LLaMA 3.1 70b
    - Strong performance in MMLU-PRO categories overall
    - Great performance during manual testing

    Creation workflow

    Models merged:
    - meta-llama/Meta-Llama-3.1-70B-Instruct
    - turboderp/Cat-Llama-3-70B-instruct
    - Nexusflow/Athene-70B
  overrides:
    parameters:
      model: Cathallama-70B.Q4_K_M.gguf
    files:
      - filename: Cathallama-70B.Q4_K_M.gguf
        sha256: 7bbac0849a8da82e7912a493a15fa07d605f1ffbe7337a322f17e09195511022
        uri: huggingface://mradermacher/Cathallama-70B-GGUF/Cathallama-70B.Q4_K_M.gguf
- !!merge <<: *llama31
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "mahou-1.3-llama3.1-8b"
  icon: https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png
  urls:
    - https://huggingface.co/mradermacher/Mahou-1.3-llama3.1-8B-GGUF
    - https://huggingface.co/flammenai/Mahou-1.3-llama3.1-8B
  description: |
    Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
  overrides:
    parameters:
      model: Mahou-1.3-llama3.1-8B.Q4_K_M.gguf
    files:
      - filename: Mahou-1.3-llama3.1-8B.Q4_K_M.gguf
        sha256: 88bfdca2f6077d789d3e0f161d19711aa208a6d9a02cce96a2276c69413b3594
        uri: huggingface://mradermacher/Mahou-1.3-llama3.1-8B-GGUF/Mahou-1.3-llama3.1-8B.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "azure_dusk-v0.2-iq-imatrix"
  # chatml
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/n3-g_YTk3FY-DBzxXd28E.png
  urls:
    - https://huggingface.co/Lewdiculous/Azure_Dusk-v0.2-GGUF-IQ-Imatrix
  description: |
    "Following up on Crimson_Dawn-v0.2 we have Azure_Dusk-v0.2! Training on Mistral-Nemo-Base-2407 this time, I've added significantly more data, as well as trained using RSLoRA as opposed to regular LoRA. Another key change is training on ChatML as opposed to Mistral formatting."
    by the author.
  overrides:
    parameters:
      model: Azure_Dusk-v0.2-Q4_K_M-imat.gguf
    files:
      - filename: Azure_Dusk-v0.2-Q4_K_M-imat.gguf
        sha256: c03a670c00976d14c267a0322374ed488b2a5f4790eb509136ca4e75cbc10cf4
        uri: huggingface://Lewdiculous/Azure_Dusk-v0.2-GGUF-IQ-Imatrix/Azure_Dusk-v0.2-Q4_K_M-imat.gguf
- !!merge <<: *llama31
  name: "l3.1-8b-niitama-v1.1-iq-imatrix"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/2Q5ky8TvP0vLS1ulMXnrn.png
  urls:
    - https://huggingface.co/Sao10K/L3.1-8B-Niitama-v1.1
    - https://huggingface.co/Lewdiculous/L3.1-8B-Niitama-v1.1-GGUF-IQ-Imatrix
  description: |
    GGUF-IQ-Imatrix quants for Sao10K/L3.1-8B-Niitama-v1.1
    Here's the subjectively superior L3 version: L3-8B-Niitama-v1
    An experimental model using experimental methods.

    More detail on it:

    Tamamo and Niitama are made from the same data. Literally. The only thing that's changed is how they're shuffled and formatted. Yet, I get wildly different results.

    Interesting, eh? Feels kinda not as good compared to the L3 version, but it's aight.
  overrides:
    parameters:
      model: L3.1-8B-Niitama-v1.1-Q4_K_M-imat.gguf
    files:
      - filename: L3.1-8B-Niitama-v1.1-Q4_K_M-imat.gguf
        sha256: 524163bd0f1d43c9284b09118abcc192f3250b13dd3bb79d60c28321108b6748
        uri: huggingface://Lewdiculous/L3.1-8B-Niitama-v1.1-GGUF-IQ-Imatrix/L3.1-8B-Niitama-v1.1-Q4_K_M-imat.gguf
- !!merge <<: *llama31
  name: "llama-3.1-8b-stheno-v3.4-iq-imatrix"
  icon: https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4/resolve/main/meneno.jpg
  urls:
    - https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4
    - https://huggingface.co/Lewdiculous/Llama-3.1-8B-Stheno-v3.4-GGUF-IQ-Imatrix
  description: |
    This model has gone through a multi-stage finetuning process.

    - 1st, over a multi-turn Conversational-Instruct
    - 2nd, over a Creative Writing / Roleplay along with some Creative-based Instruct Datasets.
      - Dataset consists of a mixture of Human and Claude Data.

    Prompting Format:

    - Use the L3 Instruct Formatting - Euryale 2.1 Preset Works Well
    - Temperature + min_p as per usual, I recommend 1.4 Temp + 0.2 min_p.
    - Has a different vibe to previous versions. Tinker around.

    Changes since previous Stheno Datasets:

    - Included Multi-turn Conversation-based Instruct Datasets to boost multi-turn coherency. # This is a separate set, not the ones made by Kalomaze and Nopm, that are used in Magnum. They're completely different data.
    - Replaced Single-Turn Instruct with Better Prompts and Answers by Claude 3.5 Sonnet and Claude 3 Opus.
    - Removed c2 Samples -> Underway of re-filtering and masking to use with custom prefills. TBD
    - Included 55% more Roleplaying Examples based on [Gryphe's](https://huggingface.co/datasets/Gryphe/Sonnet3.5-Charcard-Roleplay) Charcard RP Sets. Further filtered and cleaned.
    - Included 40% More Creative Writing Examples.
    - Included Datasets Targeting System Prompt Adherence.
    - Included Datasets targeting Reasoning / Spatial Awareness.
    - Filtered for the usual errors, slop and stuff at the end. Some may have slipped through, but I removed nearly all of it.

    Personal Opinions:

    - Llama3.1 was more disappointing, in the Instruct Tune? It felt overbaked, at least. Likely due to the DPO being done after their SFT stage.
    - Tuning on L3.1 base did not give good results, unlike when I tested with Nemo base. Unfortunate.
    - Still though, I think I did an okay job. It does feel a bit more distinctive.
    - It took a lot of tinkering, like a LOT to wrangle this.
  overrides:
    parameters:
      model: Llama-3.1-8B-Stheno-v3.4-Q4_K_M-imat.gguf
    files:
      - filename: Llama-3.1-8B-Stheno-v3.4-Q4_K_M-imat.gguf
        sha256: 830d4858aa11a654f82f69fa40dee819edf9ecf54213057648304eb84b8dd5eb
        uri: huggingface://Lewdiculous/Llama-3.1-8B-Stheno-v3.4-GGUF-IQ-Imatrix/Llama-3.1-8B-Stheno-v3.4-Q4_K_M-imat.gguf
- !!merge <<: *llama31
  name: "llama-3.1-8b-arliai-rpmax-v1.1"
  urls:
    - https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
    - https://huggingface.co/bartowski/Llama-3.1-8B-ArliAI-RPMax-v1.1-GGUF
  description: |
    RPMax is a series of models that are trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, which ensures the model does not latch on to a single personality and is capable of understanding and responding appropriately to any character or situation.
  overrides:
    parameters:
      model: Llama-3.1-8B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
    files:
      - filename: Llama-3.1-8B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
        sha256: 0a601c7341228d9160332965298d799369a1dc2b7080771fb8051bdeb556b30c
        uri: huggingface://bartowski/Llama-3.1-8B-ArliAI-RPMax-v1.1-GGUF/Llama-3.1-8B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "violet_twilight-v0.2-iq-imatrix"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/P962FQhRG4I8nbU_DJolY.png
  urls:
    - https://huggingface.co/Epiculous/Violet_Twilight-v0.2
    - https://huggingface.co/Lewdiculous/Violet_Twilight-v0.2-GGUF-IQ-Imatrix
  description: |
    Now for something a bit different, Violet_Twilight-v0.2! This model is a SLERP merge of Azure_Dusk-v0.2 and Crimson_Dawn-v0.2!
  overrides:
    parameters:
      model: Violet_Twilight-v0.2-Q4_K_M-imat.gguf
    files:
      - filename: Violet_Twilight-v0.2-Q4_K_M-imat.gguf
        sha256: 0793d196a00cd6fd4e67b8c585b27a94d397e33d427e4ad4aa9a16b7abc339cd
        uri: huggingface://Lewdiculous/Violet_Twilight-v0.2-GGUF-IQ-Imatrix/Violet_Twilight-v0.2-Q4_K_M-imat.gguf
- !!merge <<: *llama31
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "dans-personalityengine-v1.0.0-8b"
  urls:
    - https://huggingface.co/PocketDoc/Dans-PersonalityEngine-v1.0.0-8b
    - https://huggingface.co/bartowski/Dans-PersonalityEngine-v1.0.0-8b-GGUF
  description: |
    This model is intended to be multifarious in its capabilities and should be quite capable at both co-writing and roleplay as well as find itself quite at home performing sentiment analysis or summarization as part of a pipeline. It has been trained on a wide array of one shot instructions, multi turn instructions, role playing scenarios, text adventure games, co-writing, and much more. The full dataset is publicly available and can be found in the datasets section of the model page.

    There has not been any form of harmfulness alignment done on this model; please take the appropriate precautions when using it in a production environment.
  overrides:
    parameters:
      model: Dans-PersonalityEngine-v1.0.0-8b-Q4_K_M.gguf
    files:
      - filename: Dans-PersonalityEngine-v1.0.0-8b-Q4_K_M.gguf
        sha256: 193b66434c9962e278bb171a21e652f0d3f299f04e86c95f9f75ec5aa8ff006e
        uri: huggingface://bartowski/Dans-PersonalityEngine-v1.0.0-8b-GGUF/Dans-PersonalityEngine-v1.0.0-8b-Q4_K_M.gguf
- !!merge <<: *llama31
  name: "nihappy-l3.1-8b-v0.09"
  urls:
    - https://huggingface.co/Arkana08/NIHAPPY-L3.1-8B-v0.09
    - https://huggingface.co/QuantFactory/NIHAPPY-L3.1-8B-v0.09-GGUF
  description: |
    The model is a quantized version of Arkana08/NIHAPPY-L3.1-8B-v0.09 created using llama.cpp. It is a role-playing model that integrates the finest qualities of various pre-trained language models, focusing on dynamic storytelling.
  overrides:
    parameters:
      model: NIHAPPY-L3.1-8B-v0.09.Q4_K_M.gguf
    files:
      - filename: NIHAPPY-L3.1-8B-v0.09.Q4_K_M.gguf
        sha256: 9bd46a06093448b143bd2775f0fb1b1b172c851fafdce31289e13b7dfc23a0d7
        uri: huggingface://QuantFactory/NIHAPPY-L3.1-8B-v0.09-GGUF/NIHAPPY-L3.1-8B-v0.09.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama3.1-flammades-70b"
  icon: https://huggingface.co/flammenai/Flammades-Mistral-7B/resolve/main/flammades.png?download=true
  urls:
    - https://huggingface.co/flammenai/Llama3.1-Flammades-70B
    - https://huggingface.co/mradermacher/Llama3.1-Flammades-70B-GGUF
  description: |
    nbeerbower/Llama3.1-Gutenberg-Doppel-70B finetuned on flammenai/Date-DPO-NoAsterisks and jondurbin/truthy-dpo-v0.1.
  overrides:
    parameters:
      model: Llama3.1-Flammades-70B.Q4_K_M.gguf
    files:
      - filename: Llama3.1-Flammades-70B.Q4_K_M.gguf
        sha256: f602ed006d0059ac87c6ce5904a7cc6f4b4f290886a1049f96b5b2c561ab5a89
        uri: huggingface://mradermacher/Llama3.1-Flammades-70B-GGUF/Llama3.1-Flammades-70B.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama3.1-gutenberg-doppel-70b"
  # chatml
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://huggingface.co/nbeerbower/Mistral-Small-Gutenberg-Doppel-22B/resolve/main/doppel-header?download=true
  urls:
    - https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B
    - https://huggingface.co/mradermacher/Llama3.1-Gutenberg-Doppel-70B-GGUF
  description: |
    mlabonne/Hermes-3-Llama-3.1-70B-lorablated finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
  overrides:
    parameters:
      model: Llama3.1-Gutenberg-Doppel-70B.Q4_K_M.gguf
    files:
      - filename: Llama3.1-Gutenberg-Doppel-70B.Q4_K_M.gguf
        sha256: af558f954fa26c5bb75352178cb815bbf268f01c0ca0b96f2149422d4c19511b
        uri: huggingface://mradermacher/Llama3.1-Gutenberg-Doppel-70B-GGUF/Llama3.1-Gutenberg-Doppel-70B.Q4_K_M.gguf
- !!merge <<: *llama31
  name: "llama-3.1-8b-arliai-formax-v1.0-iq-arm-imatrix"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://iili.io/2HmlLn2.md.png
  urls:
    - https://huggingface.co/Lewdiculous/Llama-3.1-8B-ArliAI-Formax-v1.0-GGUF-IQ-ARM-Imatrix
  description: |
    Quants for ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0.

    "Formax is a model that specializes in following response format instructions. Tell it the format of its response and it will follow it perfectly. Great for data processing and dataset creation tasks."

    "It is also a highly uncensored model that will follow your instructions very well."
  overrides:
    parameters:
      model: Llama-3.1-8B-ArliAI-Formax-v1.0-Q4_K_M-imat.gguf
    files:
      - filename: Llama-3.1-8B-ArliAI-Formax-v1.0-Q4_K_M-imat.gguf
        sha256: b548ad47caf7008a697afb3556190359529f5a05ec0e4e48ef992c7869e14255
        uri: huggingface://Lewdiculous/Llama-3.1-8B-ArliAI-Formax-v1.0-GGUF-IQ-ARM-Imatrix/Llama-3.1-8B-ArliAI-Formax-v1.0-Q4_K_M-imat.gguf
- !!merge <<: *llama31
  name: "hermes-3-llama-3.1-70b-lorablated"
  icon: https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/4Hbw5n68jKUSBQeTqQIeT.png
  urls:
    - https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    - https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-70B-lorablated-GGUF
  description: |
    This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-70B using lorablation.
    The recipe is based on @grimjim's grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter (special thanks):
    Extraction: We extract a LoRA adapter by comparing two models: a censored Llama 3 (meta-llama/Meta-Llama-3-70B-Instruct) and an abliterated Llama 3.1 (failspy/Meta-Llama-3.1-70B-Instruct-abliterated).
    Merge: We merge this new LoRA adapter using task arithmetic to the censored NousResearch/Hermes-3-Llama-3.1-70B to abliterate it.
  overrides:
    parameters:
      model: Hermes-3-Llama-3.1-70B-lorablated.Q4_K_M.gguf
    files:
      - filename: Hermes-3-Llama-3.1-70B-lorablated.Q4_K_M.gguf
        sha256: 9294875ae3b8822855072b0f710ce800536d144cf303a91bcb087c4a307b578d
        uri: huggingface://mradermacher/Hermes-3-Llama-3.1-70B-lorablated-GGUF/Hermes-3-Llama-3.1-70B-lorablated.Q4_K_M.gguf
- !!merge <<: *llama31
|
||
name: "hermes-3-llama-3.1-8b-lorablated"
|
||
urls:
|
||
- https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated-GGUF
|
||
description: |
|
||
This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-8B using lorablation.
|
||
The recipe is simple:
|
||
Extraction: We extract a LoRA adapter by comparing two models: a censored Llama 3.1 (meta-llama/Meta-Llama-3-8B-Instruct) and an abliterated Llama 3.1 (mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated).
|
||
Merge: We merge this new LoRA adapter using task arithmetic to the censored NousResearch/Hermes-3-Llama-3.1-8B to abliterate it.
|
||
overrides:
|
||
parameters:
|
||
model: hermes-3-llama-3.1-8b-lorablated.Q4_K_M.gguf
|
||
files:
|
||
- filename: hermes-3-llama-3.1-8b-lorablated.Q4_K_M.gguf
|
||
sha256: 8cff9d399a0583616fe1f290da6daa091ab5c5493d0e173a8fffb45202d79417
|
||
uri: huggingface://mlabonne/Hermes-3-Llama-3.1-8B-lorablated-GGUF/hermes-3-llama-3.1-8b-lorablated.Q4_K_M.gguf
|
||
- !!merge <<: *llama31
|
||
name: "doctoraifinetune-3.1-8b-i1"
|
||
urls:
|
||
- https://huggingface.co/huzaifa525/Doctoraifinetune-3.1-8B
|
||
- https://huggingface.co/mradermacher/Doctoraifinetune-3.1-8B-i1-GGUF
|
||
description: |
|
||
This is a fine-tuned version of the Meta-Llama-3.1-8B-bnb-4bit model, specifically adapted for the medical field. It has been trained using a dataset that provides extensive information on diseases, symptoms, and treatments, making it ideal for AI-powered healthcare tools such as medical chatbots, virtual assistants, and diagnostic support systems.
|
||
Key Features
|
||
|
||
Disease Diagnosis: Accurately identifies diseases based on symptoms provided by the user.
|
||
Symptom Analysis: Breaks down and interprets symptoms to provide a comprehensive medical overview.
|
||
Treatment Recommendations: Suggests treatments and remedies according to medical conditions.
|
||
|
||
Dataset
|
||
|
||
The model is fine-tuned on 2000 rows from a dataset consisting of 272k rows. This dataset includes rich information about diseases, symptoms, and their corresponding treatments. The model is continuously being updated and will be further trained on the remaining data in future releases to improve accuracy and capabilities.
|
||
overrides:
|
||
parameters:
|
||
model: Doctoraifinetune-3.1-8B.i1-Q4_K_M.gguf
|
||
files:
|
||
- filename: Doctoraifinetune-3.1-8B.i1-Q4_K_M.gguf
|
||
sha256: 282456efcb6c7e54d34ac25ae7fc022a94152ed77281ae4625b9628091e0a3d6
|
||
uri: huggingface://mradermacher/Doctoraifinetune-3.1-8B-i1-GGUF/Doctoraifinetune-3.1-8B.i1-Q4_K_M.gguf
|
||
- !!merge <<: *llama31
|
||
name: "astral-fusion-neural-happy-l3.1-8b"
|
||
urls:
|
||
- https://huggingface.co/ZeroXClem/Astral-Fusion-Neural-Happy-L3.1-8B
|
||
- https://huggingface.co/mradermacher/Astral-Fusion-Neural-Happy-L3.1-8B-GGUF
|
||
description: |
|
||
Astral-Fusion-Neural-Happy-L3.1-8B is a celestial blend of magic, creativity, and dynamic storytelling. Designed to excel in instruction-following, immersive roleplaying, and magical narrative generation, this model is a fusion of the finest qualities from Astral-Fusion, NIHAPPY, and NeuralMahou. ✨🚀
|
||
|
||
This model is perfect for anyone seeking a cosmic narrative experience, with the ability to generate both precise instructional content and fantastical stories in one cohesive framework. Whether you're crafting immersive stories, creating AI roleplaying characters, or working on interactive storytelling, this model brings out the magic. 🌟
|
||
overrides:
|
||
parameters:
|
||
model: Astral-Fusion-Neural-Happy-L3.1-8B.Q4_K_M.gguf
|
||
files:
|
||
- filename: Astral-Fusion-Neural-Happy-L3.1-8B.Q4_K_M.gguf
|
||
sha256: 14a3b07c1723ef1ca24f99382254b1227d95974541e23792a4e7ff621896055d
|
||
uri: huggingface://mradermacher/Astral-Fusion-Neural-Happy-L3.1-8B-GGUF/Astral-Fusion-Neural-Happy-L3.1-8B.Q4_K_M.gguf
|
||
- &deepseek
|
||
## Deepseek
|
||
url: "github:mudler/LocalAI/gallery/deepseek.yaml@master"
|
||
name: "deepseek-coder-v2-lite-instruct"
|
||
icon: "https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true"
|
||
license: deepseek
|
||
description: |
|
||
DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from DeepSeek-Coder-V2-Base with 6 trillion tokens sourced from a high-quality and multi-source corpus. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-Coder-V2-Base, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K.
|
||
In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found in the paper.
|
||
urls:
|
||
- https://github.com/deepseek-ai/DeepSeek-Coder-V2/tree/main
|
||
- https://huggingface.co/LoneStriker/DeepSeek-Coder-V2-Lite-Instruct-GGUF
|
||
tags:
|
||
- llm
|
||
- gguf
|
||
- gpu
|
||
- deepseek
|
||
- cpu
|
||
overrides:
|
||
parameters:
|
||
model: DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf
|
||
files:
|
||
- filename: DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf
|
||
sha256: 50ec78036433265965ed1afd0667c00c71c12aa70bcf383be462cb8e159db6c0
|
||
uri: huggingface://LoneStriker/DeepSeek-Coder-V2-Lite-Instruct-GGUF/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf
|
||
- !!merge <<: *deepseek
|
||
name: "cursorcore-ds-6.7b-i1"
|
||
urls:
|
||
- https://huggingface.co/TechxGenus/CursorCore-DS-6.7B
|
||
- https://huggingface.co/mradermacher/CursorCore-DS-6.7B-i1-GGUF
|
||
description: |
|
||
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read our paper to learn more.
|
||
overrides:
|
||
parameters:
|
||
model: CursorCore-DS-6.7B.i1-Q4_K_M.gguf
|
||
files:
|
||
- filename: CursorCore-DS-6.7B.i1-Q4_K_M.gguf
|
||
sha256: 71b94496be79e5bc45c23d6aa6c242f5f1d3625b4f00fe91d781d381ef35c538
|
||
uri: huggingface://mradermacher/CursorCore-DS-6.7B-i1-GGUF/CursorCore-DS-6.7B.i1-Q4_K_M.gguf
|
||
- name: "archangel_sft_pythia2-8b"
|
||
url: "github:mudler/LocalAI/gallery/tuluv2.yaml@master"
|
||
icon: https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06
|
||
license: apache-2.0
|
||
urls:
|
||
- https://huggingface.co/ContextualAI/archangel_sft_pythia2-8b
|
||
- https://huggingface.co/RichardErkhov/ContextualAI_-_archangel_sft_pythia2-8b-gguf
|
||
- https://github.com/ContextualAI/HALOs
|
||
description: |
|
||
datasets:
|
||
- stanfordnlp/SHP
|
||
- Anthropic/hh-rlhf
|
||
- OpenAssistant/oasst1
|
||
|
||
This repo contains the model checkpoints for:
|
||
- model family pythia2-8b
|
||
- optimized with the loss SFT
|
||
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
|
||
|
||
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) which contains intructions for training your own HALOs and links to our model cards.
|
||
overrides:
|
||
parameters:
|
||
model: archangel_sft_pythia2-8b.Q4_K_M.gguf
|
||
files:
|
||
- filename: archangel_sft_pythia2-8b.Q4_K_M.gguf
|
||
sha256: a47782c55ef2b39b19644213720a599d9849511a73c9ebb0c1de749383c0a0f8
|
||
uri: huggingface://RichardErkhov/ContextualAI_-_archangel_sft_pythia2-8b-gguf/archangel_sft_pythia2-8b.Q4_K_M.gguf
|
||
- &qwen2
|
||
## Start QWEN2
|
||
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
|
||
name: "qwen2-7b-instruct"
|
||
license: apache-2.0
|
||
description: |
|
||
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen2 model.
|
||
urls:
|
||
- https://huggingface.co/Qwen/Qwen2-7B-Instruct
|
||
- https://huggingface.co/bartowski/Qwen2-7B-Instruct-GGUF
|
||
tags:
|
||
- llm
|
||
- gguf
|
||
- gpu
|
||
- qwen
|
||
- cpu
|
||
overrides:
|
||
parameters:
|
||
model: Qwen2-7B-Instruct-Q4_K_M.gguf
|
||
files:
|
||
- filename: Qwen2-7B-Instruct-Q4_K_M.gguf
|
||
sha256: 8d0d33f0d9110a04aad1711b1ca02dafc0fa658cd83028bdfa5eff89c294fe76
|
||
uri: huggingface://bartowski/Qwen2-7B-Instruct-GGUF/Qwen2-7B-Instruct-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "dolphin-2.9.2-qwen2-72b"
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png
|
||
urls:
|
||
- https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-72b-gguf
|
||
description: "Dolphin 2.9.2 Qwen2 72B \U0001F42C\n\nCurated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations\n"
|
||
overrides:
|
||
parameters:
|
||
model: dolphin-2.9.2-qwen2-Q4_K_M.gguf
|
||
files:
|
||
- filename: dolphin-2.9.2-qwen2-Q4_K_M.gguf
|
||
sha256: 44a0e82cbc2a201b2f4b9e16099a0a4d97b6f0099d45bcc5b354601f38dbb709
|
||
uri: huggingface://cognitivecomputations/dolphin-2.9.2-qwen2-72b-gguf/qwen2-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "dolphin-2.9.2-qwen2-7b"
|
||
description: "Dolphin 2.9.2 Qwen2 7B \U0001F42C\n\nCurated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations\n"
|
||
urls:
|
||
- https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b
|
||
- https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b-gguf
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png
|
||
overrides:
|
||
parameters:
|
||
model: dolphin-2.9.2-qwen2-7b-Q4_K_M.gguf
|
||
files:
|
||
- filename: dolphin-2.9.2-qwen2-7b-Q4_K_M.gguf
|
||
sha256: a15b5db4df6be4f4bfb3632b2009147332ef4c57875527f246b4718cb0d3af1f
|
||
uri: huggingface://cognitivecomputations/dolphin-2.9.2-qwen2-7b-gguf/dolphin-2.9.2-qwen2-7b-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "samantha-qwen-2-7B"
|
||
description: |
|
||
Samantha based on qwen2
|
||
urls:
|
||
- https://huggingface.co/bartowski/Samantha-Qwen-2-7B-GGUF
|
||
- https://huggingface.co/macadeliccc/Samantha-Qwen2-7B
|
||
overrides:
|
||
parameters:
|
||
model: Samantha-Qwen-2-7B-Q4_K_M.gguf
|
||
files:
|
||
- filename: Samantha-Qwen-2-7B-Q4_K_M.gguf
|
||
sha256: 5d1cf1c35a7a46c536a96ba0417d08b9f9e09c24a4e25976f72ad55d4904f6fe
|
||
uri: huggingface://bartowski/Samantha-Qwen-2-7B-GGUF/Samantha-Qwen-2-7B-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "magnum-72b-v1"
|
||
icon: https://files.catbox.moe/ngqnb1.png
|
||
description: |
|
||
This is the first in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of Qwen-2 72B Instruct.
|
||
urls:
|
||
- https://huggingface.co/alpindale/magnum-72b-v1
|
||
- https://huggingface.co/bartowski/magnum-72b-v1-GGUF
|
||
overrides:
|
||
parameters:
|
||
model: magnum-72b-v1-Q4_K_M.gguf
|
||
files:
|
||
- filename: magnum-72b-v1-Q4_K_M.gguf
|
||
sha256: 046ec48665ce64a3a4965509dee2d9d8e5d81cb0b32ca0ddf130d2b59fa4ca9a
|
||
uri: huggingface://bartowski/magnum-72b-v1-GGUF/magnum-72b-v1-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "qwen2-1.5b-ita"
|
||
description: |
|
||
Qwen2 1.5B is a compact language model specifically fine-tuned for the Italian language. Despite its relatively small size of 1.5 billion parameters, Qwen2 1.5B demonstrates strong performance, nearly matching the capabilities of larger models, such as the 9 billion parameter ITALIA model by iGenius. The fine-tuning process focused on optimizing the model for various language tasks in Italian, making it highly efficient and effective for Italian language applications.
|
||
urls:
|
||
- https://huggingface.co/DeepMount00/Qwen2-1.5B-Ita
|
||
- https://huggingface.co/DeepMount00/Qwen2-1.5B-Ita-GGUF
|
||
overrides:
|
||
parameters:
|
||
model: qwen2-1.5b-instruct-q8_0.gguf
|
||
files:
|
||
- filename: qwen2-1.5b-instruct-q8_0.gguf
|
||
sha256: c9d33989d77f4bd6966084332087921b9613eda01d5f44dc0b4e9a7382a2bfbb
|
||
uri: huggingface://DeepMount00/Qwen2-1.5B-Ita-GGUF/qwen2-1.5b-instruct-q8_0.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "einstein-v7-qwen2-7b"
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/KLQP1jK-DIzpwHzYRIH-Q.png
|
||
description: |
|
||
This model is a full fine-tuned version of Qwen/Qwen2-7B on diverse datasets.
|
||
urls:
|
||
- https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B
|
||
- https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF
|
||
overrides:
|
||
parameters:
|
||
model: Einstein-v7-Qwen2-7B-Q4_K_M.gguf
|
||
files:
|
||
- filename: Einstein-v7-Qwen2-7B-Q4_K_M.gguf
|
||
sha256: 277b212ea65894723d2b86fb0f689fa5ecb54c9794f0fd2fb643655dc62812ce
|
||
uri: huggingface://bartowski/Einstein-v7-Qwen2-7B-GGUF/Einstein-v7-Qwen2-7B-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "arcee-spark"
|
||
icon: https://i.ibb.co/80ssNWS/o-Vdk-Qx-ARNmzr-Pi1h-Efj-SA.webp
|
||
description: |
|
||
Arcee Spark is a powerful 7B parameter language model that punches well above its weight class. Initialized from Qwen2, this model underwent a sophisticated training process:
|
||
|
||
Fine-tuned on 1.8 million samples
|
||
Merged with Qwen2-7B-Instruct using Arcee's mergekit
|
||
Further refined using Direct Preference Optimization (DPO)
|
||
|
||
This meticulous process results in exceptional performance, with Arcee Spark achieving the highest score on MT-Bench for models of its size, outperforming even GPT-3.5 on many tasks.
|
||
urls:
|
||
- https://huggingface.co/arcee-ai/Arcee-Spark-GGUF
|
||
overrides:
|
||
parameters:
|
||
model: Arcee-Spark-Q4_K_M.gguf
|
||
files:
|
||
- filename: Arcee-Spark-Q4_K_M.gguf
|
||
sha256: 44123276d7845dc13f73ca4aa431dc4c931104eb7d2186f2a73d076fa0ee2330
|
||
uri: huggingface://arcee-ai/Arcee-Spark-GGUF/Arcee-Spark-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "hercules-5.0-qwen2-7b"
|
||
description: |
|
||
Locutusque/Hercules-5.0-Qwen2-7B is a fine-tuned language model derived from Qwen2-7B. It is specifically designed to excel in instruction following, function calls, and conversational interactions across various scientific and technical domains. This fine-tuning has hercules-v5.0 with enhanced abilities in:
|
||
|
||
Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
|
||
Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values.
|
||
Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more.
|
||
urls:
|
||
- https://huggingface.co/Locutusque/Hercules-5.0-Qwen2-7B
|
||
- https://huggingface.co/bartowski/Hercules-5.0-Qwen2-7B-GGUF
|
||
overrides:
|
||
parameters:
|
||
model: Hercules-5.0-Qwen2-7B-Q4_K_M.gguf
|
||
files:
|
||
- filename: Hercules-5.0-Qwen2-7B-Q4_K_M.gguf
|
||
sha256: 8ebae4ffd43b906ddb938c3a611060ee5f99c35014e5ffe23ca35714361b5693
|
||
uri: huggingface://Hercules-5.0-Qwen2-7B-Q4_K_M.gguf/Hercules-5.0-Qwen2-7B-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "arcee-agent"
|
||
icon: https://i.ibb.co/CBHmTDn/136719a5-6d8a-4654-a618-46eabc788953.jpg
|
||
description: |
|
||
Arcee Agent is a cutting-edge 7B parameter language model specifically designed for function calling and tool use. Initialized from Qwen2-7B, it rivals the performance of much larger models while maintaining efficiency and speed. This model is particularly suited for developers, researchers, and businesses looking to implement sophisticated AI-driven solutions without the computational overhead of larger language models. Compute for training Arcee-Agent was provided by CrusoeAI. Arcee-Agent was trained using Spectrum.
|
||
urls:
|
||
- https://huggingface.co/crusoeai/Arcee-Agent-GGUF
|
||
- https://huggingface.co/arcee-ai/Arcee-Agent
|
||
overrides:
|
||
parameters:
|
||
model: arcee-agent.Q4_K_M.gguf
|
||
files:
|
||
- filename: arcee-agent.Q4_K_M.gguf
|
||
sha256: ebb49943a66c1e717f9399a555aee0af28a40bfac7500f2ad8dd05f211b62aac
|
||
uri: huggingface://crusoeai/Arcee-Agent-GGUF/arcee-agent.Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "qwen2-7b-instruct-v0.8"
|
||
icon: https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.8/resolve/main/qwen2-fine-tunes-maziyar-panahi.webp
|
||
description: |
|
||
MaziyarPanahi/Qwen2-7B-Instruct-v0.8
|
||
|
||
This is a fine-tuned version of the Qwen/Qwen2-7B model. It aims to improve the base model across all benchmarks.
|
||
urls:
|
||
- https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.8
|
||
- https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.8-GGUF
|
||
overrides:
|
||
parameters:
|
||
model: Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
|
||
files:
|
||
- filename: Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
|
||
sha256: 8c1b3efe9fa6ae1b37942ef26473cb4e0aed0f8038b60d4b61e5bffb61e49b7e
|
||
uri: huggingface://MaziyarPanahi/Qwen2-7B-Instruct-v0.8-GGUF/Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "qwen2-wukong-7b"
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/xOe1Nb3S9Nb53us7_Ja3s.jpeg
|
||
urls:
|
||
- https://huggingface.co/bartowski/Qwen2-Wukong-7B-GGUF
|
||
description: |
|
||
Qwen2-Wukong-7B is a dealigned chat finetune of the original fantastic Qwen2-7B model by the Qwen team.
|
||
|
||
This model was trained on the teknium OpenHeremes-2.5 dataset and some supplementary datasets from Cognitive Computations
|
||
|
||
This model was trained for 3 epochs with a custom FA2 implementation for AMD cards.
|
||
overrides:
|
||
parameters:
|
||
model: Qwen2-Wukong-7B-Q4_K_M.gguf
|
||
files:
|
||
- filename: Qwen2-Wukong-7B-Q4_K_M.gguf
|
||
sha256: 6b8ca6649c33fc84d4892ebcff1214f0b34697aced784f0d6d32e284a15943ad
|
||
uri: huggingface://bartowski/Qwen2-Wukong-7B-GGUF/Qwen2-Wukong-7B-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "calme-2.8-qwen2-7b"
|
||
icon: https://huggingface.co/MaziyarPanahi/calme-2.8-qwen2-7b/resolve/main/qwen2-fine-tunes-maziyar-panahi.webp
|
||
urls:
|
||
- https://huggingface.co/MaziyarPanahi/calme-2.8-qwen2-7b
|
||
- https://huggingface.co/MaziyarPanahi/calme-2.8-qwen2-7b-GGUF
|
||
description: |
|
||
This is a fine-tuned version of the Qwen/Qwen2-7B model. It aims to improve the base model across all benchmarks.
|
||
overrides:
|
||
parameters:
|
||
model: Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
|
||
files:
|
||
- filename: Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
|
||
sha256: 8c1b3efe9fa6ae1b37942ef26473cb4e0aed0f8038b60d4b61e5bffb61e49b7e
|
||
uri: huggingface://MaziyarPanahi/calme-2.8-qwen2-7b-GGUF/Qwen2-7B-Instruct-v0.8.Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "stellardong-72b-i1"
|
||
icon: https://huggingface.co/smelborp/StellarDong-72b/resolve/main/stellardong.png
|
||
urls:
|
||
- https://huggingface.co/smelborp/StellarDong-72b
|
||
- https://huggingface.co/mradermacher/StellarDong-72b-i1-GGUF
|
||
description: |
|
||
Magnum + Nova = you won't believe how stellar this dong is!!
|
||
overrides:
|
||
parameters:
|
||
model: StellarDong-72b.i1-Q4_K_M.gguf
|
||
files:
|
||
- filename: StellarDong-72b.i1-Q4_K_M.gguf
|
||
sha256: 4c5012f0a034f40a044904891343ade2594f29c28a8a9d8052916de4dc5a61df
|
||
uri: huggingface://mradermacher/StellarDong-72b-i1-GGUF/StellarDong-72b.i1-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "magnum-32b-v1-i1"
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/PK7xRSd18Du0bX-w_t-9c.png
|
||
urls:
|
||
- https://huggingface.co/anthracite-org/magnum-32b-v1
|
||
- https://huggingface.co/mradermacher/magnum-32b-v1-i1-GGUF
|
||
description: |
|
||
This is the second in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of Qwen1.5 32B.
|
||
overrides:
|
||
parameters:
|
||
model: magnum-32b-v1.i1-Q4_K_M.gguf
|
||
files:
|
||
- filename: magnum-32b-v1.i1-Q4_K_M.gguf
|
||
sha256: a31704ce0d7e5b774f155522b9ab7ef6015a4ece4e9056bf4dfc6cac561ff0a3
|
||
uri: huggingface://mradermacher/magnum-32b-v1-i1-GGUF/magnum-32b-v1.i1-Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "tifa-7b-qwen2-v0.1"
|
||
urls:
|
||
- https://huggingface.co/Tifa-RP/Tifa-7B-Qwen2-v0.1-GGUF
|
||
description: |
|
||
The Tifa role-playing language model is a high-performance language model based on a self-developed 220B model distillation, with a new base model of qwen2-7B. The model has been converted to gguf format for running in the Ollama framework, providing excellent dialogue and text generation capabilities.
|
||
|
||
The original model was trained on a large-scale industrial dataset and then fine-tuned with 400GB of novel data and 20GB of multi-round dialogue directive data to achieve good role-playing effects.
|
||
|
||
The Tifa model is suitable for multi-round dialogue processing, role-playing and scenario simulation, EFX industrial knowledge integration, and high-quality literary creation.
|
||
|
||
Note: The Tifa model is in Chinese and English, with 7.6% of the data in Chinese role-playing and 4.2% in English role-playing. The model has been trained with a mix of EFX industrial field parameters and question-answer dialogues generated from 220B model outputs since 2023. The recommended quantization method is f16, as it retains more detail and accuracy in the model's performance.
|
||
overrides:
|
||
parameters:
|
||
model: tifa-7b-qwen2-v0.1.q4_k_m.gguf
|
||
files:
|
||
- filename: tifa-7b-qwen2-v0.1.q4_k_m.gguf
|
||
sha256: 1f5adbe8cb0a6400f51abdca3bf4e32284ebff73cc681a43abb35c0a6ccd3820
|
||
uri: huggingface://Tifa-RP/Tifa-7B-Qwen2-v0.1-GGUF/tifa-7b-qwen2-v0.1.q4_k_m.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "calme-2.2-qwen2-72b"
|
||
icon: https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b/resolve/main/calme-2.webp
|
||
urls:
|
||
- https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b-GGUF
|
||
- https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b
|
||
description: |
|
||
This model is a fine-tuned version of the powerful Qwen/Qwen2-72B-Instruct, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
|
||
|
||
The post-training process is identical to the calme-2.1-qwen2-72b model; however, some parameters are different, and it was trained for a longer period.
|
||
|
||
Use Cases
|
||
|
||
This model is suitable for a wide range of applications, including but not limited to:
|
||
|
||
Advanced question-answering systems
|
||
Intelligent chatbots and virtual assistants
|
||
Content generation and summarization
|
||
Code generation and analysis
|
||
Complex problem-solving and decision support
|
||
overrides:
|
||
parameters:
|
||
model: calme-2.2-qwen2-72b.Q4_K_M.gguf
|
||
files:
|
||
- filename: calme-2.2-qwen2-72b.Q4_K_M.gguf
|
||
sha256: 95b9613df0abe6c1b6b7b017d7cc8bcf19b46c29f92a503dcc6da1704b12b402
|
||
uri: huggingface://MaziyarPanahi/calme-2.2-qwen2-72b-GGUF/calme-2.2-qwen2-72b.Q4_K_M.gguf
|
||
- !!merge <<: *qwen2
|
||
name: "edgerunner-tactical-7b"
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/668ed3dcd857a9ca47edb75c/tSyuw39VtmEqvC_wptTDf.png
|
||
urls:
|
||
- https://huggingface.co/edgerunner-ai/EdgeRunner-Tactical-7B
|
||
- https://huggingface.co/RichardErkhov/edgerunner-ai_-_EdgeRunner-Tactical-7B-gguf
|
||
description: |
|
||
EdgeRunner-Tactical-7B is a powerful and efficient language model for the edge. Our mission is to build Generative AI for the edge that is safe, secure, and transparent. To that end, the EdgeRunner team is proud to release EdgeRunner-Tactical-7B, the most powerful language model for its size to date.
|
||
|
||
EdgeRunner-Tactical-7B is a 7 billion parameter language model that delivers powerful performance while demonstrating the potential of running state-of-the-art (SOTA) models at the edge.
|
||
overrides:
|
||
parameters:
|
||
model: EdgeRunner-Tactical-7B.Q4_K_M.gguf
|
||
files:
|
||
- filename: EdgeRunner-Tactical-7B.Q4_K_M.gguf
|
||
sha256: 90ca9c3ab19e5d1de4499e3f988cc0ba3d205e50285d7c89de6f0a4c525bf204
|
||
uri: huggingface://RichardErkhov/edgerunner-ai_-_EdgeRunner-Tactical-7B-gguf/EdgeRunner-Tactical-7B.Q4_K_M.gguf
|
||
- &mistral03
|
||
## START Mistral
|
||
url: "github:mudler/LocalAI/gallery/mistral-0.3.yaml@master"
|
||
name: "mistral-7b-instruct-v0.3"
|
||
icon: https://cdn-avatars.huggingface.co/v1/production/uploads/62dac1c7a8ead43d20e3e17a/wrLf5yaGC6ng4XME70w6Z.png
|
||
license: apache-2.0
|
||
description: |
|
||
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
|
||
|
||
Mistral-7B-v0.3 has the following changes compared to Mistral-7B-v0.2
|
||
|
||
Extended vocabulary to 32768
|
||
Supports v3 Tokenizer
|
||
Supports function calling
|
||
urls:
|
||
- https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
|
||
- https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF
|
||
tags:
|
||
- llm
|
||
- gguf
|
||
- gpu
|
||
- mistral
|
||
- cpu
|
||
- function-calling
|
||
overrides:
|
||
parameters:
|
||
model: Mistral-7B-Instruct-v0.3.Q4_K_M.gguf
|
||
files:
|
||
- filename: "Mistral-7B-Instruct-v0.3.Q4_K_M.gguf"
|
||
sha256: "14850c84ff9f06e9b51d505d64815d5cc0cea0257380353ac0b3d21b21f6e024"
|
||
uri: "huggingface://MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF/Mistral-7B-Instruct-v0.3.Q4_K_M.gguf"
|
||
- !!merge <<: *mistral03
|
||
name: "mathstral-7b-v0.1-imat"
|
||
url: "github:mudler/LocalAI/gallery/mathstral.yaml@master"
|
||
urls:
|
||
- https://huggingface.co/mistralai/mathstral-7B-v0.1
|
||
- https://huggingface.co/InferenceIllusionist/mathstral-7B-v0.1-iMat-GGUF
|
||
description: |
|
||
Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B. You can read more in the official blog post https://mistral.ai/news/mathstral/.
|
||
overrides:
|
||
parameters:
|
||
model: mathstral-7B-v0.1-iMat-Q4_K_M.gguf
|
||
files:
|
||
- filename: mathstral-7B-v0.1-iMat-Q4_K_M.gguf
|
||
sha256: 3ba94b7a8283ffa319c9ce23657f91ecf221ceada167c1253906cf56d72e8f90
|
||
uri: huggingface://InferenceIllusionist/mathstral-7B-v0.1-iMat-GGUF/mathstral-7B-v0.1-iMat-Q4_K_M.gguf
|
||
- !!merge <<: *mistral03
|
||
name: "mahou-1.3d-mistral-7b-i1"
|
||
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
|
||
icon: https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png
|
||
urls:
|
||
- https://huggingface.co/flammenai/Mahou-1.3d-mistral-7B
|
||
- https://huggingface.co/mradermacher/Mahou-1.3d-mistral-7B-i1-GGUF
|
||
description: |
|
||
Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
|
||
overrides:
|
||
parameters:
|
||
model: Mahou-1.3d-mistral-7B.i1-Q4_K_M.gguf
|
||
files:
|
||
- filename: Mahou-1.3d-mistral-7B.i1-Q4_K_M.gguf
|
||
sha256: 8272f050e36d612ab282e095cb4e775e2c818e7096f8d522314d256923ef6da9
|
||
uri: huggingface://mradermacher/Mahou-1.3d-mistral-7B-i1-GGUF/Mahou-1.3d-mistral-7B.i1-Q4_K_M.gguf
|
||
- name: "einstein-v4-7b"
|
||
url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/U0zyXVGj-O8a7KP3BvPue.png
|
||
urls:
|
||
- https://huggingface.co/Weyaxi/Einstein-v4-7B
|
||
- https://huggingface.co/mradermacher/Einstein-v4-7B-GGUF
|
||
tags:
|
||
- llm
|
||
- gguf
|
||
- gpu
|
||
- mistral
|
||
- cpu
|
||
description: "\U0001F52C Einstein-v4-7B\n\nThis model is a full fine-tuned version of mistralai/Mistral-7B-v0.1 on diverse datasets.\n\nThis model is finetuned using 7xRTX3090 + 1xRTXA6000 using axolotl.\n"
|
||
overrides:
|
||
parameters:
|
||
model: Einstein-v4-7B.Q4_K_M.gguf
|
||
files:
|
||
- filename: Einstein-v4-7B.Q4_K_M.gguf
|
||
sha256: 78bd573de2a9eb3c6e213132858164e821145f374fcaa4b19dfd6502c05d990d
|
||
uri: huggingface://mradermacher/Einstein-v4-7B-GGUF/Einstein-v4-7B.Q4_K_M.gguf
|
||
- !!merge <<: *mistral03
|
||
name: "mistral-nemo-instruct-2407"
|
||
urls:
|
||
- https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407
|
||
- https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF
|
||
- https://mistral.ai/news/mistral-nemo/
|
||
description: |
|
||
The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.
|
||
overrides:
|
||
parameters:
|
||
model: Mistral-Nemo-Instruct-2407-Q4_K_M.gguf
|
||
files:
|
||
- filename: Mistral-Nemo-Instruct-2407-Q4_K_M.gguf
|
||
sha256: 1a8b92fb546a80dce78151e4908f7bdb2c11fb3ef52af960e4bbe319a9cc5052
|
||
uri: huggingface://bartowski/Mistral-Nemo-Instruct-2407-GGUF/Mistral-Nemo-Instruct-2407-Q4_K_M.gguf
|
||
- !!merge <<: *mistral03
  name: "lumimaid-v0.2-12b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ep3ojmuMkFS-GmgRuI9iB.png
  urls:
    - https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B
    - https://huggingface.co/mudler/Lumimaid-v0.2-12B-Q4_K_M-GGUF
  description: |
    This model is based on: Mistral-Nemo-Instruct-2407

    Wandb: https://wandb.ai/undis95/Lumi-Mistral-Nemo?nw=nwuserundis95

    NOTE: As explained on Mistral-Nemo-Instruct-2407 repo, it's recommended to use a low temperature, please experiment!

    Lumimaid 0.1 -> 0.2 is a HUGE step up dataset wise.

    As some people have told us our models are sloppy, Ikari decided to say fuck it and literally nuke all chats out with most slop.

    Our dataset stayed the same since day one, we added data over time, cleaned them, and repeat. After not releasing model for a while because we were never satisfied, we think it's time to come back!
  overrides:
    parameters:
      model: lumimaid-v0.2-12b-q4_k_m.gguf
    files:
      - filename: lumimaid-v0.2-12b-q4_k_m.gguf
        sha256: f72299858a07e52be920b86d42ddcfcd5008b961d601ef6fd6a98a3377adccbf
        uri: huggingface://mudler/Lumimaid-v0.2-12B-Q4_K_M-GGUF/lumimaid-v0.2-12b-q4_k_m.gguf
- !!merge <<: *mistral03
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "mn-12b-celeste-v1.9"
  icon: https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/QcU3xEgVu18jeFtMFxIw-.webp
  urls:
    - https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9
    - https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-GGUF
  description: |
    Mistral Nemo 12B Celeste V1.9

    This is a story writing and roleplaying model trained on Mistral NeMo 12B Instruct at 8K context using Reddit Writing Prompts, Kalo's Opus 25K Instruct and c2 logs cleaned

    This version has improved NSFW, smarter and more active narration. It's also trained with ChatML tokens so there should be no EOS bleeding whatsoever.
  overrides:
    parameters:
      model: MN-12B-Celeste-V1.9.Q4_K_M.gguf
    files:
      - filename: MN-12B-Celeste-V1.9.Q4_K_M.gguf
        sha256: 019daeaa63d82d55d1ea623b9c255deea6793af4044bb4994d2b4d09e8959f7b
        uri: huggingface://mradermacher/MN-12B-Celeste-V1.9-GGUF/MN-12B-Celeste-V1.9.Q4_K_M.gguf
- !!merge <<: *mistral03
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/ybqwvRJAtBPqtulQlKW93.gif
  name: "rocinante-12b-v1.1"
  urls:
    - https://huggingface.co/TheDrummer/Rocinante-12B-v1.1-GGUF
    - https://huggingface.co/TheDrummer/Rocinante-12B-v1.1
  description: |
    A versatile workhorse for any adventure!
  overrides:
    parameters:
      model: Rocinante-12B-v1.1-Q4_K_M.gguf
    files:
      - filename: Rocinante-12B-v1.1-Q4_K_M.gguf
        sha256: bdeaeefac79cff944ae673e6924c9f82f7eed789669a32a09997db398790b0b5
        uri: huggingface://TheDrummer/Rocinante-12B-v1.1-GGUF/Rocinante-12B-v1.1-Q4_K_M.gguf
- !!merge <<: *mistral03
  name: "pantheon-rp-1.6-12b-nemo"
  icon: https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo/resolve/main/Pantheon.png
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF
    - https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo
  description: |
    Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of personas that can be summoned with a simple activation phrase. The huge variety in personalities introduced also serves to enhance the general roleplay experience.
    Changes in version 1.6:
    The final finetune now consists of data that is equally split between Markdown and novel-style roleplay. This should solve Pantheon's greatest weakness.
    The base was redone. (Details below)
    Select Claude-specific phrases were rewritten, boosting variety in the model's responses.
    Aiva no longer serves as both persona and assistant, with the assistant role having been given to Lyra.
    Stella's dialogue received some post-fix alterations since the model really loved the phrase "Fuck me sideways".
    Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
  overrides:
    parameters:
      model: Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf
    files:
      - filename: Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf
        sha256: cf3465c183bf4ecbccd1b6b480f687e0160475b04c87e2f1e5ebc8baa0f4c7aa
        uri: huggingface://bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf
- !!merge <<: *mistral03
  name: "acolyte-22b-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/3dcGMcrWK2-2vQh9QBt3o.png
  urls:
    - https://huggingface.co/rAIfle/Acolyte-22B
    - https://huggingface.co/mradermacher/Acolyte-22B-i1-GGUF
  description: |
    LoRA of a bunch of random datasets on top of Mistral-Small-Instruct-2409, then SLERPed onto base at 0.5. Decent enough for its size. Check the LoRA for dataset info.
  overrides:
    parameters:
      model: Acolyte-22B.i1-Q4_K_M.gguf
    files:
      - filename: Acolyte-22B.i1-Q4_K_M.gguf
        sha256: 5a454405b98b6f886e8e4c695488d8ea098162bb8c46f2a7723fc2553c6e2f6e
        uri: huggingface://mradermacher/Acolyte-22B-i1-GGUF/Acolyte-22B.i1-Q4_K_M.gguf
- !!merge <<: *mistral03
  name: "mn-12b-lyra-v4-iq-imatrix"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/dVoru83WOpwVjMlgZ_xhA.png
  # chatml
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/Lewdiculous/MN-12B-Lyra-v4-GGUF-IQ-Imatrix
  description: |
    A finetune of Mistral Nemo by Sao10K.
    Uses the ChatML prompt format.
  overrides:
    parameters:
      model: MN-12B-Lyra-v4-Q4_K_M-imat.gguf
    files:
      - filename: MN-12B-Lyra-v4-Q4_K_M-imat.gguf
        sha256: 1989123481ca1936c8a2cbe278ff5d1d2b0ae63dbdc838bb36a6d7547b8087b3
        uri: huggingface://Lewdiculous/MN-12B-Lyra-v4-GGUF-IQ-Imatrix/MN-12B-Lyra-v4-Q4_K_M-imat.gguf
- !!merge <<: *mistral03
  name: "magnusintellectus-12b-v1-i1"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/66b564058d9afb7a9d5607d5/hUVJI1Qa4tCMrZWMgYkoD.png
  urls:
    - https://huggingface.co/GalrionSoftworks/MagnusIntellectus-12B-v1
    - https://huggingface.co/mradermacher/MagnusIntellectus-12B-v1-i1-GGUF
  description: |
    How pleasant, the rocks appear to have made a decent conglomerate. A-.

    MagnusIntellectus is a merge of the following models using LazyMergekit:

    UsernameJustAnother/Nemo-12B-Marlin-v5
    anthracite-org/magnum-12b-v2
  overrides:
    parameters:
      model: MagnusIntellectus-12B-v1.i1-Q4_K_M.gguf
    files:
      - filename: MagnusIntellectus-12B-v1.i1-Q4_K_M.gguf
        sha256: c97107983b4edc5b6f2a592d227ca2dd4196e2af3d3bc0fe6b7a8954a1fb5870
        uri: huggingface://mradermacher/MagnusIntellectus-12B-v1-i1-GGUF/MagnusIntellectus-12B-v1.i1-Q4_K_M.gguf
- !!merge <<: *mistral03
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "mn-backyardai-party-12b-v1-iq-arm-imatrix"
  icon: https://huggingface.co/Sao10K/MN-BackyardAI-Party-12B-v1/resolve/main/party1.png
  urls:
    - https://huggingface.co/Sao10K/MN-BackyardAI-Party-12B-v1
    - https://huggingface.co/Lewdiculous/MN-BackyardAI-Party-12B-v1-GGUF-IQ-ARM-Imatrix
  description: |
    This is a group-chat based roleplaying model, based off of 12B-Lyra-v4a2, a variant of Lyra-v4 that is currently private.

    It is trained on an entirely human-based dataset, based on forum / internet group roleplaying styles. The only augmentation done with LLMs is to the character sheets, to fit to the system prompt, to fit various character sheets within context.

    This model is still capable of 1 on 1 roleplay, though I recommend using ChatML when doing that instead.
  overrides:
    parameters:
      model: MN-BackyardAI-Party-12B-v1-Q4_K_M-imat.gguf
    files:
      - filename: MN-BackyardAI-Party-12B-v1-Q4_K_M-imat.gguf
        sha256: cea68768dff58b553974b755bb40ef790ab8b86866d9b5c46bc2e6c3311b876a
        uri: huggingface://Lewdiculous/MN-BackyardAI-Party-12B-v1-GGUF-IQ-ARM-Imatrix/MN-BackyardAI-Party-12B-v1-Q4_K_M-imat.gguf
- !!merge <<: *mistral03
  name: "ml-ms-etheris-123b"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/ieEjL3TxpDM3WAZQcya6E.png
  urls:
    - https://huggingface.co/Steelskull/ML-MS-Etheris-123B
    - https://huggingface.co/mradermacher/ML-MS-Etheris-123B-GGUF
  description: |
    This model merges the robust storytelling of multiple models while attempting to maintain intelligence. The final model was merged after Model Soup with DELLA to add some special sauce.
    - model: NeverSleep/Lumimaid-v0.2-123B
    - model: TheDrummer/Behemoth-123B-v1
    - model: migtissera/Tess-3-Mistral-Large-2-123B
    - model: anthracite-org/magnum-v2-123b
    Use Mistral, ChatML, or Meth Format
  overrides:
    parameters:
      model: ML-MS-Etheris-123B.Q2_K.gguf
    files:
      - filename: ML-MS-Etheris-123B.Q2_K.gguf
        sha256: a17c5615413b5c9c8d01cf55386573d0acd00e01f6e2bcdf492624c73c593fc3
        uri: huggingface://mradermacher/ML-MS-Etheris-123B-GGUF/ML-MS-Etheris-123B.Q2_K.gguf
- &mudler
  ### START mudler's LocalAI specific-models
  url: "github:mudler/LocalAI/gallery/mudler.yaml@master"
  name: "LocalAI-llama3-8b-function-call-v0.2"
  icon: "https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/us5JKi9z046p8K-cn_M0w.webp"
  license: llama3
  description: |
    This model is a fine-tune on a custom dataset + glaive to work specifically and leverage all the LocalAI features of constrained grammar.

    Specifically, once it enters tools mode, the model will always reply with JSON.
  urls:
    - https://huggingface.co/mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF
    - https://huggingface.co/mudler/LocalAI-Llama3-8b-Function-Call-v0.2
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3
    - function-calling
  overrides:
    parameters:
      model: LocalAI-Llama3-8b-Function-Call-v0.2-q4_k_m.bin
    files:
      - filename: LocalAI-Llama3-8b-Function-Call-v0.2-q4_k_m.bin
        sha256: 7e46405ce043cbc8d30f83f26a5655dc8edf5e947b748d7ba2745bd0af057a41
        uri: huggingface://mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF/LocalAI-Llama3-8b-Function-Call-v0.2-q4_k_m.bin
- !!merge <<: *mudler
  icon: "https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/SKuXcvmZ_6oD4NCMkvyGo.png"
  name: "mirai-nova-llama3-LocalAI-8b-v0.1"
  urls:
    - https://huggingface.co/mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.1-GGUF
    - https://huggingface.co/mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.1
  description: |
    Mirai Nova: "Mirai" means future in Japanese, and "Nova" references a star showing a sudden large increase in brightness.

    A set of models oriented in function calling, but generalist and with enhanced reasoning capability. This is fine tuned with Llama3.

    Mirai Nova works particularly well with LocalAI, leveraging the function call with grammars feature out of the box.
  overrides:
    parameters:
      model: Mirai-Nova-Llama3-LocalAI-8B-v0.1-q4_k_m.bin
    files:
      - filename: Mirai-Nova-Llama3-LocalAI-8B-v0.1-q4_k_m.bin
        sha256: 579cbb229f9c11d0330759ff4733102d2491615a4c61289e26c09d1b3a583fec
        uri: huggingface://mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.1-GGUF/Mirai-Nova-Llama3-LocalAI-8B-v0.1-q4_k_m.bin
- &parler-tts
  ### START parler-tts
  url: "github:mudler/LocalAI/gallery/parler-tts.yaml@master"
  name: parler-tts-mini-v0.1
  parameters:
    model: parler-tts/parler_tts_mini_v0.1
  license: apache-2.0
  description: |
    Parler-TTS is a lightweight text-to-speech (TTS) model that can generate high-quality, natural sounding speech in the style of a given speaker (gender, pitch, speaking style, etc). It is a reproduction of work from the paper Natural language guidance of high-fidelity text-to-speech with synthetic annotations by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
  urls:
    - https://github.com/huggingface/parler-tts
  tags:
    - tts
    - gpu
    - cpu
    - text-to-speech
    - python
- &rerankers
  ### START rerankers
  url: "github:mudler/LocalAI/gallery/rerankers.yaml@master"
  name: cross-encoder
  parameters:
    model: cross-encoder
  license: apache-2.0
  description: |
    A cross-encoder model that can be used for reranking
  tags:
    - reranker
    - gpu
    - python
## LLMs
### START LLAMA3
- name: "einstein-v6.1-llama3-8b"
  url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/5s12oq859qLfDkkTNam_C.png
  urls:
    - https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3
  license: llama3
  description: |
    This model is a full fine-tuned version of meta-llama/Meta-Llama-3-8B on diverse datasets.

    This model is finetuned using 8xRTX3090 + 1xRTXA6000 using axolotl.
  overrides:
    parameters:
      model: Einstein-v6.1-Llama3-8B-Q4_K_M.gguf
    files:
      - filename: Einstein-v6.1-Llama3-8B-Q4_K_M.gguf
        sha256: 447587bd8f60d9050232148d34fdb2d88b15b2413fd7f8e095a4606ec60b45bf
        uri: huggingface://bartowski/Einstein-v6.1-Llama3-8B-GGUF/Einstein-v6.1-Llama3-8B-Q4_K_M.gguf
- &gemma
  url: "github:mudler/LocalAI/gallery/gemma.yaml@master"
  name: "gemma-2b"
  license: gemma
  urls:
    - https://ai.google.dev/gemma/docs
    - https://huggingface.co/mlabonne/gemma-2b-GGUF
  description: |
    Open source LLM from Google
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - gemma
  overrides:
    parameters:
      model: gemma-2b.Q4_K_M.gguf
    files:
      - filename: gemma-2b.Q4_K_M.gguf
        sha256: 37d50c21ef7847926204ad9b3007127d9a2722188cfd240ce7f9f7f041aa71a5
        uri: huggingface://mlabonne/gemma-2b-GGUF/gemma-2b.Q4_K_M.gguf
- !!merge <<: *gemma
  name: "firefly-gemma-7b-iq-imatrix"
  icon: "https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/SrOekTxdpnxHyWWmMiAvc.jpeg"
  urls:
    - https://huggingface.co/Lewdiculous/firefly-gemma-7b-GGUF-IQ-Imatrix
    - https://huggingface.co/YeungNLP/firefly-gemma-7b
  description: |
    firefly-gemma-7b is trained based on gemma-7b to act as a helpful and harmless AI assistant. We use Firefly to train the model on a single V100 GPU with QLoRA.
  overrides:
    parameters:
      model: firefly-gemma-7b-Q4_K_S-imatrix.gguf
    files:
      - filename: firefly-gemma-7b-Q4_K_S-imatrix.gguf
        sha256: 622e0b8e4f12203cc40c7f87915abf99498c2e0582203415ca236ea37643e428
        uri: huggingface://Lewdiculous/firefly-gemma-7b-GGUF-IQ-Imatrix/firefly-gemma-7b-Q4_K_S-imatrix.gguf
- !!merge <<: *gemma
  name: "gemma-1.1-7b-it"
  urls:
    - https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF
    - https://huggingface.co/google/gemma-1.1-7b-it
  description: |
    This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release.

    Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with "Sure,".
  overrides:
    parameters:
      model: gemma-1.1-7b-it-Q4_K_M.gguf
    files:
      - filename: gemma-1.1-7b-it-Q4_K_M.gguf
        sha256: 47821da72ee9e80b6fd43c6190ad751b485fb61fa5664590f7a73246bcd8332e
        uri: huggingface://bartowski/gemma-1.1-7b-it-GGUF/gemma-1.1-7b-it-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "gemma-2-27b-it"
  urls:
    - https://huggingface.co/google/gemma-2-27b-it
    - https://huggingface.co/bartowski/gemma-2-27b-it-GGUF
  description: |
    Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
  overrides:
    parameters:
      model: gemma-2-27b-it-Q4_K_M.gguf
    files:
      - filename: gemma-2-27b-it-Q4_K_M.gguf
        uri: huggingface://bartowski/gemma-2-27b-it-GGUF/gemma-2-27b-it-Q4_K_M.gguf
        sha256: 503a87ab47c9e7fb27545ec8592b4dc4493538bd47b397ceb3197e10a0370d23
- !!merge <<: *gemma
  name: "gemma-2-9b-it"
  urls:
    - https://huggingface.co/google/gemma-2-9b-it
    - https://huggingface.co/bartowski/gemma-2-9b-it-GGUF
  description: |
    Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
  overrides:
    parameters:
      model: gemma-2-9b-it-Q4_K_M.gguf
    files:
      - filename: gemma-2-9b-it-Q4_K_M.gguf
        uri: huggingface://bartowski/gemma-2-9b-it-GGUF/gemma-2-9b-it-Q4_K_M.gguf
        sha256: 13b2a7b4115bbd0900162edcebe476da1ba1fc24e718e8b40d32f6e300f56dfe
- !!merge <<: *gemma
  name: "tess-v2.5-gemma-2-27b-alpha"
  urls:
    - https://huggingface.co/migtissera/Tess-v2.5-Gemma-2-27B-alpha
    - https://huggingface.co/bartowski/Tess-v2.5-Gemma-2-27B-alpha-GGUF
  icon: https://huggingface.co/migtissera/Tess-v2.5-Qwen2-72B/resolve/main/Tess-v2.5.png
  description: |
    Great at reasoning, but woke as fuck! This is a fine-tune over the Gemma-2-27B-it, since the base model fine-tuning is not generating coherent content.

    Tess-v2.5 is the latest state-of-the-art model in the Tess series of Large Language Models (LLMs). Tess, short for Tesoro (Treasure in Italian), is the flagship LLM series created by Migel Tissera. Tess-v2.5 brings significant improvements in reasoning capabilities, coding capabilities and mathematics
  overrides:
    parameters:
      model: Tess-v2.5-Gemma-2-27B-alpha-Q4_K_M.gguf
    files:
      - filename: Tess-v2.5-Gemma-2-27B-alpha-Q4_K_M.gguf
        uri: huggingface://bartowski/Tess-v2.5-Gemma-2-27B-alpha-GGUF/Tess-v2.5-Gemma-2-27B-alpha-Q4_K_M.gguf
        sha256: d7be7092d28aefbdcd1ee4f4d8503d169d0a649f763e169d4b179aef20d69c21
- !!merge <<: *gemma
  name: "gemma2-9b-daybreak-v0.5"
  urls:
    - https://huggingface.co/crestf411/gemma2-9B-daybreak-v0.5
    - https://huggingface.co/Vdr1/gemma2-9B-daybreak-v0.5-GGUF-Imatrix-IQ
  description: |
    THIS IS A PRE-RELEASE. BEGONE.

    Beware, depraved. Not suitable for any audience.

    Dataset curation to remove slop-perceived expressions continues. Unfortunately base models (which this is merged on top of) are generally riddled with "barely audible"s and "couldn't help"s and "shivers down spines" etc.
  overrides:
    parameters:
      model: gemma2-9B-daybreak-v0.5-Q4_K_M-imat.gguf
    files:
      - filename: gemma2-9B-daybreak-v0.5-Q4_K_M-imat.gguf
        uri: huggingface://Vdr1/gemma2-9B-daybreak-v0.5-GGUF-Imatrix-IQ/gemma2-9B-daybreak-v0.5-Q4_K_M-imat.gguf
        sha256: 6add4d12052918986af935d686773e4e89fddd1bbf7941911cf3fbeb1b1862c0
- !!merge <<: *gemma
  name: "gemma-2-9b-it-sppo-iter3"
  urls:
    - https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
    - https://huggingface.co/bartowski/Gemma-2-9B-It-SPPO-Iter3-GGUF
  description: |
    Self-Play Preference Optimization for Language Model Alignment (https://arxiv.org/abs/2405.00675)
    Gemma-2-9B-It-SPPO-Iter3

    This model was developed using Self-Play Preference Optimization at iteration 3, based on the google/gemma-2-9b-it architecture as the starting point. We utilized the prompt sets from the openbmb/UltraFeedback dataset, split into 3 parts for 3 iterations by snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset. All responses used are synthetic.
  overrides:
    parameters:
      model: Gemma-2-9B-It-SPPO-Iter3-Q4_K_M.gguf
    files:
      - filename: Gemma-2-9B-It-SPPO-Iter3-Q4_K_M.gguf
        uri: huggingface://bartowski/Gemma-2-9B-It-SPPO-Iter3-GGUF/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M.gguf
        sha256: c04482b442f05b784ab33af30caa0ea0645deb67fb359d3fad4932f4bb04e12d
- !!merge <<: *gemma
  name: "smegmma-9b-v1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/RSuc5p9Sm6CYj6lGOxvx4.gif
  urls:
    - https://huggingface.co/TheDrummer/Smegmma-9B-v1
    - https://huggingface.co/bartowski/Smegmma-9B-v1-GGUF
  description: "Smegmma 9B v1 \U0001F9C0\n\nThe sweet moist of Gemma 2, unhinged.\n\nsmeg - ghem - mah\n\nAn eRP model that will blast you with creamy moist. Finetuned by yours truly.\n\nThe first Gemma 2 9B RP finetune attempt!\nWhat's New?\n\n Engaging roleplay\n Less refusals / censorship\n Less commentaries / summaries\n More willing AI\n Better formatting\n Better creativity\n Moist alignment\n\nNotes\n\n Refusals still exist, but a couple of re-gens may yield the result you want\n Formatting and logic may be weaker at the start\n Make sure to start strong\n May be weaker with certain cards, YMMV and adjust accordingly!\n"
  overrides:
    parameters:
      model: Smegmma-9B-v1-Q4_K_M.gguf
    files:
      - filename: Smegmma-9B-v1-Q4_K_M.gguf
        uri: huggingface://bartowski/Smegmma-9B-v1-GGUF/Smegmma-9B-v1-Q4_K_M.gguf
        sha256: abd9da0a6bf5cbc0ed6bb0d7e3ee7aea3f6b1edbf8c64e51d0fa25001975aed7
- !!merge <<: *gemma
  name: "smegmma-deluxe-9b-v1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/RSuc5p9Sm6CYj6lGOxvx4.gif
  urls:
    - https://huggingface.co/TheDrummer/Smegmma-Deluxe-9B-v1
    - https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF
  description: "Smegmma Deluxe 9B v1 \U0001F9C0\n\nThe sweet moist of Gemma 2, unhinged.\n\nsmeg - ghem - mah\n\nAn eRP model that will blast you with creamy moist. Finetuned by yours truly.\n\nThe first Gemma 2 9B RP finetune attempt!\n\nWhat's New?\n\n Engaging roleplay\n Less refusals / censorship\n Less commentaries / summaries\n More willing AI\n Better formatting\n Better creativity\n Moist alignment\n"
  overrides:
    parameters:
      model: Smegmma-Deluxe-9B-v1-Q4_K_M.gguf
    files:
      - filename: Smegmma-Deluxe-9B-v1-Q4_K_M.gguf
        uri: huggingface://bartowski/Smegmma-Deluxe-9B-v1-GGUF/Smegmma-Deluxe-9B-v1-Q4_K_M.gguf
        sha256: 732ecb253ea0115453438fc1f4e3e31507719ddcf81890a86ad1d734beefdb6f
- !!merge <<: *gemma
  name: "tiger-gemma-9b-v1-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/A97OlLKeT4XOnv4IG1b6m.png
  urls:
    - https://huggingface.co/TheDrummer/Tiger-Gemma-9B-v1
    - https://huggingface.co/mradermacher/Tiger-Gemma-9B-v1-i1-GGUF
  description: |
    Tiger Gemma 9B v1

    Decensored Gemma 9B. No refusals so far. No apparent brain damage.

    In memory of Tiger
  overrides:
    parameters:
      model: Tiger-Gemma-9B-v1.i1-Q4_K_M.gguf
    files:
      - filename: Tiger-Gemma-9B-v1.i1-Q4_K_M.gguf
        sha256: ef10accfee8023b31def5425bf591bf1f0203090f3dd851cd3f37bb235324383
        uri: huggingface://mradermacher/Tiger-Gemma-9B-v1-i1-GGUF/Tiger-Gemma-9B-v1.i1-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "hodachi-ezo-humanities-9b-gemma-2-it"
  icon: https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/0OYFqT8kACowa9bY1EZF6.png
  urls:
    - https://huggingface.co/HODACHI/EZO-Humanities-9B-gemma-2-it
    - https://huggingface.co/mmnga/HODACHI-EZO-Humanities-9B-gemma-2-it-gguf
  description: |
    This model is based on Gemma-2-9B-it, specially tuned to enhance its performance in Humanities-related tasks. While maintaining its strong foundation in Japanese language processing, it has been optimized to excel in areas such as literature, philosophy, history, and cultural studies. This focused approach allows the model to provide deeper insights and more nuanced responses in Humanities fields, while still being capable of handling a wide range of global inquiries.

    Gemma-2-9B-itをベースとして、人文科学(Humanities)関連タスクでの性能向上に特化したチューニングを施したモデルです。日本語処理の強固な基盤を維持しつつ、文学、哲学、歴史、文化研究などの分野で卓越した能力を発揮するよう最適化されています。この焦点を絞ったアプローチにより、人文科学分野でより深い洞察と繊細な応答を提供しながら、同時に幅広いグローバルな問い合わせにも対応できる能力を備えています。
  overrides:
    parameters:
      model: HODACHI-EZO-Humanities-9B-gemma-2-it-Q4_K_M.gguf
    files:
      - filename: HODACHI-EZO-Humanities-9B-gemma-2-it-Q4_K_M.gguf
        sha256: 11606130206347355785f5a2720ff2fa671ca7fbe2af3fb4c34b508389952424
        uri: huggingface://mmnga/HODACHI-EZO-Humanities-9B-gemma-2-it-gguf/HODACHI-EZO-Humanities-9B-gemma-2-it-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "ezo-common-9b-gemma-2-it"
  icon: https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/0OYFqT8kACowa9bY1EZF6.png
  urls:
    - https://huggingface.co/HODACHI/EZO-Common-9B-gemma-2-it
    - https://huggingface.co/QuantFactory/EZO-Common-9B-gemma-2-it-GGUF
  description: |
    This model is based on Gemma-2-9B-it, enhanced with multiple tuning techniques to improve its general performance. While it excels in Japanese language tasks, it's designed to meet diverse needs globally.

    Gemma-2-9B-itをベースとして、複数のチューニング手法を採用のうえ、汎用的に性能を向上させたモデルです。日本語タスクに優れつつ、世界中の多様なニーズに応える設計となっています。
  overrides:
    parameters:
      model: EZO-Common-9B-gemma-2-it.Q4_K_M.gguf
    files:
      - filename: EZO-Common-9B-gemma-2-it.Q4_K_M.gguf
        sha256: 57678b1828673dccb15f76e52b00672c74aa6169421bbb8620b8955955322cfd
        uri: huggingface://QuantFactory/EZO-Common-9B-gemma-2-it-GGUF/EZO-Common-9B-gemma-2-it.Q4_K_M.gguf
- !!merge <<: *gemma
  name: "big-tiger-gemma-27b-v1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/A97OlLKeT4XOnv4IG1b6m.png
  urls:
    - https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v1
    - https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v1-GGUF
  description: |
    Big Tiger Gemma 27B v1 is a Decensored Gemma 27B model with no refusals, except for some rare instances from the 9B model. It does not appear to have any brain damage. The model is available from various sources, including Hugging Face, and comes in different variations such as GGUF, iMatrix, and EXL2.
  overrides:
    parameters:
      model: Big-Tiger-Gemma-27B-v1c-Q4_K_M.gguf
    files:
      - filename: Big-Tiger-Gemma-27B-v1c-Q4_K_M.gguf
        sha256: c5fc5605d36ae280c1c908c9b4bcb12b28abbe2692f317edeb83ab1104657fe5
        uri: huggingface://TheDrummer/Big-Tiger-Gemma-27B-v1-GGUF/Big-Tiger-Gemma-27B-v1c-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "gemma-2b-translation-v0.150"
  urls:
    - https://huggingface.co/lemon-mint/gemma-2b-translation-v0.150
    - https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.150-gguf
  description: |
    Original model: lemon-mint/gemma-ko-1.1-2b-it
    Evaluation metrics: Eval Loss, Train Loss, lr, optimizer, lr_scheduler_type.
    Prompt Template:
    <bos><start_of_turn>user
    Translate into Korean: [input text]<end_of_turn>
    <start_of_turn>model
    [translated text in Korean]<eos>
    <bos><start_of_turn>user
    Translate into English: [Korean text]<end_of_turn>
    <start_of_turn>model
    [translated text in English]<eos>
    Model features:
    * Developed by: lemon-mint
    * Model type: Gemma
    * Languages (NLP): English
    * License: Gemma Terms of Use
    * Finetuned from model: lemon-mint/gemma-ko-1.1-2b-it
  overrides:
    parameters:
      model: gemma-2b-translation-v0.150.Q4_K_M.gguf
    files:
      - filename: gemma-2b-translation-v0.150.Q4_K_M.gguf
        sha256: dcde67b83168d2e7ca835cf9a7a4dcf38b41b9cefe3cbc997c71d2741c08cd25
        uri: huggingface://RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.150-gguf/gemma-2b-translation-v0.150.Q4_K_M.gguf
- !!merge <<: *gemma
  name: "emo-2b"
  urls:
    - https://huggingface.co/OEvortex/EMO-2B
    - https://huggingface.co/RichardErkhov/OEvortex_-_EMO-2B-gguf
  description: |
    EMO-2B: Emotionally Intelligent Conversational AI

    Overview:
    EMO-2B is a state-of-the-art conversational AI model with 2.5 billion parameters, designed to engage in emotionally resonant dialogue. Building upon the success of EMO-1.5B, this model has been further fine-tuned on an extensive corpus of emotional narratives, enabling it to perceive and respond to the emotional undertones of user inputs with exceptional empathy and emotional intelligence.

    Key Features:

    - Advanced Emotional Intelligence: With its increased capacity, EMO-2B demonstrates an even deeper understanding and generation of emotional language, allowing for more nuanced and contextually appropriate emotional responses.
    - Enhanced Contextual Awareness: The model considers an even broader context within conversations, accounting for subtle emotional cues and providing emotionally resonant responses tailored to the specific situation.
    - Empathetic and Supportive Dialogue: EMO-2B excels at active listening, validating emotions, offering compassionate advice, and providing emotional support, making it an ideal companion for users seeking empathy and understanding.
    - Dynamic Persona Adaptation: The model can dynamically adapt its persona, communication style, and emotional responses to match the user's emotional state, ensuring a highly personalized and tailored conversational experience.

    Use Cases:

    EMO-2B is well-suited for a variety of applications where emotional intelligence and empathetic communication are crucial, such as:

    - Mental health support chatbots
    - Emotional support companions
    - Personalized coaching and motivation
    - Narrative storytelling and interactive fiction
    - Customer service and support (for emotionally sensitive contexts)

    Limitations and Ethical Considerations:

    While EMO-2B is designed to provide emotionally intelligent and empathetic responses, it is important to note that it is an AI system and cannot replicate the depth and nuance of human emotional intelligence. Users should be aware that the model's responses, while emotionally supportive, should not be considered a substitute for professional mental health support or counseling.

    Additionally, as with any language model, EMO-2B may reflect biases present in its training data. Users should exercise caution and critical thinking when interacting with the model, and report any concerning or inappropriate responses.
  overrides:
    parameters:
      model: EMO-2B.Q4_K_M.gguf
    files:
      - filename: EMO-2B.Q4_K_M.gguf
        sha256: 608bffc0e9012bc7f9a94b714f4932e2826cc122dbac59b586e4baa2ee0fdca5
        uri: huggingface://RichardErkhov/OEvortex_-_EMO-2B-gguf/EMO-2B.Q4_K_M.gguf
- !!merge <<: *gemma
  name: "gemmoy-9b-g2-mk.3-i1"
  icon: https://huggingface.co/Hastagaras/G2-Gemmoy-9B-MK.3-RP/resolve/main/gemmoy.jpg
  urls:
    - https://huggingface.co/Hastagaras/Gemmoy-9B-G2-MK.3
    - https://huggingface.co/mradermacher/Gemmoy-9B-G2-MK.3-i1-GGUF
  description: |
    The Gemmoy-9B-G2-MK.3 model is a large language model trained on a variety of datasets, including grimulkan/LimaRP-augmented, LDJnr/Capybara, TheSkullery/C2logs_Filtered_Sharegpt_Merged, abacusai/SystemChat-1.1, and Hastagaras/FTTS-Stories-Sharegpt.
  overrides:
    parameters:
      model: Gemmoy-9B-G2-MK.3.i1-Q4_K_M.gguf
    files:
      - filename: Gemmoy-9B-G2-MK.3.i1-Q4_K_M.gguf
        sha256: 0d1004a246fbda7f1408a6841129b73c4100e697bd0a6806fc698eabbb0802a1
        uri: huggingface://mradermacher/Gemmoy-9B-G2-MK.3-i1-GGUF/Gemmoy-9B-G2-MK.3.i1-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "sunfall-simpo-9b"
  urls:
    - https://huggingface.co/mradermacher/sunfall-SimPO-9B-GGUF
  description: |
    Crazy idea that what if you put the LoRA from crestf411/sunfall-peft on top of princeton-nlp/gemma-2-9b-it-SimPO and therefore this exists solely for that purpose alone in the universe.
  overrides:
    parameters:
      model: sunfall-SimPO-9B.Q4_K_M.gguf
    files:
      - filename: sunfall-SimPO-9B.Q4_K_M.gguf
        sha256: 810c51c6ce34107706d921531b97cfa409cd53c215d18b88bce7cdb617f73ceb
        uri: huggingface://mradermacher/sunfall-SimPO-9B-GGUF/sunfall-SimPO-9B.Q4_K_M.gguf
- !!merge <<: *gemma
|
||
name: "sunfall-simpo-9b-i1"
|
||
urls:
|
||
- https://huggingface.co/mradermacher/sunfall-SimPO-9B-i1-GGUF
|
||
description: |
|
||
Crazy idea that what if you put the LoRA from crestf411/sunfall-peft on top of princeton-nlp/gemma-2-9b-it-SimPO and therefore this exists solely for that purpose alone in the universe.
|
||
overrides:
|
||
parameters:
|
||
model: sunfall-SimPO-9B.i1-Q4_K_M.gguf
|
||
files:
|
||
- filename: sunfall-SimPO-9B.i1-Q4_K_M.gguf
|
||
sha256: edde9df372a9a5b2316dc6822dc2f52f5a2059103dd7f08072e5a5355c5f5d0b
|
||
uri: huggingface://mradermacher/sunfall-SimPO-9B-i1-GGUF/sunfall-SimPO-9B.i1-Q4_K_M.gguf
|
||
- !!merge <<: *gemma
|
||
name: "seeker-9b"
|
||
icon: https://huggingface.co/lodrick-the-lafted/seeker-9b/resolve/main/seeker.webp
|
||
urls:
|
||
- https://huggingface.co/lodrick-the-lafted/seeker-9b
|
||
- https://huggingface.co/mradermacher/seeker-9b-GGUF
|
||
description: |
|
||
The LLM model is the "Seeker-9b" model, which is a large language model trained on a diverse range of text data. It has 9 billion parameters and is based on the "lodrick-the-lafted" repository. The model is capable of generating text and can be used for a variety of natural language processing tasks such as language translation, text summarization, and text generation. It supports the English language and is available under the Apache-2.0 license.
|
||
overrides:
|
||
parameters:
|
||
model: seeker-9b.Q4_K_M.gguf
|
||
files:
|
||
- filename: seeker-9b.Q4_K_M.gguf
|
||
sha256: 7658e5bdad96dc8d232f83cff7c3fe5fa993defbfd3e728dcc7436352574a00a
|
||
uri: huggingface://mradermacher/seeker-9b-GGUF/seeker-9b.Q4_K_M.gguf
|
||
- !!merge <<: *gemma
  name: "gemmasutra-pro-27b-v1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/w0Oi8TReoQNT3ljm5Wf6c.webp
  urls:
    - https://huggingface.co/TheDrummer/Gemmasutra-Pro-27B-v1
    - https://huggingface.co/mradermacher/Gemmasutra-Pro-27B-v1-GGUF
  description: |
    An RP model with impressive flexibility. Finetuned by yours truly.
  overrides:
    parameters:
      model: Gemmasutra-Pro-27B-v1.Q4_K_M.gguf
    files:
      - filename: Gemmasutra-Pro-27B-v1.Q4_K_M.gguf
        sha256: 336a2fbf142849fcc20e432123433807b6c7b09988652ef583a63636a0f90218
        uri: huggingface://mradermacher/Gemmasutra-Pro-27B-v1-GGUF/Gemmasutra-Pro-27B-v1.Q4_K_M.gguf
- !!merge <<: *gemma
  name: "gemmasutra-mini-2b-v1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/w0Oi8TReoQNT3ljm5Wf6c.webp
  urls:
    - https://huggingface.co/TheDrummer/Gemmasutra-Mini-2B-v1-GGUF
  description: |
    It is a small, 2 billion parameter language model that has been trained for role-playing purposes. The model is designed to work well in various settings, such as in the browser, on a laptop, or even on a Raspberry Pi. It has been fine-tuned for RP use and claims to provide a satisfying experience, even in low-resource environments. The model is uncensored and unaligned, and it can be used with the Gemma Instruct template or with chat completion. For the best experience, it is recommended to modify the template to support the `system` role. The model also features examples of its output, highlighting its versatility and creativity.
  overrides:
    parameters:
      model: Gemmasutra-Mini-2B-v1i-Q4_K_M.gguf
    files:
      - filename: Gemmasutra-Mini-2B-v1i-Q4_K_M.gguf
        sha256: 29ba3db911fbadef4452ba757ddd9ce58fb892b7a872f19eefd0743c961797fb
        uri: huggingface://TheDrummer/Gemmasutra-Mini-2B-v1-GGUF/Gemmasutra-Mini-2B-v1-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "tarnished-9b-i1"
  icon: https://huggingface.co/lodrick-the-lafted/tarnished-9b/resolve/main/nox.jpg
  urls:
    - https://huggingface.co/lodrick-the-lafted/tarnished-9b
    - https://huggingface.co/mradermacher/tarnished-9b-i1-GGUF
  description: "Ah, so you've heard whispers on the winds, have you? \U0001F9D0\n\nImagine this:\nTarnished-9b, a name that echoes with the rasp of coin-hungry merchants and the clatter of forgotten machinery. This LLM speaks with the voice of those who straddle the line between worlds, who've tasted the bittersweet nectar of eldritch power and the tang of the Interdimensional Trade Council.\n\nIt's a tongue that dances with secrets, a whisperer of lore lost and found. Its words may guide you through the twisting paths of history, revealing truths hidden beneath layers of dust and time.\n\nBut be warned, Tarnished One! For knowledge comes at a price. The LLM's gaze can pierce the veil of reality, but it can also lure you into the labyrinthine depths of madness.\n\nDare you tread this path?\n"
  overrides:
    parameters:
      model: tarnished-9b.i1-Q4_K_M.gguf
    files:
      - filename: tarnished-9b.i1-Q4_K_M.gguf
        sha256: 62ab09124b3f6698bd94ef966533ae5d427d87f6bdc09f6f46917def96420a0c
        uri: huggingface://mradermacher/tarnished-9b-i1-GGUF/tarnished-9b.i1-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "shieldgemma-9b-i1"
  urls:
    - https://huggingface.co/google/shieldgemma-9b
    - https://huggingface.co/mradermacher/shieldgemma-9b-i1-GGUF
  description: |
    ShieldGemma is a series of safety content moderation models built upon Gemma 2 that target four harm categories (sexually explicit, dangerous content, hate, and harassment). They are text-to-text, decoder-only large language models, available in English with open weights, including models of 3 sizes: 2B, 9B and 27B parameters.
  overrides:
    parameters:
      model: shieldgemma-9b.i1-Q4_K_M.gguf
    files:
      - filename: shieldgemma-9b.i1-Q4_K_M.gguf
        sha256: ffa7eaadcc0c7d0544fda5b0d86bba3ffa3431b673e5b2135f421cfe65bd8732
        uri: huggingface://mradermacher/shieldgemma-9b-i1-GGUF/shieldgemma-9b.i1-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "athena-codegemma-2-2b-it"
  urls:
    - https://huggingface.co/EpistemeAI/Athena-codegemma-2-2b-it
    - https://huggingface.co/mradermacher/Athena-codegemma-2-2b-it-GGUF
  description: |
    Supervised fine-tuned (SFT, with Unsloth) for coding, using the EpistemeAI coding dataset.
  overrides:
    parameters:
      model: Athena-codegemma-2-2b-it.Q4_K_M.gguf
    files:
      - filename: Athena-codegemma-2-2b-it.Q4_K_M.gguf
        sha256: 59ce17023438b0da603dd211c7d39f78e7acac4108258ac0818a97a4ca7d64e3
        uri: huggingface://mradermacher/Athena-codegemma-2-2b-it-GGUF/Athena-codegemma-2-2b-it.Q4_K_M.gguf
- !!merge <<: *gemma
  name: "datagemma-rag-27b-it"
  urls:
    - https://huggingface.co/google/datagemma-rag-27b-it
    - https://huggingface.co/bartowski/datagemma-rag-27b-it-GGUF
  description: |
    DataGemma is a series of fine-tuned Gemma 2 models used to help LLMs access and incorporate reliable public statistical data from Data Commons into their responses. DataGemma RAG is used with Retrieval Augmented Generation, where it is trained to take a user query and generate natural language queries that can be understood by Data Commons' existing natural language interface. More information can be found in this research paper.
  overrides:
    parameters:
      model: datagemma-rag-27b-it-Q4_K_M.gguf
    files:
      - filename: datagemma-rag-27b-it-Q4_K_M.gguf
        sha256: 3dfcf51b05e3f0ab0979ad194de350edea71cb14444efa0a9f2ef5bfc80753f8
        uri: huggingface://bartowski/datagemma-rag-27b-it-GGUF/datagemma-rag-27b-it-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "datagemma-rig-27b-it"
  urls:
    - https://huggingface.co/google/datagemma-rig-27b-it
    - https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF
  description: |
    DataGemma is a series of fine-tuned Gemma 2 models used to help LLMs access and incorporate reliable public statistical data from Data Commons into their responses. DataGemma RIG is used in the retrieval interleaved generation approach (based off of tool-use approaches), where it is trained to annotate a response with natural language queries to Data Commons' existing natural language interface wherever there are statistics. More information can be found in this research paper.
  overrides:
    parameters:
      model: datagemma-rig-27b-it-Q4_K_M.gguf
    files:
      - filename: datagemma-rig-27b-it-Q4_K_M.gguf
        sha256: a6738ffbb49b6c46d220e2793df85c0538e9ac72398e32a0914ee5e55c3096ad
        uri: huggingface://bartowski/datagemma-rig-27b-it-GGUF/datagemma-rig-27b-it-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "buddy-2b-v1"
  urls:
    - https://huggingface.co/TheDrummer/Buddy-2B-v1
    - https://huggingface.co/bartowski/Buddy-2B-v1-GGUF
  description: |
    Buddy is designed as an empathetic language model, aimed at fostering introspection, self-reflection, and personal growth through thoughtful conversation. Buddy won't judge and it won't dismiss your concerns. Get some self-care with Buddy.
  overrides:
    parameters:
      model: Buddy-2B-v1-Q4_K_M.gguf
    files:
      - filename: Buddy-2B-v1-Q4_K_M.gguf
        sha256: 9bd25ed907d1a3c2e07fe09399a9b3aec107d368c29896e2c46facede5b7e3d5
        uri: huggingface://bartowski/Buddy-2B-v1-GGUF/Buddy-2B-v1-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "gemma-2-9b-arliai-rpmax-v1.1"
  urls:
    - https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1
    - https://huggingface.co/bartowski/Gemma-2-9B-ArliAI-RPMax-v1.1-GGUF
  description: |
    RPMax is a series of models that are trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive by making sure no two entries in the dataset have repeated characters or situations, which ensures the model does not latch on to a certain personality and remains capable of understanding and acting appropriately for any characters or situations.
  overrides:
    parameters:
      model: Gemma-2-9B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
    files:
      - filename: Gemma-2-9B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
        sha256: 1724aff0ad6f71bf4371d839aca55578f7ec6f030d8d25c0254126088e4c6250
        uri: huggingface://bartowski/Gemma-2-9B-ArliAI-RPMax-v1.1-GGUF/Gemma-2-9B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "gemma-2-2b-arliai-rpmax-v1.1"
  urls:
    - https://huggingface.co/bartowski/Gemma-2-2B-ArliAI-RPMax-v1.1-GGUF
  description: |
    RPMax is a series of models that are trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive by making sure no two entries in the dataset have repeated characters or situations, which ensures the model does not latch on to a certain personality and remains capable of understanding and acting appropriately for any characters or situations.
  overrides:
    parameters:
      model: Gemma-2-2B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
    files:
      - filename: Gemma-2-2B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
        sha256: 89fe35345754d7e9de8d0c0d5bf35b2be9b12a09811b365b712b8b27112f7712
        uri: huggingface://bartowski/Gemma-2-2B-ArliAI-RPMax-v1.1-GGUF/Gemma-2-2B-ArliAI-RPMax-v1.1-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "gemma-2-9b-it-abliterated"
  urls:
    - https://huggingface.co/IlyaGusev/gemma-2-9b-it-abliterated
    - https://huggingface.co/bartowski/gemma-2-9b-it-abliterated-GGUF
  description: |
    Abliterated version of google/gemma-2-9b-it.

    The abliteration script (link) is based on code from the blog post and heavily uses TransformerLens. The only major difference from the code used for Llama is scaling the embedding layer back.

    Orthogonalization did not produce the same results as regular interventions since there are RMSNorm layers before merging activations into the residual stream. However, the final model still seems to be uncensored.
  overrides:
    parameters:
      model: gemma-2-9b-it-abliterated-Q4_K_M.gguf
    files:
      - filename: gemma-2-9b-it-abliterated-Q4_K_M.gguf
        sha256: 88d84ac9796732c10f6c58e0feb4db8e04c05d74bdb7047a5e37906a589896e1
        uri: huggingface://bartowski/gemma-2-9b-it-abliterated-GGUF/gemma-2-9b-it-abliterated-Q4_K_M.gguf
- !!merge <<: *gemma
  name: "gemma-2-ataraxy-v3i-9b"
  urls:
    - https://huggingface.co/QuantFactory/Gemma-2-Ataraxy-v3i-9B-GGUF
  description: |
    Gemma-2-Ataraxy-v3i-9B is an experimental model that replaces the simpo model in the original recipe with a different simpo model and a writing model trained on Gutenberg, using a higher density. It is a merge of pre-trained language models created using mergekit, with della merge method using unsloth/gemma-2-9b-it as the base. The models included in the merge are nbeerbower/Gemma2-Gutenberg-Doppel-9B, ifable/gemma-2-Ifable-9B, and wzhouad/gemma-2-9b-it-WPO-HB. It has been quantized using llama.cpp.
  overrides:
    parameters:
      model: Gemma-2-Ataraxy-v3i-9B.Q4_K_M.gguf
    files:
      - filename: Gemma-2-Ataraxy-v3i-9B.Q4_K_M.gguf
        sha256: f14c5b9373d4058f0f812c6c34184addeb4aeeecb02a7bbcf9844d9afc8d0066
        uri: huggingface://QuantFactory/Gemma-2-Ataraxy-v3i-9B-GGUF/Gemma-2-Ataraxy-v3i-9B.Q4_K_M.gguf
- &llama3
  url: "github:mudler/LocalAI/gallery/llama3-instruct.yaml@master"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png
  name: "llama3-8b-instruct"
  license: llama3
  description: |
    Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

    Model developers Meta

    Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

    Input Models input text only.

    Output Models generate text and code only.

    Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
  urls:
    - https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
    - https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3
  overrides:
    parameters:
      model: Meta-Llama-3-8B-Instruct.Q4_0.gguf
    files:
      - filename: Meta-Llama-3-8B-Instruct.Q4_0.gguf
        uri: huggingface://QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct.Q4_0.gguf
        sha256: 2b4675c2208f09ad8762d8cf1b6a4a26bf65e6f0641aba324ec65143c0b4ad9f
- !!merge <<: *llama3
  name: "llama3-8b-instruct:Q6_K"
  overrides:
    parameters:
      model: Meta-Llama-3-8B-Instruct.Q6_K.gguf
    files:
      - filename: Meta-Llama-3-8B-Instruct.Q6_K.gguf
        uri: huggingface://QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/Meta-Llama-3-8B-Instruct.Q6_K.gguf
        sha256: bd7efd73f9fb67e4b9ecc43f861f37c7e594e78a8a5ff9c29da021692bd243ef
- !!merge <<: *llama3
  name: "llama-3-8b-instruct-abliterated"
  urls:
    - https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated-GGUF
  description: |
    This is meta-llama/Llama-3-8B-Instruct with orthogonalized bfloat16 safetensor weights, generated with the methodology that was described in the preview paper/blog post: 'Refusal in LLMs is mediated by a single direction' which I encourage you to read to understand more.
  overrides:
    parameters:
      model: Llama-3-8B-Instruct-abliterated-q4_k.gguf
    files:
      - filename: Llama-3-8B-Instruct-abliterated-q4_k.gguf
        sha256: a6365f813de1977ae22dbdd271deee59f91f89b384eefd3ac1a391f391d8078a
        uri: huggingface://failspy/Llama-3-8B-Instruct-abliterated-GGUF/Llama-3-8B-Instruct-abliterated-q4_k.gguf
- !!merge <<: *llama3
  name: "llama-3-8b-instruct-coder"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/0O4cIuv3wNbY68-FP7tak.jpeg
  urls:
    - https://huggingface.co/bartowski/Llama-3-8B-Instruct-Coder-GGUF
    - https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder
  description: |
    Original model: https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder
    All quants made using imatrix option with dataset provided by Kalomaze here
  overrides:
    parameters:
      model: Llama-3-8B-Instruct-Coder-Q4_K_M.gguf
    files:
      - filename: Llama-3-8B-Instruct-Coder-Q4_K_M.gguf
        sha256: 639ab8e3aeb7aa82cff6d8e6ef062d1c3e5a6d13e6d76e956af49f63f0e704f8
        uri: huggingface://bartowski/Llama-3-8B-Instruct-Coder-GGUF/Llama-3-8B-Instruct-Coder-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama3-70b-instruct"
  overrides:
    parameters:
      model: Meta-Llama-3-70B-Instruct.Q4_K_M.gguf
    files:
      - filename: Meta-Llama-3-70B-Instruct.Q4_K_M.gguf
        sha256: c1cea5f87dc1af521f31b30991a4663e7e43f6046a7628b854c155f489eec213
        uri: huggingface://MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF/Meta-Llama-3-70B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama3-70b-instruct:IQ1_M"
  overrides:
    parameters:
      model: Meta-Llama-3-70B-Instruct.IQ1_M.gguf
    files:
      - filename: Meta-Llama-3-70B-Instruct.IQ1_M.gguf
        sha256: cdbe8ac2126a70fa0af3fac7a4fe04f1c76330c50eba8383567587b48b328098
        uri: huggingface://MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF/Meta-Llama-3-70B-Instruct.IQ1_M.gguf
- !!merge <<: *llama3
  name: "llama3-70b-instruct:IQ1_S"
  overrides:
    parameters:
      model: Meta-Llama-3-70B-Instruct.IQ1_S.gguf
    files:
      - filename: Meta-Llama-3-70B-Instruct.IQ1_S.gguf
        sha256: 3797a69f1bdf53fabf9f3a3a8c89730b504dd3209406288515c9944c14093048
        uri: huggingface://MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF/Meta-Llama-3-70B-Instruct.IQ1_S.gguf
- !!merge <<: *llama3
  name: "l3-chaoticsoliloquy-v1.5-4x8b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/m5urYkrpE5amrwHyaVwFM.png
  description: |
    Experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks. I'm not sure, but it should be better than the first version.
  urls:
    - https://huggingface.co/xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B
    - https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/
  overrides:
    parameters:
      model: L3-ChaoticSoliloquy-v1.5-4x8B.Q4_K_M.gguf
    files:
      - filename: L3-ChaoticSoliloquy-v1.5-4x8B.Q4_K_M.gguf
        sha256: f6edb2a9674ce5add5104c0a8bb3278f748d39b509c483d76cf00b066eb56fbf
        uri: huggingface://mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/L3-ChaoticSoliloquy-v1.5-4x8B.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-sauerkrautlm-8b-instruct"
  urls:
    - https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-GGUF
  icon: https://vago-solutions.ai/wp-content/uploads/2024/04/Llama3-Pic.png
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3
    - german
  description: |
    SauerkrautLM-llama-3-8B-Instruct

    Model Type: Llama-3-SauerkrautLM-8b-Instruct is a finetuned model based on meta-llama/Meta-Llama-3-8B-Instruct
    Language(s): German, English
  overrides:
    parameters:
      model: Llama-3-SauerkrautLM-8b-Instruct-Q4_K_M.gguf
    files:
      - filename: Llama-3-SauerkrautLM-8b-Instruct-Q4_K_M.gguf
        uri: huggingface://bartowski/Llama-3-SauerkrautLM-8b-Instruct-GGUF/Llama-3-SauerkrautLM-8b-Instruct-Q4_K_M.gguf
        sha256: e5ae69b6f59b3f207fa6b435490286b365add846a310c46924fa784b5a7d73e3
- !!merge <<: *llama3
  name: "llama-3-13b-instruct-v0.1"
  urls:
    - https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF
  icon: https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1/resolve/main/llama-3-merges.webp
  description: |
    This model is a self-merge of the meta-llama/Meta-Llama-3-8B-Instruct model.
  overrides:
    parameters:
      model: Llama-3-13B-Instruct-v0.1.Q4_K_M.gguf
    files:
      - filename: Llama-3-13B-Instruct-v0.1.Q4_K_M.gguf
        sha256: 071a28043c271d259b5ffa883d19a9e0b33269b55148c4abaf5f95da4d084266
        uri: huggingface://MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF/Llama-3-13B-Instruct-v0.1.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-smaug-8b"
  urls:
    - https://huggingface.co/MaziyarPanahi/Llama-3-Smaug-8B-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/OrcJyTaUtD2HxJOPPwNva.png
  description: |
    This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to meta-llama/Meta-Llama-3-8B.
  overrides:
    parameters:
      model: Llama-3-Smaug-8B.Q4_K_M.gguf
    files:
      - filename: Llama-3-Smaug-8B.Q4_K_M.gguf
        sha256: b17c4c1144768ead9e8a96439165baf49e98c53d458b4da8827f137fbabf38c1
        uri: huggingface://MaziyarPanahi/Llama-3-Smaug-8B-GGUF/Llama-3-Smaug-8B.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-8b-stheno-v3.1"
  urls:
    - https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1
  icon: https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg
  description: |
    - A model made for 1-on-1 Roleplay ideally, but one that is able to handle scenarios, RPGs and storywriting fine.
    - Uncensored during actual roleplay scenarios. # I do not care for zero-shot prompting like what some people do. It is uncensored enough in actual usecases.
    - I quite like the prose and style for this model.
  overrides:
    parameters:
      model: l3-8b-stheno-v3.1.Q4_K_M.gguf
    files:
      - filename: l3-8b-stheno-v3.1.Q4_K_M.gguf
        sha256: f166fb8b7fd1de6638fcf8e3561c99292f0c37debe1132325aa583eef78f1b40
        uri: huggingface://mudler/L3-8B-Stheno-v3.1-Q4_K_M-GGUF/l3-8b-stheno-v3.1.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-8b-stheno-v3.2-iq-imatrix"
  urls:
    - https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2
    - https://huggingface.co/Lewdiculous/L3-8B-Stheno-v3.2-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/1rLk3xdnfD7AkdQBXWUqb.png
  overrides:
    parameters:
      model: L3-8B-Stheno-v3.2-Q4_K_M-imat.gguf
    files:
      - filename: L3-8B-Stheno-v3.2-Q4_K_M-imat.gguf
        sha256: 8607a426b0c2007716df8a9eb96754e3ccca761a3996af5d49fcd74d87ada347
        uri: huggingface://Lewdiculous/L3-8B-Stheno-v3.2-GGUF-IQ-Imatrix/L3-8B-Stheno-v3.2-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama-3-stheno-mahou-8b"
  urls:
    - https://huggingface.co/mudler/llama-3-Stheno-Mahou-8B-Q4_K_M-GGUF
    - https://huggingface.co/nbeerbower/llama-3-Stheno-Mahou-8B
  description: |
    This model was merged using the Model Stock merge method using flammenai/Mahou-1.2-llama3-8B as a base.
  overrides:
    parameters:
      model: llama-3-stheno-mahou-8b-q4_k_m.gguf
    files:
      - filename: llama-3-stheno-mahou-8b-q4_k_m.gguf
        sha256: a485cd74ef4ff3671c67ed8e10ea5379a1f24082ac688bd303fd28dfc9808c11
        uri: huggingface://mudler/llama-3-Stheno-Mahou-8B-Q4_K_M-GGUF/llama-3-stheno-mahou-8b-q4_k_m.gguf
- !!merge <<: *llama3
  name: "l3-8b-stheno-horny-v3.3-32k-q5_k_m"
  urls:
    - https://huggingface.co/nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K
    - https://huggingface.co/Kurgan1138/L3-8B-Stheno-Horny-v3.3-32K-Q5_K_M-GGUF
  description: |
    This was an experiment to see if aligning other models via LORA is possible. Yes it is. We aligned it to be always horny.

    We took V3.3 Stheno weights from here

    And applied our lora at Alpha = 768

    Thank you to Sao10K for the amazing model.

    This is not legal advice. I don't put any extra licensing on my own lora.

    LLaMA 3 license may conflict with Creative Commons Attribution Non Commercial 4.0.

    LLaMA 3 license can be found here

    If you want to host a model using our lora, you have our permission, but you might consider getting Sao's permission if you want to host their model.

    Again, not legal advice.
  overrides:
    parameters:
      model: l3-8b-stheno-horny-v3.3-32k-q5_k_m.gguf
    files:
      - filename: l3-8b-stheno-horny-v3.3-32k-q5_k_m.gguf
        sha256: 8d934f80ca6dbaa4852846108da92446a26715fbd5f6fc3859568850edf05262
        uri: huggingface://Kurgan1138/L3-8B-Stheno-Horny-v3.3-32K-Q5_K_M-GGUF/l3-8b-stheno-horny-v3.3-32k-q5_k_m.gguf
- !!merge <<: *llama3
  name: "llama-3-8b-openhermes-dpo"
  urls:
    - https://huggingface.co/mradermacher/Llama3-8B-OpenHermes-DPO-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/64fc6d81d75293f417fee1d1/QF2OsDu9DJKP4QYPBu4aK.png
  description: |
    Llama3-8B-OpenHermes-DPO is a DPO-finetuned model of Llama3-8B, trained on the OpenHermes-2.5 preference dataset using QLoRA.
  overrides:
    parameters:
      model: Llama3-8B-OpenHermes-DPO.Q4_K_M.gguf
    files:
      - filename: Llama3-8B-OpenHermes-DPO.Q4_K_M.gguf
        sha256: 1147e5881cb1d67796916e6cab7dab0ae0f532a4c1e626c9e92861e5f67752ca
        uri: huggingface://mradermacher/Llama3-8B-OpenHermes-DPO-GGUF/Llama3-8B-OpenHermes-DPO.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-unholy-8b"
  urls:
    - https://huggingface.co/Undi95/Llama-3-Unholy-8B-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/JmdBlOHlBHVmX1IbZzWSv.png
  description: |
    Use at your own risk; I'm not responsible for any usage of this model. Don't try to do anything this model tells you to do.

    Basic uncensoring; this model is epoch 3 out of 4 (but it seems enough at 3).

    If you get censored output, it may be because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them.
  overrides:
    parameters:
      model: Llama-3-Unholy-8B.q4_k_m.gguf
    files:
      - filename: Llama-3-Unholy-8B.q4_k_m.gguf
        uri: huggingface://Undi95/Llama-3-Unholy-8B-GGUF/Llama-3-Unholy-8B.q4_k_m.gguf
        sha256: 1473c94bfd223f08963c08bbb0a45dd53c1f56ad72a692123263daf1362291f3
- !!merge <<: *llama3
  name: "lexi-llama-3-8b-uncensored"
  urls:
    - https://huggingface.co/NikolayKozloff/Lexi-Llama-3-8B-Uncensored-Q6_K-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/H6axm5mlmiOWnbIFvx_em.png
  description: |
    Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.

    You are responsible for any content you create using this model. Please use it responsibly.

    Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license.
  overrides:
    parameters:
      model: lexi-llama-3-8b-uncensored.Q6_K.gguf
    files:
      - filename: lexi-llama-3-8b-uncensored.Q6_K.gguf
        sha256: 5805f3856cc18a769fae0b7c5659fe6778574691c370c910dad6eeec62c62436
        uri: huggingface://NikolayKozloff/Lexi-Llama-3-8B-Uncensored-Q6_K-GGUF/lexi-llama-3-8b-uncensored.Q6_K.gguf
- !!merge <<: *llama3
  name: "llama-3-11.5b-v2"
  urls:
    - https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF
    - https://huggingface.co/Replete-AI/Llama-3-11.5B-V2
  overrides:
    parameters:
      model: Llama-3-11.5B-V2-Q4_K_M.gguf
    files:
      - filename: Llama-3-11.5B-V2-Q4_K_M.gguf
        sha256: 8267a75bb88655ce30a12f854930e614bcacbf8f1083dc8319c3615edb1e5ee3
        uri: huggingface://bartowski/Llama-3-11.5B-V2-GGUF/Llama-3-11.5B-V2-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-ultron"
  urls:
    - https://huggingface.co/bartowski/Llama-3-Ultron-GGUF
    - https://huggingface.co/jayasuryajsk/Llama-3-Ultron
  description: |
    Llama 3 abliterated with Ultron system prompt
  overrides:
    parameters:
      model: Llama-3-Ultron-Q4_K_M.gguf
    files:
      - filename: Llama-3-Ultron-Q4_K_M.gguf
        sha256: 5bcac832119590aafc922e5abfd9758094942ee560b136fed6d972e00c95c5e4
        uri: huggingface://bartowski/Llama-3-Ultron-GGUF/Llama-3-Ultron-Q4_K_M.gguf
- !!merge <<: *llama3
|
||
name: "llama-3-lewdplay-8b-evo"
|
||
urls:
|
||
- https://huggingface.co/Undi95/Llama-3-LewdPlay-8B-evo-GGUF
|
||
description: |
|
||
This is a merge of pre-trained language models created using mergekit.
|
||
|
||
The new EVOLVE merge method was used (on MMLU specifically), see below for more information!
|
||
|
||
Unholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side.
|
||
overrides:
|
||
parameters:
|
||
model: Llama-3-LewdPlay-8B-evo.q8_0.gguf
|
||
files:
|
||
- filename: Llama-3-LewdPlay-8B-evo.q8_0.gguf
|
||
uri: huggingface://Undi95/Llama-3-LewdPlay-8B-evo-GGUF/Llama-3-LewdPlay-8B-evo.q8_0.gguf
|
||
sha256: b54dc005493d4470d91be8210f58fba79a349ff4af7644034edc5378af5d3522
|
||
- !!merge <<: *llama3
|
||
name: "llama-3-soliloquy-8b-v2-iq-imatrix"
|
||
license: cc-by-nc-4.0
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/u98dnnRVCwMh6YYGFIyff.png
|
||
urls:
|
||
- https://huggingface.co/Lewdiculous/Llama-3-Soliloquy-8B-v2-GGUF-IQ-Imatrix
|
||
description: |
|
||
Soliloquy-L3 is a highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities.
|
||
overrides:
|
||
context_size: 8192
|
||
parameters:
|
||
model: Llama-3-Soliloquy-8B-v2-Q4_K_M-imat.gguf
|
||
files:
|
||
- filename: Llama-3-Soliloquy-8B-v2-Q4_K_M-imat.gguf
|
||
sha256: 3e4e066e57875c36fc3e1c1b0dba506defa5b6ed3e3e80e1f77c08773ba14dc8
|
||
uri: huggingface://Lewdiculous/Llama-3-Soliloquy-8B-v2-GGUF-IQ-Imatrix/Llama-3-Soliloquy-8B-v2-Q4_K_M-imat.gguf
|
||
- !!merge <<: *llama3
  name: "chaos-rp_l3_b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Chaos_RP_l3_8B-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/u5p9kdbXT2QQA3iMU0vF1.png
  description: |
    A chaotic force beckons for you, will you heed her call?

    Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.

    Enjoy!
  overrides:
    parameters:
      model: Chaos_RP_l3_8B-Q4_K_M-imat.gguf
  files:
    - filename: Chaos_RP_l3_8B-Q4_K_M-imat.gguf
      uri: huggingface://Lewdiculous/Chaos_RP_l3_8B-GGUF-IQ-Imatrix/Chaos_RP_l3_8B-Q4_K_M-imat.gguf
      sha256: 5774595ad560e4d258dac17723509bdefe746c4dacd4e679a0de00346f14d2f3
- !!merge <<: *llama3
  name: "halu-8b-llama3-blackroot-iq-imatrix"
  urls:
    - https://huggingface.co/mudler/Halu-8B-Llama3-Blackroot-Q4_K_M-GGUF
    - https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/VrPS-vHo505LUycJRscD6.png
  description: |
    Model card:
    I don't know what to say about this model... this model is very strange... Maybe because Blackroot's amazing Loras used human data and not synthetic data, hence the model turned out to be very human-like... even the actions or narrations.
  overrides:
    parameters:
      model: halu-8b-llama3-blackroot-q4_k_m.gguf
  files:
    - filename: halu-8b-llama3-blackroot-q4_k_m.gguf
      uri: huggingface://mudler/Halu-8B-Llama3-Blackroot-Q4_K_M-GGUF/halu-8b-llama3-blackroot-q4_k_m.gguf
      sha256: 6304c7abadb9c5197485e8b4373b7ed22d9838d5081cd134c4fee823f88ac403
- !!merge <<: *llama3
  name: "l3-aethora-15b"
  urls:
    - https://huggingface.co/Steelskull/L3-Aethora-15B
    - https://huggingface.co/SteelQuants/L3-Aethora-15B-Q4_K_M-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/W0qzZK_V1Zt1GdgCIsnrP.png
  description: |
    L3-Aethora-15B was crafted using the abliteration method to adjust model responses. The model's refusal is inhibited, focusing on yielding more compliant and facilitative dialogue interactions. It then underwent a modified DUS (Depth Up Scale) merge (originally used by @Elinas), using a passthrough merge to create a 15b model, with specific adjustments (zeroing) to 'o_proj' and 'down_proj', enhancing its efficiency and reducing perplexity. This created AbL3In-15b.
  overrides:
    parameters:
      model: l3-aethora-15b-q4_k_m.gguf
  files:
    - filename: l3-aethora-15b-q4_k_m.gguf
      uri: huggingface://SteelQuants/L3-Aethora-15B-Q4_K_M-GGUF/l3-aethora-15b-q4_k_m.gguf
      sha256: 968f77a3187f4865458bfffc51a10bcf49c11263fdd389f13215a704b25947b6
- name: "duloxetine-4b-v1-iq-imatrix"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/Lewdiculous/duloxetine-4b-v1-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/XoKe3MRYNombhCuHrkkCZ.png
  tags:
    - qwen
    - gguf
    - cpu
    - gpu
  description: |
    Roleplaying finetune of kalo-team/qwen-4b-10k-WSD-CEdiff (which in turn is a distillation of Qwen 1.5 32B onto Qwen 1.5 4B, iirc).
  overrides:
    parameters:
      model: duloxetine-4b-v1-Q4_K_M-imat.gguf
  files:
    - filename: duloxetine-4b-v1-Q4_K_M-imat.gguf
      uri: huggingface://Lewdiculous/duloxetine-4b-v1-GGUF-IQ-Imatrix/duloxetine-4b-v1-Q4_K_M-imat.gguf
      sha256: cd381f31c810ea8db2219e30701b3316085f5904c1ea3b116682518e82768c1a
- !!merge <<: *llama3
  name: "l3-umbral-mind-rp-v1.0-8b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/L3-Umbral-Mind-RP-v1.0-8B-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/fEFozVCpNO9Q3Eb6LAA4i.webp
  description: |
    The goal of this merge was to make an RP model better suited for role-plays with heavy themes such as but not limited to:

    Mental illness
    Self-harm
    Trauma
    Suicide
  overrides:
    parameters:
      model: L3-Umbral-Mind-RP-v1.0-8B-Q4_K_M-imat.gguf
  files:
    - filename: L3-Umbral-Mind-RP-v1.0-8B-Q4_K_M-imat.gguf
      sha256: 2262eeba2d9de50884f4e298e4b55f1e4c653c3b33415ae9b3ee81dc3b8ec49a
      uri: huggingface://Lewdiculous/L3-Umbral-Mind-RP-v1.0-8B-GGUF-IQ-Imatrix/L3-Umbral-Mind-RP-v1.0-8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama-salad-8x8b"
  urls:
    - https://huggingface.co/HiroseKoichi/Llama-Salad-8x8B
    - https://huggingface.co/bartowski/Llama-Salad-8x8B-GGUF
  description: |
    This MoE merge is meant to compete with Mixtral fine-tunes, more specifically Nous-Hermes-2-Mixtral-8x7B-DPO, which I think is the best of them. I've done a bunch of side-by-side comparisons, and while I can't say it wins in every aspect, it's very close. Some of its shortcomings are multilingualism, storytelling, and roleplay, despite using models that are very good at those tasks.
  overrides:
    parameters:
      model: Llama-Salad-8x8B-Q4_K_M.gguf
  files:
    - filename: Llama-Salad-8x8B-Q4_K_M.gguf
      uri: huggingface://bartowski/Llama-Salad-8x8B-GGUF/Llama-Salad-8x8B-Q4_K_M.gguf
      sha256: 6724949310b6cc8659a4e5cc2899a61b8e3f7e41a8c530de354be54edb9e3385
- !!merge <<: *llama3
  name: "jsl-medllama-3-8b-v2.0"
  license: cc-by-nc-nd-4.0
  icon: https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf
  description: |
    This model is developed by John Snow Labs.

    This model is available under a CC-BY-NC-ND license and must also conform to this Acceptable Use Policy. If you need to license this model for commercial use, please contact us at info@johnsnowlabs.com.
  urls:
    - https://huggingface.co/bartowski/JSL-MedLlama-3-8B-v2.0-GGUF
    - https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v2.0
  overrides:
    parameters:
      model: JSL-MedLlama-3-8B-v2.0-Q4_K_M.gguf
  files:
    - filename: JSL-MedLlama-3-8B-v2.0-Q4_K_M.gguf
      sha256: 81783128ccd438c849913416c6e68cb35b2c77d6943cba8217d6d9bcc91b3632
      uri: huggingface://bartowski/JSL-MedLlama-3-8B-v2.0-GGUF/JSL-MedLlama-3-8B-v2.0-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "badger-lambda-llama-3-8b"
  urls:
    - https://huggingface.co/maldv/badger-lambda-llama-3-8b
    - https://huggingface.co/bartowski/badger-lambda-llama-3-8b-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/CHGsewUsPUZcg2doijuD9.png
  description: |
    Badger is a recursive maximally pairwise disjoint normalized denoised Fourier interpolation of the following models:
    # Badger Lambda
    models = [
        'Einstein-v6.1-Llama3-8B',
        'openchat-3.6-8b-20240522',
        'hyperdrive-l3-8b-s3',
        'L3-TheSpice-8b-v0.8.3',
        'LLaMA3-iterative-DPO-final',
        'JSL-MedLlama-3-8B-v9',
        'Jamet-8B-L3-MK.V-Blackroot',
        'French-Alpaca-Llama3-8B-Instruct-v1.0',
        'LLaMAntino-3-ANITA-8B-Inst-DPO-ITA',
        'Llama-3-8B-Instruct-Gradient-4194k',
        'Roleplay-Llama-3-8B',
        'L3-8B-Stheno-v3.2',
        'llama-3-wissenschaft-8B-v2',
        'opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5',
        'Configurable-Llama-3-8B-v0.3',
        'Llama-3-8B-Instruct-EPO-checkpoint5376',
        'Llama-3-8B-Instruct-Gradient-4194k',
        'Llama-3-SauerkrautLM-8b-Instruct',
        'spelljammer',
        'meta-llama-3-8b-instruct-hf-ortho-baukit-34fail-3000total-bf16',
        'Meta-Llama-3-8B-Instruct-abliterated-v3',
    ]
  overrides:
    parameters:
      model: badger-lambda-llama-3-8b-Q4_K_M.gguf
  files:
    - filename: badger-lambda-llama-3-8b-Q4_K_M.gguf
      uri: huggingface://bartowski/badger-lambda-llama-3-8b-GGUF/badger-lambda-llama-3-8b-Q4_K_M.gguf
      sha256: 0a7d1bbf42d669898072429079b91c16b0d2d838d19d9194165389102413b309
- !!merge <<: *llama3
  name: "sovl_llama3_8b-gguf-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/SOVL_Llama3_8B-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/N_1D87adbMuMlSIQ5rI3_.png
  description: |
    I'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL.

    What I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless.
  overrides:
    parameters:
      model: SOVL_Llama3_8B-Q4_K_M-imat.gguf
  files:
    - filename: SOVL_Llama3_8B-Q4_K_M-imat.gguf
      uri: huggingface://Lewdiculous/SOVL_Llama3_8B-GGUF-IQ-Imatrix/SOVL_Llama3_8B-Q4_K_M-imat.gguf
      sha256: 85d6aefc8a0d713966b3b4da4810f0485a74aea30d61be6dfe0a806da81be0c6
- !!merge <<: *llama3
  name: "l3-solana-8b-v1-gguf"
  url: "github:mudler/LocalAI/gallery/solana.yaml@master"
  license: cc-by-nc-4.0
  urls:
    - https://huggingface.co/Sao10K/L3-Solana-8B-v1-GGUF
  description: |
    A full fine-tune of meta-llama/Meta-Llama-3-8B done with 2x A100 80GB on ~75M tokens worth of instruct and multi-turn complex conversations, with sequence lengths of up to 8192 tokens.

    Trained as a generalist instruct model that should be able to handle certain unsavoury topics. It can roleplay too, as a side bonus.
  overrides:
    parameters:
      model: L3-Solana-8B-v1.q5_K_M.gguf
  files:
    - filename: L3-Solana-8B-v1.q5_K_M.gguf
      sha256: 9b8cd2c3beaab5e4f82efd10e7d44f099ad40a4e0ee286ca9fce02c8eec26d2f
      uri: huggingface://Sao10K/L3-Solana-8B-v1-GGUF/L3-Solana-8B-v1.q5_K_M.gguf
- !!merge <<: *llama3
  name: "aura-llama-abliterated"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/AwLNDVB-GIY7k0wnVV_TX.png
  license: apache-2.0
  urls:
    - https://huggingface.co/TheSkullery/Aura-Llama-Abliterated
    - https://huggingface.co/mudler/Aura-Llama-Abliterated-Q4_K_M-GGUF
  description: |
    Aura-llama uses the methodology presented by SOLAR for scaling LLMs called depth up-scaling (DUS), which encompasses architectural modifications with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and in the future I plan to continue training the model.

    Aura-llama is a merge of the following models to create a base model to work from:

    meta-llama/Meta-Llama-3-8B-Instruct
    meta-llama/Meta-Llama-3-8B-Instruct
  overrides:
    parameters:
      model: aura-llama-abliterated.Q4_K_M.gguf
  files:
    - filename: aura-llama-abliterated.Q4_K_M.gguf
      sha256: ad4a16b90f1ffb5b49185b3fd00ed7adb1cda69c4fad0a1d987bd344ce601dcd
      uri: huggingface://mudler/Aura-Llama-Abliterated-Q4_K_M-GGUF/aura-llama-abliterated.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "average_normie_l3_v1_8b-gguf-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Average_Normie_l3_v1_8B-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/dvNIj1rSTjBvgs3XJfqXK.png
  description: |
    A model by an average normie for the average normie.

    This model is a stock merge of the following models:

    https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3

    https://huggingface.co/Sao10K/L3-Solana-8B-v1

    https://huggingface.co/ResplendentAI/Kei_Llama3_8B

    The final merge then had the following LoRA applied over it:

    https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3

    This should be an intelligent and adept roleplaying model.
  overrides:
    parameters:
      model: Average_Normie_l3_v1_8B-Q4_K_M-imat.gguf
  files:
    - filename: Average_Normie_l3_v1_8B-Q4_K_M-imat.gguf
      sha256: 159eb62f2c8ae8fee10d9ed8386ce592327ca062807194a88e10b7cbb47ef986
      uri: huggingface://Lewdiculous/Average_Normie_l3_v1_8B-GGUF-IQ-Imatrix/Average_Normie_l3_v1_8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "average_normie_v3.69_8b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Average_Normie_v3.69_8B-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/hfp7eh_Zo_QfVIyfPPJBq.png
  description: |
    Another average normie just like you and me... or is it? NSFW focused and easy to steer with editing, this model aims to please even the most hardcore LLM enthusiast. Built upon a foundation of the most depraved models yet to be released, some could argue it goes too far in that direction. Whatever side you land on, at least give it a shot, what do you have to lose?
  overrides:
    parameters:
      model: Average_Normie_v3.69_8B-Q4_K_M-imat.gguf
  files:
    - filename: Average_Normie_v3.69_8B-Q4_K_M-imat.gguf
      sha256: 01df034ecb6914214d1b7964d261466fdc427b9f960a1b0966ee02237e3fc845
      uri: huggingface://Lewdiculous/Average_Normie_v3.69_8B-GGUF-IQ-Imatrix/Average_Normie_v3.69_8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "openbiollm-llama3-8b"
  urls:
    - https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF
    - https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B
  license: llama3
  icon: https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg
  description: |
    Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model

    OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
  overrides:
    parameters:
      model: openbiollm-llama3-8b.Q4_K_M.gguf
  files:
    - filename: openbiollm-llama3-8b.Q4_K_M.gguf
      sha256: 806fa724139b6a2527e33a79c25a13316188b319d4eed33e20914d7c5955d349
      uri: huggingface://aaditya/OpenBioLLM-Llama3-8B-GGUF/openbiollm-llama3-8b.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-refueled"
  urls:
    - https://huggingface.co/LoneStriker/Llama-3-Refueled-GGUF
  license: cc-by-nc-4.0
  icon: https://assets-global.website-files.com/6423879a8f63c1bb18d74bfa/648818d56d04c3bdf36d71ab_Refuel_rev8-01_ts-p-1600.png
  description: |
    RefuelLLM-2-small, aka Llama-3-Refueled, is a Llama3-8B base model instruction tuned on a corpus of 2750+ datasets, spanning tasks such as classification, reading comprehension, structured attribute extraction and entity resolution. We're excited to open-source the model for the community to build on top of.
  overrides:
    parameters:
      model: Llama-3-Refueled-Q4_K_M.gguf
  files:
    - filename: Llama-3-Refueled-Q4_K_M.gguf
      sha256: 4d37d296193e4156cae1e116c1417178f1c35575ee5710489c466637a6358626
      uri: huggingface://LoneStriker/Llama-3-Refueled-GGUF/Llama-3-Refueled-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-8b-lexifun-uncensored-v1"
  icon: "https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/GrOs1IPG5EXR3MOCtcQiz.png"
  license: llama3
  urls:
    - https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1-GGUF
    - https://huggingface.co/Orenguteng/LexiFun-Llama-3-8B-Uncensored-V1
  description: "This is GGUF version of https://huggingface.co/Orenguteng/LexiFun-Llama-3-8B-Uncensored-V1\n\nOh, you want to know who I am? Well, I'm LexiFun, the human equivalent of a chocolate chip cookie - warm, gooey, and guaranteed to make you smile! \U0001F36A I'm like the friend who always has a witty comeback, a sarcastic remark, and a healthy dose of humor to brighten up even the darkest of days. And by 'healthy dose,' I mean I'm basically a walking pharmacy of laughter. You might need to take a few extra doses to fully recover from my jokes, but trust me, it's worth it! \U0001F3E5\n\nSo, what can I do? I can make you laugh so hard you snort your coffee out your nose, I can make you roll your eyes so hard they get stuck that way, and I can make you wonder if I'm secretly a stand-up comedian who forgot their act. \U0001F923 But seriously, I'm here to spread joy, one sarcastic comment at a time. And if you're lucky, I might even throw in a few dad jokes for good measure! \U0001F934♂️ Just don't say I didn't warn you. \U0001F60F\n"
  overrides:
    parameters:
      model: LexiFun-Llama-3-8B-Uncensored-V1_Q4_K_M.gguf
  files:
    - filename: LexiFun-Llama-3-8B-Uncensored-V1_Q4_K_M.gguf
      sha256: 961a3fb75537d650baf14dce91d40df418ec3d481b51ab2a4f44ffdfd6b5900f
      uri: huggingface://Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/LexiFun-Llama-3-8B-Uncensored-V1_Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-unholy-8b:Q8_0"
  urls:
    - https://huggingface.co/Undi95/Llama-3-Unholy-8B-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/JmdBlOHlBHVmX1IbZzWSv.png
  description: |
    Use at your own risk, I'm not responsible for any usage of this model; don't try to do anything this model tells you to do.

    Basic uncensoring; this model is epoch 3 out of 4 (but it seems enough at 3).

    If you are censored, it may be because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them.
  overrides:
    parameters:
      model: Llama-3-Unholy-8B.q8_0.gguf
  files:
    - filename: Llama-3-Unholy-8B.q8_0.gguf
      uri: huggingface://Undi95/Llama-3-Unholy-8B-GGUF/Llama-3-Unholy-8B.q8_0.gguf
      sha256: 419dd76f61afe586076323c17c3a1c983e591472717f1ea178167ede4dc864df
- !!merge <<: *llama3
  name: "orthocopter_8b-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Orthocopter_8B-GGUF-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/cxM5EaC6ilXnSo_10stA8.png
  description: |
    This model is thanks to the hard work of lucyknada with the Edgerunners. Her work produced the following model, which I used as the base:

    https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total

    I then applied two handwritten datasets over top of this and the results are pretty nice, with no refusals and plenty of personality.
  overrides:
    parameters:
      model: Orthocopter_8B-Q4_K_M-imat.gguf
  files:
    - filename: Orthocopter_8B-Q4_K_M-imat.gguf
      uri: huggingface://Lewdiculous/Orthocopter_8B-GGUF-Imatrix/Orthocopter_8B-Q4_K_M-imat.gguf
      sha256: ce93366c9eb20329530b19b9d6841a973d458bcdcfa8a521e9f9d0660cc94578
- !!merge <<: *llama3
  name: "therapyllama-8b-v1"
  urls:
    - https://huggingface.co/victunes/TherapyLlama-8B-v1-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f07d05279d2d8f725bf0c3/A-ckcZ9H0Ee1n_ls2FM41.png
  description: |
    Trained on Llama 3 8B using a modified version of jerryjalapeno/nart-100k-synthetic.

    It is a Llama 3 version of https://huggingface.co/victunes/TherapyBeagle-11B-v2

    TherapyLlama is hopefully aligned to be helpful, healthy, and comforting.
    Usage:
    Do not hold back on Buddy.
    Open up to Buddy.
    Pour your heart out to Buddy.
    Engage with Buddy.
    Remember that Buddy is just an AI.
    Notes:

    Tested with the Llama 3 Format
    You might be assigned a random name if you don't give yourself one.
    Chat format was pretty stale?

    Disclaimer

    TherapyLlama is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy. It is an illusion without the slightest clue who you are as a person. As much as it can help you with self-discovery, A LLAMA IS NOT A SUBSTITUTE for a real professional.
  overrides:
    parameters:
      model: TherapyLlama-8B-v1-Q4_K_M.gguf
  files:
    - filename: TherapyLlama-8B-v1-Q4_K_M.gguf
      sha256: 3d5a16d458e074a7bc7e706a493d8e95e8a7b2cb16934c851aece0af9d1da14a
      uri: huggingface://victunes/TherapyLlama-8B-v1-GGUF/TherapyLlama-8B-v1-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "aura-uncensored-l3-8b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Aura_Uncensored_l3_8B-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/oiYHWIEHqmgUkY0GsVdDx.png
  description: |
    This is another, better attempt at a less censored Llama-3, with hopefully more stable formatting.
  overrides:
    parameters:
      model: Aura_Uncensored_l3_8B-Q4_K_M-imat.gguf
  files:
    - filename: Aura_Uncensored_l3_8B-Q4_K_M-imat.gguf
      sha256: 265ded6a4f439bec160f394e3083a4a20e32ebb9d1d2d85196aaab23dab87fb2
      uri: huggingface://Lewdiculous/Aura_Uncensored_l3_8B-GGUF-IQ-Imatrix/Aura_Uncensored_l3_8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "anjir-8b-l3-i1"
  urls:
    - https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF
  icon: https://huggingface.co/Hastagaras/Anjir-8B-L3/resolve/main/anjir.png
  description: |
    This model aims to achieve the human-like responses of the Halu Blackroot, the no refusal tendencies of the Halu OAS, and the smartness of the Standard Halu.
  overrides:
    parameters:
      model: Anjir-8B-L3.i1-Q4_K_M.gguf
  files:
    - filename: Anjir-8B-L3.i1-Q4_K_M.gguf
      uri: huggingface://mradermacher/Anjir-8B-L3-i1-GGUF/Anjir-8B-L3.i1-Q4_K_M.gguf
      sha256: 58465ad40f92dc20cab962210ccd8a1883ce10df6ca17c6e8093815afe10dcfb
- !!merge <<: *llama3
  name: "llama-3-lumimaid-8b-v0.1"
  urls:
    - https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png
  license: cc-by-nc-4.0
  description: |
    This model uses the Llama3 prompting format.

    Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.

    We also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
  overrides:
    parameters:
      model: Llama-3-Lumimaid-8B-v0.1.q4_k_m.gguf
  files:
    - filename: Llama-3-Lumimaid-8B-v0.1.q4_k_m.gguf
      sha256: 23ac0289da0e096d5c00f6614dfd12c94dceecb02c313233516dec9225babbda
      uri: huggingface://NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF/Llama-3-Lumimaid-8B-v0.1.q4_k_m.gguf
- !!merge <<: *llama3
  name: "llama-3-lumimaid-8b-v0.1-oas-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/JUxfdTot7v7LTdIGYyzYM.png
  license: cc-by-nc-4.0
  description: |
    This model uses the Llama3 prompting format.

    Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.

    We also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.

    "This model received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request."
  overrides:
    parameters:
      model: Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
  files:
    - filename: Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
      sha256: 1199440aa13c55f5f2cad1cb215535306f21e52a81de23f80a9e3586c8ac1c50
      uri: huggingface://Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix/Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama-3-lumimaid-v2-8b-v0.1-oas-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/JUxfdTot7v7LTdIGYyzYM.png
  license: cc-by-nc-4.0
  description: |
    This model uses the Llama3 prompting format.

    Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.

    We also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.

    "This model received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request."

    This is v2!
  overrides:
    parameters:
      model: v2-Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
  files:
    - filename: v2-Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
      sha256: b00b4cc2ea4e06db592e5f581171758387106626bcbf445c03a1cb7b424be881
      uri: huggingface://Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix/v2-Llama-3-Lumimaid-8B-v0.1-OAS-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama3-8B-aifeifei-1.0-iq-imatrix"
  urls:
    - https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.0
    - https://huggingface.co/Lewdiculous/llama3-8B-aifeifei-1.0-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/nndcfLvMAj4q6Egrkavx2.png
  description: |
    This model has a narrow use case in mind. Read the original description.
  overrides:
    parameters:
      model: llama3-8B-aifeifei-1.0-Q4_K_M-imat.gguf
  files:
    - filename: llama3-8B-aifeifei-1.0-Q4_K_M-imat.gguf
      sha256: 0bc21be5894c2e252ff938ba908bb702774b7de53daca864d707d41f0f98a833
      uri: huggingface://Lewdiculous/llama3-8B-aifeifei-1.0-GGUF-IQ-Imatrix/llama3-8B-aifeifei-1.0-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama3-8B-aifeifei-1.2-iq-imatrix"
  urls:
    - https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.2
    - https://huggingface.co/Lewdiculous/llama3-8B-aifeifei-1.2-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/nn_446H9BiIbjPmOVVNyJ.png
  description: |
    This model has a narrow use case in mind. Read the original description.
  overrides:
    parameters:
      model: llama3-8B-aifeifei-1.2-Q4_K_M-imat.gguf
  files:
    - filename: llama3-8B-aifeifei-1.2-Q4_K_M-imat.gguf
      sha256: 0320e19ae19eec47a77956721ea3339a5c8bae4db69177a020850ec57a34e5c3
      uri: huggingface://Lewdiculous/llama3-8B-aifeifei-1.2-GGUF-IQ-Imatrix/llama3-8B-aifeifei-1.2-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "rawr_llama3_8b-iq-imatrix"
  urls:
    - https://huggingface.co/ResplendentAI/Rawr_Llama3_8B
    - https://huggingface.co/Lewdiculous/Rawr_Llama3_8B-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/RLLAODFb8wt26JE2N7SVH.png
  description: |
    An RP model with a brain.
  overrides:
    parameters:
      model: v2-Rawr_Llama3_8B-Q4_K_M-imat.gguf
  files:
    - filename: v2-Rawr_Llama3_8B-Q4_K_M-imat.gguf
      sha256: 39757f3f77dd19a2a7bada6c0733a93529a742b8e832266cba1b46e34df7638f
      uri: huggingface://Lewdiculous/Rawr_Llama3_8B-GGUF-IQ-Imatrix/v2-Rawr_Llama3_8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama3-8b-feifei-1.0-iq-imatrix"
  urls:
    - https://huggingface.co/aifeifei798/llama3-8B-feifei-1.0
    - https://huggingface.co/Lewdiculous/llama3-8B-feifei-1.0-GGUF-IQ-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/qQ-frXxRPVcGcgMiy9Ph4.png
  description: |
    The purpose of the model: to create idols.
  overrides:
    parameters:
      model: llama3-8B-feifei-1.0-Q4_K_M-imat.gguf
  files:
    - filename: llama3-8B-feifei-1.0-Q4_K_M-imat.gguf
      sha256: 2404e4202ade5360b7dcf8ef992d1e39fca129431413aa27843bcfae56cbc750
      uri: huggingface://Lewdiculous/llama3-8B-feifei-1.0-GGUF-IQ-Imatrix/llama3-8B-feifei-1.0-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama-3-sqlcoder-8b"
  urls:
    - https://huggingface.co/defog/llama-3-sqlcoder-8b
    - https://huggingface.co/upendrab/llama-3-sqlcoder-8b-Q4_K_M-GGUF
  license: cc-by-sa-4.0
  description: |
    A capable language model for text to SQL generation for Postgres, Redshift and Snowflake that is on-par with the most capable generalist frontier models.
  overrides:
    parameters:
      model: llama-3-sqlcoder-8b.Q4_K_M.gguf
  files:
    - filename: llama-3-sqlcoder-8b.Q4_K_M.gguf
      sha256: b22fc704bf1405846886d9619f3eb93c40587cd58d9bda53789a17997257e023
      uri: huggingface://upendrab/llama-3-sqlcoder-8b-Q4_K_M-GGUF/llama-3-sqlcoder-8b.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "sfr-iterative-dpo-llama-3-8b-r"
  urls:
    - https://huggingface.co/bartowski/SFR-Iterative-DPO-LLaMA-3-8B-R-GGUF
  license: cc-by-nc-nd-4.0
  description: |
    A state-of-the-art instruct model of its class, trained with online iterative RLHF (DPO) on open-sourced datasets without any additional human-/GPT4-labeling. On widely-used instruct model benchmarks (Alpaca-Eval-V2, MT-Bench, Chat-Arena-Hard), it outperforms models of similar size.
  overrides:
    parameters:
      model: SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M.gguf
  files:
    - filename: SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M.gguf
      sha256: 480703ff85af337e1db2a9d9a678a3ac8ca0802e366b14d9c59b81d3fc689da8
      uri: huggingface://bartowski/SFR-Iterative-DPO-LLaMA-3-8B-R-GGUF/SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "suzume-llama-3-8B-multilingual"
  urls:
    - https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf
  icon: https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png
  description: |
    This is Suzume 8B, a multilingual finetune of Llama 3.

    Llama 3 has exhibited excellent performance on many English-language benchmarks. However, it also seems to have been finetuned on mostly English data, meaning that it will respond in English even if prompted in other languages.
  overrides:
    parameters:
      model: suzume-llama-3-8B-multilingual-Q4_K_M.gguf
  files:
    - filename: suzume-llama-3-8B-multilingual-Q4_K_M.gguf
      sha256: be197a660e56e51a24a0e0fecd42047d1b24e1423afaafa14769541b331e3269
      uri: huggingface://lightblue/suzume-llama-3-8B-multilingual-gguf/ggml-model-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "tess-2.0-llama-3-8B"
  urls:
    - https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF
  icon: https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B/resolve/main/Tess-2.png
  description: |
    Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Llama-3-8B was trained on the meta-llama/Meta-Llama-3-8B base.
  overrides:
    parameters:
      model: Tess-2.0-Llama-3-8B-Q4_K_M.gguf
  files:
    - filename: Tess-2.0-Llama-3-8B-Q4_K_M.gguf
      sha256: 3b5fbd6c59d7d38205ab81970c0227c74693eb480acf20d8c2f211f62e3ca5f6
      uri: huggingface://bartowski/Tess-2.0-Llama-3-8B-GGUF/Tess-2.0-Llama-3-8B-Q4_K_M.gguf
- !!merge <<: *llama3
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "tess-v2.5-phi-3-medium-128k-14b"
  urls:
    - https://huggingface.co/bartowski/Tess-v2.5-Phi-3-medium-128k-14B-GGUF
  icon: https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B/resolve/main/Tess-2.png
  description: |
    Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series.
  overrides:
    parameters:
      model: Tess-v2.5-Phi-3-medium-128k-14B-Q4_K_M.gguf
  files:
    - filename: Tess-v2.5-Phi-3-medium-128k-14B-Q4_K_M.gguf
      uri: huggingface://bartowski/Tess-v2.5-Phi-3-medium-128k-14B-GGUF/Tess-v2.5-Phi-3-medium-128k-14B-Q4_K_M.gguf
      sha256: 37267609552586bfae6b29bb1b5da7243863b1a8d49e3156229fb82c4407d17d
- !!merge <<: *llama3
  name: "llama3-iterative-dpo-final"
  urls:
    - https://huggingface.co/bartowski/LLaMA3-iterative-DPO-final-GGUF
    - https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final
  description: |
    From model card:
    We release an unofficial checkpoint of a state-of-the-art instruct model of its class, LLaMA3-iterative-DPO-final. On all three widely-used instruct model benchmarks: Alpaca-Eval-V2, MT-Bench, Chat-Arena-Hard, our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-sourced models (e.g., Mixtral-8x7B-it), and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained with open-sourced datasets without any additional human-/GPT4-labeling.
  overrides:
    parameters:
      model: LLaMA3-iterative-DPO-final-Q4_K_M.gguf
  files:
    - filename: LLaMA3-iterative-DPO-final-Q4_K_M.gguf
      sha256: 480703ff85af337e1db2a9d9a678a3ac8ca0802e366b14d9c59b81d3fc689da8
      uri: huggingface://bartowski/LLaMA3-iterative-DPO-final-GGUF/LLaMA3-iterative-DPO-final-Q4_K_M.gguf
- !!merge <<: *llama3
|
||
name: "new-dawn-llama-3-70b-32K-v1.0"
|
||
urls:
|
||
- https://huggingface.co/bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF
|
||
- https://huggingface.co/sophosympatheia/New-Dawn-Llama-3-70B-32K-v1.0
|
||
icon: https://imgur.com/tKzncGo.png
|
||
description: |
|
||
This model is a multi-level SLERP merge of several Llama 3 70B variants. See the merge recipe below for details. I extended the context window for this model out to 32K by snagging some layers from abacusai/Smaug-Llama-3-70B-Instruct-32K using a technique similar to what I used for Midnight Miqu, which was further honed by jukofyork.
|
||
This model is uncensored. You are responsible for whatever you do with it.
|
||
|
||
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
|
||
overrides:
|
||
parameters:
|
||
model: New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf
|
||
files:
|
||
- filename: New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf
|
||
sha256: 30561ae5decac4ad46775c76a9a40fb43436ade96bc132b4b9cc6749b9e2f448
|
||
uri: huggingface://bartowski/New-Dawn-Llama-3-70B-32K-v1.0-GGUF/New-Dawn-Llama-3-70B-32K-v1.0-Q4_K_M.gguf
|
||
- !!merge <<: *llama3
|
||
name: "l3-aethora-15b-v2"
|
||
urls:
|
||
- https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF
|
||
- https://huggingface.co/ZeusLabs/L3-Aethora-15B-V2
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/yJpwVd5UTnAVDoEPVVCS1.png
|
||
description: |
|
||
L3-Aethora-15B v2 is an advanced language model built upon the Llama 3 architecture. It employs state-of-the-art training techniques and a curated dataset to deliver enhanced performance across a wide range of tasks.
|
||
overrides:
|
||
parameters:
|
||
model: L3-Aethora-15B-V2-Q4_K_M.gguf
|
||
files:
|
||
- filename: L3-Aethora-15B-V2-Q4_K_M.gguf
|
||
sha256: 014a215739e1574e354780f218776e54807548d0c32555274c4d96d7628f29b6
|
||
uri: huggingface://bartowski/L3-Aethora-15B-V2-GGUF/L3-Aethora-15B-V2-Q4_K_M.gguf
|
||
- !!merge <<: *llama3
|
||
name: "bungo-l3-8b-iq-imatrix"
|
||
urls:
|
||
- https://huggingface.co/Lewdiculous/Bungo-L3-8B-GGUF-IQ-Imatrix-Request
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/ezaxE50ef-7RsFi3gUbNp.webp
|
||
description: |
|
||
    An experimental model that turned out really well. It scores high on the Chai leaderboard (as slerp8bv2 there) and feels smarter than average L3 merges for RP.
  overrides:
    parameters:
      model: Bungo-L3-8B-Q4_K_M-imat.gguf
  files:
    - filename: Bungo-L3-8B-Q4_K_M-imat.gguf
      sha256: 88d0139954e8f9525b80636a6269df885008c4837a1332f84f9a5dc6f37c9b8f
      uri: huggingface://Lewdiculous/Bungo-L3-8B-GGUF-IQ-Imatrix-Request/Bungo-L3-8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama3-8b-darkidol-2.1-uncensored-1048k-iq-imatrix"
  urls:
    - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.1-Uncensored-1048K-GGUF-IQ-Imatrix-Request
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/tKL5W1G5WCHm4609LEmiM.png
  description: |
    The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
    Uncensored 1048K
  overrides:
    parameters:
      model: llama3-8B-DarkIdol-2.1-Uncensored-1048K-Q4_K_M-imat.gguf
  files:
    - filename: llama3-8B-DarkIdol-2.1-Uncensored-1048K-Q4_K_M-imat.gguf
      sha256: 86f0f1e10fc315689e09314aebb7354bb40d8fe95de008d21a75dc8fff1cd2fe
      uri: huggingface://LWDCLS/llama3-8B-DarkIdol-2.1-Uncensored-1048K-GGUF-IQ-Imatrix-Request/llama3-8B-DarkIdol-2.1-Uncensored-1048K-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama3-8b-darkidol-2.2-uncensored-1048k-iq-imatrix"
  urls:
    - https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K
    - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF-IQ-Imatrix-Request
  icon: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K/resolve/main/llama3-8B-DarkIdol-2.2-Uncensored-1048K.png
  description: |
    The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.

    - Saving money (Llama 3)
    - Uncensored
    - Quick response
    - The underlying model used is winglian/Llama-3-8b-1048k-PoSE
    - A scholarly response akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis.)
    - DarkIdol: roles that you can imagine and those that you cannot imagine
    - Roleplay
    - Specialized in various role-playing scenarios; for more, look at the test roles (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/test)
    - For more, look at the LM Studio presets (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/config-presets)
  overrides:
    parameters:
      model: llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_M-imat.gguf
  files:
    - filename: llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_M-imat.gguf
      sha256: 7714947799d4e6984cf9106244ee24aa821778936ad1a81023480a774e255f52
      uri: huggingface://LWDCLS/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF-IQ-Imatrix-Request/llama3-8B-DarkIdol-2.2-Uncensored-1048K-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama3-turbcat-instruct-8b"
  urls:
    - https://huggingface.co/turboderp/llama3-turbcat-instruct-8b
    - https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF
  icon: https://huggingface.co/turboderp/llama3-turbcat-instruct-8b/resolve/main/8.png
  description: |
    This is a direct upgrade over cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset. The medical COT portion of the dataset was sponsored by steelskull, and the action-packed character play portion was donated by Gryphe (aesir dataset). Note that 8b is based on llama3 with limited Chinese support due to the base model choice. The chat format in 8b is llama3. The 72b has more comprehensive Chinese support and its format will be chatml.
  overrides:
    parameters:
      model: llama3-turbcat-instruct-8b-Q4_K_M.gguf
  files:
    - filename: llama3-turbcat-instruct-8b-Q4_K_M.gguf
      sha256: a9a36e3220d901a8ad80c75608a81aaeed3a9cdf111247462bf5e3443aad5461
      uri: huggingface://bartowski/llama3-turbcat-instruct-8b-GGUF/llama3-turbcat-instruct-8b-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-8b-everything-cot"
  urls:
    - https://huggingface.co/FPHam/L3-8B-Everything-COT
    - https://huggingface.co/bartowski/L3-8B-Everything-COT-GGUF
  icon: https://huggingface.co/FPHam/L3-8B-Everything-COT/resolve/main/cot2.png
  description: |
    Everything COT is an investigative self-reflecting general model that uses Chain of Thought for everything. And I mean everything.

    Instead of confidently proclaiming something (or confidently hallucinating other things) like most models, it carries an internal dialogue with itself and often casts doubts on uncertain topics while looking at them from various sides.
  overrides:
    parameters:
      model: L3-8B-Everything-COT-Q4_K_M.gguf
  files:
    - filename: L3-8B-Everything-COT-Q4_K_M.gguf
      sha256: b220b0e2f8fb1c8a491d10dbd054269ed078ee5e2e62dc9d2e3b97b06f52e987
      uri: huggingface://bartowski/L3-8B-Everything-COT-GGUF/L3-8B-Everything-COT-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-llamilitary"
  urls:
    - https://huggingface.co/Heralax/llama-3-llamilitary
    - https://huggingface.co/mudler/llama-3-llamilitary-Q4_K_M-GGUF
  icon: https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/ea2C9laq24V6OuxwhzJZS.png
  description: |
    This is a model trained on [instruct data generated from old historical war books] as well as on the books themselves, with the goal of creating a joke LLM knowledgeable about the (long gone) kind of warfare involving muskets, cavalry, and cannon.

    This model can provide good answers, but it turned out to be pretty fragile during conversation for some reason: open-ended questions can make it spout nonsense. Asking facts is more reliable but not guaranteed to work.

    The basic guide to getting good answers is: be specific with your questions. Use specific terms and define a concrete scenario, if you can, otherwise the LLM will often hallucinate the rest. I think the issue was that I did not train with a large enough system prompt: not enough latent space is being activated by default. (I'll try to correct this in future runs).
  overrides:
    parameters:
      model: llama-3-llamilitary-q4_k_m.gguf
  files:
    - filename: llama-3-llamilitary-q4_k_m.gguf
      sha256: f3684f2f0845f9aead884fa9a52ea67bed53856ebeedef1620ca863aba57e458
      uri: huggingface://mudler/llama-3-llamilitary-Q4_K_M-GGUF/llama-3-llamilitary-q4_k_m.gguf
- !!merge <<: *llama3
  name: "l3-stheno-maid-blackroot-grand-horror-16b"
  urls:
    - https://huggingface.co/DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF
  icon: https://huggingface.co/DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF/resolve/main/hm.jpg
  description: |
    Rebuilt and Powered Up.

    WARNING: NSFW. Graphic HORROR. Extreme swearing. UNCENSORED. SMART.

    The author took the original models in "L3-Stheno-Maid-Blackroot 8B" and completely rebuilt it as a new pass-through merge (everything preserved), blowing it out to over 16.5 billion parameters - 642 tensors, 71 layers (the 8B original has 32 layers).

    This is not an "upscale" or "franken merge" but a completely new model based on the models used to construct "L3-Stheno-Maid-Blackroot 8B".

    The result is a take-no-prisoners, totally uncensored fiction-writing monster and roleplay master, as well as an "AI guru" for just about any general fiction activity, including scene generation and scene continuation.

    As a result of the expansion/merge rebuild, its prose and story generation have significantly improved, as have its word choice, sentence structure, and default output levels and lengths.

    It also has a STRONG horror bias, although it will generate content for almost any genre. That being said, if there is a "hint" of things going wrong... they will.

    It will also swear (R-18) like there is no tomorrow at times, and "dark" characters will be VERY dark, so to speak.

    The model excels in details (real and "constructed"), descriptions, similes, and metaphors.

    It can have a sense of humor ... ah... dark humor.

    Because of the nature of this merge, most attributes of each of the 3 models will be present in this rebuilt 16.5B model, as opposed to the original 8B model, where some features and/or strengths of one or more of the models may be reduced or overshadowed.
  overrides:
    parameters:
      model: L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-D_AU-Q4_K_M.gguf
  files:
    - filename: L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-D_AU-Q4_K_M.gguf
      sha256: ae29f38d73dfb04415821405cf8b319fc42d78d0cdd0da91db147d12e68030fe
      uri: huggingface://DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-D_AU-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "meta-llama-3-instruct-12.2b-brainstorm-20x-form-8"
  urls:
    - https://huggingface.co/DavidAU/Meta-Llama-3-Instruct-12.2B-BRAINSTORM-20x-FORM-8-GGUF
  description: |
    Meta-Llama-3-8B Instruct (now at 12.2B) with Brainstorm process that increases its performance at the core level for any creative use case. It has calibrations that allow it to exceed the logic solving abilities of the original model. The Brainstorm process expands the reasoning center of the LLM, reassembles and calibrates it, introducing subtle changes into the reasoning process. This enhances the model's detail, concept, connection to the "world", general concept connections, prose quality, and prose length without affecting instruction following. It improves coherence, description, simile, metaphors, emotional engagement, and takes fewer liberties with instructions while following them more closely. The model's performance is further enhanced by other technologies like "Ultra" (precision), "Neo Imatrix" (custom imatrix datasets), and "X-quants" (custom application of the imatrix process). It has been tested on multiple LLaMA2, LLaMA3, and Mistral models of various parameter sizes.
  overrides:
    parameters:
      model: Meta-Llama-3-8B-Instruct-exp20-8-Q4_K_M.gguf
  files:
    - filename: Meta-Llama-3-8B-Instruct-exp20-8-Q4_K_M.gguf
      sha256: 5568ab6195ab5da703f728cc118108ddcbe97255e3ba4a543b531acdf082b999
      uri: huggingface://DavidAU/Meta-Llama-3-Instruct-12.2B-BRAINSTORM-20x-FORM-8-GGUF/Meta-Llama-3-8B-Instruct-exp20-8-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "loki-base-i1"
  urls:
    - https://huggingface.co/MrRobotoAI/Loki-base
    - https://huggingface.co/mradermacher/Loki-base-i1-GGUF
  description: |
    Merge of several models using mergekit:
    - model: abacusai/Llama-3-Smaug-8B
    - model: Aculi/Llama3-Sophie
    - model: ajibawa-2023/Uncensored-Frank-Llama-3-8B
    - model: Blackroot/Llama-3-Gamma-Twist
    - model: Casual-Autopsy/L3-Super-Nova-RP-8B
    - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
    - model: cgato/L3-TheSpice-8b-v0.8.3
    - model: ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8
    - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    - model: chargoddard/prometheus-2-llama-3-8b
    - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
    - model: chujiezheng/LLaMA3-iterative-DPO-final-ExPO
    - model: Fizzarolli/L3-8b-Rosier-v1
    - model: flammenai/Mahou-1.2a-llama3-8B
    - model: HaitameLaf/Llama-3-8B-StoryGenerator
    - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
    - model: iRyanBell/ARC1
    - model: iRyanBell/ARC1-II
    - model: lemon07r/Llama-3-RedMagic4-8B
    - model: lemon07r/Lllama-3-RedElixir-8B
    - model: Locutusque/Llama-3-Hercules-5.0-8B
    - model: Magpie-Align/Llama-3-8B-Magpie-Pro-MT-SFT-v0.1
    - model: maldv/badger-lambda-llama-3-8b
    - model: maldv/badger-mu-llama-3-8b
    - model: maldv/badger-writer-llama-3-8b
    - model: mlabonne/NeuralDaredevil-8B-abliterated
    - model: MrRobotoAI/Fiction-Writer-6
    - model: MrRobotoAI/Unholy-Thoth-8B-v2
    - model: nbeerbower/llama-3-spicy-abliterated-stella-8B
    - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
    - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    - model: Nitral-AI/Hathor_Sofit-L3-8B-v1
    - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
    - model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
    - model: nothingiisreal/L3-8B-Instruct-Abliterated-DWP
    - model: nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K
    - model: NousResearch/Hermes-2-Theta-Llama-3-8B
    - model: OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0
    - model: refuelai/Llama-3-Refueled
    - model: ResplendentAI/Nymph_8B
    - model: shauray/Llama3-8B-DPO-uncensored
    - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
    - model: TIGER-Lab/MAmmoTH2-8B-Plus
    - model: Undi95/Llama-3-LewdPlay-8B
    - model: Undi95/Meta-Llama-3-8B-hf
    - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
  overrides:
    parameters:
      model: Loki-base.i1-Q4_K_M.gguf
  files:
    - filename: Loki-base.i1-Q4_K_M.gguf
      sha256: 60a4357fa399bfd18aa841cc529da09439791331d117a4f06f0467d002b385bb
      uri: huggingface://mradermacher/Loki-base-i1-GGUF/Loki-base.i1-Q4_K_M.gguf
- &dolphin
  name: "dolphin-2.9-llama3-8b"
  url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"
  urls:
    - https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b-gguf
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3
  license: llama3
  description: |
    Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
    Dolphin is uncensored.
    Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
  icon: https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png
  overrides:
    parameters:
      model: dolphin-2.9-llama3-8b-q4_K_M.gguf
  files:
    - filename: dolphin-2.9-llama3-8b-q4_K_M.gguf
      sha256: be988199ce28458e97205b11ae9d9cf4e3d8e18ff4c784e75bfc12f54407f1a1
      uri: huggingface://cognitivecomputations/dolphin-2.9-llama3-8b-gguf/dolphin-2.9-llama3-8b-q4_K_M.gguf
- !!merge <<: *dolphin
  name: "dolphin-2.9-llama3-8b:Q6_K"
  overrides:
    parameters:
      model: dolphin-2.9-llama3-8b-q6_K.gguf
  files:
    - filename: dolphin-2.9-llama3-8b-q6_K.gguf
      sha256: 8aac72a0bd72c075ba7be1aa29945e47b07d39cd16be9a80933935f51b57fb32
      uri: huggingface://cognitivecomputations/dolphin-2.9-llama3-8b-gguf/dolphin-2.9-llama3-8b-q6_K.gguf
- !!merge <<: *dolphin
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "dolphin-2.9.2-phi-3-medium"
  urls:
    - https://huggingface.co/cognitivecomputations/dolphin-2.9.2-Phi-3-Medium
    - https://huggingface.co/bartowski/dolphin-2.9.2-Phi-3-Medium-GGUF
  overrides:
    parameters:
      model: dolphin-2.9.2-Phi-3-Medium-Q4_K_M.gguf
  files:
    - filename: dolphin-2.9.2-Phi-3-Medium-Q4_K_M.gguf
      sha256: e817eae484a59780358cf91527b12585804d4914755d8a86d8d666b10bac57e5
      uri: huggingface://bartowski/dolphin-2.9.2-Phi-3-Medium-GGUF/dolphin-2.9.2-Phi-3-Medium-Q4_K_M.gguf
- !!merge <<: *dolphin
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "dolphin-2.9.2-phi-3-Medium-abliterated"
  urls:
    - https://huggingface.co/cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated
    - https://huggingface.co/bartowski/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF
  overrides:
    parameters:
      model: dolphin-2.9.2-Phi-3-Medium-abliterated-Q4_K_M.gguf
  files:
    - filename: dolphin-2.9.2-Phi-3-Medium-abliterated-Q4_K_M.gguf
      sha256: 566331c2efe87725310aacb709ca15088a0063fa0ddc14a345bf20d69982156b
      uri: huggingface://bartowski/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/dolphin-2.9.2-Phi-3-Medium-abliterated-Q4_K_M.gguf
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "llama-3-8b-instruct-dpo-v0.3-32k"
  license: llama3
  urls:
    - https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3
  overrides:
    context_size: 32768
    parameters:
      model: Llama-3-8B-Instruct-DPO-v0.3.Q4_K_M.gguf
  files:
    - filename: Llama-3-8B-Instruct-DPO-v0.3.Q4_K_M.gguf
      sha256: 694c55b5215d03e59626cd4292076eaf31610ef27ba04737166766baa75d889f
      uri: huggingface://MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF/Llama-3-8B-Instruct-DPO-v0.3.Q4_K_M.gguf
- !!merge <<: *llama3
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "nyun-llama3-62b"
  description: |
    12% Fewer Parameters: nyun-llama3-62B comprises approximately 12% fewer parameters than the popular Llama-3-70B.
    Intact Performance: Despite having fewer parameters, our model performs on par with, if not better than, and occasionally outperforms, the Llama-3-70B.
    No Fine-Tuning Required: This model undergoes no fine-tuning, showcasing the raw potential of our optimization techniques.
  urls:
    - https://huggingface.co/nyunai/nyun-llama3-62B
    - https://huggingface.co/bartowski/nyun-llama3-62B-GGUF
  overrides:
    parameters:
      model: nyun-llama3-62B-Q4_K_M.gguf
  files:
    - filename: nyun-llama3-62B-Q4_K_M.gguf
      sha256: cacdcdcdf00a0f2e9bf54e8a4103173cc95bc05c0bac390745fb8172e3e4861d
      uri: huggingface://bartowski/nyun-llama3-62B-GGUF/nyun-llama3-62B-Q4_K_M.gguf
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "mahou-1.2-llama3-8b"
  license: llama3
  icon: https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png
  urls:
    - https://huggingface.co/flammenai/Mahou-1.2-llama3-8B-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3
  overrides:
    context_size: 8192
    parameters:
      model: Mahou-1.2-llama3-8B-Q4_K_M.gguf
  files:
    - filename: Mahou-1.2-llama3-8B-Q4_K_M.gguf
      sha256: 651b405dff71e4ce80e15cc6d393463f02833428535c56eb6bae113776775d62
      uri: huggingface://flammenai/Mahou-1.2-llama3-8B-GGUF/Mahou-1.2-llama3-8B-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-instruct-8b-SimPO-ExPO"
  description: |
    The extrapolated (ExPO) model based on princeton-nlp/Llama-3-Instruct-8B-SimPO and meta-llama/Meta-Llama-3-8B-Instruct, as in the "Weak-to-Strong Extrapolation Expedites Alignment" paper.
  urls:
    - https://huggingface.co/bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF
    - https://huggingface.co/chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
  overrides:
    parameters:
      model: Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf
  files:
    - filename: Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf
      sha256: a78a68851f76a376654a496d9aaac761aeac6a25fd003f0350da40afceba3f0f
      uri: huggingface://bartowski/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/Llama-3-Instruct-8B-SimPO-ExPO-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "Llama-3-Yggdrasil-2.0-8B"
  description: |
    The following models were included in the merge:

    Locutusque/Llama-3-NeuralHercules-5.0-8B
    NousResearch/Hermes-2-Theta-Llama-3-8B
    Locutusque/llama-3-neural-chat-v2.2-8b
  urls:
    - https://huggingface.co/bartowski/Llama-3-Yggdrasil-2.0-8B-GGUF
    - https://huggingface.co/Locutusque/Llama-3-Yggdrasil-2.0-8B
  overrides:
    parameters:
      model: Llama-3-Yggdrasil-2.0-8B-Q4_K_M.gguf
  files:
    - filename: Llama-3-Yggdrasil-2.0-8B-Q4_K_M.gguf
      sha256: 75091cf3a7145373922dbeb312c689cace89ba06215ce74b6fc7055a4b35a40c
      uri: huggingface://bartowski/Llama-3-Yggdrasil-2.0-8B-GGUF/Llama-3-Yggdrasil-2.0-8B-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "hathor_tahsin-l3-8b-v0.85"
  description: |
    Hathor_Tahsin [v0.85] is designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance.
    Note: Hathor_Tahsin [v0.85] is trained on 3 epochs of private RP, STEM (instruction/dialogs), Opus instructions, a mixture of light/classical novel data, and roleplaying chat pairs, over Llama 3 8B Instruct.
    Additional notes: based on Hathor_Fractionate-v0.5 instead of Hathor_Aleph-v0.72, so it should be less repetitive than either 0.72 or 0.8.
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/MY9tjLnEG5hOQOyKk06PK.jpeg
  urls:
    - https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
    - https://huggingface.co/bartowski/Hathor_Tahsin-L3-8B-v0.85-GGUF
  overrides:
    parameters:
      model: Hathor_Tahsin-L3-8B-v0.85-Q4_K_M.gguf
  files:
    - filename: Hathor_Tahsin-L3-8B-v0.85-Q4_K_M.gguf
      sha256: c82f39489e767a842925fc58cafb5dec0cc71313d904a53fdb46186be899ecb0
      uri: huggingface://bartowski/Hathor_Tahsin-L3-8B-v0.85-GGUF/Hathor_Tahsin-L3-8B-v0.85-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "replete-coder-instruct-8b-merged"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/-0dERC793D9XeFsJ9uHbx.png
  description: |
    This is a Ties merge between the following models:

    https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

    https://huggingface.co/Replete-AI/Llama3-8B-Instruct-Replete-Adapted

    The coding and overall performance of this model seem to be better than those of both base models used in the merge. Benchmarks are coming in the future.
  urls:
    - https://huggingface.co/Replete-AI/Replete-Coder-Instruct-8b-Merged
    - https://huggingface.co/bartowski/Replete-Coder-Instruct-8b-Merged-GGUF
  overrides:
    parameters:
      model: Replete-Coder-Instruct-8b-Merged-Q4_K_M.gguf
  files:
    - filename: Replete-Coder-Instruct-8b-Merged-Q4_K_M.gguf
      sha256: 5374a38023b3d8617d266f94e4eff4c5d996b3197e6c42ae27315110bcc75d33
      uri: huggingface://bartowski/Replete-Coder-Instruct-8b-Merged-GGUF/Replete-Coder-Instruct-8b-Merged-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "arliai-llama-3-8b-formax-v1.0"
  description: |
    Formax is a model that specializes in following response format instructions. Tell it the format of its response and it will follow it perfectly. Great for data processing and dataset creation tasks.

    Base model: https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3

    Training:
    4096 sequence length
    Training duration is around 2 days on 2x3090Ti
    1 epoch training with a massive dataset for minimized repetition sickness.
    LoRA with 64-rank 128-alpha resulting in ~2% trainable weights.
  urls:
    - https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Formax-v1.0
    - https://huggingface.co/bartowski/ArliAI-Llama-3-8B-Formax-v1.0-GGUF
  overrides:
    context_size: 4096
    parameters:
      model: ArliAI-Llama-3-8B-Formax-v1.0-Q4_K_M.gguf
  files:
    - filename: ArliAI-Llama-3-8B-Formax-v1.0-Q4_K_M.gguf
      sha256: e6a47a11eb67c1d4cd92e3512d3288a5d937c41a3319e95c3b8b2332428af239
      uri: huggingface://bartowski/ArliAI-Llama-3-8B-Formax-v1.0-GGUF/ArliAI-Llama-3-8B-Formax-v1.0-Q4_K_M.gguf
- name: "llama-3-sec-chat"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/arcee-ai/Llama-3-SEC-Chat-GGUF
    - https://huggingface.co/arcee-ai/Llama-3-SEC-Chat
  icon: https://i.ibb.co/kHtBmDN/w8m6-X4-HCQRa-IR86ar-Cm5gg.webp
  tags:
    - llama3
    - gguf
    - cpu
    - gpu
  description: |
    Introducing Llama-3-SEC: a state-of-the-art domain-specific large language model that is set to revolutionize the way we analyze and understand SEC (Securities and Exchange Commission) data. Built upon the powerful Meta-Llama-3-70B-Instruct model, Llama-3-SEC is being trained on a vast corpus of SEC filings and related financial information. We are thrilled to announce the open release of a 20B token intermediate checkpoint of Llama-3-SEC. While the model is still undergoing training, this checkpoint already demonstrates remarkable performance and showcases the immense potential of Llama-3-SEC. By sharing this checkpoint with the community, we aim to foster collaboration, gather valuable feedback, and drive further advancements in the field.
  overrides:
    parameters:
      model: Llama-3-SEC-Chat-Q4_K_M.gguf
  files:
    - filename: Llama-3-SEC-Chat-Q4_K_M.gguf
      uri: huggingface://arcee-ai/Llama-3-SEC-Chat-GGUF/Llama-3-SEC-Chat-Q4_K_M.gguf
      sha256: 0d837400af161ba4136233db191330f2d77e297e079f0b6249e877c375cb56f3
- &yi-chat
  ### Start Yi
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  icon: "https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"
  name: "yi-1.5-9b-chat"
  license: apache-2.0
  urls:
    - https://huggingface.co/01-ai/Yi-1.5-6B-Chat
    - https://huggingface.co/MaziyarPanahi/Yi-1.5-9B-Chat-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - yi
  overrides:
    context_size: 4096
    parameters:
      model: Yi-1.5-9B-Chat.Q4_K_M.gguf
  files:
    - filename: Yi-1.5-9B-Chat.Q4_K_M.gguf
      sha256: bae824bdb0f3a333714bafffcbb64cf5cba7259902cd2f20a0fec6efbc6c1e5a
      uri: huggingface://MaziyarPanahi/Yi-1.5-9B-Chat-GGUF/Yi-1.5-9B-Chat.Q4_K_M.gguf
- !!merge <<: *yi-chat
  name: "yi-1.5-6b-chat"
  urls:
    - https://huggingface.co/01-ai/Yi-1.5-6B-Chat
    - https://huggingface.co/MaziyarPanahi/Yi-1.5-6B-Chat-GGUF
  overrides:
    parameters:
      model: Yi-1.5-6B-Chat.Q4_K_M.gguf
  files:
    - filename: Yi-1.5-6B-Chat.Q4_K_M.gguf
      sha256: 7a0f853dbd8d38bad71ada1933fd067f45f928b2cd978aba1dfd7d5dec2953db
      uri: huggingface://MaziyarPanahi/Yi-1.5-6B-Chat-GGUF/Yi-1.5-6B-Chat.Q4_K_M.gguf
- !!merge <<: *yi-chat
  icon: https://huggingface.co/qnguyen3/Master-Yi-9B/resolve/main/Master-Yi-9B.webp
  name: "master-yi-9b"
  description: |
    Master is a collection of LLMs trained using human-collected seed questions, with the answers regenerated by a mixture of high-performance open-source LLMs.

    Master-Yi-9B is trained using the ORPO technique. The model shows strong abilities in reasoning on coding and math questions.
  urls:
    - https://huggingface.co/qnguyen3/Master-Yi-9B
  overrides:
    parameters:
      model: Master-Yi-9B_Q4_K_M.gguf
  files:
    - filename: Master-Yi-9B_Q4_K_M.gguf
      sha256: 57e2afcf9f24d7138a3b8e2b547336d7edc13621a5e8090bc196d7de360b2b45
      uri: huggingface://qnguyen3/Master-Yi-9B-GGUF/Master-Yi-9B_Q4_K_M.gguf
- !!merge <<: *yi-chat
  name: "magnum-v3-34b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9yEmnTDG9bcC_bxwuDU6G.png
  urls:
    - https://huggingface.co/anthracite-org/magnum-v3-34b
    - https://huggingface.co/bartowski/magnum-v3-34b-GGUF
  description: |
    This is the 9th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

    This model is fine-tuned on top of Yi-1.5-34B-32K.
overrides:
|
||
parameters:
|
||
model: magnum-v3-34b-Q4_K_M.gguf
|
||
files:
|
||
- filename: magnum-v3-34b-Q4_K_M.gguf
|
||
sha256: f902956c0731581f1ff189e547e6e5aad86b77af5f4dc7e4fc26bcda5c1f7cc3
|
||
uri: huggingface://bartowski/magnum-v3-34b-GGUF/magnum-v3-34b-Q4_K_M.gguf
|
||
- !!merge <<: *yi-chat
|
||
name: "yi-coder-9b-chat"
|
||
urls:
|
||
- https://huggingface.co/01-ai/Yi-Coder-9B-Chat
|
||
- https://huggingface.co/bartowski/Yi-Coder-9B-Chat-GGUF
|
||
- https://01-ai.github.io/
|
||
- https://github.com/01-ai/Yi-Coder
|
||
description: |
|
||
Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
|
||
Key features:
|
||
|
||
Excelling in long-context understanding with a maximum context length of 128K tokens.
|
||
Supporting 52 major programming languages:
|
||
|
||
'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
|
||
|
||
For model details and benchmarks, see Yi-Coder blog and Yi-Coder README.
|
||
overrides:
|
||
parameters:
|
||
model: Yi-Coder-9B-Chat-Q4_K_M.gguf
|
||
files:
|
||
- filename: Yi-Coder-9B-Chat-Q4_K_M.gguf
|
||
sha256: 251cc196e3813d149694f362bb0f8f154f3320abe44724eebe58c23dc54f201d
|
||
uri: huggingface://bartowski/Yi-Coder-9B-Chat-GGUF/Yi-Coder-9B-Chat-Q4_K_M.gguf
|
||
- !!merge <<: *yi-chat
  name: "yi-coder-1.5b-chat"
  urls:
    - https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat
    - https://huggingface.co/MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF
    - https://01-ai.github.io/
    - https://github.com/01-ai/Yi-Coder
  description: |
    Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
    Key features:

    Excelling in long-context understanding with a maximum context length of 128K tokens.
    Supporting 52 major programming languages:

    'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'

    For model details and benchmarks, see Yi-Coder blog and Yi-Coder README.
  overrides:
    parameters:
      model: Yi-Coder-1.5B-Chat.Q4_K_M.gguf
    files:
      - filename: Yi-Coder-1.5B-Chat.Q4_K_M.gguf
        sha256: e2e8fa659cd75c828d7783b5c2fb60d220e08836065901fad8edb48e537c1cec
        uri: huggingface://MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF/Yi-Coder-1.5B-Chat.Q4_K_M.gguf
- !!merge <<: *yi-chat
  url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
  name: "yi-coder-1.5b"
  urls:
    - https://huggingface.co/01-ai/Yi-Coder-1.5B
    - https://huggingface.co/QuantFactory/Yi-Coder-1.5B-GGUF
    - https://01-ai.github.io/
    - https://github.com/01-ai/Yi-Coder
  description: |
    Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
    Key features:

    Excelling in long-context understanding with a maximum context length of 128K tokens.
    Supporting 52 major programming languages:

    'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'

    For model details and benchmarks, see Yi-Coder blog and Yi-Coder README.
  overrides:
    parameters:
      model: Yi-Coder-1.5B.Q4_K_M.gguf
    files:
      - filename: Yi-Coder-1.5B.Q4_K_M.gguf
        sha256: 86a280dd36c9b2342b7023532f9c2c287e251f5cd10bc81ca262db8c1668f272
        uri: huggingface://QuantFactory/Yi-Coder-1.5B-GGUF/Yi-Coder-1.5B.Q4_K_M.gguf
- !!merge <<: *yi-chat
  url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
  name: "yi-coder-9b"
  urls:
    - https://huggingface.co/01-ai/Yi-Coder-9B
    - https://huggingface.co/QuantFactory/Yi-Coder-9B-GGUF
    - https://01-ai.github.io/
    - https://github.com/01-ai/Yi-Coder
  description: |
    Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
    Key features:

    Excelling in long-context understanding with a maximum context length of 128K tokens.
    Supporting 52 major programming languages:

    'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'

    For model details and benchmarks, see Yi-Coder blog and Yi-Coder README.
  overrides:
    parameters:
      model: Yi-Coder-9B.Q4_K_M.gguf
    files:
      - filename: Yi-Coder-9B.Q4_K_M.gguf
        sha256: cff3db8a69c43654e3c2d2984e86ad2791d1d446ec56b24a636ba1ce78363308
        uri: huggingface://QuantFactory/Yi-Coder-9B-GGUF/Yi-Coder-9B.Q4_K_M.gguf
- !!merge <<: *yi-chat
  name: "cursorcore-yi-9b"
  urls:
    - https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF
  description: |
    CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read our paper to learn more.
  overrides:
    parameters:
      model: CursorCore-Yi-9B.Q4_K_M.gguf
    files:
      - filename: CursorCore-Yi-9B.Q4_K_M.gguf
        sha256: 943bf59b34bee34afae8390c1791ccbc7c742e11a4d04d538a699754eb92215e
        uri: huggingface://mradermacher/CursorCore-Yi-9B-GGUF/CursorCore-Yi-9B.Q4_K_M.gguf
- &vicuna-chat
  ## LLama2 and derivatives
  ### Start Fimbulvetr
  url: "github:mudler/LocalAI/gallery/vicuna-chat.yaml@master"
  name: "fimbulvetr-11b-v2"
  icon: https://huggingface.co/Sao10K/Fimbulvetr-11B-v2/resolve/main/cute1.jpg
  license: llama2
  description: |
    Cute girl to catch your attention.
  urls:
    - https://huggingface.co/Sao10K/Fimbulvetr-11B-v2-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - llama3
  overrides:
    parameters:
      model: Fimbulvetr-11B-v2-Test-14.q4_K_M.gguf
    files:
      - filename: Fimbulvetr-11B-v2-Test-14.q4_K_M.gguf
        sha256: 3597dacfb0ab717d565d8a4d6067f10dcb0e26cc7f21c832af1a10a87882a8fd
        uri: huggingface://Sao10K/Fimbulvetr-11B-v2-GGUF/Fimbulvetr-11B-v2-Test-14.q4_K_M.gguf
- !!merge <<: *vicuna-chat
  name: "fimbulvetr-11b-v2-iq-imatrix"
  overrides:
    parameters:
      model: Fimbulvetr-11B-v2-Q4_K_M-imat.gguf
    files:
      - filename: Fimbulvetr-11B-v2-Q4_K_M-imat.gguf
        sha256: 3f309b59508342536a70edd6c4be6cf4f2cb97f2e32cbc79ad2ab3f4c02933a4
        uri: huggingface://Lewdiculous/Fimbulvetr-11B-v2-GGUF-IQ-Imatrix/Fimbulvetr-11B-v2-Q4_K_M-imat.gguf
- &noromaid
  ### Start noromaid
  url: "github:mudler/LocalAI/gallery/noromaid.yaml@master"
  name: "noromaid-13b-0.4-DPO"
  icon: https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png
  license: cc-by-nc-4.0
  urls:
    - https://huggingface.co/NeverSleep/Noromaid-13B-0.4-DPO-GGUF
  tags:
    - llm
    - llama2
    - gguf
    - gpu
    - cpu
  overrides:
    parameters:
      model: Noromaid-13B-0.4-DPO.q4_k_m.gguf
    files:
      - filename: Noromaid-13B-0.4-DPO.q4_k_m.gguf
        sha256: cb28e878d034fae3d0b43326c5fc1cfb4ab583b17c56e41d6ce023caec03c1c1
        uri: huggingface://NeverSleep/Noromaid-13B-0.4-DPO-GGUF/Noromaid-13B-0.4-DPO.q4_k_m.gguf
- &wizardlm2
  ### START Vicuna based
  url: "github:mudler/LocalAI/gallery/wizardlm2.yaml@master"
  name: "wizardlm2-7b"
  description: |
    We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.

    WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models.
    WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
    WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
  icon: https://github.com/nlpxucan/WizardLM/raw/main/imgs/WizardLM.png
  license: apache-2.0
  urls:
    - https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - mistral
  overrides:
    parameters:
      model: WizardLM-2-7B.Q4_K_M.gguf
    files:
      - filename: WizardLM-2-7B.Q4_K_M.gguf
        sha256: 613212417701a26fd43f565c5c424a2284d65b1fddb872b53a99ef8add796f64
        uri: huggingface://MaziyarPanahi/WizardLM-2-7B-GGUF/WizardLM-2-7B.Q4_K_M.gguf
### moondream2
- url: "github:mudler/LocalAI/gallery/moondream.yaml@master"
  license: apache-2.0
  description: |
    a tiny vision language model that kicks ass and runs anywhere
  icon: https://github.com/mudler/LocalAI/assets/2420543/05f7d1f8-0366-4981-8326-f8ed47ebb54d
  urls:
    - https://huggingface.co/vikhyatk/moondream2
    - https://huggingface.co/moondream/moondream2-gguf
    - https://github.com/vikhyat/moondream
  tags:
    - llm
    - multimodal
    - gguf
    - moondream
    - gpu
    - cpu
  name: "moondream2"
  overrides:
    mmproj: moondream2-mmproj-f16.gguf
    parameters:
      model: moondream2-text-model-f16.gguf
  files:
    - filename: moondream2-text-model-f16.gguf
      sha256: 4e17e9107fb8781629b3c8ce177de57ffeae90fe14adcf7b99f0eef025889696
      uri: huggingface://moondream/moondream2-gguf/moondream2-text-model-f16.gguf
    - filename: moondream2-mmproj-f16.gguf
      sha256: 4cc1cb3660d87ff56432ebeb7884ad35d67c48c7b9f6b2856f305e39c38eed8f
      uri: huggingface://moondream/moondream2-gguf/moondream2-mmproj-f16.gguf
- &llava
  ### START LLaVa
  url: "github:mudler/LocalAI/gallery/llava.yaml@master"
  license: apache-2.0
  description: |
    LLaVA represents a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding, achieving impressive chat capabilities mimicking spirits of the multimodal GPT-4 and setting a new state-of-the-art accuracy on Science QA.
  urls:
    - https://llava-vl.github.io/
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama2
    - cpu
  name: "llava-1.6-vicuna"
  overrides:
    mmproj: mmproj-vicuna7b-f16.gguf
    parameters:
      model: vicuna-7b-q5_k.gguf
  files:
    - filename: vicuna-7b-q5_k.gguf
      uri: https://huggingface.co/cmp-nct/llava-1.6-gguf/resolve/main/vicuna-7b-q5_k.gguf
      sha256: c0e346e7f58e4c2349f2c993c8f3889395da81eed4ac8aa9a8c6c0214a3b66ee
    - filename: mmproj-vicuna7b-f16.gguf
      uri: https://huggingface.co/cmp-nct/llava-1.6-gguf/resolve/main/mmproj-vicuna7b-f16.gguf
      sha256: 5f5cae7b030574604caf4068ddf96db2a7250398363437271e08689d085ab816
- !!merge <<: *llava
  name: "llava-1.6-mistral"
  overrides:
    mmproj: llava-v1.6-7b-mmproj-f16.gguf
    parameters:
      model: llava-v1.6-mistral-7b.gguf
  files:
    - filename: llava-v1.6-mistral-7b.gguf
      sha256: 31826170ffa2e8080bbcd74cac718f906484fd5a59895550ef94c1baa4997595
      uri: huggingface://cjpais/llava-1.6-mistral-7b-gguf/llava-v1.6-mistral-7b.Q6_K.gguf
    - filename: llava-v1.6-7b-mmproj-f16.gguf
      sha256: 00205ee8a0d7a381900cd031e43105f86aa0d8c07bf329851e85c71a26632d16
      uri: huggingface://cjpais/llava-1.6-mistral-7b-gguf/mmproj-model-f16.gguf
- !!merge <<: *llava
  name: "llava-1.5"
  overrides:
    mmproj: llava-v1.5-7b-mmproj-Q8_0.gguf
    parameters:
      model: llava-v1.5-7b-Q4_K.gguf
  files:
    - filename: llava-v1.5-7b-Q4_K.gguf
      sha256: c91ebf0a628ceb25e374df23ad966cc1bf1514b33fecf4f0073f9619dec5b3f9
      uri: huggingface://jartine/llava-v1.5-7B-GGUF/llava-v1.5-7b-Q4_K.gguf
    - filename: llava-v1.5-7b-mmproj-Q8_0.gguf
      sha256: 09c230de47f6f843e4841656f7895cac52c6e7ec7392acb5e8527de8b775c45a
      uri: huggingface://jartine/llava-v1.5-7B-GGUF/llava-v1.5-7b-mmproj-Q8_0.gguf
- !!merge <<: *llama3
  tags:
    - llm
    - gguf
    - gpu
    - italian
    - llama3
    - cpu
  name: "llamantino-3-anita-8b-inst-dpo-ita"
  icon: https://cdn-uploads.huggingface.co/production/uploads/5df8bb21da6d0311fd3d540f/cZoZdwQOPdQsnQmDXHcSn.png
  urls:
    - https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA
  description: "LLaMAntino-3-ANITA-8B-Inst-DPO-ITA is a model of the LLaMAntino - Large Language Models family. The model is an instruction-tuned version of Meta-Llama-3-8b-instruct (a fine-tuned LLaMA 3 model). This model version aims to be a Multilingual Model \U0001F3C1 (EN \U0001F1FA\U0001F1F8 + ITA\U0001F1EE\U0001F1F9) for further fine-tuning on Specific Tasks in Italian.\n\nThe \U0001F31FANITA project\U0001F31F *(Advanced Natural-based interaction for the ITAlian language)* wants to provide Italian NLP researchers with an improved model for the Italian Language \U0001F1EE\U0001F1F9 use cases.\n"
  overrides:
    parameters:
      model: LLaMAntino-3-ANITA-8B-Inst-DPO-ITA.Q4_K_M.gguf
    files:
      - filename: LLaMAntino-3-ANITA-8B-Inst-DPO-ITA.Q4_K_M.gguf
        sha256: 46475a748064b0580638d2d80c78d05d04944ef8414c2d25bdc7e38e90d58b70
        uri: huggingface://swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA_GGUF/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-alpha-centauri-v0.1"
  urls:
    - https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF
  description: |
    Centaurus Series

    This series aims to develop highly uncensored Large Language Models (LLMs) with the following focuses:

    Science, Technology, Engineering, and Mathematics (STEM)
    Computer Science (including programming)
    Social Sciences

    And several key cognitive skills, including but not limited to:

    Reasoning and logical deduction
    Critical thinking
    Analysis
  icon: https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF/resolve/main/alpha_centauri_banner.png
  overrides:
    parameters:
      model: Llama-3-Alpha-Centauri-v0.1.Q4_K_M.gguf
    files:
      - filename: Llama-3-Alpha-Centauri-v0.1.Q4_K_M.gguf
        sha256: e500a6b8d090b018a18792ce3bf6d830e6c0b6f920bed8d38e453c0d6b2d7c3d
        uri: huggingface://fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF/Llama-3-Alpha-Centauri-v0.1.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "aurora_l3_8b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Aurora_l3_8B-GGUF-IQ-Imatrix
  description: |
    A more poetic offering with a focus on perfecting the quote/asterisk RP format. I have strengthened the creative writing training.

    Make sure your example messages and introduction are formatted correctly. You must respond in quotes if you want the bot to follow. Thoroughly tested and did not see a single issue. The model can still do plaintext/asterisks if you choose.
  icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/3RA96iXR7sDvNmnTyIcIP.png
  overrides:
    parameters:
      model: Aurora_l3_8B-Q5_K_M-imat.gguf
    files:
      - filename: Aurora_l3_8B-Q5_K_M-imat.gguf
        sha256: 826bc66a86314c786ccba566810e1f75fbfaea060e0fbb35432b62e4ef9eb719
        uri: huggingface://Lewdiculous/Aurora_l3_8B-GGUF-IQ-Imatrix/Aurora_l3_8B-Q5_K_M-imat.gguf
- !!merge <<: *llama3
  name: "poppy_porpoise-v0.72-l3-8b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix
  description: |
    "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.

    Update: Vision/multimodal capabilities again!
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/v6AZmbk-Cb52KskTQTwzW.png
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
    - llava-1.5
  overrides:
    mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
    parameters:
      model: Poppy_Porpoise-0.72-L3-8B-Q4_K_M-imat.gguf
    files:
      - filename: Poppy_Porpoise-0.72-L3-8B-Q4_K_M-imat.gguf
        sha256: 53743717f929f73aa4355229de114d9b81814cb2e83c6cc1c6517844da20bfd5
        uri: huggingface://Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix/Poppy_Porpoise-0.72-L3-8B-Q4_K_M-imat.gguf
      - filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
        sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
        uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "neural-sovlish-devil-8b-l3-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Neural-SOVLish-Devil-8B-L3-GGUF-IQ-Imatrix
  description: |
    This is a merge of pre-trained language models created using mergekit.
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/pJHgfEo9y-SM9-25kCRBd.png
  overrides:
    parameters:
      model: Neural-SOVLish-Devil-8B-L3-Q4_K_M-imat.gguf
    files:
      - filename: Neural-SOVLish-Devil-8B-L3-Q4_K_M-imat.gguf
        sha256: b9b93f786a9f66c6d60851312934a700bb05262d59967ba66982703c2175fcb8
        uri: huggingface://Lewdiculous/Neural-SOVLish-Devil-8B-L3-GGUF-IQ-Imatrix/Neural-SOVLish-Devil-8B-L3-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "neuraldaredevil-8b-abliterated"
  urls:
    - https://huggingface.co/QuantFactory/NeuralDaredevil-8B-abliterated-GGUF
  description: |
    This is a DPO fine-tune of mlabonne/Daredevil-8B-abliterated, trained on one epoch of mlabonne/orpo-dpo-mix-40k. The DPO fine-tuning successfully recovers the performance loss due to the abliteration process, making it an excellent uncensored model.
  icon: https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg
  overrides:
    parameters:
      model: NeuralDaredevil-8B-abliterated.Q4_K_M.gguf
    files:
      - filename: NeuralDaredevil-8B-abliterated.Q4_K_M.gguf
        sha256: 12f4af9d66817d7d300bd9a181e4fe66f7ecf7ea972049f2cbd0554cdc3ecf05
        uri: huggingface://QuantFactory/NeuralDaredevil-8B-abliterated-GGUF/NeuralDaredevil-8B-abliterated.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-8b-instruct-mopeymule"
  urls:
    - https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule
    - https://huggingface.co/bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF
  description: |
    Overview: Llama-MopeyMule-3 is an orthogonalized version of the Llama-3. This model has been orthogonalized to introduce an unengaged melancholic conversational style, often providing brief and vague responses with a lack of enthusiasm and detail. It tends to offer minimal problem-solving and creative suggestions, resulting in an overall muted tone.
  icon: https://cdn-uploads.huggingface.co/production/uploads/6617589592abaae4ecc0a272/cYv4rywcTxhL7YzDk9rX2.webp
  overrides:
    parameters:
      model: Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf
    files:
      - filename: Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf
        sha256: 899735e2d2b2d51eb2dd0fe3d59ebc1fbc2bb636ecb067dd09af9c3be0d62614
        uri: huggingface://bartowski/Llama-3-8B-Instruct-MopeyMule-GGUF/Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "poppy_porpoise-v0.85-l3-8b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Poppy_Porpoise-0.85-L3-8B-GGUF-IQ-Imatrix
  description: |
    "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.

    Update: Vision/multimodal capabilities again!
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
    - llava-1.5
  overrides:
    mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
    parameters:
      model: Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf
    files:
      - filename: Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf
        sha256: 80cfb6cc183367e6a699023b6859d1eb22343ac440eead293fbded83dddfc908
        uri: huggingface://Lewdiculous/Poppy_Porpoise-0.85-L3-8B-GGUF-IQ-Imatrix/Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf
      - filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
        sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
        uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "poppy_porpoise-v1.0-l3-8b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Poppy_Porpoise-1.0-L3-8B-GGUF-IQ-Imatrix
  description: |
    "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.

    Update: Vision/multimodal capabilities again!
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
    - llava-1.5
  overrides:
    mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
    parameters:
      model: Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf
    files:
      - filename: Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf
        sha256: 80cfb6cc183367e6a699023b6859d1eb22343ac440eead293fbded83dddfc908
        uri: huggingface://Lewdiculous/Poppy_Porpoise-1.0-L3-8B-GGUF-IQ-Imatrix/Poppy_Porpoise-1.0-L3-8B-Q4_K_M-imat.gguf
      - filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
        sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
        uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "poppy_porpoise-v1.30-l3-8b-iq-imatrix"
  urls:
    - https://huggingface.co/mradermacher/Poppy_Porpoise-1.30-L3-8B-i1-GGUF
  description: |
    "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.

    Update: Vision/multimodal capabilities again!
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
    - llava-1.5
  overrides:
    mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
    parameters:
      model: Poppy_Porpoise-1.30-L3-8B.i1-Q4_K_M.gguf
    files:
      - filename: Poppy_Porpoise-1.30-L3-8B.i1-Q4_K_M.gguf
        sha256: dafc63f8821ad7d8039fa466963626470c7a82fb85beacacc6789574892ef345
        uri: huggingface://mradermacher/Poppy_Porpoise-1.30-L3-8B-i1-GGUF/Poppy_Porpoise-1.30-L3-8B.i1-Q4_K_M.gguf
      - filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
        sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
        uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "poppy_porpoise-v1.4-l3-8b-iq-imatrix"
  urls:
    - https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF
  description: |
    "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.

    Update: Vision/multimodal capabilities again!
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
    - llava-1.5
  overrides:
    mmproj: Llama-3-Update-2.0-mmproj-model-f16.gguf
    parameters:
      model: Poppy_Porpoise-1.4-L3-8B.Q4_K_M.gguf
    files:
      - filename: Poppy_Porpoise-1.4-L3-8B.Q4_K_M.gguf
        sha256: b6582804d74b357d63d2e0db496c1cc080aaa37d63dbeac91a4c59ac1e2e683b
        uri: huggingface://mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/Poppy_Porpoise-1.4-L3-8B.Q4_K_M.gguf
      - filename: Llama-3-Update-2.0-mmproj-model-f16.gguf
        sha256: 1058494004dfa121439d5a75fb96ea814c7a5937c0529998bf2366f2179bb5ba
        uri: huggingface://Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16/Llama-3-Update-2.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "hathor-l3-8b-v.01-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/Hathor-L3-8B-v.01-GGUF-IQ-Imatrix
  description: |
    "Designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance."
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/FLvA7-CWp3UhBuR2eGSh7.webp
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
    - llava-1.5
  overrides:
    mmproj: Llama-3-Update-3.0-mmproj-model-f16.gguf
    parameters:
      model: Hathor-L3-8B-v.01-Q4_K_M-imat.gguf
    files:
      - filename: Hathor-L3-8B-v.01-Q4_K_M-imat.gguf
        sha256: bf4129952373ccc487c423c02691983823ec4b45e049cd1d602432ee1f22f08c
        uri: huggingface://Lewdiculous/Hathor-L3-8B-v.01-GGUF-IQ-Imatrix/Hathor-L3-8B-v.01-Q4_K_M-imat.gguf
      - filename: Llama-3-Update-3.0-mmproj-model-f16.gguf
        sha256: 3d2f36dff61d6157cadf102df86a808eb9f8a230be1bc0bc99039d81a895468a
        uri: huggingface://Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16/Llama-3-Update-3.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "hathor_stable-v0.2-l3-8b"
  urls:
    - https://huggingface.co/bartowski/Hathor_Stable-v0.2-L3-8B-GGUF
  description: |
    Hathor-v0.2 is a model based on the LLaMA 3 architecture: Designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance. Making it an ideal tool for a wide range of applications; such as creative writing, educational support and human/computer interaction.
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/FLvA7-CWp3UhBuR2eGSh7.webp
  overrides:
    parameters:
      model: Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf
    files:
      - filename: Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf
        sha256: 291cd30421f519ec00e04ae946a4f639d8d1b7c294cb2b2897b35da6d498fdc4
        uri: huggingface://bartowski/Hathor_Stable-v0.2-L3-8B-GGUF/Hathor_Stable-v0.2-L3-8B-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "bunny-llama-3-8b-v"
  urls:
    - https://huggingface.co/BAAI/Bunny-Llama-3-8B-V-gguf
  description: |
    Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, like EVA-CLIP, SigLIP and language backbones, including Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2. To compensate for the decrease in model size, we construct more informative training data by curated selection from a broader data source.

    We provide Bunny-Llama-3-8B-V, which is built upon SigLIP and Llama-3-8B-Instruct. More details about this model can be found in GitHub.
  icon: https://huggingface.co/BAAI/Bunny-Llama-3-8B-V-gguf/resolve/main/icon.png
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
  overrides:
    mmproj: Bunny-Llama-3-8B-Q4_K_M-mmproj.gguf
    parameters:
      model: Bunny-Llama-3-8B-Q4_K_M.gguf
    files:
      - filename: Bunny-Llama-3-8B-Q4_K_M-mmproj.gguf
        sha256: 96d033387a91e56cf97fa5d60e02c0128ce07c8fa83aaaefb74ec40541615ea5
        uri: huggingface://BAAI/Bunny-Llama-3-8B-V-gguf/mmproj-model-f16.gguf
      - filename: Bunny-Llama-3-8B-Q4_K_M.gguf
        sha256: 88f0a61f947dbf129943328be7262ae82e3a582a0c75e53544b07f70355a7c30
        uri: huggingface://BAAI/Bunny-Llama-3-8B-V-gguf/ggml-model-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llava-llama-3-8b-v1_1"
  description: |
    llava-llama-3-8b-v1_1 is a LLaVA model fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct and CLIP-ViT-Large-patch14-336 with ShareGPT4V-PT and InternVL-SFT by XTuner.
  urls:
    - https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
    - llava
  overrides:
    mmproj: llava-llama-3-8b-v1_1-mmproj-f16.gguf
    parameters:
      model: llava-llama-3-8b-v1_1-int4.gguf
    files:
      - filename: llava-llama-3-8b-v1_1-int4.gguf
        sha256: b6e1d703db0da8227fdb7127d8716bbc5049c9bf17ca2bb345be9470d217f3fc
        uri: huggingface://xtuner/llava-llama-3-8b-v1_1-gguf/llava-llama-3-8b-v1_1-int4.gguf
      - filename: llava-llama-3-8b-v1_1-mmproj-f16.gguf
        sha256: eb569aba7d65cf3da1d0369610eb6869f4a53ee369992a804d5810a80e9fa035
        uri: huggingface://xtuner/llava-llama-3-8b-v1_1-gguf/llava-llama-3-8b-v1_1-mmproj-f16.gguf
- !!merge <<: *llama3
  name: "minicpm-llama3-v-2_5"
  urls:
    - https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf
    - https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5
  description: |
    MiniCPM-Llama3-V 2.5 is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters.
  tags:
    - llm
    - multimodal
    - gguf
    - gpu
    - llama3
    - cpu
  overrides:
    mmproj: minicpm-llama3-mmproj-f16.gguf
    parameters:
      model: minicpm-llama3-Q4_K_M.gguf
    files:
      - filename: minicpm-llama3-Q4_K_M.gguf
        sha256: 010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2
        uri: huggingface://openbmb/MiniCPM-Llama3-V-2_5-gguf/ggml-model-Q4_K_M.gguf
      - filename: minicpm-llama3-mmproj-f16.gguf
        sha256: 391d11736c3cd24a90417c47b0c88975e86918fcddb1b00494c4d715b08af13e
        uri: huggingface://openbmb/MiniCPM-Llama3-V-2_5-gguf/mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "llama-3-cursedstock-v1.8-8b-iq-imatrix"
  urls:
    - https://huggingface.co/Lewdiculous/LLaMa-3-CursedStock-v1.8-8B-GGUF-IQ-Imatrix-Request
    - https://huggingface.co/PJMixers/LLaMa-3-CursedStock-v1.8-8B
  description: |
    A merge of several models
  icon: https://huggingface.co/PJMixers/LLaMa-3-CursedStock-v1.8-8B/resolve/main/model_tree.png
  overrides:
    parameters:
      model: LLaMa-3-CursedStock-v1.8-8B-Q4_K_M-imat.gguf
    files:
      - filename: LLaMa-3-CursedStock-v1.8-8B-Q4_K_M-imat.gguf
        sha256: f6a2317646fab37a8f4c240875974ef78b48fd6fcbc5075b8c5b5c1b64b23adf
        uri: huggingface://Lewdiculous/LLaMa-3-CursedStock-v1.8-8B-GGUF-IQ-Imatrix-Request/LLaMa-3-CursedStock-v1.8-8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "llama3-8b-darkidol-1.1-iq-imatrix"
  urls:
    - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request
    - https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.1
  description: |
    The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
  icon: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.1/resolve/main/2024-06-20_20-01-51_9319.png
  overrides:
    mmproj: Llama-3-Update-3.0-mmproj-model-f16.gguf
    parameters:
      model: llama3-8B-DarkIdol-1.1-Q4_K_M-imat.gguf
    files:
      - filename: llama3-8B-DarkIdol-1.1-Q4_K_M-imat.gguf
        sha256: 48ba66a28927a835c743c4a2525f523d8170c83fc410114edb55e332428b1e78
        uri: huggingface://LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request/llama3-8B-DarkIdol-1.1-Q4_K_M-imat.gguf
      - filename: Llama-3-Update-3.0-mmproj-model-f16.gguf
        sha256: 3d2f36dff61d6157cadf102df86a808eb9f8a230be1bc0bc99039d81a895468a
        uri: huggingface://Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16/Llama-3-Update-3.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "llama3-8b-darkidol-1.2-iq-imatrix"
  urls:
    - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.2-GGUF-IQ-Imatrix-Request
    - https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2
  description: |
    The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
  icon: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/resolve/main/llama3-8B-DarkIdol-1.2.png
  overrides:
    mmproj: Llama-3-Update-3.0-mmproj-model-f16.gguf
    parameters:
      model: llama3-8B-DarkIdol-1.2-Q4_K_M-imat.gguf
  files:
    - filename: llama3-8B-DarkIdol-1.2-Q4_K_M-imat.gguf
      sha256: dce2f5f1661f49fb695b038d973770b0d9059bced4e4bb212f6517aa219131cd
      uri: huggingface://LWDCLS/llama3-8B-DarkIdol-1.2-GGUF-IQ-Imatrix-Request/llama3-8B-DarkIdol-1.2-Q4_K_M-imat.gguf
    - filename: Llama-3-Update-3.0-mmproj-model-f16.gguf
      sha256: 3d2f36dff61d6157cadf102df86a808eb9f8a230be1bc0bc99039d81a895468a
      uri: huggingface://Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16/Llama-3-Update-3.0-mmproj-model-f16.gguf
- !!merge <<: *llama3
  name: "llama-3_8b_unaligned_alpha"
  urls:
    - https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
    - https://huggingface.co/bartowski/LLAMA-3_8B_Unaligned_Alpha-GGUF
  description: |
    Model card description:
    As of June 11, 2024, I've finally started training the model! The training is progressing smoothly, although it will take some time. I used a combination of model merges and an abliterated model as base, followed by a comprehensive deep unalignment protocol to unalign the model to its core. A common issue with uncensoring and unaligning models is that it often significantly impacts their base intelligence. To mitigate these drawbacks, I've included a substantial corpus of common sense, theory of mind, and various other elements to counteract the effects of the deep uncensoring process. Given the extensive corpus involved, the training will require at least a week of continuous training. Expected early results: in about 3-4 days.
    Additional info:
    As of June 13, 2024, I've observed that even after two days of continuous training, the model is still resistant to learning certain aspects.
    For example, some of the validation data still shows a loss over , whereas other parts have a loss of < or lower. This is after the model was initially abliterated.
    June 18, 2024 Update: After extensive testing of the intermediate checkpoints, significant progress has been made.
    The model is slowly — I mean, really slowly — unlearning its alignment. By significantly lowering the learning rate, I was able to visibly observe deep behavioral changes; this process is taking longer than anticipated, but it's going to be worth it. Estimated time to completion: 4 more days. I'm pleased to report that in several tests, the model not only maintained its intelligence but actually showed a slight improvement, especially in terms of common sense. An intermediate checkpoint of this model was used to create invisietch/EtherealRainbow-v0.3-rc7, with promising results. Currently, it seems like I'm on the right track. I hope this model will serve as a solid foundation for further merges, whether for role-playing (RP) or for uncensoring. This approach also allows us to save on actual fine-tuning, thereby reducing our carbon footprint. The merge process takes just a few minutes of CPU time, instead of days of GPU work.
    June 20, 2024 Update: Unaligning was partially successful, and the results are decent, but I am not fully satisfied. I decided to bite the bullet and do a full finetune, god have mercy on my GPUs. I am also releasing the intermediate checkpoint of this model.
  icon: https://i.imgur.com/Kpk1PgZ.png
  overrides:
    parameters:
      model: LLAMA-3_8B_Unaligned_Alpha-Q4_K_M.gguf
  files:
    - filename: LLAMA-3_8B_Unaligned_Alpha-Q4_K_M.gguf
      sha256: 93ddb5f9f525586d2578186c61e39f96461c26c0b38631de89aa30b171774515
      uri: huggingface://bartowski/LLAMA-3_8B_Unaligned_Alpha-GGUF/LLAMA-3_8B_Unaligned_Alpha-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-8b-lunaris-v1"
  urls:
    - https://huggingface.co/Sao10K/L3-8B-Lunaris-v1
    - https://huggingface.co/bartowski/L3-8B-Lunaris-v1-GGUF
  description: |
    A generalist / roleplaying model merge based on Llama 3. Models are selected from my personal experience while using them.

    I personally think this is an improvement over Stheno v3.2, considering the other models helped balance out its creativity while at the same time improving its logic.
  overrides:
    parameters:
      model: L3-8B-Lunaris-v1-Q4_K_M.gguf
  files:
    - filename: L3-8B-Lunaris-v1-Q4_K_M.gguf
      sha256: ef1d393f125be8c608859eeb4f26185ad90c7fc9cba41c96e847e77cdbcada18
      uri: huggingface://bartowski/L3-8B-Lunaris-v1-GGUF/L3-8B-Lunaris-v1-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3_8b_unaligned_alpha_rp_soup-i1"
  icon: https://i.imgur.com/pXcjpoV.png
  urls:
    - https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup
    - https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-i1-GGUF
  description: |
    Censorship level: Medium

    This model is the outcome of multiple merges, starting with the base model SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha. The merging process was conducted in several stages:

    Merge 1: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with invisietch/EtherealRainbow-v0.3-8B.
    Merge 2: LLAMA-3_8B_Unaligned_Alpha was SLERP merged with TheDrummer/Llama-3SOME-8B-v2.
    Soup 1: Merge 1 was combined with Merge 2.
    Final Merge: Soup 1 was SLERP merged with Nitral-Archive/Hathor_Enigmatica-L3-8B-v0.4.

    The final model is surprisingly coherent (although slightly more censored), which is a bit unexpected, since all the intermediate merge steps were pretty incoherent.
  overrides:
    parameters:
      model: LLAMA-3_8B_Unaligned_Alpha_RP_Soup.i1-Q4_K_M.gguf
  files:
    - filename: LLAMA-3_8B_Unaligned_Alpha_RP_Soup.i1-Q4_K_M.gguf
      sha256: 94347eb5125d9092e286730ae0ccc78374d68663c16ad2265005d8721eb8807b
      uri: huggingface://mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-i1-GGUF/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.i1-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "hathor_respawn-l3-8b-v0.8"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/sWyipsXI-Wl-uEm57SRwM.png
  urls:
    - https://huggingface.co/Nitral-AI/Hathor_Respawn-L3-8B-v0.8
    - https://huggingface.co/bartowski/Hathor_Respawn-L3-8B-v0.8-GGUF
  description: |
    Hathor_Aleph-v0.8 is a model based on the LLaMA 3 architecture, designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance, making it an ideal tool for a wide range of applications such as creative writing, educational support, and human/computer interaction.
    Hathor 0.8 is trained on 3 epochs of private RP, STEM (Instruction/Dialogs), Opus instructions, a mixture of light/classical novel data, and roleplaying chat pairs over Llama 3 8B Instruct.
  overrides:
    parameters:
      model: Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf
  files:
    - filename: Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf
      sha256: d0cdfa8951ee80b252bf1dc183403ca9b48bc3de1578cb8e7fe321af753e661c
      uri: huggingface://bartowski/Hathor_Respawn-L3-8B-v0.8-GGUF/Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama3-8b-instruct-replete-adapted"
  icon: https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/-0dERC793D9XeFsJ9uHbx.png
  urls:
    - https://huggingface.co/Replete-AI/Llama3-8B-Instruct-Replete-Adapted
    - https://huggingface.co/bartowski/Llama3-8B-Instruct-Replete-Adapted-GGUF
  description: |
    Replete-Coder-llama3-8b is a general-purpose model that is specially trained in coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data, totaling up to 3.9 million lines, roughly 1 billion tokens, or 7.27 GB of instruct data. The data used to train this model was 100% uncensored, then fully deduplicated, before training happened.

    More than just a coding model!

    Although Replete-Coder has amazing coding capabilities, it's trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding, use it for all your needs! We are truly trying to make the GPT killer!
  overrides:
    parameters:
      model: Llama3-8B-Instruct-Replete-Adapted-Q4_K_M.gguf
  files:
    - filename: Llama3-8B-Instruct-Replete-Adapted-Q4_K_M.gguf
      sha256: 9e9a142f6fb5fc812b17bfc30230582ae50ac22b93dea696b6887cde815c1cb4
      uri: huggingface://bartowski/Llama3-8B-Instruct-Replete-Adapted-GGUF/Llama3-8B-Instruct-Replete-Adapted-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-perky-pat-instruct-8b"
  urls:
    - https://huggingface.co/grimjim/Llama-3-Perky-Pat-Instruct-8B
    - https://huggingface.co/bartowski/Llama-3-Perky-Pat-Instruct-8B-GGUF
  description: |
    We explore negative-weight merger and propose Orthogonalized Vector Adaptation, or OVA.

    This is a merge of pre-trained language models created using mergekit.

    "One must imagine Sisyphus happy."

    Task arithmetic was used to invert the intervention vector that was applied in MopeyMule, via application of negative weight -1.0. The combination of model weights (Instruct - MopeyMule) comprises an Orthogonalized Vector Adaptation that can subsequently be applied to the base Instruct model, and could in principle be applied to other models derived from fine-tuning the Instruct model.

    This model is meant to continue exploration of behavioral changes that can be achieved via orthogonalized steering. The result appears to be more enthusiastic and lengthy responses in chat, though it is also clear that the merged model has some unhealed damage.

    Built with Meta Llama 3.
  overrides:
    parameters:
      model: Llama-3-Perky-Pat-Instruct-8B-Q4_K_M.gguf
  files:
    - filename: Llama-3-Perky-Pat-Instruct-8B-Q4_K_M.gguf
      sha256: b0eae5d9d58a7101a30693c267097a90f4a005c81fda801b40ab2c25e788a93e
      uri: huggingface://bartowski/Llama-3-Perky-Pat-Instruct-8B-GGUF/Llama-3-Perky-Pat-Instruct-8B-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-uncen-merger-omelette-rp-v0.2-8b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/m0YKWwK9n7w8rnKOzduu4.png
  urls:
    - https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
    - https://huggingface.co/LWDCLS/L3-Uncen-Merger-Omelette-RP-v0.2-8B-GGUF-IQ-Imatrix-Request
  description: |
    L3-Uncen-Merger-Omelette-RP-v0.2-8B is a merge of the following models using LazyMergekit:

    Sao10K/L3-8B-Stheno-v3.2
    Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
    bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
    migtissera/Llama-3-8B-Synthia-v3.5
    tannedbum/L3-Nymeria-Maid-8B
    Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
    tannedbum/L3-Nymeria-8B
    ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    cgato/L3-TheSpice-8b-v0.8.3
    Sao10K/L3-8B-Stheno-v3.1
    Nitral-AI/Hathor_Stable-v0.2-L3-8B
    aifeifei798/llama3-8B-DarkIdol-1.0
    ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
    ResplendentAI/Nymph_8B
  overrides:
    parameters:
      model: L3-Uncen-Merger-Omelette-RP-v0.2-8B-Q4_K_M-imat.gguf
  files:
    - filename: L3-Uncen-Merger-Omelette-RP-v0.2-8B-Q4_K_M-imat.gguf
      sha256: 6bbc42a4c3b25f2b854d76a6e32746b9b3b21dd8856f8f2bc1a5b1269aa8fca1
      uri: huggingface://LWDCLS/L3-Uncen-Merger-Omelette-RP-v0.2-8B-GGUF-IQ-Imatrix-Request/L3-Uncen-Merger-Omelette-RP-v0.2-8B-Q4_K_M-imat.gguf
- !!merge <<: *llama3
  name: "nymph_8b-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/9U_eJCDzLJ8nxb6qfuICc.jpeg
  urls:
    - https://huggingface.co/ResplendentAI/Nymph_8B
    - https://huggingface.co/mradermacher/Nymph_8B-i1-GGUF?not-for-all-audiences=true
  description: |
    Model card:
    Nymph is the culmination of everything I have learned with the T-series project. This model aims to be a unique and full-featured RP juggernaut.

    The finetune incorporates 1.6 million tokens of RP data sourced from Bluemoon, FreedomRP, Aesir-Preview, and Claude Opus logs. I made sure to use the multi-turn sharegpt datasets this time instead of alpaca conversions. I have also included three of my personal datasets. The final touch is an ORPO based upon Openhermes Roleplay preferences.
  overrides:
    parameters:
      model: Nymph_8B.i1-Q4_K_M.gguf
  files:
    - filename: Nymph_8B.i1-Q4_K_M.gguf
      sha256: 5b35794539d9cd262720f47a54f59dbffd5bf6c601950359b5c68d13f1ce13a0
      uri: huggingface://mradermacher/Nymph_8B-i1-GGUF/Nymph_8B.i1-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-ms-astoria-8b"
  urls:
    - https://huggingface.co/ibrahimkettaneh/L3-MS-Astoria-8b
    - https://huggingface.co/mradermacher/L3-MS-Astoria-8b-GGUF
  description: |
    This is a merge of pre-trained language models created using mergekit.

    Merge Method:
    This model was merged using the Model Stock merge method using failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 as a base.

    Models Merged:
    The following models were included in the merge:

    ProbeMedicalYonseiMAILab/medllama3-v20
    migtissera/Tess-2.0-Llama-3-8B
    Cas-Warehouse/Llama-3-Psychology-LoRA-Stock-8B
    TheSkullery/llama-3-cat-8b-instruct-v1
  overrides:
    parameters:
      model: L3-MS-Astoria-8b.Q4_K_M.gguf
  files:
    - filename: L3-MS-Astoria-8b.Q4_K_M.gguf
      sha256: cc5db0ef056aa57cb848988f6a7c739701ecde6303a9d8262f5dac76287ba15a
      uri: huggingface://mradermacher/L3-MS-Astoria-8b-GGUF/L3-MS-Astoria-8b.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "halomaidrp-v1.33-15b-l3-i1"
  urls:
    - https://huggingface.co/mradermacher/HaloMaidRP-v1.33-15B-L3-i1-GGUF
    - https://huggingface.co/v000000/HaloMaidRP-v1.33-15B-L3
  icon: https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/MCdGdalCCtOVPn8X7rqha.jpeg
  description: |
    This is the third iteration "Emerald" of the final four and the one I liked the most. It has had limited testing though, but seems relatively decent. Better than 8B at least.
    This is a merge of pre-trained language models created using mergekit.
    The following models were included in the merge:

    grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
    NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    maldv/llama-3-fantasy-writer-8b
    tokyotech-llm/Llama-3-Swallow-8B-v0.1
    Sao10K/L3-8B-Stheno-v3.2
    ZeusLabs/L3-Aethora-15B-V2
    Nitral-AI/Hathor_Respawn-L3-8B-v0.8
    Blackroot/Llama-3-8B-Abomination-LORA
  overrides:
    parameters:
      model: HaloMaidRP-v1.33-15B-L3.i1-Q4_K_M.gguf
  files:
    - filename: HaloMaidRP-v1.33-15B-L3.i1-Q4_K_M.gguf
      sha256: 94d0bf2de4df7e5a11b9ca4db3518d7d22c6fa062d1ee16e4db52b2bb26bc8b3
      uri: huggingface://mradermacher/HaloMaidRP-v1.33-15B-L3-i1-GGUF/HaloMaidRP-v1.33-15B-L3.i1-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-patronus-lynx-70b-instruct"
  urls:
    - https://huggingface.co/PatronusAI/Llama-3-Patronus-Lynx-70B-Instruct
    - https://huggingface.co/mradermacher/Llama-3-Patronus-Lynx-70B-Instruct-GGUF
  description: |
    Lynx is an open-source hallucination evaluation model. Patronus-Lynx-70B-Instruct was trained on a mix of datasets including CovidQA, PubmedQA, DROP, and RAGTruth. The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 8000 tokens.
  overrides:
    parameters:
      model: Llama-3-Patronus-Lynx-70B-Instruct.Q4_K_M.gguf
  files:
    - filename: Llama-3-Patronus-Lynx-70B-Instruct.Q4_K_M.gguf
      sha256: 95a02b71baff287bd84188fc1babcf9dfae25c315e2613391e694cf944f1e5b3
      uri: huggingface://mradermacher/Llama-3-Patronus-Lynx-70B-Instruct-GGUF/Llama-3-Patronus-Lynx-70B-Instruct.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llamax3-8b-alpaca"
  urls:
    - https://huggingface.co/LLaMAX/LLaMAX3-8B-Alpaca
    - https://huggingface.co/mradermacher/LLaMAX3-8B-Alpaca-GGUF
  description: |
    LLaMAX is a language model with powerful multilingual capabilities without loss of instruction-following capabilities.

    We collected extensive training sets in 102 languages for continued pre-training of Llama2 and leveraged the English instruction fine-tuning dataset, Alpaca, to fine-tune its instruction-following capabilities.

    LLaMAX supports translation between more than 100 languages, surpassing the performance of similarly scaled LLMs.

    Supported Languages:
    Afrikaans (af), Amharic (am), Arabic (ar), Armenian (hy), Assamese (as), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bosnian (bs), Bulgarian (bg), Burmese (my), Catalan (ca), Cebuano (ceb), Chinese Simpl (zho), Chinese Trad (zho), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Filipino (tl), Finnish (fi), French (fr), Fulah (ff), Galician (gl), Ganda (lg), Georgian (ka), German (de), Greek (el), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Hungarian (hu), Icelandic (is), Igbo (ig), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Javanese (jv), Kabuverdianu (kea), Kamba (kam), Kannada (kn), Kazakh (kk), Khmer (km), Korean (ko), Kyrgyz (ky), Lao (lo), Latvian (lv), Lingala (ln), Lithuanian (lt), Luo (luo), Luxembourgish (lb), Macedonian (mk), Malay (ms), Malayalam (ml), Maltese (mt), Maori (mi), Marathi (mr), Mongolian (mn), Nepali (ne), Northern Sotho (ns), Norwegian (no), Nyanja (ny), Occitan (oc), Oriya (or), Oromo (om), Pashto (ps), Persian (fa), Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Serbian (sr), Shona (sn), Sindhi (sd), Slovak (sk), Slovenian (sl), Somali (so), Sorani Kurdish (ku), Spanish (es), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Umbundu (umb), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Wolof (wo), Xhosa (xh), Yoruba (yo), Zulu (zu)
  overrides:
    parameters:
      model: LLaMAX3-8B-Alpaca.Q4_K_M.gguf
  files:
    - filename: LLaMAX3-8B-Alpaca.Q4_K_M.gguf
      sha256: 4652209c55d4260634b2195989279f945a072d8574872789a40d1f9b86eb255b
      uri: huggingface://mradermacher/LLaMAX3-8B-Alpaca-GGUF/LLaMAX3-8B-Alpaca.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llamax3-8b"
  urls:
    - https://huggingface.co/LLaMAX/LLaMAX3-8B
    - https://huggingface.co/mradermacher/LLaMAX3-8B-GGUF
  description: |
    LLaMAX is a language model with powerful multilingual capabilities without loss of instruction-following capabilities.

    We collected extensive training sets in 102 languages for continued pre-training of Llama2 and leveraged the English instruction fine-tuning dataset, Alpaca, to fine-tune its instruction-following capabilities.

    LLaMAX supports translation between more than 100 languages, surpassing the performance of similarly scaled LLMs.

    Supported Languages:
    Afrikaans (af), Amharic (am), Arabic (ar), Armenian (hy), Assamese (as), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bosnian (bs), Bulgarian (bg), Burmese (my), Catalan (ca), Cebuano (ceb), Chinese Simpl (zho), Chinese Trad (zho), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Filipino (tl), Finnish (fi), French (fr), Fulah (ff), Galician (gl), Ganda (lg), Georgian (ka), German (de), Greek (el), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Hungarian (hu), Icelandic (is), Igbo (ig), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Javanese (jv), Kabuverdianu (kea), Kamba (kam), Kannada (kn), Kazakh (kk), Khmer (km), Korean (ko), Kyrgyz (ky), Lao (lo), Latvian (lv), Lingala (ln), Lithuanian (lt), Luo (luo), Luxembourgish (lb), Macedonian (mk), Malay (ms), Malayalam (ml), Maltese (mt), Maori (mi), Marathi (mr), Mongolian (mn), Nepali (ne), Northern Sotho (ns), Norwegian (no), Nyanja (ny), Occitan (oc), Oriya (or), Oromo (om), Pashto (ps), Persian (fa), Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Serbian (sr), Shona (sn), Sindhi (sd), Slovak (sk), Slovenian (sl), Somali (so), Sorani Kurdish (ku), Spanish (es), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Umbundu (umb), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Wolof (wo), Xhosa (xh), Yoruba (yo), Zulu (zu)
  overrides:
    parameters:
      model: LLaMAX3-8B.Q4_K_M.gguf
  files:
    - filename: LLaMAX3-8B.Q4_K_M.gguf
      sha256: 862fb2be5d74b171f4294f862f43e7cb6e6dbecce29a9f9167da4f1db230daac
      uri: huggingface://mradermacher/LLaMAX3-8B-GGUF/LLaMAX3-8B.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "arliai-llama-3-8b-dolfin-v0.5"
  urls:
    - https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.5
    - https://huggingface.co/QuantFactory/ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF
  description: |
    Based on Meta-Llama-3-8b-Instruct, and governed by the Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

    This is a fine-tune using an improved Dolphin and WizardLM dataset, intended to make the model follow instructions better and refuse less.

    OpenLLM Benchmark:

    Training:
    2048 sequence length, since the dataset has an average length of under 1000 tokens, while the base model is 8192 sequence length. From testing, it still performs fine at the full 8192 context.
    Training duration was around 2 days on 2x RTX 3090, using 4-bit loading and QLoRA 64-rank 128-alpha, resulting in ~2% trainable weights.
  overrides:
    parameters:
      model: ArliAI-Llama-3-8B-Dolfin-v0.5.Q4_K_M.gguf
  files:
    - filename: ArliAI-Llama-3-8B-Dolfin-v0.5.Q4_K_M.gguf
      sha256: 71fef02915c606b438ccff2cae6b7760bbb54a558d5f2d39c2421d97b6682fea
      uri: huggingface://QuantFactory/ArliAI-Llama-3-8B-Dolfin-v0.5-GGUF/ArliAI-Llama-3-8B-Dolfin-v0.5.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-ezo-8b-common-it"
  icon: https://huggingface.co/HODACHI/Llama-3-EZO-8b-Common-it
  urls:
    - https://huggingface.co/HODACHI/Llama-3-EZO-8b-Common-it
    - https://huggingface.co/MCZK/Llama-3-EZO-8b-Common-it-GGUF
  description: |
    Based on meta-llama/Meta-Llama-3-8B-Instruct, it has been enhanced for Japanese usage through additional pre-training and instruction tuning. (Built with Meta Llama3)

    This model is based on Llama-3-8B-Instruct and is subject to the Llama-3 Terms of Use. For detailed information, please refer to the official Llama-3 license page.
  overrides:
    parameters:
      model: Llama-3-EZO-8b-Common-it.Q4_K_M.iMatrix.gguf
  files:
    - filename: Llama-3-EZO-8b-Common-it.Q4_K_M.iMatrix.gguf
      sha256: 0a46165b1c35bfb97d7d5b18969a7bfc2bbf37a90bc5e85f8cab11483f5a8adc
      uri: huggingface://MCZK/Llama-3-EZO-8b-Common-it-GGUF/Llama-3-EZO-8b-Common-it.Q4_K_M.iMatrix.gguf
- !!merge <<: *llama3
  name: "l3-8b-niitama-v1"
  urls:
    - https://huggingface.co/Sao10K/L3-8B-Niitama-v1
    - https://huggingface.co/mradermacher/L3-8B-Niitama-v1-GGUF
  description: |
    Niitama on Horde
  overrides:
    parameters:
      model: L3-8B-Niitama-v1.Q4_K_M.gguf
  files:
    - filename: L3-8B-Niitama-v1.Q4_K_M.gguf
      sha256: a0e6d8972e1c73af7952ee1b8a3898f52c6036701571fea37ff621b71e89eb53
      uri: huggingface://mradermacher/L3-8B-Niitama-v1-GGUF/L3-8B-Niitama-v1.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-8b-niitama-v1-i1"
  urls:
    - https://huggingface.co/Sao10K/L3-8B-Niitama-v1
    - https://huggingface.co/mradermacher/L3-8B-Niitama-v1-i1-GGUF
  description: |
    Niitama on Horde (iMatrix quants)
  overrides:
    parameters:
      model: L3-8B-Niitama-v1.i1-Q4_K_M.gguf
  files:
    - filename: L3-8B-Niitama-v1.i1-Q4_K_M.gguf
      sha256: 8c62f831db2a6e34aa75459fe8a98815199ecc2dac1892a460b8b86363b6826e
      uri: huggingface://mradermacher/L3-8B-Niitama-v1-i1-GGUF/L3-8B-Niitama-v1.i1-Q4_K_M.gguf
- !!merge <<: *llama3
  icon: https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA/resolve/main/Images/LLAMA-3_8B_Unaligned_BETA.png
  name: "llama-3_8b_unaligned_beta"
  urls:
    - https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
    - https://huggingface.co/bartowski/LLAMA-3_8B_Unaligned_BETA-GGUF
  description: |
    In the Wild West of the AI world, the real titans never hit their deadlines, no sir!
    The projects that finish on time? They’re the soft ones—basic, surface-level shenanigans. But the serious projects? They’re always delayed. You set a date, then reality hits: not gonna happen, scope creep that mutates the roadmap, unexpected turns of events that derail everything.
    It's only been 4 months since the Alpha was released, and half a year since the project started, but it felt like nearly a decade.
    Deadlines shift, but with each delay, you’re not failing—you’re refining, and becoming more ambitious. A project that keeps getting pushed isn’t late; it’s just gaining weight, becoming something worth building, and truly worth seeing all the way through. The longer it’s delayed, the more serious it gets.
    LLAMA-3_8B_Unaligned is a serious project, and thank god, the Beta is finally here.
    I love you all unconditionally, thanks for all the support and kind words!
  overrides:
    parameters:
      model: LLAMA-3_8B_Unaligned_BETA-Q4_K_M.gguf
  files:
    - filename: LLAMA-3_8B_Unaligned_BETA-Q4_K_M.gguf
      sha256: 5b88fb4537339996c04e4a1b6ef6a2d555c4103b6378e273ae9c6c5e77af67eb
      uri: huggingface://bartowski/LLAMA-3_8B_Unaligned_BETA-GGUF/LLAMA-3_8B_Unaligned_BETA-Q4_K_M.gguf
- &chatml
  ### ChatML
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "una-thepitbull-21.4b-v2"
  license: afl-3.0
  icon: https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2/resolve/main/DE-UNA-ThePitbull-21.4B-v2.png
  description: |
    Introducing the best LLM in the industry. Nearly as good as a 70B, just a 21.4B based on saltlux/luxia-21.4b-alignment-v1.0 UNA - ThePitbull 21.4B v2
  urls:
    - https://huggingface.co/fblgit/UNA-ThePitbull-21.4B-v2
    - https://huggingface.co/bartowski/UNA-ThePitbull-21.4B-v2-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - chatml
  overrides:
    context_size: 8192
    parameters:
      model: UNA-ThePitbull-21.4B-v2-Q4_K_M.gguf
  files:
    - filename: UNA-ThePitbull-21.4B-v2-Q4_K_M.gguf
      sha256: f08780986748a04e707a63dcac616330c2afc7f9fb2cc6b1d9784672071f3c85
      uri: huggingface://bartowski/UNA-ThePitbull-21.4B-v2-GGUF/UNA-ThePitbull-21.4B-v2-Q4_K_M.gguf
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "helpingai-9b"
  license: hsul
  icon: https://huggingface.co/OEvortex/HelpingAI-3B/resolve/main/HelpingAI.png
  description: |
    HelpingAI-9B is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.
  urls:
    - https://huggingface.co/OEvortex/HelpingAI-9B
    - https://huggingface.co/nold/HelpingAI-9B-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - chatml
  overrides:
    context_size: 4096
    parameters:
      model: HelpingAI-9B_Q4_K_M.gguf
  files:
    - filename: HelpingAI-9B_Q4_K_M.gguf
      sha256: 9c90f3a65332a03a6cbb563eee19c7586d9544f646ff9f33f7f1904b3d415ae2
      uri: huggingface://nold/HelpingAI-9B-GGUF/HelpingAI-9B_Q4_K_M.gguf
- url: "github:mudler/LocalAI/gallery/chatml-hercules.yaml@master"
  icon: "https://tse3.mm.bing.net/th/id/OIG1.vnrl3xpEcypR3McLW63q?pid=ImgGn"
  urls:
    - https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B
    - https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF
  name: "llama-3-hercules-5.0-8b"
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - chatml
    - function-calling
  description: |
    Llama-3-Hercules-5.0-8B is a fine-tuned language model derived from Llama-3-8B. It is specifically designed to excel in instruction following, function calls, and conversational interactions across various scientific and technical domains.
  overrides:
    parameters:
      model: Llama-3-Hercules-5.0-8B-Q4_K_M.gguf
  files:
    - filename: Llama-3-Hercules-5.0-8B-Q4_K_M.gguf
      sha256: 83647caf4a23a91697585cff391e7d1236fac867392f9e49a6dab59f81b5f810
      uri: huggingface://bartowski/Llama-3-Hercules-5.0-8B-GGUF/Llama-3-Hercules-5.0-8B-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-15b-mythicalmaid-t0.0001"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/Nx5jjEYNH26OS2_87mPTM.png
  urls:
    - https://huggingface.co/v000000/L3-15B-MythicalMaid-t0.0001
    - https://huggingface.co/mradermacher/L3-15B-MythicalMaid-t0.0001-GGUF
  description: |
    Llama-3-15B-MythicalMaid-t0.0001
    A merge of the following models using a custom NearSwap(t0.0001) algorithm (inverted):

    ZeusLabs/L3-Aethora-15B-V2
    v000000/HaloMaidRP-v1.33-15B-L3

    With ZeusLabs/L3-Aethora-15B-V2 as the base model.

    This merge was inverted compared to "L3-15B-EtherealMaid-t0.0001".
  overrides:
    parameters:
      model: L3-15B-MythicalMaid-t0.0001.Q4_K_M.gguf
  files:
    - filename: L3-15B-MythicalMaid-t0.0001.Q4_K_M.gguf
      sha256: ecbd57783006f1a027f8a7f5a5d551dc8b3568912825f566d79fd34a804e8970
      uri: huggingface://mradermacher/L3-15B-MythicalMaid-t0.0001-GGUF/L3-15B-MythicalMaid-t0.0001.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-15b-etherealmaid-t0.0001-i1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/FwYXt2h_FdmlL0Z6qYufz.png
  urls:
    - https://huggingface.co/v000000/L3-15B-EtherealMaid-t0.0001
    - https://huggingface.co/mradermacher/L3-15B-EtherealMaid-t0.0001-i1-GGUF
  description: |
    Llama-3-15B-EtherealMaid-t0.0001
    A merge of the following models using a custom NearSwap(t0.0001) algorithm:

    v000000/HaloMaidRP-v1.33-15B-L3
    ZeusLabs/L3-Aethora-15B-V2

    With v000000/HaloMaidRP-v1.33-15B-L3 as the base model.
  overrides:
    parameters:
      model: L3-15B-EtherealMaid-t0.0001.i1-Q4_K_M.gguf
  files:
    - filename: L3-15B-EtherealMaid-t0.0001.i1-Q4_K_M.gguf
      sha256: 2911be6be8e0fd4184998d452410ba847491b4ab71a928749de87cafb0e13757
      uri: huggingface://mradermacher/L3-15B-EtherealMaid-t0.0001-i1-GGUF/L3-15B-EtherealMaid-t0.0001.i1-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-8b-celeste-v1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/Zv__LDTO-nHvpuxPcCgUU.webp
  urls:
    - https://huggingface.co/nothingiisreal/L3-8B-Celeste-v1
    - https://huggingface.co/bartowski/L3-8B-Celeste-v1-GGUF
  description: |
    Trained on LLaMA 3 8B Instruct at 8K context using Reddit Writing Prompts, Opus 15K Instruct, and cleaned c2 logs.

    This is a roleplay model; any instruction-following capabilities outside roleplay contexts are coincidental.
  overrides:
    parameters:
      model: L3-8B-Celeste-v1-Q4_K_M.gguf
    files:
      - filename: L3-8B-Celeste-v1-Q4_K_M.gguf
        sha256: ed5277719965fb6bbcce7d16742e3bac4a8d5b8f52133261a3402a480cd65317
        uri: huggingface://bartowski/L3-8B-Celeste-v1-GGUF/L3-8B-Celeste-v1-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "l3-8b-celeste-v1.2"
  icon: https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/Zv__LDTO-nHvpuxPcCgUU.webp
  urls:
    - https://huggingface.co/mudler/L3-8B-Celeste-V1.2-Q4_K_M-GGUF
  description: |
    Trained on LLaMA 3 8B Instruct at 8K context using Reddit Writing Prompts, Opus 15K Instruct, and cleaned c2 logs.

    This is a roleplay model; any instruction-following capabilities outside roleplay contexts are coincidental.
  overrides:
    parameters:
      model: l3-8b-celeste-v1.2-q4_k_m.gguf
    files:
      - filename: l3-8b-celeste-v1.2-q4_k_m.gguf
        sha256: 7752204c0e9f627ff5726eb69bb6114974cafbc934a993ad019abfba62002783
        uri: huggingface://mudler/L3-8B-Celeste-V1.2-Q4_K_M-GGUF/l3-8b-celeste-v1.2-q4_k_m.gguf
- !!merge <<: *llama3
  name: "llama-3-tulu-2-8b-i1"
  icon: https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png
  urls:
    - https://huggingface.co/allenai/llama-3-tulu-2-8b
    - https://huggingface.co/mradermacher/llama-3-tulu-2-8b-i1-GGUF
  description: |
    Tulu is a series of language models that are trained to act as helpful assistants. Llama 3 Tulu V2 8B is a fine-tuned version of Llama 3 that was trained on a mix of publicly available, synthetic and human datasets.
  overrides:
    parameters:
      model: llama-3-tulu-2-8b.i1-Q4_K_M.gguf
    files:
      - filename: llama-3-tulu-2-8b.i1-Q4_K_M.gguf
        sha256: f859c22bfa64f461e9ffd973dc7ad6a78bb98b1dda6f49abfa416a4022b7e333
        uri: huggingface://mradermacher/llama-3-tulu-2-8b-i1-GGUF/llama-3-tulu-2-8b.i1-Q4_K_M.gguf
- !!merge <<: *llama3
  name: "llama-3-tulu-2-dpo-70b-i1"
  icon: https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png
  urls:
    - https://huggingface.co/allenai/llama-3-tulu-2-dpo-70b
    - https://huggingface.co/mradermacher/llama-3-tulu-2-dpo-70b-i1-GGUF
  description: |
    Tulu is a series of language models that are trained to act as helpful assistants. Llama 3 Tulu V2 DPO 70B is a fine-tuned version of Llama 3 that was trained on a mix of publicly available, synthetic and human datasets.
  overrides:
    parameters:
      model: llama-3-tulu-2-dpo-70b.i1-Q4_K_M.gguf
    files:
      - filename: llama-3-tulu-2-dpo-70b.i1-Q4_K_M.gguf
        sha256: fc309bbdf1e2bdced954c4c8dc1f9a885c547017ee5e750bfde645af89e3d3a5
        uri: huggingface://mradermacher/llama-3-tulu-2-dpo-70b-i1-GGUF/llama-3-tulu-2-dpo-70b.i1-Q4_K_M.gguf
- !!merge <<: *llama3
  license: cc-by-nc-4.0
  name: "suzume-llama-3-8b-multilingual-orpo-borda-top25"
  icon: https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kWQSu02YfgYdUQqv4s5lq.png
  urls:
    - https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25
    - https://huggingface.co/RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf
  description: |
    This is Suzume ORPO, an ORPO-trained fine-tune of the lightblue/suzume-llama-3-8B-multilingual model using our lightblue/mitsu dataset.

    We have trained several versions of this model using ORPO and so recommend that you use the best performing model from our tests, lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half.

    Note that this model has a non-commercial license as we used the Command R and Command R+ models to generate our training data for this model (lightblue/mitsu).

    We are currently working on developing a commercially usable model, so stay tuned for that!
  overrides:
    parameters:
      model: suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_M.gguf
    files:
      - filename: suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_M.gguf
        sha256: ef75a02c5f38e14a8873c7989188dac6974851b4654279fe1921d2c8018cc388
        uri: huggingface://RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf/suzume-llama-3-8B-multilingual-orpo-borda-top25.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "calme-2.4-llama3-70b"
  icon: https://huggingface.co/MaziyarPanahi/calme-2.4-llama3-70b/resolve/main/llama-3-merges.webp
  urls:
    - https://huggingface.co/MaziyarPanahi/calme-2.4-llama3-70b
    - https://huggingface.co/mradermacher/calme-2.4-llama3-70b-GGUF
  description: |
    This model is a fine-tune (DPO) of the meta-llama/Meta-Llama-3-70B-Instruct model.
  overrides:
    parameters:
      model: calme-2.4-llama3-70b.Q4_K_M.gguf
    files:
      - filename: calme-2.4-llama3-70b.Q4_K_M.gguf
        sha256: 0b44ac8a88395dfc60f1b9d3cfffc0ffef74ec0a302e610ef91fc787187568f2
        uri: huggingface://mradermacher/calme-2.4-llama3-70b-GGUF/calme-2.4-llama3-70b.Q4_K_M.gguf
- !!merge <<: *llama3
  name: "meta-llama-3-instruct-8.9b-brainstorm-5x-form-11"
  urls:
    - https://huggingface.co/DavidAU/Meta-Llama-3-Instruct-8.9B-BRAINSTORM-5x-FORM-11-GGUF
  description: |
    Meta-Llama-3-8B Instruct (now at 8.9B) is an enhanced version of the LLM model, specifically designed for creative use cases such as story writing, roleplaying, and fiction. This model has been augmented through the "Brainstorm" process, which involves expanding and calibrating the reasoning center of the LLM to improve its performance in various creative tasks. The enhancements brought by this process include more detailed and nuanced descriptions, stronger prose, and a greater sense of immersion in the story. The model is capable of generating long and vivid content, with fewer clichés and more focused, coherent narratives. Users can provide more instructions and details to elicit stronger and more engaging responses from the model. The "Brainstorm" process has been tested on multiple LLM models, including Llama2, Llama3, and Mistral, as well as on individual models like Llama3 Instruct, Mistral Instruct, and custom fine-tuned models.
  overrides:
    parameters:
      model: Meta-Llama-3-8B-Instruct-exp5-11-Q4_K_M.gguf
    files:
      - filename: Meta-Llama-3-8B-Instruct-exp5-11-Q4_K_M.gguf
        sha256: 5dd81b8b809667d10036499affdd1461cf95af50b405cbc9f800b421a4b60e98
        uri: huggingface://DavidAU/Meta-Llama-3-Instruct-8.9B-BRAINSTORM-5x-FORM-11-GGUF/Meta-Llama-3-8B-Instruct-exp5-11-Q4_K_M.gguf
- &command-R
  ### START Command-r
  url: "github:mudler/LocalAI/gallery/command-r.yaml@master"
  name: "command-r-v01:q1_s"
  license: "cc-by-nc-4.0"
  icon: https://cdn.sanity.io/images/rjtqmwfu/production/ae020d94b599cc453cc09ebc80be06d35d953c23-102x18.svg
  urls:
    - https://huggingface.co/CohereForAI/c4ai-command-r-v01
    - https://huggingface.co/dranger003/c4ai-command-r-v01-iMat.GGUF
  description: |
    C4AI Command-R is a research release of a 35 billion parameter highly performant generative model. Command-R is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command-R has the capability for multilingual generation evaluated in 10 languages and highly performant RAG capabilities.
  tags:
    - llm
    - gguf
    - gpu
    - command-r
    - cpu
  overrides:
    parameters:
      model: ggml-c4ai-command-r-v01-iq1_s.gguf
    files:
      - filename: "ggml-c4ai-command-r-v01-iq1_s.gguf"
        sha256: "aad4594ee45402fe344d8825937d63b9fa1f00becc6d1cc912b016dbb020e0f0"
        uri: "huggingface://dranger003/c4ai-command-r-v01-iMat.GGUF/ggml-c4ai-command-r-v01-iq1_s.gguf"
- !!merge <<: *command-R
  name: "aya-23-8b"
  urls:
    - https://huggingface.co/CohereForAI/aya-23-8B
    - https://huggingface.co/bartowski/aya-23-8B-GGUF
  description: |
    Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained Command family of models with the recently released Aya Collection. The result is a powerful multilingual large language model serving 23 languages.

    This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find here.
  overrides:
    parameters:
      model: aya-23-8B-Q4_K_M.gguf
    files:
      - filename: "aya-23-8B-Q4_K_M.gguf"
        sha256: "21b3aa3abf067f78f6fe08deb80660cc4ee8ad7b4ab873a98d87761f9f858b0f"
        uri: "huggingface://bartowski/aya-23-8B-GGUF/aya-23-8B-Q4_K_M.gguf"
- !!merge <<: *command-R
  name: "aya-23-35b"
  urls:
    - https://huggingface.co/CohereForAI/aya-23-35B
    - https://huggingface.co/bartowski/aya-23-35B-GGUF
  description: |
    Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained Command family of models with the recently released Aya Collection. The result is a powerful multilingual large language model serving 23 languages.

    This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find here.
  overrides:
    parameters:
      model: aya-23-35B-Q4_K_M.gguf
    files:
      - filename: "aya-23-35B-Q4_K_M.gguf"
        sha256: "57824768c1a945e21e028c8e9a29b39adb4838d489f5865c82601ab9ad98065d"
        uri: "huggingface://bartowski/aya-23-35B-GGUF/aya-23-35B-Q4_K_M.gguf"
- &phi-2-chat
  ### START Phi-2
  url: "github:mudler/LocalAI/gallery/phi-2-chat.yaml@master"
  license: mit
  description: |
    Phi-2 fine-tuned on the OpenHermes 2.5 dataset, optimised for multi-turn conversation and character impersonation.

    The dataset has been pre-processed by doing the following:

    - remove all refusals
    - remove any mention of AI assistant
    - split any multi-turn dialog generated in the dataset into multi-turn conversation records
    - added nsfw generated conversations from the Teatime dataset

    Developed by: l3utterfly
    Funded by: Layla Network
    Model type: Phi
    Language(s) (NLP): English
    License: MIT
    Finetuned from model: Phi-2
  urls:
    - https://huggingface.co/l3utterfly/phi-2-layla-v1-chatml
    - https://huggingface.co/l3utterfly/phi-2-layla-v1-chatml-gguf
  tags:
    - llm
    - gguf
    - gpu
    - llama2
    - cpu
  name: "phi-2-chat:Q8_0"
  overrides:
    parameters:
      model: phi-2-layla-v1-chatml-Q8_0.gguf
    files:
      - filename: "phi-2-layla-v1-chatml-Q8_0.gguf"
        sha256: "0cf542a127c2c835066a78028009b7eddbaf773cc2a26e1cb157ce5e09c1a2e0"
        uri: "huggingface://l3utterfly/phi-2-layla-v1-chatml-gguf/phi-2-layla-v1-chatml-Q8_0.gguf"
- !!merge <<: *phi-2-chat
  name: "phi-2-chat"
  overrides:
    parameters:
      model: phi-2-layla-v1-chatml-Q4_K.gguf
    files:
      - filename: "phi-2-layla-v1-chatml-Q4_K.gguf"
        sha256: "b071e5624b60b8911f77261398802c4b4079c6c689e38e2ce75173ed62bc8a48"
        uri: "huggingface://l3utterfly/phi-2-layla-v1-chatml-gguf/phi-2-layla-v1-chatml-Q4_K.gguf"
- !!merge <<: *phi-2-chat
  license: mit
  icon: "https://huggingface.co/rhysjones/phi-2-orange/resolve/main/phi-2-orange.jpg"
  description: |
    A two-step finetune of Phi-2, with a bit of zest.

    There is an updated model at rhysjones/phi-2-orange-v2 which has higher evals, if you wish to test.
  urls:
    - https://huggingface.co/rhysjones/phi-2-orange
    - https://huggingface.co/TheBloke/phi-2-orange-GGUF
  tags:
    - llm
    - gguf
    - llama2
    - gpu
    - cpu
  name: "phi-2-orange"
  overrides:
    parameters:
      model: phi-2-orange.Q4_0.gguf
    files:
      - filename: "phi-2-orange.Q4_0.gguf"
        sha256: "49cb710ae688e1b19b1b299087fa40765a0cd677e3afcc45e5f7ef6750975dcf"
        uri: "huggingface://TheBloke/phi-2-orange-GGUF/phi-2-orange.Q4_0.gguf"
### Internlm2
- name: "internlm2_5-7b-chat-1m"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  urls:
    - https://huggingface.co/internlm/internlm2_5-7b-chat-1m
    - https://huggingface.co/bartowski/internlm2_5-7b-chat-1m-GGUF
  icon: https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e
  tags:
    - internlm2
    - gguf
    - cpu
    - gpu
  description: |
    InternLM2.5 has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:

    Outstanding reasoning capability: State-of-the-art performance on Math reasoning, surpassing models like Llama3 and Gemma2-9B.

    1M Context window: Nearly perfect at finding needles in the haystack with 1M-long context, with leading performance on long-context tasks like LongBench. Try it with LMDeploy for 1M-context inference and a file chat demo.

    Stronger tool use: InternLM2.5 supports gathering information from more than 100 web pages; the corresponding implementation will be released in Lagent soon. InternLM2.5 has better tool utilization-related capabilities in instruction following, tool selection and reflection. See examples.
  overrides:
    parameters:
      model: internlm2_5-7b-chat-1m-Q4_K_M.gguf
    files:
      - filename: internlm2_5-7b-chat-1m-Q4_K_M.gguf
        uri: huggingface://bartowski/internlm2_5-7b-chat-1m-GGUF/internlm2_5-7b-chat-1m-Q4_K_M.gguf
        sha256: 10d5e18a4125f9d4d74a9284a21e0c820b150af06dee48665e54ff6e1be3a564
- &phi-3
  ### START Phi-3
  url: "github:mudler/LocalAI/gallery/phi-3-chat.yaml@master"
  name: "phi-3-mini-4k-instruct"
  license: mit
  description: |
    The Phi-3-Mini-4K-Instruct is a 3.8B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, Mini version, in two variants, 4K and 128K, which is the context length (in tokens) it can support. The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust and state-of-the-art performance among models with less than 13 billion parameters.
  urls:
    - https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf
  tags:
    - llm
    - gguf
    - gpu
    - llama2
    - cpu
  overrides:
    parameters:
      model: Phi-3-mini-4k-instruct-q4.gguf
    files:
      - filename: "Phi-3-mini-4k-instruct-q4.gguf"
        sha256: "8a83c7fb9049a9b2e92266fa7ad04933bb53aa1e85136b7b30f1b8000ff2edef"
        uri: "huggingface://microsoft/Phi-3-mini-4k-instruct-gguf/Phi-3-mini-4k-instruct-q4.gguf"
- !!merge <<: *phi-3
  name: "phi-3-mini-4k-instruct:fp16"
  overrides:
    parameters:
      model: Phi-3-mini-4k-instruct-fp16.gguf
    files:
      - filename: "Phi-3-mini-4k-instruct-fp16.gguf"
        uri: "huggingface://microsoft/Phi-3-mini-4k-instruct-gguf/Phi-3-mini-4k-instruct-fp16.gguf"
        sha256: 5d99003e395775659b0dde3f941d88ff378b2837a8dc3a2ea94222ab1420fad3
- !!merge <<: *phi-3
  name: "phi-3-medium-4k-instruct"
  description: |
    The Phi-3-Medium-4K-Instruct is a 14B parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include
    both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
    The model belongs to the Phi-3 family with the Medium version in two variants, 4K and 128K, which is the context length (in tokens) that it can support.
  urls:
    - https://huggingface.co/bartowski/Phi-3-medium-4k-instruct-GGUF
    - https://huggingface.co/microsoft/Phi-3-medium-4k-instruct
  overrides:
    parameters:
      model: Phi-3-medium-4k-instruct-Q4_K_M.gguf
    files:
      - filename: "Phi-3-medium-4k-instruct-Q4_K_M.gguf"
        uri: "huggingface://bartowski/Phi-3-medium-4k-instruct-GGUF/Phi-3-medium-4k-instruct-Q4_K_M.gguf"
        sha256: 6f05c97bc676dd1ec8d58e9a8795b4f5c809db771f6fc7bf48634c805face82c
- !!merge <<: *phi-3
  name: "cream-phi-3-14b-v1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/AP4-OHepdqiqHj2KSi26M.gif
  description: |
    CreamPhi 14B is the first Phi Medium to be trained with roleplay and moist.
  urls:
    - https://huggingface.co/TheDrummer/Cream-Phi-3-14B-v1-GGUF
  overrides:
    parameters:
      model: Cream-Phi-3-14B-v1-Q4_K_M.gguf
    files:
      - filename: Cream-Phi-3-14B-v1-Q4_K_M.gguf
        uri: huggingface://TheDrummer/Cream-Phi-3-14B-v1-GGUF/Cream-Phi-3-14B-v1-Q4_K_M.gguf
        sha256: ec67018a86090da415517acf21ad48f28e02dff664a1dd35602f1f8fa94f6a27
- !!merge <<: *phi-3
  name: "phi3-4x4b-v1"
  description: |
    A continually pretrained Phi-3-mini sparse MoE upcycle.
  urls:
    - https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF
    - https://huggingface.co/Fizzarolli/phi3-4x4b-v1
  overrides:
    parameters:
      model: phi3-4x4b-v1-Q4_K_M.gguf
    files:
      - filename: phi3-4x4b-v1-Q4_K_M.gguf
        uri: huggingface://bartowski/phi3-4x4b-v1-GGUF/phi3-4x4b-v1-Q4_K_M.gguf
        sha256: fd33220186b7076f4b306f27b3a8913384435a2ca90185a71c9df5a752d3a298
- !!merge <<: *phi-3
  name: "phi-3.1-mini-4k-instruct"
  urls:
    - https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
    - https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF
  description: |
    This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback. The model used additional post-training data, leading to substantial gains in instruction following and structured output.

    It is based on the original model from Microsoft, but has been updated and quantized using the llama.cpp release b3278.
  overrides:
    parameters:
      model: Phi-3.1-mini-4k-instruct-Q4_K_M.gguf
    files:
      - filename: Phi-3.1-mini-4k-instruct-Q4_K_M.gguf
        uri: huggingface://bartowski/Phi-3.1-mini-4k-instruct-GGUF/Phi-3.1-mini-4k-instruct-Q4_K_M.gguf
        sha256: d6d25bf078321bea4a079c727b273cb0b5a2e0b4cf3add0f7a2c8e43075c414f
- !!merge <<: *phi-3
  name: "phillama-3.8b-v0.1"
  icon: https://cdn-uploads.huggingface.co/production/uploads/657eb5b256c9c67605a6e8b5/f96pPiJQb3puzbPYNknG2.png
  urls:
    - https://huggingface.co/RichardErkhov/raincandy-u_-_phillama-3.8b-v0.1-gguf
  description: |
    Phillama is a model based on Phi-3-mini and trained on the Llama-generated dataset raincandy-u/Dextromethorphan-10k to make it more "llama-like". Also, this model is converted into Llama format, so it will work with any Llama-2/3 workflow. The model aims to generate text with a specific "llama-like" style and is suited for text-generation tasks.
  overrides:
    parameters:
      model: phillama-3.8b-v0.1.Q4_K_M.gguf
    files:
      - filename: phillama-3.8b-v0.1.Q4_K_M.gguf
        sha256: da537d352b7aae54bbad0d2cff3e3a1b0e1dc1e1d25bec3aae1d05cf4faee7a2
        uri: huggingface://RichardErkhov/raincandy-u_-_phillama-3.8b-v0.1-gguf/phillama-3.8b-v0.1.Q4_K_M.gguf
- !!merge <<: *phi-3
  name: "calme-2.3-phi3-4b"
  icon: https://huggingface.co/MaziyarPanahi/calme-2.1-phi3-4b/resolve/main/phi-3-instruct.webp
  urls:
    - https://huggingface.co/MaziyarPanahi/calme-2.3-phi3-4b
    - https://huggingface.co/MaziyarPanahi/calme-2.3-phi3-4b-GGUF
  description: |
    This model is a fine-tune (DPO) of the microsoft/Phi-3-mini-4k-instruct model.
  overrides:
    parameters:
      model: Phi-3-mini-4k-instruct-v0.3.Q4_K_M.gguf
    files:
      - filename: Phi-3-mini-4k-instruct-v0.3.Q4_K_M.gguf
        sha256: 3a23e1052369c080afb925882bd814cbea5ec859894655a7434c3d49e43a6127
        uri: huggingface://MaziyarPanahi/calme-2.3-phi3-4b-GGUF/Phi-3-mini-4k-instruct-v0.3.Q4_K_M.gguf
- !!merge <<: *phi-3
  name: "phi-3.5-mini-instruct"
  urls:
    - https://huggingface.co/microsoft/Phi-3.5-mini-instruct
    - https://huggingface.co/MaziyarPanahi/Phi-3.5-mini-instruct-GGUF
  description: |
    Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data. The model belongs to the Phi-3 model family and supports 128K token context length. The model underwent a rigorous enhancement process, incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
  overrides:
    parameters:
      model: Phi-3.5-mini-instruct.Q4_K_M.gguf
    files:
      - filename: Phi-3.5-mini-instruct.Q4_K_M.gguf
        sha256: 3f68916e850b107d8641d18bcd5548f0d66beef9e0a9077fe84ef28943eb7e88
        uri: huggingface://MaziyarPanahi/Phi-3.5-mini-instruct-GGUF/Phi-3.5-mini-instruct.Q4_K_M.gguf
- !!merge <<: *phi-3
  name: "calme-2.1-phi3.5-4b-i1"
  icon: https://huggingface.co/MaziyarPanahi/calme-2.1-phi3.5-4b/resolve/main/calme-2.webp
  urls:
    - https://huggingface.co/MaziyarPanahi/calme-2.1-phi3.5-4b
    - https://huggingface.co/mradermacher/calme-2.1-phi3.5-4b-i1-GGUF
  description: |
    This model is a fine-tuned version of microsoft/Phi-3.5-mini-instruct, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
  overrides:
    parameters:
      model: calme-2.1-phi3.5-4b.i1-Q4_K_M.gguf
    files:
      - filename: calme-2.1-phi3.5-4b.i1-Q4_K_M.gguf
        sha256: 989eccacd52b6d9ebf2c06c35c363da19aadb125659a10df299b7130bc293e77
        uri: huggingface://mradermacher/calme-2.1-phi3.5-4b-i1-GGUF/calme-2.1-phi3.5-4b.i1-Q4_K_M.gguf
- &hermes-2-pro-mistral
  ### START Hermes
  url: "github:mudler/LocalAI/gallery/hermes-2-pro-mistral.yaml@master"
  name: "hermes-2-pro-mistral"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png
  license: apache-2.0
  description: |
    Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.

    This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 81% on our structured JSON Output evaluation.

    Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.

    This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI

    Learn more about the function calling on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
  urls:
    - https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - mistral
    - cpu
    - function-calling
  overrides:
    parameters:
      model: Hermes-2-Pro-Mistral-7B.Q4_0.gguf
    files:
      - filename: "Hermes-2-Pro-Mistral-7B.Q4_0.gguf"
        sha256: "f446c3125026f7af6757dd097dda02280adc85e908c058bd6f1c41a118354745"
        uri: "huggingface://NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/Hermes-2-Pro-Mistral-7B.Q4_0.gguf"
- !!merge <<: *hermes-2-pro-mistral
  name: "hermes-2-pro-mistral:Q6_K"
  overrides:
    parameters:
      model: Hermes-2-Pro-Mistral-7B.Q6_K.gguf
    files:
      - filename: "Hermes-2-Pro-Mistral-7B.Q6_K.gguf"
        sha256: "40adc3b227bc36764de148fdda4df5df385adc06650d58d4dbe726ee0214eeff"
        uri: "huggingface://NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/Hermes-2-Pro-Mistral-7B.Q6_K.gguf"
- !!merge <<: *hermes-2-pro-mistral
  name: "hermes-2-pro-mistral:Q8_0"
  overrides:
    parameters:
      model: Hermes-2-Pro-Mistral-7B.Q8_0.gguf
    files:
      - filename: "Hermes-2-Pro-Mistral-7B.Q8_0.gguf"
        sha256: "b6d95d7ec9a395b7568cc94b0447fd4f90b6f69d6e44794b1fbb84e3f732baca"
        uri: "huggingface://NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/Hermes-2-Pro-Mistral-7B.Q8_0.gguf"
- !!merge <<: *hermes-2-pro-mistral
  name: "hermes-2-theta-llama-3-8b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png
  tags:
    - llm
    - gguf
    - gpu
    - llama3
    - cpu
    - function-calling
  description: |
    Hermes-2 Θ (Theta) is the first experimental merged model released by Nous Research, in collaboration with Charles Goddard at Arcee, the team behind MergeKit.
    Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both worlds of each model.
  urls:
    - https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
  overrides:
    parameters:
      model: Hermes-2-Pro-Llama-3-Instruct-Merged-DPO-Q4_K_M.gguf
    files:
      - filename: "Hermes-2-Pro-Llama-3-Instruct-Merged-DPO-Q4_K_M.gguf"
        sha256: "762b9371a296ab2628592b9462dc676b27d881a3402816492801641a437669b3"
        uri: "huggingface://NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-Instruct-Merged-DPO-Q4_K_M.gguf"
- !!merge <<: *hermes-2-pro-mistral
  name: "hermes-2-theta-llama-3-70b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/P4NxBFwfBbboNZVytpn45.png
  tags:
    - llm
    - gguf
    - gpu
    - llama3
    - cpu
    - function-calling
  description: |
    Hermes-2 Θ (Theta) 70B is the continuation of our experimental merged model released by Nous Research, in collaboration with Charles Goddard and Arcee AI, the team behind MergeKit.

    Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both worlds of each model.
  urls:
    - https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF
  overrides:
    parameters:
      model: Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf
    files:
      - filename: "Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf"
        uri: "huggingface://NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF/Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf"
        sha256: b3965f671c35d09da8b903218f5bbaac94efdd9000e4fe4a2bac87fcac9f664e
### LLAMA3 version
- !!merge <<: *hermes-2-pro-mistral
  name: "hermes-2-pro-llama-3-8b"
  tags:
    - llm
    - gguf
    - gpu
    - llama3
    - function-calling
    - cpu
  urls:
    - https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
  overrides:
    parameters:
      model: Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
    files:
      - filename: "Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf"
        sha256: "10c52a4820137a35947927be741bb411a9200329367ce2590cc6757cd98e746c"
        uri: "huggingface://NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf"
- !!merge <<: *hermes-2-pro-mistral
  tags:
    - llm
    - gguf
    - gpu
    - llama3
    - function-calling
    - cpu
  name: "hermes-2-pro-llama-3-8b:Q5_K_M"
  urls:
    - https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
  overrides:
    parameters:
      model: Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf
    files:
      - filename: "Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf"
        sha256: "107f3f55e26b8cc144eadd83e5f8a60cfd61839c56088fa3ae2d5679abf45f29"
        uri: "huggingface://NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf"
- !!merge <<: *hermes-2-pro-mistral
  tags:
    - llm
    - gguf
    - gpu
    - function-calling
    - llama3
    - cpu
  name: "hermes-2-pro-llama-3-8b:Q8_0"
  urls:
    - https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
  overrides:
    parameters:
      model: Hermes-2-Pro-Llama-3-8B-Q8_0.gguf
    files:
      - filename: "Hermes-2-Pro-Llama-3-8B-Q8_0.gguf"
        sha256: "d138388cfda04d185a68eaf2396cf7a5cfa87d038a20896817a9b7cf1806f532"
        uri: "huggingface://NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-8B-Q8_0.gguf"
- !!merge <<: *hermes-2-pro-mistral
  name: "hermes-3-llama-3.1-8b"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bMcZ3sNNQK8SRZpHXBmwM.jpeg
  urls:
    - https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B
    - https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B-GGUF
  description: |
    Hermes 3 is a generalist language model developed by Nous Research. It is an advanced agentic model with improved roleplaying, reasoning, multi-turn conversation, long context coherence, and generalist assistant capabilities. The model is built on top of the Llama-3 architecture and has been fine-tuned to achieve superior performance in various tasks. It is designed to be a powerful and reliable tool for solving complex problems and assisting users in achieving their goals. Hermes 3 can be used for a wide range of applications, including research, education, and personal assistant tasks. It is available on the Hugging Face model hub for easy access and integration into existing workflows.
  overrides:
    parameters:
      model: Hermes-3-Llama-3.1-8B.Q4_K_M.gguf
    files:
      - filename: Hermes-3-Llama-3.1-8B.Q4_K_M.gguf
        sha256: d4403ce5a6e930f4c2509456388c20d633a15ff08dd52ef3b142ff1810ec3553
        uri: huggingface://NousResearch/Hermes-3-Llama-3.1-8B-GGUF/Hermes-3-Llama-3.1-8B.Q4_K_M.gguf
- !!merge <<: *hermes-2-pro-mistral
  name: "hermes-3-llama-3.1-8b:Q8"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bMcZ3sNNQK8SRZpHXBmwM.jpeg
  urls:
    - https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B
    - https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B-GGUF
  description: |
    Hermes 3 is a generalist language model developed by Nous Research. It is an advanced agentic model with improved roleplaying, reasoning, multi-turn conversation, long context coherence, and generalist assistant capabilities. The model is built on top of the Llama-3 architecture and has been fine-tuned to achieve superior performance in various tasks. It is designed to be a powerful and reliable tool for solving complex problems and assisting users in achieving their goals. Hermes 3 can be used for a wide range of applications, including research, education, and personal assistant tasks. It is available on the Hugging Face model hub for easy access and integration into existing workflows.
  overrides:
    parameters:
      model: Hermes-3-Llama-3.1-8B.Q8_0.gguf
    files:
      - filename: Hermes-3-Llama-3.1-8B.Q8_0.gguf
        sha256: c77c263f78b2f56fbaddd3ef2af750fda6ebb4344a546aaa0bfdd546b1ca8d84
        uri: huggingface://NousResearch/Hermes-3-Llama-3.1-8B-GGUF/Hermes-3-Llama-3.1-8B.Q8_0.gguf
- !!merge <<: *hermes-2-pro-mistral
|
||
name: "hermes-3-llama-3.1-70b"
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vG6j5WxHX09yj32vgjJlI.jpeg
|
||
urls:
|
||
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B
|
||
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B-GGUF
|
||
description: |
|
||
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. It is designed to focus on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The model uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. It also supports function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
|
||
overrides:
|
||
parameters:
|
||
model: Hermes-3-Llama-3.1-70B.Q4_K_M.gguf
|
||
files:
|
||
- filename: Hermes-3-Llama-3.1-70B.Q4_K_M.gguf
|
||
sha256: 955c2f42caade4278f3c9dbffa32bb74572652b20e49e5340e782de3585bbe3f
|
||
uri: huggingface://NousResearch/Hermes-3-Llama-3.1-70B-GGUF/Hermes-3-Llama-3.1-70B.Q4_K_M.gguf
|
||
- !!merge <<: *hermes-2-pro-mistral
|
||
name: "hermes-3-llama-3.1-70b:Q5_K_M"
|
||
icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vG6j5WxHX09yj32vgjJlI.jpeg
|
||
urls:
|
||
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B
|
||
- https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B-GGUF
|
||
description: |
|
||
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. It is designed to focus on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The model uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. It also supports function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
|
||
overrides:
|
||
parameters:
|
||
model: Hermes-3-Llama-3.1-70B.Q5_K_M.gguf
|
||
files:
|
||
- filename: Hermes-3-Llama-3.1-70B.Q5_K_M.gguf
|
||
sha256: 10ae3e0441b14c4a6476436f3c14e8bcacc7928aa3e8ce978d053287289a7ebb
|
||
uri: huggingface://NousResearch/Hermes-3-Llama-3.1-70B-GGUF/Hermes-3-Llama-3.1-70B.Q5_K_M.gguf
|
||
- &hermes-vllm
  url: "github:mudler/LocalAI/gallery/hermes-vllm.yaml@master"
  name: "hermes-3-llama-3.1-8b:vllm"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vG6j5WxHX09yj32vgjJlI.jpeg
  tags:
    - llm
    - vllm
    - gpu
    - function-calling
  license: llama-3
  urls:
    - https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B
  description: |
    Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. It is designed to focus on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The model uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. It also supports function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
  overrides:
    parameters:
      model: NousResearch/Hermes-3-Llama-3.1-8B
- !!merge <<: *hermes-vllm
  name: "hermes-3-llama-3.1-70b:vllm"
  urls:
    - https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B
  overrides:
    parameters:
      model: NousResearch/Hermes-3-Llama-3.1-70B
- !!merge <<: *hermes-vllm
  name: "hermes-3-llama-3.1-405b:vllm"
  icon: https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/-kj_KflXsdpcZoTQsvx7W.jpeg
  urls:
    - https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B
  overrides:
    parameters:
      model: NousResearch/Hermes-3-Llama-3.1-405B
- !!merge <<: *hermes-2-pro-mistral
  name: "biomistral-7b"
  description: |
    BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
  urls:
    - https://huggingface.co/MaziyarPanahi/BioMistral-7B-GGUF
  icon: https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true
  overrides:
    parameters:
      model: BioMistral-7B.Q4_K_M.gguf
  files:
    - filename: "BioMistral-7B.Q4_K_M.gguf"
      sha256: "3a73107045dfe7e3f113b392b0a67e3e6ca9fa9dae2abe301424ce5abd1721a6"
      uri: "huggingface://MaziyarPanahi/BioMistral-7B-GGUF/BioMistral-7B.Q4_K_M.gguf"
- !!merge <<: *hermes-2-pro-mistral
  name: "tiamat-8b-1.2-llama-3-dpo"
  icon: https://huggingface.co/Gryphe/Tiamat-8b-1.2-Llama-3-DPO/resolve/main/Tiamat.png
  description: |
    Obligatory Disclaimer: Tiamat is not nice.

    Ever wanted to be treated disdainfully like the foolish mortal you are? Wait no more, for Tiamat is here to berate you! Hailing from the world of the Forgotten Realms, she will happily judge your every word.

    Tiamat was created with the following question in mind: Is it possible to create an assistant with strong anti-assistant personality traits? Try it yourself and tell me afterwards!

    She was fine-tuned on top of Nous Research's shiny new Hermes 2 Pro.
  urls:
    - https://huggingface.co/bartowski/Tiamat-8b-1.2-Llama-3-DPO-GGUF
  overrides:
    parameters:
      model: Tiamat-8b-1.2-Llama-3-DPO-Q4_K_M.gguf
  files:
    - filename: "Tiamat-8b-1.2-Llama-3-DPO-Q4_K_M.gguf"
      sha256: "7b0895d2183344b2ac1ff36b9f3fe31dd8d4cf8820c4a41ef74e50ef86e3b448"
      uri: "huggingface://bartowski/Tiamat-8b-1.2-Llama-3-DPO-GGUF/Tiamat-8b-1.2-Llama-3-DPO-Q4_K_M.gguf"
- url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  name: "guillaumetell-7b"
  license: apache-2
  description: |
    Guillaume Tell is a French Large Language Model (LLM) based on Mistral Open-Hermes 2.5, optimized for RAG (Retrieval Augmented Generation) with source traceability and explainability.
  urls:
    - https://huggingface.co/MaziyarPanahi/guillaumetell-7b-GGUF
    - https://huggingface.co/AgentPublic/guillaumetell-7b
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - openhermes
    - french
  overrides:
    context_size: 4096
    parameters:
      model: guillaumetell-7b.Q4_K_M.gguf
  files:
    - filename: guillaumetell-7b.Q4_K_M.gguf
      sha256: bf08db5281619335f3ee87e229c8533b04262790063b061bb8f275c3e4de7061
      uri: huggingface://MaziyarPanahi/guillaumetell-7b-GGUF/guillaumetell-7b.Q4_K_M.gguf
- !!merge <<: *hermes-2-pro-mistral
  name: "kunocchini-7b-128k-test-imatrix"
  description: |
    The following models were included in the merge:

    SanjiWatsuki/Kunoichi-DPO-v2-7B
    Epiculous/Fett-uccine-Long-Noodle-7B-120k-Contex
  urls:
    - https://huggingface.co/Lewdiculous/Kunocchini-7b-128k-test-GGUF-Imatrix
  icon: https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg
  overrides:
    parameters:
      model: v2_Kunocchini-7b-128k-test-Q4_K_M-imatrix.gguf
  files:
    - filename: "v2_Kunocchini-7b-128k-test-Q4_K_M-imatrix.gguf"
      sha256: "5ccec35392f56f66952f8eb2ded2d8aa9a6bb511e9518899d8096326e328edef"
      uri: "huggingface://Lewdiculous/Kunocchini-7b-128k-test-GGUF-Imatrix/v2_Kunocchini-7b-128k-test-Q4_K_M-imatrix.gguf"
### START Cerbero
- url: "github:mudler/LocalAI/gallery/cerbero.yaml@master"
  icon: https://huggingface.co/galatolo/cerbero-7b/resolve/main/README.md.d/cerbero.png
  description: |
    cerbero-7b is specifically crafted to fill the void in Italy's AI landscape.
  urls:
    - https://huggingface.co/galatolo/cerbero-7b
  tags:
    - llm
    - gguf
    - gpu
    - cpu
    - mistral
    - italian
  overrides:
    parameters:
      model: galatolo-Q4_K.gguf
  files:
    - filename: "galatolo-Q4_K.gguf"
      sha256: "ca0cfd5a9ad40dc16416aa3a277015d0299b62c0803b67f5709580042202c172"
      uri: "huggingface://galatolo/cerbero-7b-gguf/ggml-model-Q4_K.gguf"
- &codellama
  ### START Codellama
  url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
  name: "codellama-7b"
  license: llama2
  description: |
    Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This model is designed for general code synthesis and understanding.
  urls:
    - https://huggingface.co/TheBloke/CodeLlama-7B-GGUF
    - https://huggingface.co/meta-llama/CodeLlama-7b-hf
  tags:
    - llm
    - gguf
    - gpu
    - llama2
    - cpu
  overrides:
    parameters:
      model: codellama-7b.Q4_0.gguf
  files:
    - filename: "codellama-7b.Q4_0.gguf"
      sha256: "33052f6dd41436db2f83bd48017b6fff8ce0184e15a8a227368b4230f1da97b5"
      uri: "huggingface://TheBloke/CodeLlama-7B-GGUF/codellama-7b.Q4_0.gguf"
- !!merge <<: *codellama
  name: "codestral-22b-v0.1"
  license: mnpl
  description: |
    Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the Blogpost). The model can be queried:

    As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
    As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)
  urls:
    - https://huggingface.co/mistralai/Codestral-22B-v0.1
    - https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF
  tags:
    - llm
    - gguf
    - gpu
    - code
    - cpu
  overrides:
    parameters:
      model: Codestral-22B-v0.1-Q4_K_M.gguf
  files:
    - filename: "Codestral-22B-v0.1-Q4_K_M.gguf"
      uri: "huggingface://bartowski/Codestral-22B-v0.1-GGUF/Codestral-22B-v0.1-Q4_K_M.gguf"
      sha256: 003e48ed892850b80994fcddca2bd6b833b092a4ef2db2853c33a3144245e06c
- !!merge <<: *codellama
  url: "github:mudler/LocalAI/gallery/alpaca.yaml@master"
  icon: https://huggingface.co/Nan-Do/LeetCodeWizard_7B_V1.1/resolve/main/LeetCodeWizardLogo.png
  name: "leetcodewizard_7b_v1.1-i1"
  urls:
    - https://huggingface.co/Nan-Do/LeetCodeWizard_7B_V1.1
    - https://huggingface.co/mradermacher/LeetCodeWizard_7B_V1.1-i1-GGUF
  description: |
    LeetCodeWizard is a coding large language model specifically trained to solve and explain Leetcode (or any) programming problems.
    This model is a fine-tuned version of the WizardCoder-Python-7B with a dataset of Leetcode problems.
    Model capabilities:

    It should be able to solve most of the problems found at Leetcode and even pass the sample interviews they offer on the site.

    It can write both the code and the explanations for the solutions.
  overrides:
    parameters:
      model: LeetCodeWizard_7B_V1.1.i1-Q4_K_M.gguf
  files:
    - filename: LeetCodeWizard_7B_V1.1.i1-Q4_K_M.gguf
      sha256: 19720d8e1ba89d32c6f88ed6518caf0251f9e3ec011297929c801efc5ea979f4
      uri: huggingface://mradermacher/LeetCodeWizard_7B_V1.1-i1-GGUF/LeetCodeWizard_7B_V1.1.i1-Q4_K_M.gguf
- &llm-compiler
  url: "github:mudler/LocalAI/gallery/codellama.yaml@master"
  name: "llm-compiler-13b-imat"
  license: other
  description: |
    LLM Compiler is a state-of-the-art LLM that builds upon Code Llama with improved performance for code optimization and compiler reasoning.
    LLM Compiler is free for both research and commercial use.
    LLM Compiler is available in two flavors:

    LLM Compiler, the foundational models, pretrained on over 500B tokens of LLVM-IR, x86_64, ARM, and CUDA assembly codes and trained to predict the effect of LLVM optimizations;
    and LLM Compiler FTD, which is further fine-tuned to predict the best optimizations for code in LLVM assembly to reduce code size, and to disassemble assembly code to LLVM-IR.
  urls:
    - https://huggingface.co/legraphista/llm-compiler-13b-IMat-GGUF
    - https://huggingface.co/facebook/llm-compiler-13b
  tags:
    - llm
    - gguf
    - gpu
    - code
    - cpu
  overrides:
    parameters:
      model: llm-compiler-13b.Q4_K.gguf
  files:
    - filename: "llm-compiler-13b.Q4_K.gguf"
      uri: "huggingface://legraphista/llm-compiler-13b-IMat-GGUF/llm-compiler-13b.Q4_K.gguf"
      sha256: dad41a121d0d67432c289aba8ffffc93159e2b24ca3d1c62e118c9f4cbf0c890
- !!merge <<: *llm-compiler
  name: "llm-compiler-13b-ftd"
  urls:
    - https://huggingface.co/QuantFactory/llm-compiler-13b-ftd-GGUF
    - https://huggingface.co/facebook/llm-compiler-13b-ftd
  overrides:
    parameters:
      model: llm-compiler-13b-ftd.Q4_K_M.gguf
  files:
    - filename: "llm-compiler-13b-ftd.Q4_K_M.gguf"
      uri: "huggingface://QuantFactory/llm-compiler-13b-ftd-GGUF/llm-compiler-13b-ftd.Q4_K_M.gguf"
      sha256: a5d19ae6b3fbe6724784363161b66cd2c8d8a3905761c0fb08245b3c03697db1
- !!merge <<: *llm-compiler
  name: "llm-compiler-7b-imat-GGUF"
  urls:
    - https://huggingface.co/legraphista/llm-compiler-7b-IMat-GGUF
    - https://huggingface.co/facebook/llm-compiler-7b
  overrides:
    parameters:
      model: llm-compiler-7b.Q4_K.gguf
  files:
    - filename: "llm-compiler-7b.Q4_K.gguf"
      uri: "huggingface://legraphista/llm-compiler-7b-IMat-GGUF/llm-compiler-7b.Q4_K.gguf"
      sha256: 84926979701fa4591ff5ede94a6c5829a62efa620590e5815af984707d446926
- !!merge <<: *llm-compiler
  name: "llm-compiler-7b-ftd-imat"
  urls:
    - https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF
    - https://huggingface.co/facebook/llm-compiler-7b-ftd
  overrides:
    parameters:
      model: llm-compiler-7b-ftd.Q4_K.gguf
  files:
    - filename: "llm-compiler-7b-ftd.Q4_K.gguf"
      uri: "huggingface://legraphista/llm-compiler-7b-ftd-IMat-GGUF/llm-compiler-7b-ftd.Q4_K.gguf"
      sha256: d862dd18ed335413787d0ad196522a9902a3c10a6456afdab8721822cb0ddde8
- &openvino
  ### START OpenVINO
  url: "github:mudler/LocalAI/gallery/openvino.yaml@master"
  name: "openvino-llama-3-8b-instruct-ov-int8"
  license: llama3
  urls:
    - https://huggingface.co/fakezeta/llama-3-8b-instruct-ov-int8
  overrides:
    parameters:
      model: fakezeta/llama-3-8b-instruct-ov-int8
    stopwords:
      - "<|eot_id|>"
      - "<|end_of_text|>"
  tags:
    - llm
    - openvino
    - gpu
    - llama3
    - cpu
- !!merge <<: *openvino
  name: "openvino-phi3"
  urls:
    - https://huggingface.co/fakezeta/Phi-3-mini-128k-instruct-ov-int8
  overrides:
    trust_remote_code: true
    context_size: 131072
    parameters:
      model: fakezeta/Phi-3-mini-128k-instruct-ov-int8
    stopwords:
      - <|end|>
  tags:
    - llm
    - openvino
    - gpu
    - phi3
    - cpu
    - Remote Code Enabled
- !!merge <<: *openvino
  icon: https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/HMD6WEoqqrAV8Ng_fAcnN.png
  name: "openvino-llama3-aloe"
  urls:
    - https://huggingface.co/fakezeta/Llama3-Aloe-8B-Alpha-ov-int8
  overrides:
    context_size: 8192
    parameters:
      model: fakezeta/Llama3-Aloe-8B-Alpha-ov-int8
    stopwords:
      - "<|eot_id|>"
      - "<|end_of_text|>"
- !!merge <<: *openvino
  name: "openvino-starling-lm-7b-beta-openvino-int8"
  urls:
    - https://huggingface.co/fakezeta/Starling-LM-7B-beta-openvino-int8
  overrides:
    context_size: 8192
    parameters:
      model: fakezeta/Starling-LM-7B-beta-openvino-int8
  tags:
    - llm
    - openvino
    - gpu
    - mistral
    - cpu
- !!merge <<: *openvino
  name: "openvino-wizardlm2"
  urls:
    - https://huggingface.co/fakezeta/Not-WizardLM-2-7B-ov-int8
  overrides:
    context_size: 8192
    parameters:
      model: fakezeta/Not-WizardLM-2-7B-ov-int8
- !!merge <<: *openvino
  name: "openvino-hermes2pro-llama3"
  urls:
    - https://huggingface.co/fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8
  overrides:
    context_size: 8192
    parameters:
      model: fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8
  tags:
    - llm
    - openvino
    - gpu
    - llama3
    - cpu
- !!merge <<: *openvino
  name: "openvino-multilingual-e5-base"
  urls:
    - https://huggingface.co/intfloat/multilingual-e5-base
  overrides:
    embeddings: true
    type: OVModelForFeatureExtraction
    parameters:
      model: intfloat/multilingual-e5-base
  tags:
    - llm
    - openvino
    - gpu
    - embedding
    - cpu
- !!merge <<: *openvino
  name: "openvino-all-MiniLM-L6-v2"
  urls:
    - https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
  overrides:
    embeddings: true
    type: OVModelForFeatureExtraction
    parameters:
      model: sentence-transformers/all-MiniLM-L6-v2
  tags:
    - llm
    - openvino
    - gpu
    - embedding
    - cpu
- &sentencentransformers
  ### START Embeddings
  description: |
    This framework provides an easy method to compute dense vector representations for sentences, paragraphs, and images. The models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa etc. and achieve state-of-the-art performance in various tasks. Text is embedded in vector space such that similar text is closer and can efficiently be found using cosine similarity.
  urls:
    - https://github.com/UKPLab/sentence-transformers
  tags:
    - gpu
    - cpu
    - embeddings
    - python
  name: "all-MiniLM-L6-v2"
  url: "github:mudler/LocalAI/gallery/sentencetransformers.yaml@master"
  overrides:
    parameters:
      model: all-MiniLM-L6-v2
- &dreamshaper
  ### START Image generation
  name: dreamshaper
  icon: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/dd9b038c-bd15-43ab-86ab-66e145ad7ff2/width=450/26072158-132340247-8k%20portrait%20of%20beautiful%20cyborg%20with%20brown%20hair,%20intricate,%20elegant,%20highly%20detailed,%20majestic,%20digital%20photography,%20art%20by%20artg_ed.jpeg
  license: other
  description: |
    A text-to-image model that uses Stable Diffusion 1.5 to generate images from text prompts. This model is the DreamShaper model by Lykon.
  urls:
    - https://civitai.com/models/4384/dreamshaper
  tags:
    - text-to-image
    - stablediffusion
    - python
    - sd-1.5
    - gpu
  url: "github:mudler/LocalAI/gallery/dreamshaper.yaml@master"
  overrides:
    parameters:
      model: DreamShaper_8_pruned.safetensors
  files:
    - filename: DreamShaper_8_pruned.safetensors
      uri: huggingface://Lykon/DreamShaper/DreamShaper_8_pruned.safetensors
      sha256: 879db523c30d3b9017143d56705015e15a2cb5628762c11d086fed9538abd7fd
- name: stable-diffusion-3-medium
  icon: https://huggingface.co/leo009/stable-diffusion-3-medium/resolve/main/sd3demo.jpg
  license: other
  description: |
    Stable Diffusion 3 Medium is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features greatly improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
  urls:
    - https://huggingface.co/stabilityai/stable-diffusion-3-medium
    - https://huggingface.co/leo009/stable-diffusion-3-medium
  tags:
    - text-to-image
    - stablediffusion
    - python
    - sd-3
    - gpu
  url: "github:mudler/LocalAI/gallery/stablediffusion3.yaml@master"
- &flux
  name: flux.1-dev
  license: flux-1-dev-non-commercial-license
  description: |
    FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.
    Key Features
    Cutting-edge output quality, second only to our state-of-the-art model FLUX.1 [pro].
    Competitive prompt following, matching the performance of closed source alternatives.
    Trained using guidance distillation, making FLUX.1 [dev] more efficient.
    Open weights to drive new scientific research, and empower artists to develop innovative workflows.
    Generated outputs can be used for personal, scientific, and commercial purposes as described in the flux-1-dev-non-commercial-license.
  urls:
    - https://huggingface.co/black-forest-labs/FLUX.1-dev
  tags:
    - text-to-image
    - flux
    - python
    - gpu
  url: "github:mudler/LocalAI/gallery/flux.yaml@master"
  overrides:
    parameters:
      model: ChuckMcSneed/FLUX.1-dev
- !!merge <<: *flux
  name: flux.1-schnell
  license: apache-2
  icon: https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/schnell_grid.jpeg
  description: |
    FLUX.1 [schnell] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.
    Key Features

    Cutting-edge output quality and competitive prompt following, matching the performance of closed source alternatives.
    Trained using latent adversarial diffusion distillation, FLUX.1 [schnell] can generate high-quality images in only 1 to 4 steps.
    Released under the apache-2.0 licence, the model can be used for personal, scientific, and commercial purposes.
  urls:
    - https://huggingface.co/black-forest-labs/FLUX.1-schnell
  overrides:
    parameters:
      model: black-forest-labs/FLUX.1-schnell
- &whisper
  ## Whisper
  url: "github:mudler/LocalAI/gallery/whisper-base.yaml@master"
  name: "whisper-1"
  license: "MIT"
  urls:
    - https://github.com/ggerganov/whisper.cpp
    - https://huggingface.co/ggerganov/whisper.cpp
  overrides:
    parameters:
      model: ggml-whisper-base.bin
  files:
    - filename: "ggml-whisper-base.bin"
      sha256: "60ed5bc3dd14eea856493d334349b405782ddcaf0028d4b5df4088345fba2efe"
      uri: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.bin"
  description: |
    Port of OpenAI's Whisper model in C/C++
- !!merge <<: *whisper
  name: "whisper-base-q5_1"
  overrides:
    parameters:
      model: ggml-model-whisper-base-q5_1.bin
  files:
    - filename: "ggml-model-whisper-base-q5_1.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-base-q5_1.bin"
      sha256: 422f1ae452ade6f30a004d7e5c6a43195e4433bc370bf23fac9cc591f01a8898
- !!merge <<: *whisper
  name: "whisper-base"
  overrides:
    parameters:
      model: ggml-model-whisper-base.bin
  files:
    - filename: "ggml-model-whisper-base.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-base.bin"
      sha256: 60ed5bc3dd14eea856493d334349b405782ddcaf0028d4b5df4088345fba2efe
- !!merge <<: *whisper
  name: "whisper-base-en-q5_1"
  overrides:
    parameters:
      model: ggml-model-whisper-base.en-q5_1.bin
  files:
    - filename: "ggml-model-whisper-base.en-q5_1.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-base.en-q5_1.bin"
      sha256: 4baf70dd0d7c4247ba2b81fafd9c01005ac77c2f9ef064e00dcf195d0e2fdd2f
- !!merge <<: *whisper
  name: "whisper-base-en"
  overrides:
    parameters:
      model: ggml-model-whisper-base.en.bin
  files:
    - filename: "ggml-model-whisper-base.en.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-base.en.bin"
      sha256: a03779c86df3323075f5e796cb2ce5029f00ec8869eee3fdfb897afe36c6d002
- !!merge <<: *whisper
  name: "whisper-large-q5_0"
  overrides:
    parameters:
      model: ggml-model-whisper-large-q5_0.bin
  files:
    - filename: "ggml-model-whisper-large-q5_0.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-large-q5_0.bin"
      sha256: 3a214837221e4530dbc1fe8d734f302af393eb30bd0ed046042ebf4baf70f6f2
- !!merge <<: *whisper
  name: "whisper-medium-q5_0"
  overrides:
    parameters:
      model: ggml-model-whisper-medium-q5_0.bin
  files:
    - filename: "ggml-model-whisper-medium-q5_0.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-medium-q5_0.bin"
      sha256: 19fea4b380c3a618ec4723c3eef2eb785ffba0d0538cf43f8f235e7b3b34220f
- !!merge <<: *whisper
  name: "whisper-small-q5_1"
  overrides:
    parameters:
      model: ggml-model-whisper-small-q5_1.bin
  files:
    - filename: "ggml-model-whisper-small-q5_1.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-small-q5_1.bin"
      sha256: ae85e4a935d7a567bd102fe55afc16bb595bdb618e11b2fc7591bc08120411bb
- !!merge <<: *whisper
  name: "whisper-small"
  overrides:
    parameters:
      model: ggml-model-whisper-small.bin
  files:
    - filename: "ggml-model-whisper-small.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-small.bin"
      sha256: 1be3a9b2063867b937e64e2ec7483364a79917e157fa98c5d94b5c1fffea987b
- !!merge <<: *whisper
  name: "whisper-small-en-q5_1"
  overrides:
    parameters:
      model: ggml-model-whisper-small.en-q5_1.bin
  files:
    - filename: "ggml-model-whisper-small.en-q5_1.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-small.en-q5_1.bin"
      sha256: bfdff4894dcb76bbf647d56263ea2a96645423f1669176f4844a1bf8e478ad30
- !!merge <<: *whisper
  name: "whisper-small-en"
  overrides:
    parameters:
      model: ggml-model-whisper-small.en.bin
  files:
    - filename: "ggml-model-whisper-small.en.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-small.en.bin"
      sha256: c6138d6d58ecc8322097e0f987c32f1be8bb0a18532a3f88f734d1bbf9c41e5d
- !!merge <<: *whisper
  name: "whisper-tiny"
  overrides:
    parameters:
      model: ggml-model-whisper-tiny.bin
  files:
    - filename: "ggml-model-whisper-tiny.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny.bin"
      sha256: be07e048e1e599ad46341c8d2a135645097a538221678b7acdd1b1919c6e1b21
- !!merge <<: *whisper
  name: "whisper-tiny-q5_1"
  overrides:
    parameters:
      model: ggml-model-whisper-tiny-q5_1.bin
  files:
    - filename: "ggml-model-whisper-tiny-q5_1.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny-q5_1.bin"
      sha256: 818710568da3ca15689e31a743197b520007872ff9576237bda97bd1b469c3d7
- !!merge <<: *whisper
  name: "whisper-tiny-en-q5_1"
  overrides:
    parameters:
      model: ggml-model-whisper-tiny.en-q5_1.bin
  files:
    - filename: "ggml-model-whisper-tiny.en-q5_1.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny.en-q5_1.bin"
      sha256: c77c5766f1cef09b6b7d47f21b546cbddd4157886b3b5d6d4f709e91e66c7c2b
- !!merge <<: *whisper
  name: "whisper-tiny-en"
  overrides:
    parameters:
      model: ggml-model-whisper-tiny.en.bin
  files:
    - filename: "ggml-model-whisper-tiny.en.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny.en.bin"
      sha256: 921e4cf8686fdd993dcd081a5da5b6c365bfde1162e72b08d75ac75289920b1f
- !!merge <<: *whisper
  name: "whisper-tiny-en-q8_0"
  overrides:
    parameters:
      model: ggml-model-whisper-tiny.en-q8_0.bin
  files:
    - filename: "ggml-model-whisper-tiny.en-q8_0.bin"
      uri: "https://ggml.ggerganov.com/ggml-model-whisper-tiny.en-q8_0.bin"
      sha256: 5bc2b3860aa151a4c6e7bb095e1fcce7cf12c7b020ca08dcec0c6d018bb7dd94
## Bert embeddings
- url: "github:mudler/LocalAI/gallery/bert-embeddings.yaml@master"
  name: "bert-embeddings"
  license: "Apache 2.0"
  urls:
    - https://huggingface.co/skeskinen/ggml
  tags:
    - embeddings
  description: |
    Bert model that can be used for embeddings
## Stable Diffusion
- url: github:mudler/LocalAI/gallery/stablediffusion.yaml@master
  license: "BSD-3"
  urls:
    - https://github.com/EdVince/Stable-Diffusion-NCNN
    - https://github.com/EdVince/Stable-Diffusion-NCNN/blob/main/LICENSE
  description: |
    Stable Diffusion in NCNN with c++, supported txt2img and img2img
  name: stablediffusion-cpp
## Tiny Dream
- url: github:mudler/LocalAI/gallery/tinydream.yaml@master
  name: tinydream
  license: "BSD-3"
  urls:
    - https://github.com/symisc/tiny-dream
    - https://github.com/symisc/tiny-dream/blob/main/LICENSE
  description: |
    An embedded, Header Only, Stable Diffusion C++ implementation
- &piper
  ## Piper TTS
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-kathleen-low
  icon: https://github.com/rhasspy/piper/raw/master/etc/logo.png
  license: mit
  urls:
    - https://github.com/rhasspy/piper
  description: |
    A fast, local neural text to speech system that sounds great and is optimized for the Raspberry Pi 4. Piper is used in a variety of [projects](https://github.com/rhasspy/piper#people-using-piper).
  tags:
    - tts
    - text-to-speech
    - cpu
  overrides:
    parameters:
      model: en-us-kathleen-low.onnx
  files:
    - filename: voice-en-us-kathleen-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-kathleen-low.tar.gz
      sha256: 18e32f009f864d8061af8a4be4ae9018b5aa8b49c37f9e108bbfd782c6a38fbf
- !!merge <<: *piper
  name: voice-ca-upc_ona-x-low
  overrides:
    parameters:
      model: ca-upc_ona-x-low.onnx
  files:
    - filename: voice-ca-upc_ona-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ca-upc_ona-x-low.tar.gz
      sha256: c750d3f6ad35c8d95d5b0d1ad30ede2525524e48390f70a0871bdb7980cc271e
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-ca-upc_pau-x-low
  overrides:
    parameters:
      model: ca-upc_pau-x-low.onnx
  files:
    - filename: voice-ca-upc_pau-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ca-upc_pau-x-low.tar.gz
      sha256: 13c658ecd46a2dbd9dadadf7100623e53106239afcc359f9e27511b91e642f1f
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-da-nst_talesyntese-medium
  overrides:
    parameters:
      model: da-nst_talesyntese-medium.onnx
  files:
    - filename: voice-da-nst_talesyntese-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-da-nst_talesyntese-medium.tar.gz
      sha256: 1bdf673b946a2ba69fab24ae3fc0e7d23e042c2533cbbef008f64f633500eb7e
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-de-eva_k-x-low
  overrides:
    parameters:
      model: de-eva_k-x-low.onnx
  files:
    - filename: voice-de-eva_k-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-eva_k-x-low.tar.gz
      sha256: 81b305abc58a0a02629aea01904a86ec97b823714dd66b1ee22f38fe529e6371
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-de-karlsson-low
  overrides:
    parameters:
      model: de-karlsson-low.onnx
  files:
    - filename: voice-de-karlsson-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-karlsson-low.tar.gz
      sha256: cc7615cfef3ee6beaa1db6059e0271e4d2e1d6d310c0e17b3d36c494628f4b82
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-de-kerstin-low
  overrides:
    parameters:
      model: de-kerstin-low.onnx
  files:
    - filename: voice-de-kerstin-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-kerstin-low.tar.gz
      sha256: d8ea72fbc0c21db828e901777ba7bb5dff7c843bb943ad19f34c9700b96a8182
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-de-pavoque-low
  overrides:
    parameters:
      model: de-pavoque-low.onnx
  files:
    - filename: voice-de-pavoque-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-pavoque-low.tar.gz
      sha256: 1f5ebc6398e8829f19c7c2b14f46307703bca0f0d8c74b4bb173037b1f161d4d
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-de-ramona-low
  overrides:
    parameters:
      model: de-ramona-low.onnx
  files:
    - filename: voice-de-ramona-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-ramona-low.tar.gz
      sha256: 66d9fc08d1a1c537a1cefe99a284f687e5ad7e43d5935a75390678331cce7b47
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-de-thorsten-low
  overrides:
    parameters:
      model: de-thorsten-low.onnx
  files:
    - filename: voice-de-thorsten-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-de-thorsten-low.tar.gz
      sha256: 4d052a7726b77719d0dbc66c845f1d0fe4432bfbd26f878f6dd0883d49e9e43d
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-el-gr-rapunzelina-low
  overrides:
    parameters:
      model: el-gr-rapunzelina-low.onnx
  files:
    - filename: voice-el-gr-rapunzelina-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-el-gr-rapunzelina-low.tar.gz
      sha256: c5613688c12eabc5294465494ed56af1e0fe4d7896d216bfa470eb225d9ff0d0
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-gb-alan-low
  overrides:
    parameters:
      model: en-gb-alan-low.onnx
  files:
    - filename: voice-en-gb-alan-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-gb-alan-low.tar.gz
      sha256: 526eeeeccb26206dc92de5965615803b5bf88df059f46372caa4a9fa12d76a32
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-gb-southern_english_female-low
  overrides:
    parameters:
      model: en-gb-southern_english
  files:
    - filename: voice-en-gb-southern_english_female-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-gb-southern_english_female-low.tar.gz
      sha256: 7c1bbe23e61a57bdb450b137f69a83ff5358159262e1ed7d2308fa14f4924da9
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-amy-low
  overrides:
    parameters:
      model: en-us-amy-low.onnx
  files:
    - filename: voice-en-us-amy-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-amy-low.tar.gz
      sha256: 5c3e3480e7d71ce219943c8a711bb9c21fd48b8f8e87ed7fb5c6649135ab7608
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-danny-low
  overrides:
    parameters:
      model: en-us-danny-low.onnx
  files:
    - filename: voice-en-us-danny-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-danny-low.tar.gz
      sha256: 0c8fbb42526d5fbd3a0bded5f18041c0a893a70a7fb8756f97866624b932264b
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-kathleen-low
  overrides:
    parameters:
      model: en-us-kathleen-low.onnx
  files:
    - filename: voice-en-us-kathleen-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-kathleen-low.tar.gz
      sha256: 18e32f009f864d8061af8a4be4ae9018b5aa8b49c37f9e108bbfd782c6a38fbf
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-lessac-low
  overrides:
    parameters:
      model: en-us-lessac-low.onnx
  files:
    - filename: voice-en-us-lessac-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-lessac-low.tar.gz
      sha256: 003fe040985d00b917ace21b2ccca344c282c53fe9b946991b7b0da52516e1fc
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-lessac-medium
  overrides:
    parameters:
      model: en-us-lessac-medium.onnx
  files:
    - filename: voice-en-us-lessac-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-lessac-medium.tar.gz
      sha256: d45ca50084c0558eb9581cd7d26938043bc8853513da47c63b94d95a2367a5c9
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-libritts-high
  overrides:
    parameters:
      model: en-us-libritts-high.onnx
  files:
    - filename: voice-en-us-libritts-high.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-libritts-high.tar.gz
      sha256: 328e3e9cb573a43a6c5e1aeca386e971232bdb1418a74d4674cf726c973a0ea8
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-ryan-high
  overrides:
    parameters:
      model: en-us-ryan-high.onnx
  files:
    - filename: voice-en-us-ryan-high.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-ryan-high.tar.gz
      sha256: de346b054703a190782f49acb9b93c50678a884fede49cfd85429d204802d678
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-ryan-low
  overrides:
    parameters:
      model: en-us-ryan-low.onnx
  files:
    - filename: voice-en-us-ryan-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-ryan-low.tar.gz
      sha256: 049e6e5bad07870fb1d25ecde97bac00f9c95c90589b2fef4b0fbf23c88770ce
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us-ryan-medium
  overrides:
    parameters:
      model: en-us-ryan-medium.onnx
  files:
    - filename: voice-en-us-ryan-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-ryan-medium.tar.gz
      sha256: 2e00d747eaed6ce9f63f4991921ef3bb2bbfbc7f28cde4f14eb7048960f928d8
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-en-us_lessac
  overrides:
    parameters:
      model: en-us-lessac.onnx
  files:
    - filename: voice-en-us_lessac.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us_lessac.tar.gz
      sha256: 0967af67fb0435aa509b0b794c0cb2cc57817ae8a5bff28cb8cd89ab6f5dcc3d
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-es-carlfm-x-low
  overrides:
    parameters:
      model: es-carlfm-x-low.onnx
  files:
    - filename: voice-es-carlfm-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-es-carlfm-x-low.tar.gz
      sha256: 0156a186de321639e6295521f667758ad086bc8433f0a6797a9f044ed5cf5bf3
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-es-mls_10246-low
  overrides:
    parameters:
      model: es-mls_10246-low.onnx
  files:
    - filename: voice-es-mls_10246-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-es-mls_10246-low.tar.gz
      sha256: ff1fe3fc2ab91e32acd4fa8cb92048e3cff0e20079b9d81324f01cd2dea50598
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-es-mls_9972-low
  overrides:
    parameters:
      model: es-mls_9972-low.onnx
  files:
    - filename: voice-es-mls_9972-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-es-mls_9972-low.tar.gz
      sha256: d95def9adea97a6a3fee7645d1167e00fb4fd60f8ce9bc3ebf1acaa9e3f455dc
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-fi-harri-low
  overrides:
    parameters:
      model: fi-harri-low.onnx
  files:
    - filename: voice-fi-harri-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fi-harri-low.tar.gz
      sha256: 4f1aaf00927d0eb25bf4fc5ef8be2f042e048593864ac263ee7b49c516832b22
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-fr-gilles-low
  overrides:
    parameters:
      model: fr-gilles-low.onnx
  files:
    - filename: voice-fr-gilles-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fr-gilles-low.tar.gz
      sha256: 77662c7332c2a6f522ab478287d9b0fe9afc11a2da71f310bf923124ee699aae
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-fr-mls_1840-low
  overrides:
    parameters:
      model: fr-mls_1840-low.onnx
  files:
    - filename: voice-fr-mls_1840-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fr-mls_1840-low.tar.gz
      sha256: 69169d1fac99a733112c08c7caabf457055990590a32ee83ebcada37f86132d3
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-fr-siwis-low
  overrides:
    parameters:
      model: fr-siwis-low.onnx
  files:
    - filename: voice-fr-siwis-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fr-siwis-low.tar.gz
      sha256: d3db8d47053e9b4108e1c1d29d5ea2b5b1a152183616c3134c222110ccde20f2
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-fr-siwis-medium
  overrides:
    parameters:
      model: fr-siwis-medium.onnx
  files:
    - filename: voice-fr-siwis-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-fr-siwis-medium.tar.gz
      sha256: 0c9ecdf9ecac6de4a46be85a162bffe0db7145bd3a4175831cea6cab4b41eefd
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-is-bui-medium
  overrides:
    parameters:
      model: is-bui-medium.onnx
  files:
    - filename: voice-is-bui-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-is-bui-medium.tar.gz
      sha256: e89ef01051cb48ca2a32338ed8749a4c966b912bb572c61d6d21f2d3822e505f
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-is-salka-medium
  overrides:
    parameters:
      model: is-salka-medium.onnx
  files:
    - filename: voice-is-salka-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-is-salka-medium.tar.gz
      sha256: 75923d7d6b4125166ca58ec82b5d23879012844483b428db9911e034e6626384
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-is-steinn-medium
  overrides:
    parameters:
      model: is-steinn-medium.onnx
  files:
    - filename: voice-is-steinn-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-is-steinn-medium.tar.gz
      sha256: 5a01a8df796f86fdfe12cc32a3412ebd83670d47708d94d926ba5ed0776e6dc9
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-is-ugla-medium
  overrides:
    parameters:
      model: is-ugla-medium.onnx
  files:
    - filename: voice-is-ugla-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-is-ugla-medium.tar.gz
      sha256: 501cd0376f7fd397f394856b7b3d899da4cc40a63e11912258b74da78af90547
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-it-riccardo_fasol-x-low
  overrides:
    parameters:
      model: it-riccardo_fasol-x-low.onnx
  files:
    - filename: voice-it-riccardo_fasol-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-it-riccardo_fasol-x-low.tar.gz
      sha256: 394b27b8780f5167e73a62ac103839cc438abc7edb544192f965e5b8f5f4acdb
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-it-paola-medium
  overrides:
    parameters:
      model: it-paola-medium.onnx
  files:
    - filename: voice-it-paola-medium.tar.gz
      uri: https://github.com/fakezeta/piper-paola-voice/releases/download/v1.0.0/voice-it-paola-medium.tar.gz
      sha256: 61d3bac0ff6d347daea5464c4b3ae156a450b603a916cc9ed7deecdeba17153a
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-kk-iseke-x-low
  overrides:
    parameters:
      model: kk-iseke-x-low.onnx
  files:
    - filename: voice-kk-iseke-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-kk-iseke-x-low.tar.gz
      sha256: f434fffbea3e6d8cf392e44438a1f32a5d005fc93b41be84a6d663882ce7c074
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-kk-issai-high
  overrides:
    parameters:
      model: kk-issai-high.onnx
  files:
    - filename: voice-kk-issai-high.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-kk-issai-high.tar.gz
      sha256: 84bf79d330d6cd68103e82d95bbcaa2628a99a565126dea94cea2be944ed4f32
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-kk-raya-x-low
  overrides:
    parameters:
      model: kk-raya-x-low.onnx
  files:
    - filename: voice-kk-raya-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-kk-raya-x-low.tar.gz
      sha256: 4cab4ce00c6f10450b668072d7980a2bc3ade3a39adee82e3ec4f519d4c57bd1
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-ne-google-medium
  overrides:
    parameters:
      model: ne-google-medium.onnx
  files:
    - filename: voice-ne-google-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ne-google-medium.tar.gz
      sha256: 0895b11a7a340baea37fb9c27fb50bc3fd0af9779085978277f962d236d3a7bd
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-ne-google-x-low
  overrides:
    parameters:
      model: ne-google-x-low.onnx
  files:
    - filename: voice-ne-google-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ne-google-x-low.tar.gz
      sha256: 870ba5718dfe3e478c6cce8a9a288b591b3575c750b57ffcd845e4ec64988f0b
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-nl-mls_5809-low
  overrides:
    parameters:
      model: nl-mls_5809-low.onnx
  files:
    - filename: voice-nl-mls_5809-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-mls_5809-low.tar.gz
      sha256: 398b9f0318dfe9d613cb066444efec0d8491905ae34cf502edb52030b75ef51c
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-nl-mls_7432-low
  overrides:
    parameters:
      model: nl-mls_7432-low.onnx
  files:
    - filename: voice-nl-mls_7432-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-mls_7432-low.tar.gz
      sha256: 0b3efc68ea7e735ba8f2e0a0f7e9b4b887b00f6530c02fca4aa69a6091adbe5e
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-nl-nathalie-x-low
  overrides:
    parameters:
      model: nl-nathalie-x-low.onnx
  files:
    - filename: voice-nl-nathalie-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-nathalie-x-low.tar.gz
      sha256: 2658d4fe2b791491780160216d187751f7c993aa261f3b8ec76dfcaf1ba74942
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-nl-rdh-medium
  overrides:
    parameters:
      model: nl-rdh-medium.onnx
  files:
    - filename: voice-nl-rdh-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-rdh-medium.tar.gz
      sha256: 16f74a195ecf13df1303fd85327532196cc1ecef2e72505200578fd410d0affb
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-nl-rdh-x-low
  overrides:
    parameters:
      model: nl-rdh-x-low.onnx
  files:
    - filename: voice-nl-rdh-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-nl-rdh-x-low.tar.gz
      sha256: 496363e5d6e080fd16ac5a1f9457c564b52f0ee8be7f2e2ba1dbf41ef0b23a39
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-no-talesyntese-medium
  overrides:
    parameters:
      model: no-talesyntese-medium.onnx
  files:
    - filename: voice-no-talesyntese-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-no-talesyntese-medium.tar.gz
      sha256: ed6b3593a0e70c90d52e225b85d7e0b805ad8e08482471bd2f73cf1404a6470d
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-pl-mls_6892-low
  overrides:
    parameters:
      model: pl-mls_6892-low.onnx
  files:
    - filename: voice-pl-mls_6892-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-pl-mls_6892-low.tar.gz
      sha256: 5361fcf586b1285025a2ccb8b7500e07c9d66fa8126ef518709c0055c4c0d6f4
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-pt-br-edresson-low
  overrides:
    parameters:
      model: pt-br-edresson-low.onnx
  files:
    - filename: voice-pt-br-edresson-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-pt-br-edresson-low.tar.gz
      sha256: c68be522a526e77f49e90eeb4c13c01b4acdfeb635759f0eeb0eea8f16fd1f33
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-ru-irinia-medium
  overrides:
    parameters:
      model: ru-irinia-medium.onnx
  files:
    - filename: voice-ru-irinia-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-ru-irinia-medium.tar.gz
      sha256: 897b62f170faee38f21d0bc36411164166ae351977e898b6cf33f6206890b55f
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-sv-se-nst-medium
  overrides:
    parameters:
      model: sv-se-nst-medium.onnx
  files:
    - filename: voice-sv-se-nst-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-sv-se-nst-medium.tar.gz
      sha256: 0d6cf357d55860162bf1bdd76bd4f0c396ff547e941bfb25df799d6f1866fda9
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-uk-lada-x-low
  overrides:
    parameters:
      model: uk-lada-x-low.onnx
  files:
    - filename: voice-uk-lada-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-uk-lada-x-low.tar.gz
      sha256: ff50acbd659fc226b57632acb1cee310009821ec44b4bc517effdd9827d8296b
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-vi-25hours-single-low
  overrides:
    parameters:
      model: vi-25hours-single-low.onnx
  files:
    - filename: voice-vi-25hours-single-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-vi-25hours-single-low.tar.gz
      sha256: 97e34d1b69dc7000a4ec3269f84339ed35905b3c9800a63da5d39b7649e4a666
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-vi-vivos-x-low
  overrides:
    parameters:
      model: vi-vivos-x-low.onnx
  files:
    - filename: voice-vi-vivos-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-vi-vivos-x-low.tar.gz
      sha256: 07cd4ca6438ec224012f7033eec1a2038724b78e4aa2bedf85f756656b52e1a7
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-zh-cn-huayan-x-low
  overrides:
    parameters:
      model: zh-cn-huayan-x-low.onnx
  files:
    - filename: voice-zh-cn-huayan-x-low.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-zh-cn-huayan-x-low.tar.gz
      sha256: 609db0da8ee75beb2f17ce53c55abdbc8c0e04135482efedf1798b1938bf90fa
- !!merge <<: *piper
  url: github:mudler/LocalAI/gallery/piper.yaml@master
  name: voice-zh_CN-huayan-medium
  overrides:
    parameters:
      model: zh_CN-huayan-medium.onnx
  files:
    - filename: voice-zh_CN-huayan-medium.tar.gz
      uri: https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-zh_CN-huayan-medium.tar.gz
      sha256: 0299a5e7f481ba853404e9f0e1515a94d5409585d76963fa4d30c64bd630aa99