chore(model-gallery): ⬆️ update checksum (#5036)

⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
LocalAI [bot] 2025-03-19 09:18:40 +01:00 committed by GitHub
parent 192ba2c657
commit 27a3997530

@@ -143,24 +143,24 @@
- https://huggingface.co/soob3123/amoral-gemma3-12B
- https://huggingface.co/bartowski/soob3123_amoral-gemma3-12B-GGUF
description: |
A fine-tuned version of Google's Gemma 3 12B instruction-tuned model optimized for creative freedom and reduced content restrictions. This variant maintains strong reasoning capabilities while excelling in roleplaying scenarios and open-ended content generation.
Key Modifications:
Reduced refusal mechanisms compared to base model
Enhanced character consistency in dialogues
Improved narrative flow control
Optimized for multi-turn interactions
Intended Use
Primary Applications:
Interactive fiction and storytelling
Character-driven roleplaying scenarios
Creative writing assistance
Experimental AI interactions
Content generation for mature audiences
overrides:
parameters:
model: soob3123_amoral-gemma3-12B-Q4_K_M.gguf
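Gallery entries like the one above are meant to be installed through a running LocalAI instance. A minimal sketch, assuming a local server on port 8080 and LocalAI's gallery apply endpoint; the "localai@amoral-gemma3-12b" id is illustrative, not copied from this diff:

# Minimal sketch: ask a running LocalAI instance to install a gallery
# model. Host, port, and the model id are assumptions for illustration.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8080/models/apply",  # assumed local LocalAI instance
    data=json.dumps({"id": "localai@amoral-gemma3-12b"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The endpoint returns a job handle; installation runs asynchronously.
    print(resp.read().decode())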
@@ -469,7 +469,7 @@
- https://huggingface.co/suayptalha/Maestro-10B
- https://huggingface.co/bartowski/suayptalha_Maestro-10B-GGUF
description: |
Maestro-10B is a 10 billion parameter model fine-tuned from Virtuoso-Lite, a next-generation language model developed by arcee-ai. Virtuoso-Lite itself is based on the Llama-3 architecture, distilled from Deepseek-v3 using approximately 1.1 billion tokens/logits. This distillation process allows Virtuoso-Lite to achieve robust performance with a smaller parameter count, excelling in reasoning, code generation, and mathematical problem-solving. Maestro-10B inherits these strengths from its base model, Virtuoso-Lite, and further enhances them through fine-tuning on the OpenOrca dataset. This combination of a distilled base model and targeted fine-tuning makes Maestro-10B a powerful and efficient language model.
overrides:
parameters:
model: suayptalha_Maestro-10B-Q4_K_M.gguf
@@ -832,7 +832,7 @@
- https://huggingface.co/TheSkullery/L3.3-exp-unnamed-model-70b-v0.5
- https://huggingface.co/bartowski/TheSkullery_L3.3-exp-unnamed-model-70b-v0.5-GGUF
description: |
No description available for this model
overrides:
parameters:
model: TheSkullery_L3.3-exp-unnamed-model-70b-v0.5-Q4_K_M.gguf
@@ -1414,7 +1414,7 @@
- https://huggingface.co/ibm-granite/granite-embedding-107m-multilingual
- https://huggingface.co/bartowski/granite-embedding-107m-multilingual-GGUF
description: |
Granite-Embedding-107M-Multilingual is a 107M parameter dense biencoder embedding model from the Granite Embeddings suite that can be used to generate high quality text embeddings. This model produces embedding vectors of size 384 and is trained using a combination of open source relevance-pair datasets with permissive, enterprise-friendly license, and IBM collected and generated datasets. This model is developed using contrastive finetuning, knowledge distillation and model merging for improved performance.
tags:
- embeddings
overrides:
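Since this entry is tagged "embeddings", a minimal sketch of querying it once installed, via LocalAI's OpenAI-compatible /v1/embeddings endpoint; the host and the model name used in the payload are assumptions:

# Minimal sketch: request an embedding from a local LocalAI instance
# serving the granite model above (OpenAI-compatible endpoint).
import json
import urllib.request

payload = {
    "model": "granite-embedding-107m-multilingual",  # assumed entry name
    "input": "Where is the nearest train station?",
}
req = urllib.request.Request(
    "http://localhost:8080/v1/embeddings",  # assumed local instance
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vector = json.load(resp)["data"][0]["embedding"]
    print(len(vector))  # expected: 384, per the description above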
@@ -7215,7 +7215,7 @@
- https://huggingface.co/uncensoredai/UncensoredLM-DeepSeek-R1-Distill-Qwen-14B
- https://huggingface.co/bartowski/uncensoredai_UncensoredLM-DeepSeek-R1-Distill-Qwen-14B-GGUF
description: |
An UncensoredLLM with Reasoning, what more could you want?
overrides:
parameters:
model: uncensoredai_UncensoredLM-DeepSeek-R1-Distill-Qwen-14B-Q4_K_M.gguf
@@ -8517,9 +8517,9 @@
- https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- https://huggingface.co/bartowski/PocketDoc_Dans-PersonalityEngine-V1.2.0-24b-GGUF
description: |
This model series is intended to be multifarious in its capabilities and should be quite capable at both co-writing and roleplay as well as find itself quite at home performing sentiment analysis or summarization as part of a pipeline.
It has been trained on a wide array of one shot instructions, multi turn instructions, tool use, role playing scenarios, text adventure games, co-writing, and much more.
overrides:
parameters:
model: PocketDoc_Dans-PersonalityEngine-V1.2.0-24b-Q4_K_M.gguf
@@ -8583,8 +8583,8 @@
model: BeaverAI_MN-2407-DSK-QwQify-v0.1-12B-Q4_K_M.gguf
files:
- filename: BeaverAI_MN-2407-DSK-QwQify-v0.1-12B-Q4_K_M.gguf
- sha256: 689c4c75f0382421e1e691d826fe64363232f4c93453d516456d3e38189d38ea
  uri: huggingface://bartowski/BeaverAI_MN-2407-DSK-QwQify-v0.1-12B-GGUF/BeaverAI_MN-2407-DSK-QwQify-v0.1-12B-Q4_K_M.gguf
+ sha256: f6ae7dd8be3aedd640483ccc6895c3fc205a019246bf2512a956589c0222386e
- &mudler
url: "github:mudler/LocalAI/gallery/mudler.yaml@master" ### START mudler's LocalAI specific-models
name: "LocalAI-llama3-8b-function-call-v0.2"
@@ -14318,7 +14318,7 @@
- https://huggingface.co/nomic-ai/nomic-embed-text-v1.5
- https://huggingface.co/mradermacher/nomic-embed-text-v1.5-GGUF
description: |
Resizable Production Embeddings with Matryoshka Representation Learning
tags:
- embeddings
overrides:
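"Resizable" here refers to Matryoshka Representation Learning: the leading dimensions of the embedding form a usable lower-dimensional embedding on their own. A minimal sketch of the usual truncate-and-renormalize step; the 768 and 256 sizes are illustrative, not taken from this entry:

# Minimal sketch: shrink a Matryoshka-trained embedding by keeping only
# its leading dimensions and re-normalizing to unit length.
import math

def truncate_embedding(vec: list[float], dims: int) -> list[float]:
    head = vec[:dims]  # leading dimensions carry the most signal
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]  # renormalize for cosine similarity

full = [0.1] * 768  # stand-in for a real embedding vector
small = truncate_embedding(full, 256)
print(len(small))  # 256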