chore(model gallery): add qwen3-14b (#5271)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
parent bc1e059259
commit c059f912b9
@@ -76,6 +76,38 @@
     - filename: Qwen_Qwen3-32B-Q4_K_M.gguf
       sha256: e41ec56ddd376963a116da97506fadfccb50fb402bb6f3cb4be0bc179a582bd6
       uri: huggingface://bartowski/Qwen_Qwen3-32B-GGUF/Qwen_Qwen3-32B-Q4_K_M.gguf
+- !!merge <<: *qwen3
+  name: "qwen3-14b"
+  urls:
+    - https://huggingface.co/Qwen/Qwen3-14B
+    - https://huggingface.co/MaziyarPanahi/Qwen3-14B-GGUF
+  description: |
+    Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
+
+    - Unique support for seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios.
+    - Significant enhancement of its reasoning capabilities, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
+    - Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
+    - Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes, and achieving leading performance among open-source models in complex agent-based tasks.
+    - Support for 100+ languages and dialects, with strong capabilities for multilingual instruction following and translation.
+
+    Qwen3-14B has the following features:
+
+    - Type: Causal Language Models
+    - Training Stage: Pretraining & Post-training
+    - Number of Parameters: 14.8B
+    - Number of Parameters (Non-Embedding): 13.2B
+    - Number of Layers: 40
+    - Number of Attention Heads (GQA): 40 for Q and 8 for KV
+    - Context Length: 32,768 tokens natively and 131,072 tokens with YaRN.
+
+    For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to the Qwen blog, GitHub, and documentation.
+  overrides:
+    parameters:
+      model: Qwen3-14B.Q4_K_M.gguf
+  files:
+    - filename: Qwen3-14B.Q4_K_M.gguf
+      sha256: ee624d4be12433277bb9a340d3e5aabf5eb68fc788a7048ee99917edaa46494a
+      uri: huggingface://MaziyarPanahi/Qwen3-14B-GGUF/Qwen3-14B.Q4_K_M.gguf
 - &gemma3
   url: "github:mudler/LocalAI/gallery/gemma.yaml@master"
   name: "gemma-3-27b-it"
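The !!merge <<: *qwen3 line at the top of the new entry pulls shared defaults in from a &qwen3 anchor defined earlier in the gallery index, outside this hunk; the &gemma3 entry visible in the trailing context establishes the same pattern for the Gemma family. A minimal sketch of how such a merge key resolves, where the anchor's field names and values are illustrative assumptions rather than the actual contents of LocalAI's qwen3 base entry:

# Hypothetical base entry; the real &qwen3 anchor and its fields live
# earlier in LocalAI's gallery index and are not shown in this diff.
- &qwen3
  url: "github:mudler/LocalAI/gallery/qwen3.yaml@master"  # assumed template path
  license: apache-2.0                                      # assumed

# The merging entry inherits every field of the anchor, then overrides
# or extends them with its own keys (name, urls, description, ...).
- !!merge <<: *qwen3
  name: "qwen3-14b"

This keeps per-model entries short: only the fields that differ from the family template need to be restated.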
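When the entry is installed, the files list drives the download, with the sha256 used to verify the fetched artifact, and the overrides block is merged into the model's runtime configuration so the backend loads the GGUF by filename. Roughly the shape of the resulting model config, as a sketch only: parameters.model is the one value taken from this commit, while the file name, name, and context_size are assumptions (context_size mirrors the 32,768-token native window quoted in the description):

# Sketch of an installed model config for this entry (hypothetical file
# qwen3-14b.yaml); values other than parameters.model are assumed.
name: qwen3-14b
context_size: 32768            # assumed: native context per the model card
parameters:
  model: Qwen3-14B.Q4_K_M.gguf # from overrides.parameters.model in this diff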