models(gallery): add qwen2.5-coder-32b-instruct (#4127)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
parent fe7ffdbc63
commit 4e2a5719e7
@@ -1,4 +1,55 @@
 ---
+- &qwen25coder
+  name: "qwen2.5-coder-14b"
+  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
+  license: apache-2.0
+  tags:
+    - llm
+    - gguf
+    - gpu
+    - qwen
+    - qwen2.5
+    - cpu
+  urls:
+    - https://huggingface.co/Qwen/Qwen2.5-Coder-14B
+    - https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF
+  description: |
+    Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14 and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
+
+    Significant improvements in code generation, code reasoning and code fixing. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
+    A more comprehensive foundation for real-world applications such as Code Agents, not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.
+    Long-context support of up to 128K tokens.
+  overrides:
+    parameters:
+      model: Qwen2.5-Coder-14B.Q4_K_M.gguf
+    files:
+      - filename: Qwen2.5-Coder-14B.Q4_K_M.gguf
+        sha256: 94f277a9ac7caf117140b2fff4e1ccf4bc9f35395b0112f0d0d7c82c6f8d860e
+        uri: huggingface://mradermacher/Qwen2.5-Coder-14B-GGUF/Qwen2.5-Coder-14B.Q4_K_M.gguf
+- !!merge <<: *qwen25coder
+  name: "qwen2.5-coder-3b-instruct"
+  urls:
+    - https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct
+    - https://huggingface.co/bartowski/Qwen2.5-Coder-3B-Instruct-GGUF
+  overrides:
+    parameters:
+      model: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
+    files:
+      - filename: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
+        sha256: 3da3afe6cf5c674ac195803ea0dd6fee7e1c228c2105c1ce8c66890d1d4ab460
+        uri: huggingface://bartowski/Qwen2.5-Coder-3B-Instruct-GGUF/Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
+- !!merge <<: *qwen25coder
+  name: "qwen2.5-coder-32b-instruct"
+  urls:
+    - https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct
+    - https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF
+  overrides:
+    parameters:
+      model: Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf
+    files:
+      - filename: Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf
+        sha256: 8e2fd78ff55e7cdf577fda257bac2776feb7d73d922613caf35468073807e815
+        uri: huggingface://bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf
 - &opencoder
   name: "opencoder-8b-base"
   icon: https://github.com/OpenCoder-llm/opencoder-llm.github.io/blob/main/static/images/opencoder_icon.jpg?raw=true
@@ -1118,42 +1169,6 @@
       - filename: calme-3.1-qwenloi-3b.Q5_K_M.gguf
         sha256: 8962a8d1704979039063b5c69fafdb38b545c26143419ec4c574f37f2d6dd7b2
         uri: huggingface://MaziyarPanahi/calme-3.1-qwenloi-3b-GGUF/calme-3.1-qwenloi-3b.Q5_K_M.gguf
-- !!merge <<: *qwen25
-  name: "qwen2.5-coder-14b"
-  urls:
-    - https://huggingface.co/Qwen/Qwen2.5-Coder-14B
-    - https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF
-  description: |
-    Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14 and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
-
-    Significant improvements in code generation, code reasoning and code fixing. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
-    A more comprehensive foundation for real-world applications such as Code Agents, not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.
-    Long-context support of up to 128K tokens.
-  overrides:
-    parameters:
-      model: Qwen2.5-Coder-14B.Q4_K_M.gguf
-    files:
-      - filename: Qwen2.5-Coder-14B.Q4_K_M.gguf
-        sha256: 94f277a9ac7caf117140b2fff4e1ccf4bc9f35395b0112f0d0d7c82c6f8d860e
-        uri: huggingface://mradermacher/Qwen2.5-Coder-14B-GGUF/Qwen2.5-Coder-14B.Q4_K_M.gguf
-- !!merge <<: *qwen25
-  name: "qwen2.5-coder-3b-instruct"
-  urls:
-    - https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct
-    - https://huggingface.co/bartowski/Qwen2.5-Coder-3B-Instruct-GGUF
-  description: |
-    Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14 and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
-
-    Significant improvements in code generation, code reasoning and code fixing. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
-    A more comprehensive foundation for real-world applications such as Code Agents, not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.
-    Long-context support of up to 128K tokens.
-  overrides:
-    parameters:
-      model: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
-    files:
-      - filename: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
-        sha256: 3da3afe6cf5c674ac195803ea0dd6fee7e1c228c2105c1ce8c66890d1d4ab460
-        uri: huggingface://bartowski/Qwen2.5-Coder-3B-Instruct-GGUF/Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
 - &archfunct
   license: apache-2.0
   tags:
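
For readers unfamiliar with the YAML used in the first hunk: the `- &qwen25coder` entry defines an anchor, and each `- !!merge <<: *qwen25coder` entry inherits every field of that anchor while replacing only the keys it sets itself (name, urls, overrides). Below is a minimal sketch of how the new qwen2.5-coder-32b-instruct entry resolves once the merge key is expanded, assuming standard YAML merge-key semantics; field values are taken from the hunk above and the inherited description is abbreviated here.

- name: "qwen2.5-coder-32b-instruct"   # set on the derived entry, overrides the anchor's name
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"   # inherited from &qwen25coder
  license: apache-2.0   # inherited
  tags:   # inherited
    - llm
    - gguf
    - gpu
    - qwen
    - qwen2.5
    - cpu
  description: |   # inherited (abbreviated here)
    Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models ...
  urls:   # overridden by the derived entry
    - https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct
    - https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF
  overrides:   # overridden by the derived entry
    parameters:
      model: Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf
    files:
      - filename: Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf
        sha256: 8e2fd78ff55e7cdf577fda257bac2776feb7d73d922613caf35468073807e815
        uri: huggingface://bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf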