chore(model gallery): add all-hands_openhands-lm-32b-v0.1 (#5111)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Ettore Di Giacinto 2025-04-03 10:15:57 +02:00 committed by GitHub
parent cbbc954a8c
commit 7ee3288460


@@ -5410,6 +5410,36 @@
    - filename: hammer2.0-7b-q5_k_m.gguf
      sha256: 3682843c857595765f0786cf24b3d501af96fe5d99a9fb2526bc7707e28bae1e
      uri: huggingface://Nekuromento/Hammer2.0-7b-Q5_K_M-GGUF/hammer2.0-7b-q5_k_m.gguf
- !!merge <<: *qwen25
  icon: https://github.com/All-Hands-AI/OpenHands/blob/main/docs/static/img/logo.png?raw=true
  name: "all-hands_openhands-lm-32b-v0.1"
  urls:
    - https://huggingface.co/all-hands/openhands-lm-32b-v0.1
    - https://huggingface.co/bartowski/all-hands_openhands-lm-32b-v0.1-GGUF
  description: |
    Autonomous agents for software development are already contributing to a wide range of software development tasks. But up to this point, strong coding agents have relied on proprietary models, which means that even if you use an open-source agent like OpenHands, you are still reliant on API calls to an external service.
    Today, we are excited to introduce OpenHands LM, a new open coding model that:
    - Is open and available on Hugging Face, so you can download it and run it locally
    - Is a reasonable size, 32B, so it can be run locally on hardware such as a single 3090 GPU
    - Achieves strong performance on software engineering tasks, including a 37.2% resolve rate on SWE-Bench Verified
    Read below for more details and our future plans!
    What is OpenHands LM?
    OpenHands LM is built on the foundation of Qwen Coder 2.5 Instruct 32B, leveraging its powerful base capabilities for coding tasks. What sets OpenHands LM apart is our specialized fine-tuning process:
    - We used training data generated by OpenHands itself on a diverse set of open-source repositories
    - Specifically, we use an RL-based framework outlined in SWE-Gym, where we set up a training environment, generate training data using an existing agent, and then fine-tune the model on examples that were resolved successfully
    It features a 128K token context window, ideal for handling large codebases and long-horizon software engineering tasks.
  overrides:
    parameters:
      model: all-hands_openhands-lm-32b-v0.1-Q4_K_M.gguf
  files:
    - filename: all-hands_openhands-lm-32b-v0.1-Q4_K_M.gguf
      sha256: f7c2311d3264cc1e021a21a319748a9c75b74ddebe38551786aa4053448e5e74
      uri: huggingface://bartowski/all-hands_openhands-lm-32b-v0.1-GGUF/all-hands_openhands-lm-32b-v0.1-Q4_K_M.gguf
- &llama31
  url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master" ## LLama3.1
  icon: https://avatars.githubusercontent.com/u/153379578
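
Once the entry above is available in the gallery, the model can be exercised through LocalAI's OpenAI-compatible API. The Python snippet below is a minimal sketch, not part of the commit: it assumes a LocalAI instance listening on http://localhost:8080 with this gallery entry already installed, and it reuses the entry's `name` field as the model identifier; the address and the placeholder API key are assumptions about a default local setup.

# Minimal sketch: query the newly added gallery model through LocalAI's
# OpenAI-compatible chat completions endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # default LocalAI address (assumed)
    api_key="not-needed",                 # LocalAI does not require a key by default
)

response = client.chat.completions.create(
    model="all-hands_openhands-lm-32b-v0.1",  # matches the gallery entry's name field
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)
print(response.choices[0].message.content)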