models(gallery): add smollm2-1.7b-instruct (#4033)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Ettore Di Giacinto, 2024-11-02 11:01:39 +01:00, committed by GitHub
parent 817685e4c1
commit 26e522a558

@@ -861,6 +861,23 @@
     - filename: SmolLM-1.7B-Instruct.Q4_K_M.gguf
       sha256: 2b07eb2293ed3fc544a9858beda5bfb03dcabda6aa6582d3c85768c95f498d28
       uri: huggingface://MaziyarPanahi/SmolLM-1.7B-Instruct-GGUF/SmolLM-1.7B-Instruct.Q4_K_M.gguf
+- !!merge <<: *smollm
+  name: "smollm2-1.7b-instruct"
+  icon: https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/y45hIMNREW7w_XpHYB_0q.png
+  urls:
+    - https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct
+    - https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF
+  description: |
+    SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
+    The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using UltraFeedback.
+  overrides:
+    parameters:
+      model: smollm2-1.7b-instruct-q4_k_m.gguf
+  files:
+    - filename: smollm2-1.7b-instruct-q4_k_m.gguf
+      sha256: decd2598bc2c8ed08c19adc3c8fdd461ee19ed5708679d1c54ef54a5a30d4f33
+      uri: huggingface://HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF/smollm2-1.7b-instruct-q4_k_m.gguf
 - &llama31
   ## LLama3.1
   url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"