mirror of https://github.com/ParisNeo/lollms-webui.git (synced 2024-12-18 20:17:50 +00:00)
fixed model icon issue
This commit is contained in:
parent 1a287beef0
commit 1f8cd86133
@@ -1 +1 @@
-Subproject commit 65a5b08e4bb0255605336b056da87d6bf71f744b
+Subproject commit 11f823bb8a9ced46874d8abe136ba77952265c30
notebooks/ggml_quantize.ipynb (new file, 130 lines)
@@ -0,0 +1,130 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The first step consists of compiling llama.cpp and installing the required libraries in our Python environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install llama.cpp\n",
    "!git clone https://github.com/ggerganov/llama.cpp\n",
    "!cd llama.cpp && git pull && make clean && LLAMA_CUBLAS=1 make\n",
    "!pip install -r llama.cpp/requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can download our model. We will use the jondurbin/airoboros-m-7b-3.1.2 model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "MODEL_ID = \"jondurbin/airoboros-m-7b-3.1.2\"\n",
    "\n",
    "# Download model\n",
    "!git lfs install\n",
    "!git clone https://huggingface.co/{MODEL_ID}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This step can take a while. Once it’s done, we need to convert our weights to GGML FP16 format."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "MODEL_NAME = MODEL_ID.split('/')[-1]\n",
    "\n",
    "# Convert to fp16\n",
    "fp16 = f\"{MODEL_NAME}/{MODEL_NAME.lower()}.fp16.bin\"\n",
    "!python llama.cpp/convert.py {MODEL_NAME} --outtype f16 --outfile {fp16}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
"Finally, we can quantize the model using one or several methods. In this case, we will use the Q4_K_M and Q5_K_M methods. This is the only step that actually requires a GPU."
|
||||||
|
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "QUANTIZATION_METHODS = [\"q4_k_m\", \"q5_k_m\"]\n",
    "\n",
    "for method in QUANTIZATION_METHODS:\n",
    "    qtype = f\"{MODEL_NAME}/{MODEL_NAME.lower()}.{method.upper()}.gguf\"\n",
    "    !./llama.cpp/quantize {fp16} {qtype} {method}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we can push our quantized model to a new repo on the Hugging Face Hub with the “-GGUF” suffix. First, let’s log in and modify the following code block to match your username. You can enter your Hugging Face token (https://huggingface.co/settings/tokens) in Google Colab’s “Secrets” tab. We use the allow_patterns parameter to only upload GGUF models and not the entirety of the directory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -q huggingface_hub\n",
    "from huggingface_hub import create_repo, HfApi\n",
    "from google.colab import userdata\n",
    "\n",
    "# Defined in the secrets tab in Google Colab\n",
    "hf_token = userdata.get('huggingface')\n",
    "\n",
    "api = HfApi()\n",
    "username = \"parisneo\"\n",
    "\n",
    "# Create empty repo\n",
    "create_repo(\n",
    "    repo_id=f\"{username}/{MODEL_NAME}-GGUF\",\n",
    "    repo_type=\"model\",\n",
    "    exist_ok=True,\n",
    "    token=hf_token\n",
    ")\n",
    "\n",
    "# Upload gguf files\n",
    "api.upload_folder(\n",
    "    folder_path=MODEL_NAME,\n",
    "    repo_id=f\"{username}/{MODEL_NAME}-GGUF\",\n",
    "    allow_patterns=\"*.gguf\",\n",
    "    token=hf_token\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
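The notebook stops after uploading the GGUF files. A minimal sanity check of the quantized output, written in the same Colab cell style (shell commands behind `!`, f-string interpolation of Python variables), might look like the sketch below. It is illustrative and not part of the committed notebook; it assumes the same MODEL_NAME variable and directory layout as above and uses llama.cpp's main binary with its -m (model path), -p (prompt) and -n (tokens to generate) options.

import os

# Assumes MODEL_NAME was set earlier in the notebook, e.g. "airoboros-m-7b-3.1.2"
gguf_files = [f for f in os.listdir(MODEL_NAME) if f.endswith(".gguf")]
print("Quantized files:", gguf_files)

# Load the Q4_K_M file and generate a few tokens to confirm it works (CPU is enough)
q4 = f"{MODEL_NAME}/{MODEL_NAME.lower()}.Q4_K_M.gguf"
!./llama.cpp/main -m {q4} -n 64 -p "What is model quantization?"

If the files are missing or the model fails to load, the problem lies in the convert or quantize cells rather than in the upload step.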
File diff suppressed because one or more lines are too long
web/dist/index.html (vendored, 4 changed lines)
@@ -6,8 +6,8 @@
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <title>LoLLMS WebUI - Welcome</title>
-<script type="module" crossorigin src="/assets/index-88e48a3d.js"></script>
+<script type="module" crossorigin src="/assets/index-f8b842b6.js"></script>
-<link rel="stylesheet" href="/assets/index-61efa780.css">
+<link rel="stylesheet" href="/assets/index-37768396.css">
 </head>
 <body>
 <div id="app"></div>
@@ -4813,8 +4813,8 @@ export default {
       try{
         let idx = this.$store.state.modelsZoo.findIndex(item => item.name == this.$store.state.selectedModel)
         if(idx>=0){
-          console.log(`model avatar : ${this.$store.state.modelsZoo[idx].avatar}`)
-          return this.$store.state.modelsZoo[idx].avatar
+          console.log(`model avatar : ${this.$store.state.modelsZoo[idx].icon}`)
+          return this.$store.state.modelsZoo[idx].icon
         }
         else{
           return defaultModelImgPlaceholder
@@ -1 +1 @@
-Subproject commit 5cc039d1876320c7539711f8ce01b23c6e58c16c
+Subproject commit 30a43e4bd12a10f046fae8fa641668f3019f7706