Mirror of https://github.com/ParisNeo/lollms-webui.git
Fix for #27
As NJannasch mentioned, the model won't work unless it is converted with llama.cpp\migrate-ggml-2023-03-30-pr613.py. The batch file works as follows: it activates the virtual environment and runs the Python app. If the model gives an error, it asks [Y/N] whether you want to fix the model; N exits the batch, Y fixes it. The script renames the broken model so the original is preserved, then applies the fix as a new model under the original name. Afterwards it reports that the model has been fixed and prompts "Press any key to restart"; pressing a key loops the batch back to the start, and the UI launches successfully. Also added a remark.
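For context, a [Y/N] prompt like the one described is typically built with the Windows choice command. A minimal sketch follows; only "if errorlevel 1 goto MODEL_FIX" and the END label are confirmed by the diff below, while the choice invocation and prompt text are assumptions, not the verbatim run.bat contents:

REM Ask whether the user wants the model fixed (sketch, not the committed script)
choice /C YN /M "The model gave an error. Fix the model"
REM choice sets errorlevel 2 for N and 1 for Y; test the higher value first
if errorlevel 2 goto END
if errorlevel 1 goto MODEL_FIX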
This commit is contained in: parent afc851b405, commit 5a711de8d0
run.bat (2 changes)
@@ -43,6 +43,7 @@ REM Run the Python app
 python app.py %*
 set app_result=%errorlevel%
 
+REM Ask if user wants the model fixed
 IF %app_result% EQU 0 (
     goto END
 ) ELSE (
@@ -52,6 +53,7 @@ IF %app_result% EQU 0 (
     if errorlevel 1 goto MODEL_FIX
 )
 
+REM Git Clone, Renames the bad model and fixes it using the same original name
 :MODEL_FIX
 if not exist llama.cpp git clone https://github.com/ggerganov/llama.cpp.git
 move models\gpt4all-lora-quantized-ggml.bin models\gpt4all-lora-quantized-ggml.bin.original
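The diff is truncated after the move command. Per the commit message, the :MODEL_FIX section presumably continues by running the migration script on the preserved copy and then restarting. A minimal sketch of that continuation, assuming the script's two positional arguments (input model, output model) and a placeholder :RESTART label for the start of the batch:

REM Sketch of the remaining :MODEL_FIX steps (not shown in this diff)
python llama.cpp\migrate-ggml-2023-03-30-pr613.py models\gpt4all-lora-quantized-ggml.bin.original models\gpt4all-lora-quantized-ggml.bin
echo The model has been fixed.
echo Press any key to restart . . .
pause >nul
REM :RESTART is a placeholder; the real script jumps back to its start
goto RESTART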