From 5a711de8d0dcb9c4c18ae13ee9593584bf8cb973 Mon Sep 17 00:00:00 2001
From: arroyoquiel <81461845+arroyoquiel@users.noreply.github.com>
Date: Sat, 8 Apr 2023 00:01:40 -0600
Subject: [PATCH] Fix for #27

As NJannasch mentioned, the model won't work unless it is converted with
llama.cpp\migrate-ggml-2023-03-30-pr613.py.

This batch file works as follows:
- Activates the virtual environment and runs the Python app.
- If the model produces an error, it asks [Y/N] whether you want to fix
  the model: N exits the batch file, Y fixes it.
- It renames the model so the original is preserved, then applies the
  fix as a new model.
- After that, it reports that the model has been fixed and prompts
  "Press any key to restart." Pressing a key sends the batch file back
  to the start, and the UI launches successfully.

Added a remark.
---
 run.bat | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/run.bat b/run.bat
index a7891f38..3d6ab34f 100644
--- a/run.bat
+++ b/run.bat
@@ -43,6 +43,7 @@ REM Run the Python app
 python app.py %*
 set app_result=%errorlevel%
 
+REM Ask if user wants the model fixed
 IF %app_result% EQU 0 (
     goto END
 ) ELSE (
@@ -52,6 +53,7 @@ IF %app_result% EQU 0 (
     if errorlevel 1 goto MODEL_FIX
 )
 
+REM Git Clone, Renames the bad model and fixes it using the same original name
 :MODEL_FIX
 if not exist llama.cpp git clone https://github.com/ggerganov/llama.cpp.git
 move models\gpt4all-lora-quantized-ggml.bin models\gpt4all-lora-quantized-ggml.bin.original
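
For context (not part of the patch itself), the prompt-and-fix flow described in the message could be sketched in batch roughly as below. The `choice` prompt wording and the migrate-script invocation are assumptions inferred from the commit message and the script's usual input/output arguments, not lines taken from the actual run.bat:

```bat
REM Illustrative sketch only -- assumed flow, not the actual run.bat contents
choice /M "The model failed to load. Fix the model"
if errorlevel 2 exit /b
REM Y: fetch llama.cpp if needed, keep the original model, write the fixed one
if not exist llama.cpp git clone https://github.com/ggerganov/llama.cpp.git
move models\gpt4all-lora-quantized-ggml.bin models\gpt4all-lora-quantized-ggml.bin.original
python llama.cpp\migrate-ggml-2023-03-30-pr613.py ^
    models\gpt4all-lora-quantized-ggml.bin.original ^
    models\gpt4all-lora-quantized-ggml.bin
echo The model has been fixed. Press any key to restart.
pause >nul
```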