As NJannasch mentioned, the model won't work unless it is first converted with llama.cpp\migrate-ggml-2023-03-30-pr613.py.

This batch file works as follows:

1) Activates the virtual environment and runs the Python app.
2) If the model produces an error, the batch asks [Y/N] whether you want to fix the model; N exits the batch, Y fixes it.
3) It renames the model to preserve the original, then applies the fix as a new model file. Afterwards it reports that the model has been fixed and waits for a key press.
4) Pressing a key sends the batch back to the start, and the UI launches successfully.
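For reference, the restart loop described above can be sketched cross-platform in Python. This is a hypothetical equivalent, not part of this PR; the model path and migrate-script path are taken from the batch file, and `app.py` is assumed to be the UI entry point as in the repo.

```python
# Hypothetical Python sketch of the restart loop implemented by run.bat;
# not part of this PR. Paths mirror the batch file.
import shutil
import subprocess
import sys
from pathlib import Path

MODEL = Path("models/gpt4all-lora-quantized-ggml.bin")
MIGRATE = Path("llama.cpp/migrate-ggml-2023-03-30-pr613.py")


def backup_path(model: Path) -> Path:
    # Step 3: the original model is preserved under a ".original" suffix.
    return model.with_name(model.name + ".original")


def fix_model(model: Path) -> None:
    # Rename the original out of the way, then write the converted
    # model back under the original name.
    original = backup_path(model)
    shutil.move(str(model), str(original))
    subprocess.run(
        [sys.executable, str(MIGRATE), str(original), str(model)],
        check=True,
    )


def main() -> None:
    while True:  # Step 4: loop back to the start after a fix.
        app = subprocess.run([sys.executable, "app.py", *sys.argv[1:]])
        if app.returncode == 0:
            return  # Step 1: the app exited cleanly.
        # Step 2: ask the user whether to fix the model; anything
        # other than "y" exits, mirroring the batch file's :END label.
        if input("Fix the model? [Y/N] ").strip().lower() != "y":
            return
        fix_model(MODEL)
        input("Model fixed. Press Enter to restart...")
```

Calling `main()` starts the loop; only `fix_model` touches the filesystem, so the naming logic in `backup_path` can be checked in isolation.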
This commit is contained in:
arroyoquiel 2023-04-07 23:50:12 -06:00 committed by GitHub
parent 83dcbbef8d
commit afc851b405

run.bat

@@ -35,6 +35,33 @@ echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
 echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
 echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
 echo on
-call env/Scripts/activate.bat
-python app.py %*
+REM Activate the virtual environment
+call env\Scripts\activate.bat
+:RESTART
+REM Run the Python app
+python app.py %*
+set app_result=%errorlevel%
+IF %app_result% EQU 0 (
+goto END
+) ELSE (
+echo.
+choice /C YN /M "The model file (gpt4all-lora-quantized-ggml.bin) appears to be invalid. Do you want to fix it?"
+if errorlevel 2 goto END
+if errorlevel 1 goto MODEL_FIX
+)
+:MODEL_FIX
+if not exist llama.cpp git clone https://github.com/ggerganov/llama.cpp.git
+move models\gpt4all-lora-quantized-ggml.bin models\gpt4all-lora-quantized-ggml.bin.original
+python llama.cpp\migrate-ggml-2023-03-30-pr613.py models\gpt4all-lora-quantized-ggml.bin.original models\gpt4all-lora-quantized-ggml.bin
+echo The model file (gpt4all-lora-quantized-ggml.bin) has been fixed. Press any key to restart...
+pause >nul
+goto RESTART
+:END
+REM Wait for user input before exiting
+echo.
+echo Press any key to exit...
+pause >nul