As NJannasch mentioned, the model won't work unless it is converted with `llama.cpp\migrate-ggml-2023-03-30-pr613.py`.

This batch file works as follows:

Activates the virtual environment and runs the Python app.
If the model produces an error, it asks [Y/N] whether you want to fix the model: N exits the batch, Y fixes it.
It renames the model so the original is preserved, then writes the fixed model under the original name. After that it reports that the model has been fixed and prompts you to press any key to restart.
Pressing a key sends the batch back to the start, and the UI launches successfully.
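The flow above can be sketched in Python, the same language as `app.py`. This is only an illustration: the model paths, and the assumed in-path/out-path interface of the migration script, are not taken from the diff.

```python
import subprocess
import sys
from pathlib import Path

# Assumed paths, mirroring the batch file (illustrative, not authoritative).
MODEL = Path("models/gpt4all-lora-quantized-ggml.bin")
MIGRATE = Path("llama.cpp/migrate-ggml-2023-03-30-pr613.py")


def fix_model(run=subprocess.run):
    """Preserve the original model, then write the converted copy under its name."""
    if not Path("llama.cpp").is_dir():
        run(["git", "clone", "https://github.com/ggerganov/llama.cpp.git"], check=True)
    original = MODEL.with_suffix(MODEL.suffix + ".original")
    MODEL.rename(original)  # keep the original model as *.original
    # Assumption: the migration script takes an input path and an output path.
    run([sys.executable, str(MIGRATE), str(original), str(MODEL)], check=True)


def main_loop(run_app, ask_fix, fix=fix_model):
    """Relaunch the app until it exits cleanly; offer to fix the model on failure."""
    while True:
        if run_app() == 0:
            return 0      # app exited cleanly -> END
        if not ask_fix():
            return 1      # user answered N -> leave the batch
        fix()             # user answered Y -> convert, then restart the loop
```

`main_loop` takes the app runner, the Y/N prompt, and the fixer as callables, so the interactive prompt and the subprocess calls can be stubbed out when exercising the control flow.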

Added a remark
This commit is contained in:
arroyoquiel 2023-04-08 00:01:40 -06:00 committed by GitHub
parent afc851b405
commit 5a711de8d0

@@ -43,6 +43,7 @@ REM Run the Python app
 python app.py %*
 set app_result=%errorlevel%
+REM Ask if user wants the model fixed
 IF %app_result% EQU 0 (
     goto END
 ) ELSE (
@@ -52,6 +53,7 @@ IF %app_result% EQU 0 (
     if errorlevel 1 goto MODEL_FIX
 )
+REM Git Clone, Renames the bad model and fixes it using the same original name
 :MODEL_FIX
 if not exist llama.cpp git clone https://github.com/ggerganov/llama.cpp.git
 move models\gpt4all-lora-quantized-ggml.bin models\gpt4all-lora-quantized-ggml.bin.original