As NJannasch mentioned, the model won't work unless it is first converted with llama.cpp\migrate-ggml-2023-03-30-pr613.py.
This batch file works as follows (a rough sketch is shown after the list):
1) Activates the virtual environment and runs the Python app.
2) If the model fails to load, it asks [Y/N] whether you want to fix the model; N exits the batch, Y applies the fix.
3) It renames the original model to keep a copy, then writes the converted model as a new file. When done, it reports that the model has been fixed and prompts you to press any key to restart.
4) Pressing a key sends the batch back to the start, and the UI launches successfully.
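Below is a minimal sketch of what such a batch file could look like. The virtual environment path, the app entry point (app.py), and the model file names are assumptions used for illustration; only the migration script path comes from the description above.

```bat
@echo off
REM Sketch of the flow described above; paths and file names are assumed, not taken from the repo.
:start
call venv\Scripts\activate.bat
python app.py
if %errorlevel% equ 0 goto :eof

REM The app failed, most likely because the model uses the old ggml format.
set /p FIX="Model failed to load. Fix it with the migration script? [Y/N] "
if /i not "%FIX%"=="Y" exit /b 1

REM Keep the original file under a new name, then write the migrated model to the old name.
ren models\ggml-model.bin ggml-model-old.bin
python llama.cpp\migrate-ggml-2023-03-30-pr613.py models\ggml-model-old.bin models\ggml-model.bin
echo The model has been fixed.
pause
goto start
```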