As NJannasch mentioned, the model won't work unless it is converted by llama.cpp\migrate-ggml-2023-03-30-pr613.py.
This batch script works as follows (see the sketch after the list):
1) Activates the virtual environment and runs the Python app.
2) If the model gives an error, it asks [Y/N] whether you want to fix the model; N exits the batch, Y fixes it.
3) It renames the model to preserve the original, then applies the fix as a new model. After that it reports that the model has been fixed and prompts you to press any key to restart.
4) Pressing a key sends the batch back to the start, and the UI launches successfully.
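A minimal sketch of that flow in batch. The virtual environment path, the app entry point, and the migration script's argument order are assumptions for illustration, not taken from the actual repository:

```bat
@echo off
:restart
REM Activate the virtual environment and run the app (venv folder and app name are assumptions)
call env\Scripts\activate.bat
python app.py
if %ERRORLEVEL% equ 0 goto :eof

REM The model failed to load: offer to fix it with the llama.cpp migration script
choice /C YN /M "The model needs to be migrated. Fix it now?"
if %ERRORLEVEL% equ 2 exit /b 1

REM Keep the original file under a new name, then write the migrated model as a new file
ren models\gpt4all-lora-quantized-ggml.bin gpt4all-lora-quantized-ggml.bin.original
python llama.cpp\migrate-ggml-2023-03-30-pr613.py models\gpt4all-lora-quantized-ggml.bin.original models\gpt4all-lora-quantized-ggml.bin
echo The model has been fixed. Press any key to restart.
pause >nul
goto restart
```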
Fancier [Y/N/B] prompt: it does not require you to press the letter and then Enter, and it also asks whether you want to download the model instead of downloading it automatically if you don't have it.
Pressing B opens the link in the browser for a faster download, since most browsers support multi-segment downloads. With Invoke-WebRequest it took me ~30 minutes to download the model on a 200 Mbps connection.
It also lets you retry the download if it failed or the connection dropped.
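A rough sketch of how such a prompt could look, assuming a placeholder URL (not the real download link) and using `choice` so a single key press is enough; whether the retry triggers depends on PowerShell returning a non-zero exit code when Invoke-WebRequest fails, which is an assumption here:

```bat
@echo off
REM Sketch of the download prompt; the URL below is a placeholder, not the real link
set "MODEL_URL=https://example.com/gpt4all-lora-quantized-ggml.bin"
set "MODEL_PATH=models\gpt4all-lora-quantized-ggml.bin"

REM choice reads a single key press, no Enter needed (Y=1, N=2, B=3)
choice /C YNB /M "Download the model [Y], skip [N], or open the link in your browser [B]?"
if %ERRORLEVEL% equ 3 (
    start "" "%MODEL_URL%"
    exit /b 0
)
if %ERRORLEVEL% equ 2 exit /b 0

:download
powershell -Command "Invoke-WebRequest -Uri '%MODEL_URL%' -OutFile '%MODEL_PATH%'"
if %ERRORLEVEL% equ 0 goto :eof
choice /C YN /M "Download failed or was interrupted. Retry?"
if %ERRORLEVEL% equ 1 goto download
exit /b 1
```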
Previously, if a "models" folder already existed, install.bat would just close and not continue, so I removed the "else ()" and also added an "IF NOT EXIST models/gpt4all-lora-quantized-ggml.bin" check for users who had already downloaded the model manually.
Also added a (y/n) choice.
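A minimal sketch of that check; the folder and file names follow the commit text, and the actual download step is omitted:

```bat
@echo off
REM Create the models folder only if it is missing; no "else" branch, so an
REM existing folder no longer aborts the script
if not exist models md models

REM Skip the prompt entirely if the model is already there (e.g. downloaded manually)
if exist models\gpt4all-lora-quantized-ggml.bin goto run

choice /C YN /M "gpt4all-lora-quantized-ggml.bin was not found. Download it now?"
if %ERRORLEVEL% equ 2 exit /b 0
REM ...download would happen here...

:run
echo Continuing with the installation...
```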
Changed to "%ERRORLEVEL%" because ".ERRORLEVEL." is not valid syntax; this should fix the "Failed to install required packages. Please check your internet connection and try again. Press any key to continue . . ." error that appeared even though the requirements were installed correctly.
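For reference, a sketch of what the corrected check could look like around the dependency install step; the exact commands in install.bat may differ:

```bat
REM Install the requirements and check the exit code with valid %ERRORLEVEL% syntax
pip install -r requirements.txt
if %ERRORLEVEL% neq 0 (
    echo Failed to install required packages. Please check your internet connection and try again.
    pause
    exit /b 1
)
```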