Moved Git install to the beginning.
Changed the Git install so it no longer exits when you answer No and Git is already installed.
Changed the conversion: the batch now asks whether to convert the model. If you agree, it converts it using Python; if not, it skips the step. If the conversion fails, it reverts to the original model (both changes are sketched below).
Removed some testing pauses and added some `echo.` lines for easier readability.
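For illustration, here is a minimal sketch of how those two changes could look in a batch file. The model path, the `.orig` suffix, and the two-argument call to `migrate-ggml-2023-03-30-pr613.py` are assumptions for readability, not the exact code in this PR.

```bat
@echo off
REM Sketch only: the model path and file names below are placeholders,
REM not the exact values used by the batch file in this PR.
set MODEL=models\ggml-model-q4_0.bin

REM Git install prompt: answering No no longer exits when Git is already installed.
choice /c YN /m "Do you want to install Git"
if errorlevel 2 (
    where git >nul 2>&1
    if errorlevel 1 (
        echo Git is required but was not found. Exiting.
        exit /b 1
    )
    echo Git is already installed, continuing.
)
REM ...Git install steps would run here when Y was chosen...

REM Optional conversion: a failed conversion restores the original model.
choice /c YN /m "Convert the model with migrate-ggml-2023-03-30-pr613.py"
if errorlevel 2 goto skipconvert
ren "%MODEL%" "ggml-model-q4_0.bin.orig"
python llama.cpp\migrate-ggml-2023-03-30-pr613.py "%MODEL%.orig" "%MODEL%"
if errorlevel 1 (
    echo Conversion failed, reverting to the original model.
    del "%MODEL%" >nul 2>&1
    ren "%MODEL%.orig" "ggml-model-q4_0.bin"
)
:skipconvert
```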
Added a remark.
As NJannasch mentioned, the model won't work unless it is converted by llama.cpp\migrate-ggml-2023-03-30-pr613.py.
This batch works as follows (a sketch of the loop follows the list):
1) Activates the virtual environment and runs the Python app.
2) If the model gives an error, it asks [Y/N] whether you want to fix the model: N exits the batch, Y fixes it.
3) It renames the model so the original is preserved, then applies the fix as a new model. After that it reports that the model has been fixed and waits for a key press.
4) Pressing a key sends the batch back to the start, and the UI launches successfully.
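A rough sketch of that loop, assuming a `venv` folder, an `app.py` entry point, and a placeholder model path (the real batch file may use different names):

```bat
@echo off
REM Sketch of the launch / fix / restart loop. The venv folder, app.py and
REM the model path are placeholders; the real batch may use other names.
set MODEL=models\ggml-model-q4_0.bin

:start
REM 1) Activate the virtual environment and run the Python app.
call venv\Scripts\activate.bat
python app.py
if not errorlevel 1 goto end

REM 2) The model gave an error: ask whether to fix it.
choice /c YN /m "The model failed to load. Fix it now"
if errorlevel 2 exit /b

REM 3) Rename the model to keep the original, then write the fixed model
REM    under the original name.
ren "%MODEL%" "ggml-model-q4_0.bin.orig"
python llama.cpp\migrate-ggml-2023-03-30-pr613.py "%MODEL%.orig" "%MODEL%"
echo The model has been fixed.
pause

REM 4) Restart from the top so the UI launches with the fixed model.
goto start

:end
```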