Mirror of https://github.com/ParisNeo/lollms-webui.git (synced 2025-03-22 03:45:20 +00:00)

Merge branch 'lollms-upd' into lollms-pers-mounter

Commit da85fbc5f0
.gitignore (vendored): 3 changes
@@ -183,4 +183,5 @@ shared/*
 !shared/.keep

 uploads
+global_paths_cfg.yaml
README.md: 38 changes
@@ -1,14 +1,14 @@
-# Gpt4All Web UI
+# LoLLMS Web UI

 [](https://discord.gg/4rR282WJb6)
 [](https://twitter.com/SpaceNerduino)
 [](https://www.youtube.com/user/Parisneo)

-Welcome to GPT4ALL WebUI, the hub for LLM (Large Language Model) models. This project aims to provide a user-friendly interface to access and utilize various LLM models for a wide range of tasks. Whether you need help with writing, coding, organizing data, generating images, or seeking answers to your questions, GPT4ALL WebUI has got you covered.
+Welcome to LoLLMS WebUI (Lord of Large Language Models: One tool to rule them all), the hub for LLM (Large Language Model) models. This project aims to provide a user-friendly interface to access and utilize various LLM models for a wide range of tasks. Whether you need help with writing, coding, organizing data, generating images, or seeking answers to your questions, LoLLMS WebUI has got you covered.

 [Click here for my youtube video on how to use the tool](https://youtu.be/ds_U0TDzbzI)
 ## Features
@@ -29,7 +29,7 @@ Welcome to GPT4ALL WebUI, the hub for LLM (Large Language Model) models. This pr

 ### Prerequisites

-Before installing GPT4ALL WebUI, make sure you have the following dependencies installed:
+Before installing LoLLMS WebUI, make sure you have the following dependencies installed:

 - Python 3.10 or higher
 - Git (for cloning the repository)
@@ -48,10 +48,10 @@ If you receive an error or the version is lower than 3.10, please install a newe

 For Windows: `webui.bat`
 For Linux: `webui.sh`
 - Place the downloaded launcher in a folder of your choice, for example:
-Windows: `C:\ai\gpt4all-webui`
-Linux: `/home/user/ai/gpt4all-webui`
+Windows: `C:\ai\LoLLMS-webui`
+Linux: `/home/user/ai/LoLLMS-webui`
 - Run the launcher script. Note that you might encounter warnings from antivirus or Windows Defender due to the tool's newness and limited usage. These warnings are false positives caused by reputation conditions in some antivirus software. You can safely proceed with running the script.
-Once the installation is complete, the GPT4ALL WebUI will launch automatically.
+Once the installation is complete, the LoLLMS WebUI will launch automatically.

 #### Using Conda
 If you use conda, you can create a virtual environment and install the required packages using the provided `requirements.txt` file. Here's an example of how to set it up:
@@ -64,19 +64,19 @@ cd lollms-webui

 Now create a new conda environment, activate it and install requirements

 ```bash
-conda create -n gpt4all-webui python=3.10
-conda activate gpt4all-webui
+conda create -n LoLLMS-webui python=3.10
+conda activate LoLLMS-webui
 pip install -r requirements.txt
 ```
 #### Using Docker
-Alternatively, you can use Docker to set up the GPT4ALL WebUI. Please refer to the Docker documentation for installation instructions specific to your operating system.
+Alternatively, you can use Docker to set up the LoLLMS WebUI. Please refer to the Docker documentation for installation instructions specific to your operating system.

 ## Usage

 You can launch the app from the webui.sh or webui.bat launcher. It will automatically perform updates if any are present. If you prefer not to use this method, you can also activate the virtual environment and launch the application using `python app.py` from the root of the project.
 Once the app is running, you can go to the application front link displayed in the console (by default localhost:9600, though this can change if you change the configuration).
 ### Selecting a Model and Binding
-- Open the GPT4ALL WebUI and navigate to the Settings page.
+- Open the LoLLMS WebUI and navigate to the Settings page.
 - In the Models Zoo tab, select a binding from the list (e.g., llama-cpp-official).
 - Wait for the installation process to finish. You can monitor the progress in the console.
 - Once the installation is complete, click the Install button next to the desired model.
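A quick way to confirm the server is reachable once it is listening on localhost:9600 is to hit one of the JSON endpoints that appear later in this commit (for example disk_usage). A minimal sketch, assuming the default host and port:

```python
# Minimal sketch: poll a JSON endpoint exposed by app.py to confirm the
# server is up. The /disk_usage endpoint is visible elsewhere in this
# commit; host and port are the documented defaults and may differ in
# your configuration.
import requests

resp = requests.get("http://localhost:9600/disk_usage", timeout=5)
print(resp.json())  # e.g. total_space / available_space figures
```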
@@ -86,7 +86,7 @@ Once the app is running, you can go to the application front link displayed in t

 ### Starting a Discussion
 - Go to the Discussions view.
 - Click the + button to create a new discussion.
-- You will see a predefined welcome message based on the selected personality (by default, GPT4All).
+- You will see a predefined welcome message based on the selected personality (by default, LoLLMS).
 - Ask a question or provide an initial prompt to start the discussion.
 - You can stop the generation process at any time by pressing the Stop Generating button.
@@ -97,13 +97,13 @@ Once the app is running, you can go to the application front link displayed in t

 - To perform batch operations (exporting or deleting multiple discussions), enable Check Mode, select the discussions, and choose the desired action.

 # Contributing
-Contributions to GPT4ALL WebUI are welcome! If you encounter any issues, have ideas for improvements, or want to contribute code, please open an issue or submit a pull request on the GitHub repository.
+Contributions to LoLLMS WebUI are welcome! If you encounter any issues, have ideas for improvements, or want to contribute code, please open an issue or submit a pull request on the GitHub repository.

 # License
 This project is licensed under the Apache 2.0 License. You are free to use this software commercially, build upon it, and integrate it into your own projects. See the [LICENSE](https://github.com/ParisNeo/lollms-webui/blob/main/LICENSE) file for details.

 # Acknowledgements
-Please note that GPT4ALL WebUI is not affiliated with the GPT4All application developed by Nomic AI. The latter is a separate professional application available at gpt4all.io, which has its own unique features and community.
+Please note that LoLLMS WebUI is not affiliated with the LoLLMS application developed by Nomic AI. The latter is a separate professional application available at LoLLMS.io, which has its own unique features and community.

 We express our gratitude to all the contributors who have made this project possible and welcome additional contributions to further enhance the tool for the benefit of all users.

@@ -113,8 +113,8 @@ For any questions or inquiries, feel free to reach out via our discord server: h

 Thank you for your interest and support!

-If you find this tool useful, don't forget to give it a star on GitHub, share your experience, and help us spread the word. Your feedback and bug reports are valuable to us as we continue developing and improving GPT4ALL WebUI.
+If you find this tool useful, don't forget to give it a star on GitHub, share your experience, and help us spread the word. Your feedback and bug reports are valuable to us as we continue developing and improving LoLLMS WebUI.

 If you enjoyed this tutorial, consider subscribing to our YouTube channel for more updates, tutorials, and exciting content.

-Happy exploring with GPT4ALL WebUI!
+Happy exploring with LoLLMS WebUI!
api/__init__.py: 142 changes
@@ -7,14 +7,16 @@
 # Description :
 # A simple api to communicate with lollms-webui and its models.
 ######
 from flask import request
 from datetime import datetime
 from api.db import DiscussionsDB
 from api.helpers import compare_lists
 from pathlib import Path
 import importlib
-from lollms import AIPersonality, MSG_TYPE
-from lollms.binding import BindingConfig
-from lollms.paths import lollms_path, lollms_personal_configuration_path, lollms_personal_path, lollms_personal_models_path, lollms_bindings_zoo_path, lollms_personalities_zoo_path, lollms_default_cfg_path
+from lollms.personality import AIPersonality, MSG_TYPE
+from lollms.binding import LOLLMSConfig
+from lollms.paths import LollmsPaths
+from lollms.helpers import ASCIIColors
 import multiprocessing as mp
 import threading
 import time
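For orientation, the module-level path constants removed above are replaced by attributes on a single LollmsPaths object. A sketch of the mapping, using only names that appear in this commit's hunks:

```python
# Sketch of the path object that replaces the old module-level constants.
# Only attributes visible elsewhere in this diff are listed here.
from lollms.paths import LollmsPaths

lollms_paths = LollmsPaths.find_paths(force_local=True)
lollms_paths.personal_path                # user data root (databases, outputs, uploads)
lollms_paths.personal_models_path         # replaces lollms_personal_models_path
lollms_paths.personal_configuration_path  # replaces lollms_personal_configuration_path
lollms_paths.bindings_zoo_path            # replaces lollms_bindings_zoo_path
lollms_paths.personalities_zoo_path       # replaces lollms_personalities_zoo_path
```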
@@ -83,8 +85,9 @@ def parse_requirements_file(requirements_path):


 class ModelProcess:
-    def __init__(self, config:BindingConfig=None):
+    def __init__(self, lollms_paths:LollmsPaths, config:LOLLMSConfig=None):
         self.config = config
+        self.lollms_paths = lollms_paths
         self.generate_queue = mp.Queue()
         self.generation_queue = mp.Queue()
         self.cancel_queue = mp.Queue(maxsize=1)
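With the new signature, callers resolve the path bundle once and inject it rather than relying on global constants. A minimal construction sketch, assembled only from calls that appear elsewhere in this commit (app.py's main block and LoLLMsAPPI.__init__):

```python
# Hedged sketch of constructing the reworked ModelProcess; the calls below
# all appear in this commit. ModelProcess is the class defined in this file.
from lollms.paths import LollmsPaths
from lollms.binding import LOLLMSConfig

lollms_paths = LollmsPaths.find_paths(force_local=True)
config = LOLLMSConfig(lollms_paths.personal_configuration_path / "local_config.yaml")
process = ModelProcess(lollms_paths, config)
```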
@@ -92,8 +95,6 @@ class ModelProcess:
         self.set_config_queue = mp.Queue(maxsize=1)
         self.set_config_result_queue = mp.Queue(maxsize=1)

-        self.models_path = lollms_personal_models_path
-
         self.process = None
         # Create synchronization objects
         self.start_signal = mp.Event()
@@ -139,7 +140,12 @@ class ModelProcess:
             print(f"Loading binding {binding_name} install ON")
         else:
             print(f"Loading binding : {binding_name} install is off")
-        binding_path = lollms_path/"bindings_zoo"/binding_name
+
+        if binding_name is None:
+            self.model
+
+
+        binding_path = self.lollms_paths.bindings_zoo_path/binding_name
         if install:
             # first find out if there is a requirements.txt file
             install_file_name="install.py"
@@ -174,12 +180,6 @@ class ModelProcess:
         self.generate_queue.put(None)
         self.process.join()
         self.process = None

-    def set_binding(self, binding_path):
-        self.binding = binding_path
-
-    def set_model(self, model_path):
-        self.model = model_path
-
     def set_config(self, config):
         try:
@@ -200,30 +200,32 @@ class ModelProcess:
     def cancel_generation(self):
         self.completion_signal.set()
         self.cancel_queue.put(('cancel',))
-        print("Canel request received")
+        ASCIIColors.error("Canel request received")

     def clear_queue(self):
         self.clear_queue_queue.put(('clear_queue',))

     def rebuild_binding(self, config):
         try:
-            print(" ******************* Building Binding from main Process *************************")
+            ASCIIColors.success(" ******************* Building Binding from main Process *************************")
             binding = self.load_binding(config["binding_name"], install=True)
-            print("Binding loaded successfully")
+            ASCIIColors.success("Binding loaded successfully")
         except Exception as ex:
-            print("Couldn't build binding.")
-            print(ex)
+            ASCIIColors.error("Couldn't build binding.")
+            ASCIIColors.error("-----------------")
+            print(f"It seems that there is no valid binding selected. Please use the ui settings to select a binding.\nHere is encountered error: {ex}")
+            ASCIIColors.error("-----------------")
             binding = None
         return binding

     def _rebuild_model(self):
         try:
             self.reset_config_result()
-            print(" ******************* Building Binding from generation Process *************************")
+            ASCIIColors.success(" ******************* Building Binding from generation Process *************************")
             self.binding = self.load_binding(self.config["binding_name"], install=True)
-            print("Binding loaded successfully")
+            ASCIIColors.success("Binding loaded successfully")
             try:
-                model_file = self.config.models_path/self.config["binding_name"]/self.config["model_name"]
+                model_file = self.lollms_paths.personal_models_path/self.config["binding_name"]/self.config["model_name"]
                 print(f"Loading model : {model_file}")
                 self.model = self.binding(self.config)
                 self.model_ready.value = 1
@@ -235,11 +237,11 @@ class ModelProcess:
             print(f"Couldn't build model {self.config['model_name']} : {ex}")
             self.model = None
             self._set_config_result['status'] ='failed'
-            self._set_config_result['binding_status'] ='failed'
-            self._set_config_result['errors'].append(f"couldn't build binding:{ex}")
+            self._set_config_result['model_status'] ='failed'
+            self._set_config_result['errors'].append(f"couldn't build model:{ex}")
         except Exception as ex:
             traceback.print_exc()
-            print("Couldn't build binding")
+            print("Couldn't build model")
             print(ex)
             self.binding = None
             self.model = None
@@ -249,20 +251,21 @@ class ModelProcess:

     def rebuild_personalities(self):
         mounted_personalities=[]
-        print(f" ******************* Building mounted Personalities from main Process *************************")
+        ASCIIColors.success(f" ******************* Building mounted Personalities from main Process *************************")
         for personality in self.config['personalities']:
             try:
                 print(f" {personality}")
-                personality_path = lollms_personalities_zoo_path/f"{personality}"
-                personality = AIPersonality(personality_path, run_scripts=False)
+                personality_path = self.lollms_paths.personalities_zoo_path/f"{personality}"
+                print(f"Loading from {personality_path}")
+                personality = AIPersonality(self.lollms_paths, personality_path, run_scripts=False)
                 mounted_personalities.append(personality)
             except Exception as ex:
-                print(f"Personality file not found or is corrupted ({personality_path}).\nPlease verify that the personality you have selected exists or select another personality. Some updates may lead to change in personality name or category, so check the personality selection in settings to be sure.")
+                ASCIIColors.error(f"Personality file not found or is corrupted ({personality_path}).\nPlease verify that the personality you have selected exists or select another personality. Some updates may lead to change in personality name or category, so check the personality selection in settings to be sure.")
                 if self.config["debug"]:
                     print(ex)
-                personality = AIPersonality()
+                personality = AIPersonality(self.lollms_paths)

-        print(f" ************ Personalities mounted (Main process) ***************************")
+        ASCIIColors.success(f" ************ Personalities mounted (Main process) ***************************")

         return mounted_personalities
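The AIPersonality constructor now takes the path bundle as its first argument, and the fallback personality is built the same way. A sketch of the new call shapes, taken from the hunk above (the zoo entry name is a hypothetical placeholder):

```python
# New AIPersonality call shapes as they appear in this hunk (sketch only);
# lollms_paths is assumed to be an already-resolved LollmsPaths object.
personality_path = lollms_paths.personalities_zoo_path / "english/generic/lollms"  # hypothetical entry
personality = AIPersonality(lollms_paths, personality_path, run_scripts=False)

# Fallback used when a personality folder is missing or corrupted:
default_personality = AIPersonality(lollms_paths)
```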
@@ -270,22 +273,21 @@ class ModelProcess:
         self.mounted_personalities=[]
         failed_personalities=[]
         self.reset_config_result()
-        print(f" ******************* Building mounted Personalities from generation Process *************************")
+        ASCIIColors.success(f" ******************* Building mounted Personalities from generation Process *************************")
         for personality in self.config['personalities']:
             try:
                 print(f" {personality}")
-                personality_path = lollms_path/f"personalities_zoo/{personality}"
-                personality = AIPersonality(personality_path, run_scripts=True)
+                personality_path = self.lollms_paths.personalities_zoo_path/f"{personality}"
+                personality = AIPersonality(self.lollms_paths, personality_path, run_scripts=True, model=self.model)
                 self.mounted_personalities.append(personality)
             except Exception as ex:
-                print(f"Personality file not found or is corrupted ({personality_path}).\nPlease verify that the personality you have selected exists or select another personality. Some updates may lead to change in personality name or category, so check the personality selection in settings to be sure.")
-                if self.config["debug"]:
-                    print(ex)
-                personality = AIPersonality()
+                ASCIIColors.error(f"Personality file not found or is corrupted ({personality_path}).\nPlease verify that the personality you have selected exists or select another personality. Some updates may lead to change in personality name or category, so check the personality selection in settings to be sure.")
+                ASCIIColors.error(f"Exception received is: {ex}")
+                personality = AIPersonality(self.lollms_paths, model=self.model)
                 failed_personalities.append(personality_path)
                 self._set_config_result['errors'].append(f"couldn't build personalities:{ex}")

-        print(f" ************ Personalities mounted (Generation process) ***************************")
+        ASCIIColors.success(f" ************ Personalities mounted (Generation process) ***************************")
         if len(failed_personalities)==len(self.config['personalities']):
             self._set_config_result['status'] ='failed'
             self._set_config_result['personalities_status'] ='failed'
@@ -293,9 +295,11 @@ class ModelProcess:
             self._set_config_result['status'] ='semi_failed'
             self._set_config_result['personalities_status'] ='semi_failed'

-        self.personality = self.mounted_personalities[self.config['active_personality_id']]
-        self.mounted_personalities = self.config["personalities"]
-        print("Personality set successfully")
+        if self.config['active_personality_id']<len(self.mounted_personalities):
+            self.personality = self.mounted_personalities[self.config['active_personality_id']]
+            ASCIIColors.success("Personality set successfully")
+        else:
+            ASCIIColors.error("Failed to set personality. Please select a valid one")

     def _run(self):
         self._rebuild_model()
@@ -320,7 +324,7 @@ class ModelProcess:
                 print("No model loaded. Waiting for new configuration instructions")

         self.ready = True
-        print(f"Listening on :http://{self.config['host']}:{self.config['port']}")
+        ASCIIColors.print(ASCIIColors.color_bright_blue,f"Listening on :http://{self.config['host']}:{self.config['port']}")
         while True:
             try:
                 if not self.generate_queue.empty():
@@ -333,10 +337,11 @@ class ModelProcess:
                         if self.personality.processor_cfg is not None:
                             if "custom_workflow" in self.personality.processor_cfg:
                                 if self.personality.processor_cfg["custom_workflow"]:
-                                    print("Running workflow")
+                                    ASCIIColors.success("Running workflow")
                                     self.completion_signal.clear()
                                     self.start_signal.set()
-                                    output = self.personality.processor.run_workflow(self._generate, command[1], command[0], self._callback)
+                                    output = self.personality.processor.run_workflow( command[1], command[0], self._callback)
                                     self._callback(output, 0)
                                     self.completion_signal.set()
                                     self.start_signal.clear()
@@ -462,10 +467,12 @@


 class LoLLMsAPPI():
-    def __init__(self, config:BindingConfig, socketio, config_file_path:str) -> None:
+    def __init__(self, config:LOLLMSConfig, socketio, config_file_path:str, lollms_paths: LollmsPaths) -> None:
+        self.lollms_paths = lollms_paths
+
         self.socketio = socketio
         #Create and launch the process
-        self.process = ModelProcess(config)
+        self.process = ModelProcess(self.lollms_paths, config)
         self.config = config
         self.binding = self.process.rebuild_binding(self.config)
         self.mounted_personalities = self.process.rebuild_personalities()
@@ -482,34 +489,38 @@ class LoLLMsAPPI():
         self._message_id = 0

         self.db_path = config["db_path"]

-        # Create database object
-        self.db = DiscussionsDB(self.db_path)
+        if Path(self.db_path).is_absolute():
+            # Create database object
+            self.db = DiscussionsDB(self.db_path)
+        else:
+            # Create database object
+            self.db = DiscussionsDB(self.lollms_paths.personal_path/"databases"/self.db_path)

         # If the database is empty, populate it with tables
         self.db.populate()

         # This is used to keep track of messages
         self.full_message_list = []

         self.current_room_id = None
         # =========================================================================================
         # Socket IO stuff
         # =========================================================================================
         @socketio.on('connect')
         def connect():
-            print('Client connected')
+            ASCIIColors.success(f'Client {request.sid} connected')

         @socketio.on('disconnect')
         def disconnect():
-            print('Client disconnected')
+            ASCIIColors.error(f'Client {request.sid} disconnected')
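The database-path logic at the top of this hunk means a relative db_path from the config now lands under the personal data folder, while absolute paths are honored as-is. A condensed sketch of the resolution rule:

```python
# Condensed sketch of the db-path resolution added above; DiscussionsDB and
# lollms_paths are assumed to be in scope as in LoLLMsAPPI.__init__.
from pathlib import Path

db_path = "database.db"  # the new default value of config["db_path"]
if Path(db_path).is_absolute():
    db = DiscussionsDB(db_path)  # absolute paths honored as-is
else:
    db = DiscussionsDB(lollms_paths.personal_path / "databases" / db_path)  # relocated
```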
         @socketio.on('install_model')
         def install_model(data):
+            room_id = request.sid
             def install_model_():
                 print("Install model triggered")
                 model_path = data["path"]
                 progress = 0
-                installation_dir = Path(f'./models/{self.config["binding_name"]}/')
+                installation_dir = self.lollms_paths.personal_models_path/self.config["binding_name"]
                 filename = Path(model_path).name
                 installation_path = installation_dir / filename
                 print("Model install requested")
@@ -517,18 +528,18 @@ class LoLLMsAPPI():

                 if installation_path.exists():
                     print("Error: Model already exists")
-                    socketio.emit('install_progress',{'status': 'failed', 'error': 'model already exists'})
+                    socketio.emit('install_progress',{'status': 'failed', 'error': 'model already exists'}, room=room_id)

-                socketio.emit('install_progress',{'status': 'progress', 'progress': progress})
+                socketio.emit('install_progress',{'status': 'progress', 'progress': progress}, room=room_id)

                 def callback(progress):
-                    socketio.emit('install_progress',{'status': 'progress', 'progress': progress})
+                    socketio.emit('install_progress',{'status': 'progress', 'progress': progress}, room=room_id)

                 if hasattr(self.binding, "download_model"):
                     self.binding.download_model(model_path, installation_path, callback)
                 else:
                     self.download_file(model_path, installation_path, callback)
-                socketio.emit('install_progress',{'status': 'succeeded', 'error': ''})
+                socketio.emit('install_progress',{'status': 'succeeded', 'error': ''}, room=room_id)
             tpe = threading.Thread(target=install_model_, args=())
             tpe.start()
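The room=room_id argument added to every emit is what scopes install progress to the client that asked for it; without it, Flask-SocketIO broadcasts the event to all connected clients. A stripped-down sketch of the pattern:

```python
# Stripped-down sketch of the per-client progress pattern used above.
# request.sid identifies the requesting socket; passing it as the room
# restricts the emit to that one client instead of broadcasting.
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('install_model')
def install_model(data):
    room_id = request.sid  # capture before handing work to a thread
    socketio.emit('install_progress',
                  {'status': 'progress', 'progress': 0}, room=room_id)
```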
@@ -536,20 +547,21 @@ class LoLLMsAPPI():
         @socketio.on('uninstall_model')
         def uninstall_model(data):
             model_path = data['path']
-            installation_dir = Path(f'./models/{self.config["binding_name"]}/')
+            installation_dir = self.lollms_paths.personal_models_path/self.config["binding_name"]
             filename = Path(model_path).name
             installation_path = installation_dir / filename

             if not installation_path.exists():
-                socketio.emit('install_progress',{'status': 'failed', 'error': 'The model does not exist'})
+                socketio.emit('install_progress',{'status': 'failed', 'error': 'The model does not exist'}, room=request.sid)

             installation_path.unlink()
-            socketio.emit('install_progress',{'status': 'succeeded', 'error': ''})
+            socketio.emit('install_progress',{'status': 'succeeded', 'error': ''}, room=request.sid)



         @socketio.on('generate_msg')
         def generate_msg(data):
+            self.current_room_id = request.sid
             if self.process.model_ready.value==1:
                 if self.current_discussion is None:
                     if self.db.does_last_discussion_have_messages():
@@ -577,7 +589,7 @@ class LoLLMsAPPI():
                     "message":"",
                     "user_message_id": self.current_user_message_id,
                     "ai_message_id": self.current_ai_message_id,
-                }
+                }, room=self.current_room_id
                 )

         @socketio.on('generate_msg_from')
@@ -740,6 +752,8 @@ class LoLLMsAPPI():
             """
             if message_type == MSG_TYPE.MSG_TYPE_CHUNK:
                 self.bot_says += chunk
+            if message_type == MSG_TYPE.MSG_TYPE_FULL:
+                self.bot_says = chunk
             if message_type.value < 2:
                 self.socketio.emit('message', {
                     'data': self.bot_says,
@@ -747,7 +761,7 @@ class LoLLMsAPPI():
                     'ai_message_id':self.current_ai_message_id,
                     'discussion_id':self.current_discussion.discussion_id,
                     'message_type': message_type.value
-                }
+                }, room=self.current_room_id
                 )
             if self.cancel_gen:
                 print("Generation canceled")
@@ -775,7 +789,7 @@ class LoLLMsAPPI():
                 "message":message,#markdown.markdown(message),
                 "user_message_id": self.current_user_message_id,
                 "ai_message_id": self.current_ai_message_id,
-            }
+            }, room=self.current_room_id
             )

         # prepare query and reception
@@ -801,7 +815,7 @@ class LoLLMsAPPI():
                 'data': self.bot_says,
                 'ai_message_id':self.current_ai_message_id,
                 'parent':self.current_user_message_id, 'discussion_id':self.current_discussion.discussion_id
-            }
+            }, room=self.current_room_id
             )

         self.current_discussion.update_message(self.current_ai_message_id, self.bot_says)
api/db.py

@@ -1,6 +1,6 @@

 import sqlite3
-
+from pathlib import Path
 __author__ = "parisneo"
 __github__ = "https://github.com/ParisNeo/lollms-webui"
 __copyright__ = "Copyright 2023, "
@@ -13,7 +13,8 @@ class DiscussionsDB:
     MSG_TYPE_CONDITIONNING = 1

     def __init__(self, db_path="database.db"):
-        self.db_path = db_path
+        self.db_path = Path(db_path)
+        self.db_path.parent.mkdir(exist_ok=True, parents=True)

     def populate(self):
         """
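Wrapping db_path in a Path and creating the parent directory up front means callers can point DiscussionsDB at a file inside a folder that does not exist yet. A short usage sketch under that assumption:

```python
# Usage sketch: with the change above, the parent folder is created on
# construction, so this no longer fails on a fresh install. The relative
# path here is a hypothetical example.
db = DiscussionsDB("some/new/folder/database.db")
db.populate()  # builds the tables if the database is empty
```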
app.py: 102 changes
@@ -24,9 +24,9 @@ import sys
 from tqdm import tqdm
 import subprocess
 import signal
-from lollms import AIPersonality, lollms_path, MSG_TYPE
-from lollms.console import ASCIIColors
-from lollms.paths import lollms_default_cfg_path, lollms_bindings_zoo_path, lollms_personalities_zoo_path, lollms_personal_path, lollms_personal_configuration_path, lollms_personal_models_path
+from lollms.personality import AIPersonality, MSG_TYPE
+from lollms.helpers import ASCIIColors, BaseConfig
+from lollms.paths import LollmsPaths
 from api.db import DiscussionsDB, Discussion
 from api.helpers import compare_lists
 from flask import (
@@ -40,15 +40,11 @@ from flask import (
 )
 from flask_socketio import SocketIO, emit
 from pathlib import Path
-import gc
 import yaml
 from geventwebsocket.handler import WebSocketHandler
 from gevent.pywsgi import WSGIServer
 import requests
-from concurrent.futures import ThreadPoolExecutor, as_completed
 import logging
 import psutil
-from lollms.binding import BindingConfig
+from lollms.binding import LOLLMSConfig

 log = logging.getLogger('werkzeug')
 log.setLevel(logging.ERROR)
@@ -71,8 +67,8 @@ import markdown


 class LoLLMsWebUI(LoLLMsAPPI):
-    def __init__(self, _app, _socketio, config:BindingConfig, config_file_path) -> None:
-        super().__init__(config, _socketio, config_file_path)
+    def __init__(self, _app, _socketio, config:LOLLMSConfig, config_file_path:Path|str, lollms_paths:LollmsPaths) -> None:
+        super().__init__(config, _socketio, config_file_path, lollms_paths)

         self.app = _app
         self.cancel_gen = False
@@ -87,6 +83,12 @@ class LoLLMsWebUI(LoLLMsAPPI):
         # =========================================================================================
         # Endpoints
         # =========================================================================================

+        self.add_endpoint("/switch_personal_path", "switch_personal_path", self.switch_personal_path, methods=["POST"])
+
+        self.add_endpoint("/add_reference_to_local_model", "add_reference_to_local_model", self.add_reference_to_local_model, methods=["POST"])
+
         self.add_endpoint("/send_file", "send_file", self.send_file, methods=["POST"])
@@ -295,7 +297,7 @@ class LoLLMsWebUI(LoLLMsAPPI):
         return jsonify({"personality":self.personality.as_dict()})

     def get_all_personalities(self):
-        personalities_folder = lollms_personalities_zoo_path
+        personalities_folder = self.lollms_paths.personalities_zoo_path
         personalities = {}
         for language_folder in personalities_folder.iterdir():
             lang = language_folder.stem
@@ -438,7 +440,7 @@ class LoLLMsWebUI(LoLLMsAPPI):
             else:
                 self.config["active_personality_id"] = 0
                 self.config["personalities"][self.config["active_personality_id"]] = f"{self.personality_language}/{self.personality_category}/{self.personality_name}"
-            personality_fn = lollms_personalities_zoo_path/self.config["personalities"][self.config["active_personality_id"]]
+            personality_fn = self.lollms_paths.personalities_zoo_path/self.config["personalities"][self.config["active_personality_id"]]
             self.personality.load_personality(personality_fn)
         else:
             self.config["personalities"].append(f"{self.personality_language}/{self.personality_category}/{self.personality_name}")
@@ -503,7 +505,7 @@ class LoLLMsWebUI(LoLLMsAPPI):
         current_drive = Path.cwd().anchor
         drive_disk_usage = psutil.disk_usage(current_drive)
         try:
-            models_folder_disk_usage = psutil.disk_usage(lollms_personal_models_path/f'{self.config["binding_name"]}')
+            models_folder_disk_usage = psutil.disk_usage(self.lollms_paths.personal_models_path/f'{self.config["binding_name"]}')
             return jsonify({
                 "total_space":drive_disk_usage.total,
                 "available_space":drive_disk_usage.free,
@@ -521,7 +523,7 @@ class LoLLMsWebUI(LoLLMsAPPI):
             })

     def list_bindings(self):
-        bindings_dir = lollms_bindings_zoo_path # replace with the actual path to the models folder
+        bindings_dir = self.lollms_paths.bindings_zoo_path # replace with the actual path to the models folder
         bindings=[]
         for f in bindings_dir.iterdir():
             card = f/"binding_card.yaml"
@@ -530,7 +532,7 @@ class LoLLMsWebUI(LoLLMsAPPI):
                 bnd = load_config(card)
                 bnd["folder"]=f.stem
                 icon_path = Path(f"bindings/{f.name}/logo.png")
-                if Path(lollms_bindings_zoo_path/f"{f.name}/logo.png").exists():
+                if Path(self.lollms_paths.bindings_zoo_path/f"{f.name}/logo.png").exists():
                     bnd["icon"]=str(icon_path)

                 bindings.append(bnd)
@@ -548,23 +550,22 @@ class LoLLMsWebUI(LoLLMsAPPI):

     def list_personalities_languages(self):
-        personalities_languages_dir = lollms_personalities_zoo_path # replace with the actual path to the models folder
+        personalities_languages_dir = self.lollms_paths.personalities_zoo_path # replace with the actual path to the models folder
         personalities_languages = [f.stem for f in personalities_languages_dir.iterdir() if f.is_dir()]
         return jsonify(personalities_languages)

     def list_personalities_categories(self):
-        personalities_categories_dir = lollms_personalities_zoo_path/f'{self.personality_language}' # replace with the actual path to the models folder
+        personalities_categories_dir = self.lollms_paths.personalities_zoo_path/f'{self.personality_language}' # replace with the actual path to the models folder
         personalities_categories = [f.stem for f in personalities_categories_dir.iterdir() if f.is_dir()]
         return jsonify(personalities_categories)

     def list_personalities(self):
         try:
-            personalities_dir = lollms_personalities_zoo_path/f'{self.personality_language}/{self.personality_category}' # replace with the actual path to the models folder
+            personalities_dir = self.lollms_paths.personalities_zoo_path/f'{self.personality_language}/{self.personality_category}' # replace with the actual path to the models folder
             personalities = [f.stem for f in personalities_dir.iterdir() if f.is_dir()]
         except Exception as ex:
             personalities=[]
             if self.config["debug"]:
-                print(f"No personalities found. Using default one {ex}")
+                ASCIIColors.error(f"No personalities found. Using default one {ex}")
         return jsonify(personalities)

     def list_languages(self):
@@ -627,19 +628,19 @@ class LoLLMsWebUI(LoLLMsAPPI):
         return send_from_directory(path, fn)

     def serve_bindings(self, filename):
-        path = str(lollms_bindings_zoo_path/("/".join(filename.split("/")[:-1])))
+        path = str(self.lollms_paths.bindings_zoo_path/("/".join(filename.split("/")[:-1])))

         fn = filename.split("/")[-1]
         return send_from_directory(path, fn)

     def serve_personalities(self, filename):
-        path = str(lollms_personalities_zoo_path/("/".join(filename.split("/")[:-1])))
+        path = str(self.lollms_paths.personalities_zoo_path/("/".join(filename.split("/")[:-1])))

         fn = filename.split("/")[-1]
         return send_from_directory(path, fn)

     def serve_outputs(self, filename):
-        root_dir = lollms_personal_path / "outputs"
+        root_dir = self.lollms_paths.personal_path / "outputs"
         root_dir.mkdir(exist_ok=True, parents=True)
         path = str(root_dir/"/".join(filename.split("/")[:-1]))

@@ -655,7 +656,7 @@ class LoLLMsWebUI(LoLLMsAPPI):
         return send_from_directory(path, fn)

     def serve_data(self, filename):
-        root_dir = lollms_personal_path / "data"
+        root_dir = self.lollms_paths.personal_path / "data"
         root_dir.mkdir(exist_ok=True, parents=True)
         path = str(root_dir/"/".join(filename.split("/")[:-1]))

@@ -663,7 +664,7 @@ class LoLLMsWebUI(LoLLMsAPPI):
         return send_from_directory(path, fn)

     def serve_uploads(self, filename):
-        root_dir = lollms_personal_path / "uploads"
+        root_dir = self.lollms_paths.personal_path / "uploads"
         root_dir.mkdir(exist_ok=True, parents=True)

         path = str(root_dir+"/".join(filename.split("/")[:-1]))
@@ -689,6 +690,22 @@ class LoLLMsWebUI(LoLLMsAPPI):
         self.process.cancel_generation()
         return jsonify({"status": True})

+    def switch_personal_path(self):
+        data = request.get_json()
+        path = data["path"]
+        global_paths_cfg = Path("./global_paths_cfg.yaml")
+        if global_paths_cfg.exists():
+            try:
+                cfg = BaseConfig()
+                cfg.load_config(global_paths_cfg)
+                cfg.lollms_personal_path = path
+                cfg.save_config(global_paths_cfg)
+                return jsonify({"status": True})
+            except Exception as ex:
+                print(ex)
+                return jsonify({"status": False, 'error':f"Couldn't switch path: {ex}"})
+
     def add_reference_to_local_model(self):
         data = request.get_json()
         path = data["path"]
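Both new endpoints take a JSON body with a path field, as the handlers above show. A hedged client-side sketch; the two paths are placeholders, not values from this commit:

```python
# Hedged client sketch for the two endpoints added above; both paths are
# hypothetical placeholders.
import requests

requests.post("http://localhost:9600/switch_personal_path",
              json={"path": "/home/user/lollms_data"})
requests.post("http://localhost:9600/add_reference_to_local_model",
              json={"path": "/home/user/models/my_model.bin"})
```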
@@ -719,11 +736,11 @@ class LoLLMsWebUI(LoLLMsAPPI):
         name = data['name']

         package_path = f"{language}/{category}/{name}"
-        package_full_path = lollms_path/"personalities_zoo"/package_path
+        package_full_path = self.lollms_paths.personalities_zoo_path/package_path
         config_file = package_full_path / "config.yaml"
         if config_file.exists():
             self.config["personalities"].append(package_path)
-            self.personalities = self.process.rebuild_personalities()
+            self.mounted_personalities = self.process.rebuild_personalities()
             self.personality = self.mounted_personalities[self.config["active_personality_id"]]
             self.apply_settings()
             return jsonify({"status": True,
@@ -766,12 +783,17 @@ class LoLLMsWebUI(LoLLMsAPPI):
             return jsonify({"status": False, "error":"Couldn't unmount personality"})

     def select_personality(self):
-        id = request.files['id']
+        data = request.get_json()
+        id = data['id']
         if id<len(self.config["personalities"]):
             self.config["active_personality_id"]=id
             self.personality = self.mounted_personalities[self.config["active_personality_id"]]
             self.apply_settings()
-            return jsonify({"status": True})
+            return jsonify({
+                "status": True,
+                "personalities":self.config["personalities"],
+                "active_personality_id":self.config["active_personality_id"]
+                })
         else:
             return jsonify({"status": False, "error":"Invalid ID"})
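select_personality now reads its id from a JSON body instead of request.files, and on success echoes the mounted list back. A client sketch matching the test file added later in this commit:

```python
# Client sketch for the fixed endpoint; mirrors
# tests/end_point_tests/select_personality.http added in this commit.
import requests

r = requests.post("http://localhost:9600/select_personality", json={"id": 0})
print(r.json())  # e.g. {'status': True, 'personalities': [...], 'active_personality_id': 0}
```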
@@ -984,7 +1006,7 @@ class LoLLMsWebUI(LoLLMsAPPI):
             path = f'{server}{filename}'
         else:
             path = f'{server}/{filename}'
-        local_path = lollms_personal_models_path/f'{self.config["binding_name"]}/{filename}'
+        local_path = lollms_paths.personal_models_path/f'{self.config["binding_name"]}/{filename}'
         is_installed = local_path.exists() or model_type.lower()=="api"
         models.append({
             'title': filename,
@@ -1081,10 +1103,16 @@ def sync_cfg(default_config, config):
         if key not in default_config:
             del config.config[key]
             removed_entries.append(key)

+    config["version"]=default_config["version"]
+
     return config, added_entries, removed_entries

 if __name__ == "__main__":
+    lollms_paths = LollmsPaths.find_paths(force_local=True)
+    db_folder = lollms_paths.personal_path/"databases"
+    db_folder.mkdir(parents=True, exist_ok=True)
     parser = argparse.ArgumentParser(description="Start the chatbot Flask app.")
     parser.add_argument(
         "-c", "--config", type=str, default="local_config", help="Sets the configuration file to be used."
@@ -1159,18 +1187,18 @@ if __name__ == "__main__":

     if args.config!="local_config":
         args.config = "local_config"
-        if not lollms_personal_configuration_path/f"local_config.yaml".exists():
+        if not lollms_paths.personal_configuration_path/f"local_config.yaml".exists():
             print("No local configuration file found. Building from scratch")
-            shutil.copy(default_config, lollms_personal_configuration_path/f"local_config.yaml")
+            shutil.copy(default_config, lollms_paths.personal_configuration_path/f"local_config.yaml")

-    config_file_path = lollms_personal_configuration_path/f"local_config.yaml"
-    config = BindingConfig(config_file_path)
+    config_file_path = lollms_paths.personal_configuration_path/f"local_config.yaml"
+    config = LOLLMSConfig(config_file_path)


     if "version" not in config or int(config["version"])<int(default_config["version"]):
         #Upgrade old configuration files to new format
-        print("Configuration file is very old. Replacing with default configuration")
-        config, added, removed =sync_cfg(default_config, config)
+        ASCIIColors.error("Configuration file is very old.\nReplacing with default configuration")
+        config, added, removed = sync_cfg(default_config, config)
         print(f"Added entries : {added}, removed entries:{removed}")
         config.save_config(config_file_path)
@@ -1181,7 +1209,7 @@ if __name__ == "__main__":

     # executor = ThreadPoolExecutor(max_workers=1)
     # app.config['executor'] = executor
-    bot = LoLLMsWebUI(app, socketio, config, config_file_path)
+    bot = LoLLMsWebUI(app, socketio, config, config_file_path, lollms_paths)

     # chong Define custom WebSocketHandler with error handling
     class CustomWebSocketHandler(WebSocketHandler):
@@ -28,4 +28,4 @@ user_name: user

 # UI parameters
 debug: False
-db_path: databases/database.db
+db_path: database.db
@@ -6,11 +6,10 @@ services:
       context: .
       dockerfile: Dockerfile
     volumes:
-      - ./data:/srv/help
+      - ./data:/srv/data
+      - ./data/.parisneo:/root/.parisneo/
-      - ./models:/srv/models
       - ./configs:/srv/configs
       - ./personalities:/srv/personalities
       - ./web:/srv/web
     ports:
       - "9600:9600"
@@ -1,4 +0,0 @@
-Here you can drop your models depending on the selected binding
-Currently, supported bindings are:
-- llamacpp
-- gpt-j
@@ -9,4 +9,5 @@ gevent
 gevent-websocket
-pyaipersonality>=0.0.14
+lollms
 langchain
+requests
tests/end_point_tests/select_personality.http (new file): 6 changes
@@ -0,0 +1,6 @@
+POST http://localhost:9600/select_personality
+Content-Type: application/json
+
+{
+    "id": 0
+}
web/dist/assets/index-29d93ec2.css (vendored, new file): 1 change. File diff suppressed because one or more lines are too long.

web/dist/assets/index-940fed0d.css (vendored): 1 change. File diff suppressed because one or more lines are too long.
web/dist/index.html (vendored): 4 changes
@@ -6,8 +6,8 @@

     <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <title>GPT4All - WEBUI</title>
-    <script type="module" crossorigin src="/assets/index-9a571523.js"></script>
-    <link rel="stylesheet" href="/assets/index-940fed0d.css">
+    <script type="module" crossorigin src="/assets/index-ce2e3117.js"></script>
+    <link rel="stylesheet" href="/assets/index-29d93ec2.css">
   </head>
   <body>
     <div id="app"></div>
@@ -209,30 +209,31 @@
                     class="text-2xl hover:text-primary p-2 -m-2 w-full text-left flex items-center">
                     <i :data-feather="mzc_collapsed ? 'chevron-right' : 'chevron-down'" class="mr-2 flex-shrink-0"></i>
                     <h3 class="text-lg font-semibold cursor-pointer select-none mr-2">
                         Models zoo</h3>
                     <div class="flex flex-row items-center">
                         <div v-if="!isModelSelected" class="text-base text-red-600 flex gap-3 items-center mr-2">
                             <i data-feather="alert-triangle" class="flex-shrink-0"></i>
                             No model selected!
                         </div>

                         <div v-if="configFile.model_name" class="mr-2">|</div>

-                        <div v-if="configFile.model_name"
-                            class=" text-base font-semibold cursor-pointer select-none items-center">
-
+                        <div v-if="configFile.model_name" class="text-base font-semibold cursor-pointer select-none items-center">
                             <div class="flex gap-1 items-center">
                                 <img :src="imgModel" class="w-8 h-8 rounded-lg object-fill">
                                 <h3 class="font-bold font-large text-lg line-clamp-1">
                                     {{ configFile.model_name }}
                                 </h3>
+                                <button @click.stop="showInputDialog" class="text-base hover:text-primary-dark ml-1 bg-bg-light-tone dark:bg-bg-dark-tone hover:bg-bg-dark-tone duration-200 rounded-lg px-2 py-1">
+                                    +
+                                </button>
                             </div>
                         </div>
                     </div>
                 </button>
             </div>

             <div :class="{ 'hidden': mzc_collapsed }" class="flex flex-col mb-2 px-3 pb-0">
                 <div class="mb-2">
                     <label for="model" class="block ml-2 mb-2 text-sm font-medium text-gray-900 dark:text-white">
@@ -647,6 +648,17 @@
         transform: scale(1);
     }
 }
+.bg-primary-light {
+    background-color: aqua
+}
+
+.hover:bg-primary-light:hover {
+    background-color: aquamarine
+}
+
 .font-bold {
     font-weight: bold;
 }
 </style>
 <script>
 import filesize from '../plugins/filesize'
@@ -680,6 +692,9 @@ export default {
    data() {

        return {
+            // install custom model
+            showModelInputDialog: false,
+            modelPath: '',
            // Zoo stuff
            models: [],
            personalities: [],
@@ -723,6 +738,32 @@ export default {
    created() {

    }, methods: {
+        showInputDialog() {
+            console.log("Input dialog shown")
+            this.showModelInputDialog = true;
+        },
+        closeInputDialog() {
+            this.showModelInputDialog = false;
+            this.modelPath = '';
+        },
+        validateModelPath() {
+            // Perform validation of the model path (e.g., checking if it is a local file or internet link)
+            // ...
+
+            // Trigger the `download_model` endpoint with the path as a POST
+            this.$axios.post('/download_model', { path: this.modelPath })
+                .then(response => {
+                    // Handle the response
+                    // ...
+                })
+                .catch(error => {
+                    // Handle the error
+                    // ...
+                });
+
+            // Close the input dialog
+            this.closeInputDialog();
+        },
        collapseAll(val) {
            this.bec_collapsed = val
            this.mzc_collapsed = val
@@ -972,7 +1013,6 @@ export default {
        this.api_get_req("list_models").then(response => { this.modelsArr = response })
        //this.api_get_req("list_personalities_languages").then(response => { this.persLangArr = response })
        this.api_get_req("list_personalities_categories").then(response => { this.persCatgArr = response })
-        this.api_get_req("list_personalities").then(response => { this.persArr = response })
        //this.api_get_req("list_languages").then(response => { this.langArr = response })
        this.api_get_req("get_config").then(response => {
            console.log("Received config")
@@ -995,6 +1035,10 @@ export default {
            this.configFile.personality_folder = response["personality_name"]
            console.log("received infos")
        });
+        this.api_get_req("list_personalities").then(response => {
+            this.persArr = response
+            console.log(`Listed personalities:\n${response}`)
+        })
        this.api_get_req("disk_usage").then(response => {
            this.diskUsage = response
        })
webui.bat: 16 changes
@@ -46,9 +46,9 @@ if errorlevel 1 (
 )
 :NO_INTERNET

-if exist GPT4All (
-    echo GPT4All folder found
-    cd GPT4All
+if exist lollms-webui (
+    echo lollms-webui folder found
+    cd lollms-webui
     set /p="Activating virtual environment ..." <nul
     call env\Scripts\activate.bat
 )
@@ -108,16 +108,16 @@ goto :CHECK_PYTHON_INSTALL

 :CLONE_REPO
 REM Check if repository exists
-if exist GPT4All (
-    echo GPT4All folder found
-    cd GPT4All
+if exist lollms-webui (
+    echo lollms-webui folder found
+    cd lollms-webui
     echo Pulling latest changes
     git pull
 ) else (
     echo Cloning repository...
     rem Clone the Git repository into a temporary directory
-    git clone https://github.com/ParisNeo/lollms-webui.git ./GPT4All
-    cd GPT4All
+    git clone https://github.com/ParisNeo/lollms-webui.git ./lollms-webui
+    cd lollms-webui
     echo Pulling latest changes
     git pull
 )
webui.sh: 8 changes
@@ -57,13 +57,13 @@ if ping -q -c 1 google.com >/dev/null 2>&1; then
     echo Pulling latest changes
     git pull origin main
 else
-    if [[ -d GPT4All ]] ;then
-        cd GPT4All
+    if [[ -d lollms-webui ]] ;then
+        cd lollms-webui
     else
         echo Cloning repository...
         rem Clone the Git repository into a temporary directory
-        git clone https://github.com/ParisNeo/lollms-webui.git ./GPT4All
-        cd GPT4All
+        git clone https://github.com/ParisNeo/lollms-webui.git ./lollms-webui
+        cd lollms-webui
     fi
 fi
 echo Pulling latest version...