Merge branch 'main' of https://github.com/nomic-ai/gpt4all-ui
This is a Flask web application that provides a chat UI for interacting with [llamacpp](https://github.com/ggerganov/llama.cpp) based chatbots such as [GPT4all](https://github.com/nomic-ai/gpt4all), vicuna, etc.

Follow us on our [Discord server](https://discord.gg/DZ4wsgg4).
## What is GPT4All?

GPT4All is an exceptional language model, designed and developed by Nomic-AI, a company dedicated to natural language processing. The app uses Nomic-AI's library to communicate with the GPT4All model, which runs locally on the user's PC, ensuring seamless and efficient communication.

If you are interested in learning more about this project, visit their [GitHub repository](https://github.com/nomic-ai/gpt4all), where you can find comprehensive information about the app's functionality and technical details. You can also delve deeper into the training process and database by reading their detailed [Technical report](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf).

One of the app's impressive features is that it allows users to send messages to the chatbot and receive responses in real time, ensuring a seamless user experience. Additionally, the app lets you export the entire chat history in either text or JSON format, giving users greater flexibility.

It's worth noting that the model has only recently been launched and is expected to evolve over time, becoming even better in the future. This web UI is designed to give the community easy and fully local access to a chatbot that will continue to improve and adapt over time.
# Features

- Chat with a locally hosted AI inside a web browser
- Create, edit, and share your AI's personality
- Audio in and audio out, with many options for languages and voices (only the Chrome web browser is supported at this time)
- History of discussions with resume functionality
- Add new discussions, rename discussions, remove discussions
- Export the database to JSON format
- Export a discussion to text format

## UI screenshots

### Main page

### Settings page

### Extensions page

The extensions interface is not ready yet, but once it is, anyone will be able to build their own plugins and share them with the community.

### Training page

This page is not ready yet either, but it will eventually be released to allow you to fine-tune your own model and share it if you want.

### Help

This page shows credits to the developers, how to use the app, a few FAQs, and some examples to test.
# Installation and running

To install and run the app, follow the steps for your platform below.

Make sure that your CPU supports the `AVX2` instruction set. Without it, this application won't run out of the box. To check your CPU's features, please visit your CPU manufacturer's website and look for `Instruction set extensions: AVX2`.
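On Linux, a quick way to check this from a terminal (a convenience sketch, not part of the project's scripts) is to look for the `avx2` flag in `/proc/cpuinfo`:

```bash
# Report whether the CPU advertises the avx2 flag (Linux only)
if grep -q avx2 /proc/cpuinfo; then
    echo "AVX2 is supported"
else
    echo "AVX2 not found: the prebuilt backend may not run on this CPU"
fi
```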
Hint: Scroll down for the Docker Compose setup.

## Windows 10 and 11

### Simple:

1. Download this repository as a .zip file.
2. Extract the contents into a folder.
3. Install the application by double clicking the `install.bat` file from Windows Explorer as a normal user.
4. Run the application by double clicking the `run.bat` file from Windows Explorer as a normal user to start the application.

### Advanced mode:

1. Install [git](https://git-scm.com/download/win).
2. Open Terminal/PowerShell and navigate to the folder where you want to clone this repository.
3. Clone the repository:

```bash
git clone https://github.com/nomic-ai/gpt4all-ui.git
```

4. Install the application by double clicking the `install.bat` file from Windows Explorer as a normal user.
5. Run the application by double clicking the `run.bat` file from Windows Explorer as a normal user to start the application.
## Linux

1. Open a terminal/console and install the dependencies:

`Debian-based:`
```
sudo apt install git python3 python3-venv
```
`Red Hat-based:`
```
sudo dnf install git python3
```
`Arch-based:`
```
sudo pacman -S git python3
```

2. Clone the repository and enter its directory:

```bash
git clone https://github.com/nomic-ai/gpt4all-ui.git
cd gpt4all-ui
```

3. Run the installation:

```bash
bash ./install.sh
```

4. Run the application:

```bash
bash ./run.sh
```
## MacOS

1. Open a terminal/console and install `brew`:

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

2. Install the dependencies:

```bash
brew install git python3
```

3. Clone the repository and enter its directory:

```bash
git clone https://github.com/nomic-ai/gpt4all-ui.git
cd gpt4all-ui
```

4. Run the installation:

```bash
bash ./install.sh
```

5. Run the application:

```bash
bash ./run.sh
```

On Linux/MacOS, if you have issues, more details are available [here](docs/Linux_Osx_Install.md).

These scripts will create a Python virtual environment and install the required dependencies. They will also download the models and install them.
## Docker Compose

Make sure to put the models inside the `models` directory.
After that, you can simply use docker-compose or podman-compose to build and start the application.

Build:
```bash
docker compose -f docker-compose.yml build
```

Start:
```bash
docker compose -f docker-compose.yml up
```

Stop by pressing `Ctrl + C` in the terminal running the containers.

Start detached (runs in the background):
```bash
docker compose -f docker-compose.yml up -d
```

Stop a detached run:
```bash
docker compose stop
```
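When running detached, you can still follow the application output with the standard Compose logs command (a small sketch, assuming the same compose file as above):

```bash
# Follow the logs of the services started from docker-compose.yml
docker compose -f docker-compose.yml logs -f
```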
After that, you can open the application in your browser at http://localhost:9600

Now you're ready to work!
# Usage

The simplest way to start the application on Windows:
```cmd
run.bat
```

On Linux/MacOS:
```bash
bash run.sh
```

If you want more control over the launch, see the Advanced Usage section below.

# Supported models

You can also refuse to download the model during the install procedure and download it manually.

**For now, we support ggml models that work "out-of-the-box" (tested on Windows 11 and Ubuntu 22.04.2), such as:**

- [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) or visit the [repository](https://huggingface.co/ParisNeo/GPT4All)
- [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) or visit the [repository](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit)
- [Vicuna 13B rev 1](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/resolve/main/ggml-vicuna-13b-4bit-rev1.bin) or visit the [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit)

**These models don't work "out-of-the-box" and need to be converted to the right ggml type:**

- [Vicuna 7B](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit.bin) or visit the [repository](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit)
- [Vicuna 13B q4 v0](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin) or visit the [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/)
- [Vicuna 13B q4 v1](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_1.bin) or visit the [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/)
- [ALPACA 7B](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/resolve/main/ggml-alpaca-7b-q4.bin) or visit the [repository](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/)

Just download the model into the `models` folder and start using the tool.
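For example, to fetch the GPT4ALL 7B model listed above from the command line (a convenience sketch; downloading it through your browser works just as well):

```bash
# Download the quantized GPT4ALL model into the models folder
mkdir -p models
wget -O models/gpt4all-lora-quantized-ggml.bin \
    https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin
```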
# Build custom personalities and share them

To build a new personality, create a new file named after the personality inside the `personalities` folder. You can look at the `gpt4all_chatbot.yaml` file as an example, fill in the fields with the description, conditioning, etc. of your personality, and then save the file.

You can launch the application using the personality in two ways:
- Change it permanently by putting the name of the personality inside your configuration file
- Use the `--personality` or `-p` option to give the personality name to be used, as in the sketch below
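For instance, assuming you saved a custom personality as `my_assistant.yaml` in the `personalities` folder (a hypothetical file name, used here only for illustration; `--personality` takes the personality file name, as described in the Options section):

```bash
# Launch the server with a hypothetical custom personality file
python app.py --personality my_assistant.yaml
```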
If you deem your personality worthy of sharing, you can share it by adding it to the [GPT4all personalities](https://github.com/ParisNeo/GPT4All_Personalities) repository. Just fork the repo, add your file, and open a pull request.

# Advanced Usage

If you want more control over your launch, you can activate your environment and run the Flask server directly with the following command:

```bash
python app.py [--config CONFIG] [--personality PERSONALITY] [--port PORT] [--host HOST] [--temp TEMP] [--n-predict N_PREDICT] [--top-k TOP_K] [--top-p TOP_P] [--repeat-penalty REPEAT_PENALTY] [--repeat-last-n REPEAT_LAST_N] [--ctx-size CTX_SIZE]
```

On Linux/MacOS, more details can be found [here](docs/Linux_Osx_Usage.md).
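A minimal sketch of activating the environment manually, assuming the installer created the virtual environment in the `env` directory (the same directory that `update.sh` activates):

```bash
# Activate the virtual environment created by the installer, then start the server
source env/bin/activate
python app.py
```

On Windows, the equivalent activation script is typically `env\Scripts\activate.bat`.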
## Options

* `--config`: the configuration file to be used. It contains the default configuration; script parameters override the values inside the configuration file. It must be placed in the `configs` folder (default: `default.yaml`)
* `--personality`: the personality file name. It contains the definition of the personality of the chatbot and should be placed in the `personalities` folder. The default personality is `gpt4all_chatbot.yaml`
* `--model`: the name of the model to be used. The model should be placed in the `models` folder (default: `gpt4all-lora-quantized.bin`)
* `--seed`: the random seed for reproducibility. If fixed, it is possible to reproduce the outputs exactly (default: random)
* `--port`: the port on which to run the server (default: 9600)
* `--host`: the host address at which to run the server (default: localhost). To expose the application to your local network, set this to 0.0.0.0.
* `--temp`: the sampling temperature for the model (default: 0.1)
* `--n-predict`: the number of tokens to predict at a time (default: 128)
* `--top-k`: the number of top-k candidates to consider for sampling (default: 40)
* `--repeat-last-n`: the number of tokens to use for detecting repeated n-grams (default: 64)
* `--ctx-size`: the maximum context size to use for generating responses (default: 2048)

Note: All options are optional and have default values.
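As an illustration, a launch that combines several of the options above (the values shown are the documented defaults, except for `--host`, which exposes the app to the local network):

```bash
# Serve the default model on all network interfaces, on the default port
python app.py --model gpt4all-lora-quantized.bin --host 0.0.0.0 --port 9600
```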
Once the server is running, open your web browser and navigate to http://localhost:9600 (or http://your-host-name:your-port-number if you selected different values for those) to access the chatbot UI.
# Update the application to the latest version
On Windows, run:
```cmd
update.bat
```

On Linux or macOS, run:
```bash
bash update.sh
```
# Contribute

This is an open-source project by the community and for the community. Our chatbot is a UI wrapper for Nomic AI's model, which enables natural language processing and machine learning capabilities.

We welcome contributions from anyone who is interested in improving our chatbot. Whether you want to report a bug, suggest a feature, or submit a pull request, we encourage you to get involved and help us make our chatbot even better.

We will review your pull request as soon as possible and provide feedback on any changes that may be needed.

Please note that all contributions are subject to review and approval by our project maintainers. We reserve the right to reject any contribution that does not align with our project goals or standards.
# Future Plans

Here are some of the future plans for this project:

**Enhanced control of chatbot parameters:** We plan to improve the UI of the chatbot to allow users to control parameters such as temperature and other variables. This will give users more control over the chatbot's responses and allow for a more customized experience.

**Extension system for plugins:** We are also working on an extension system that will allow developers to create plugins for the chatbot. These plugins will be able to add new features and capabilities to the chatbot and allow for greater customization of its behavior.

**Enhanced UI with themes and skins:** Additionally, we plan to enhance the UI of the chatbot to allow for themes and skins. This will let users personalize the appearance of the chatbot and make it more visually appealing.

We are excited about these future plans and look forward to implementing them in the near future. Stay tuned for updates!

# License

This project is licensed under the Apache 2.0 License. See the [LICENSE](https://github.com/nomic-ai/GPT4All-ui/blob/main/LICENSE) file for details.
Diff hunks from the other files changed in this commit:

```
@@ -82,6 +82,7 @@ if (!userAgent.match(/firefox|fxios/i)) {
return;
}
const audio_out_button = document.createElement("button");
audio_out_button.title = "Listen to message";
audio_out_button.id = "audio-out-button";
audio_out_button.classList.add("audio_btn",'bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded', "w-10", "h-10");
audio_out_button.innerHTML = "🕪";

@@ -155,6 +156,7 @@ if (!userAgent.match(/firefox|fxios/i)) {
if (!found) {
const audio_in_button = document.createElement("button");
audio_in_button.title = "Type with your voice";
audio_in_button.id = "audio_in_tool";
audio_in_button.classList.add("audio_btn");
audio_in_button.innerHTML = "🎤";
```

```
@@ -48,6 +48,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) {
const resendImg = document.createElement('img');
resendImg.src = "/static/images/refresh.png";
resendImg.classList.add('py-1', 'px-1', 'rounded', 'w-10', 'h-10');
resendButton.title = "Resend message";
resendButton.appendChild(resendImg)
resendButton.addEventListener('click', () => {
// get user input and clear input field

@@ -145,6 +146,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) {
const editImg = document.createElement('img');
editImg.src = "/static/images/edit_discussion.png";
editImg.classList.add('py-1', 'px-1', 'rounded', 'w-10', 'h-10');
editButton.title = "Edit message";
editButton.appendChild(editImg)

editButton.addEventListener('click', () => {

@@ -194,6 +196,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) {
const deleteImg = document.createElement('img');
deleteImg.src = "/static/images/delete_discussion.png";
deleteImg.classList.add('py-2', 'px-2', 'rounded', 'w-15', 'h-15');
deleteButton.title = "Delete message";
deleteButton.appendChild(deleteImg)
deleteButton.addEventListener('click', () => {
const url = `/delete_message?id=${id}`;

@@ -209,6 +212,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) {
});
const rank_up = document.createElement('button');
rank_up.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded', "w-10", "h-10");
rank_up.title = "Upvote";
rank_up.style.float = 'right'; // set the float property to right
rank_up.style.display = 'inline-block'
rank_up.innerHTML = '';

@@ -253,6 +257,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) {
const rank_down = document.createElement('button');
rank_down.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded', "w-10", "h-10");
rank_down.title = "Downvote";
rank_down.style.float = 'right'; // set the float property to right
rank_down.style.display = 'inline-block'
rank_down.innerHTML = '';
```

```
@@ -1,6 +1,6 @@
function db_export(){
const exportButton = document.getElementById('export-button');

exportButton.title = "Export database";
exportButton.addEventListener('click', () => {
const messages = Array.from(chatWindow.querySelectorAll('.message')).map(messageElement => {
const senderElement = messageElement.querySelector('.sender');
```

```
@@ -59,6 +59,7 @@ function populate_discussions_list()
renameButton.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded',"w-10","h-10");
const renameImg = document.createElement('img');
renameImg.src = "/static/images/edit_discussion.png";
renameButton.title = "Rename discussion";
renameImg.classList.add('py-2', 'px-2', 'rounded', 'w-15', 'h-15');
renameButton.appendChild(renameImg);

@@ -123,6 +124,7 @@ function populate_discussions_list()
deleteButton.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded',"w-10","h-10");
const deleteImg = document.createElement('img');
deleteImg.src = "/static/images/delete_discussion.png";
deleteButton.title = "Delete discussion";
deleteImg.classList.add('py-2', 'px-2', 'rounded', 'w-15', 'h-15');

deleteButton.addEventListener('click', () => {

@@ -155,6 +157,7 @@ function populate_discussions_list()
const discussionButton = document.createElement('button');
discussionButton.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-2', 'px-4', 'rounded', 'ml-2', 'w-full');
discussionButton.textContent = discussion.title;
discussionButton.title = "Open discussion";
discussionButton.addEventListener('click', () => {
console.log(`Showing messages for discussion ${discussion.id}`);
load_discussion(discussion);

@@ -177,7 +180,7 @@ function populate_discussions_list()
function populate_menu(){
// adding export discussion button
const exportDiscussionButton = document.querySelector('#export-discussion-button');

exportDiscussionButton.title = "Export discussion to a file";
exportDiscussionButton.addEventListener('click', () => {
fetch(`/export_discussion`)
.then(response => response.text())

@@ -201,7 +204,9 @@ function populate_menu(){
actionBtns.appendChild(exportDiscussionButton);

const newDiscussionBtn = document.querySelector('#new-discussion-btn');
newDiscussionBtn.title = "Create new discussion";
const resetDBButton = document.querySelector('#reset-discussions-btn');
resetDBButton.title = "Reset all discussions/database";
resetDBButton.addEventListener('click', () => {

});
```

```
@@ -10,7 +10,7 @@
</head>
<body class="w-screen h-500 bg-primary text-gray-400 flex flex-col bg-gray-900">
<div class="w-full h-50 border-b border-accent bg-tertiary text-2xl font-bold flex justify-between items-center px-6 py-6">
<div class="w-12 h-12"><img src="{{ url_for('static', filename='images/icon.png') }}"></div>
<div class="w-12 h-12"><a href="#main"><img src="{{ url_for('static', filename='images/icon.png') }}"></a></div>
<h1>GPT4All - WEBUI</h1>
</div>
<div class="border-b border-gray-800 content-center items-center">
```
**update.sh**:
```bash
#!/bin/sh
echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH

echo "Activate the virtual environment"
source env/bin/activate

echo "Pull latest version of the code"
git pull

echo Download latest personalities
if ! test -d ./tmp/personalities; then
    git clone https://github.com/ParisNeo/GPT4All_Personalities.git ./tmp/personalities
fi
cp ./tmp/personalities/* ./personalities/

echo "Cleaning tmp folder"
rm -rf ./tmp

pause
```