From 66d4c760c8668bd6df84a440b91d14a96c50e9f4 Mon Sep 17 00:00:00 2001 From: andzejsp Date: Sat, 15 Apr 2023 14:17:02 +0300 Subject: [PATCH 1/7] Organized readme --- README.md | 216 ++++++++++++++++++++++++++++++++++++------------------ 1 file changed, 145 insertions(+), 71 deletions(-) diff --git a/README.md b/README.md index 5da8bd06..5e7bd61b 100644 --- a/README.md +++ b/README.md @@ -31,75 +31,178 @@ The extensions interface is not yet ready but once it will be, any one may build ### Training page This page is not yet ready, but it will eventually be released to allow you to fine tune your own model and share it if you want ![image](https://user-images.githubusercontent.com/827993/231810125-b39c0672-f748-4311-9523-9b27b8a89dfe.png) -### Help -This page shows credits to the developers, How to use, few FAQ, and some examples to test. -## Installation +# Features -To install the app, follow these steps: +- Chat with AI +- Create, edit, and share personality +- Audio in and audio out with many options for language and voices (Chromuim web browsers only) +- History of discussion with resume functionality +- Add new discussion, rename discussion, remove discussion +- Export database to json format +- Export discussion to text format -1. Clone the GitHub repository: +# Installation and running -``` -git clone https://github.com/nomic-ai/gpt4all-ui +Make sure that your CPU supports `AVX2` instruction set. Without it this application wont run out of the box. To check your CPU features, please visit the website of your CPU manufacturer for more information and look for `Instruction set extension: AVX2`. + +## Windows 10 and 11 + +## Noob mode + +1. Download this repo .zip: + + + +1. Install [git](https://git-scm.com/download/win). +2. Open powershell by pressing `win + R` buttons on your keyboard then write `powershell` and press `enter`. +3. Navigate to where you want to clone this repository to with `cd` command. 
For example you want to download it to drive and folder `E:\git-repos` +```bash +cd e: ``` -### Manual setup -Hint: Scroll down for docker-compose setup +```bash +cd git-repos +``` -1. Navigate to the project directory: +The command bellow will create directory `E:\git-repos\gpt4all-ui` and clone the repository there. +```bash +git clone https://github.com/nomic-ai/gpt4all-ui.git +``` + +4. Install application by double clicking on `install.bat` file from Windows explorer as normal user. +5. Run application by double clicking on `run.bat` file from Windows explorer as normal user to start the application. + +## Linux + +1. Open terminal/console and install dependencies: + +`Debian-based:` +``` +sudo apt install git python3 python3-venv +``` +`Red Hat-based:` +``` +sudo dnf install git python3 +``` +`Arch-based:` +``` +sudo pacman -S git python3 +``` + +2. Clone repository: + +```bash +git clone https://github.com/nomic-ai/gpt4all-ui.git +``` ```bash cd gpt4all-ui ``` -2. Run the appropriate installation script for your platform: - -On Windows : -```cmd -install.bat -``` -- On Linux +3. Run installation: ```bash bash ./install.sh ``` -- On Mac os +4. Run application: ```bash -bash ./install-macos.sh +bash ./run.sh +``` + +## MacOS + +1. Open terminal/console and install dependencies: + +`Brew:` +``` +$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" +``` + +``` +brew install git python3 +``` + +2. Clone repository: + +```bash +git clone https://github.com/nomic-ai/gpt4all-ui.git +``` +```bash +cd gpt4all-ui +``` + +3. Run installation: + +```bash +bash ./install.sh +``` + +4. Run application: + +```bash +bash ./run.sh ``` On Linux/MacOS, if you have issues, refer more details are presented [here](docs/Linux_Osx_Install.md) These scripts will create a Python virtual environment and install the required dependencies. It will also download the models and install them. 
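As a quick sanity check before running any of the install scripts above, you can test the AVX2 requirement stated at the top of this section. This is an illustrative POSIX-shell sketch (the `has_avx2` helper is our own naming; on Linux the flags line comes from `/proc/cpuinfo`, other platforms expose CPU features differently):

```shell
# has_avx2 FLAGS — report whether an "avx2" token appears in a CPU flags string.
# On Linux, feed it the real flags line: has_avx2 "$(grep -m1 '^flags' /proc/cpuinfo)"
has_avx2() {
  case " $* " in
    *" avx2 "*) echo "AVX2 supported" ;;
    *)          echo "AVX2 missing" ;;
  esac
}

has_avx2 "fpu vme sse4_2 avx avx2 bmi1"   # prints: AVX2 supported
```

If the check reports `AVX2 missing`, the application will not run out of the box on that machine.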
+## Docker Compose +Make sure to put models the inside the `models` directory. +After that you can simply use docker-compose or podman-compose to build and start the application: + +Build +```bash +docker compose -f docker-compose.yml build +``` + +Start +```bash +docker compose -f docker-compose.yml up +``` + +Stop +``` +Ctrl + C +``` + +Start detached (runs in background) +```bash +docker compose -f docker-compose.yml up -d +``` + +Stop detached (one that runs in background) +```bash +docker compose stop +``` + +After that you can open the application in your browser on http://localhost:9600 + + Now you're ready to work! # Supported models You can also refuse to download the model during the install procedure and download it manually. -For now we support any ggml model such as : -- [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) -- [Vicuna 7B](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit.bin) NOTE: Does not work out of the box -- [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) -- [Vicuna 13B q4 v0](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin) NOTE: Does not work out of the box -- [Vicuna 13B q4 v1](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_1.bin) NOTE: Does not work out of the box +## For now we support ggml model such as : + +- [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) +- [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) - [Vicuna 13B rev 1](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/resolve/main/ggml-vicuna-13b-4bit-rev1.bin) -- [ALPACA 7B](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/blob/main/ggml-alpaca-7b-q4.bin) NOTE: Does not work out of the box - 
Needs conversion -Just download the model into the models folder and start using the tool. -## Usage -For simple newbies on Windows: -```cmd -run.bat -``` +## These models dont work out of the box and need to be converted to the right ggml type: -For simple newbies on Linux/MacOsX: -```bash -bash run.sh -``` +- [Vicuna 7B](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit.bin) +- [Vicuna 13B q4 v0](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin) +- [Vicuna 13B q4 v1](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_1.bin) +- [ALPACA 7B](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/blob/main/ggml-alpaca-7b-q4.bin) -if you want more control on your launch, you can activate your environment: +Just download the model into the `models` folder and start using the tool. + +# Advanced Usage + +If you want more control on your launch, you can activate your environment: On Windows: ```cmd @@ -120,14 +223,13 @@ python app.py [--config CONFIG] [--personality PERSONALITY] [--port PORT] [--hos On Linux/MacOS more details are [here](docs/Linux_Osx_Usage.md) - ## Options * `--config`: the configuration file to be used. It contains default configurations to be used. The script parameters will override the configurations inside the configuration file. It must be placed in configs folder (default: default.yaml) * `--personality`: the personality file name. It contains the definition of the pezrsonality of the chatbot. It should be placed in personalities folder. The default personality is `gpt4all_chatbot.yaml` * `--model`: the name of the model to be used. The model should be placed in models folder (default: gpt4all-lora-quantized.bin) * `--seed`: the random seed for reproductibility. 
If fixed, it is possible to reproduce the outputs exactly (default: random) * `--port`: the port on which to run the server (default: 9600) -* `--host`: the host address on which to run the server (default: localhost) +* `--host`: the host address on which to run the server (default: localhost). To expose application to local network set this to 0.0.0.0. * `--temp`: the sampling temperature for the model (default: 0.1) * `--n-predict`: the number of tokens to predict at a time (default: 128) * `--top-k`: the number of top-k candidates to consider for sampling (default: 40) @@ -142,24 +244,7 @@ Once the server is running, open your web browser and navigate to http://localho Make sure to adjust the default values and descriptions of the options to match your specific application. -### Docker Compose Setup -Make sure to have the `gpt4all-lora-quantized-ggml.bin` inside the `models` directory. -After that you can simply use docker-compose or podman-compose to build and start the application: - -Build -```bash -docker-compose -f docker-compose.yml build -``` - -Start -```bash -docker-compose -f docker-compose.yml up -``` - -After that you can open the application in your browser on http://localhost:9600 - - -## Update To latest version +# Update application To latest version On windows use: ```bash @@ -170,8 +255,7 @@ On linux or macos use: bash update.sh ``` - -## Build custom personalities and share them +# Build custom personalities and share them To build a new personality, create a new file with the name of the personality inside the personalities folder. You can look at `gpt4all_chatbot.yaml` file as an example. Then you can fill the fields with the description, the conditionning etc of your personality. Then save the file. 
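To make that concrete, here is a minimal sketch of what such a personality file could look like. The field names below are illustrative assumptions, not the real schema; mirror the keys that `gpt4all_chatbot.yaml` actually uses:

```yaml
# personalities/pirate_chatbot.yaml — hypothetical example.
# Field names are assumptions; copy the real keys from gpt4all_chatbot.yaml.
name: pirate_chatbot
description: A chatbot that answers every question like an old sea captain
conditioning: |
  Act as a grizzled but helpful pirate captain.
  Stay in character no matter what the user asks.
```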
@@ -182,17 +266,7 @@ You can launch the application using the personality in two ways: If you deem your personality worthy of sharing, you can share the personality by adding it to the [GPT4all personalities](https://github.com/ParisNeo/GPT4All_Personalities) repository. Just fork the repo, add your file and do a pull request. -## Features - -- Chat with AI -- Create, edit, and share personality -- Audio in and audio out with many options for language and voices -- History of discussion with resume functionality -- Add new discussion, rename discussion, remove discussion -- Export database to json format -- Export discussion to text format - -## Contribute +# Contribute This is an open-source project by the community for the community. Our chatbot is a UI wrapper for Nomic AI's model, which enables natural language processing and machine learning capabilities. @@ -233,6 +307,6 @@ Here are some of the future plans for this project: We are excited about these future plans for the project and look forward to implementing them in the near future. Stay tuned for updates! -## License +# License This project is licensed under the Apache 2.0 License. See the [LICENSE](https://github.com/nomic-ai/GPT4All-ui/blob/main/LICENSE) file for details. From de7caeaf0b0a0feb6cc4eebc522d206e523e9ba1 Mon Sep 17 00:00:00 2001 From: Andzejs Poprockis <80409979+andzejsp@users.noreply.github.com> Date: Sat, 15 Apr 2023 14:32:07 +0300 Subject: [PATCH 2/7] Update README.md --- README.md | 36 ++++++++++++++++-------------------- 1 file changed, 16 insertions(+), 20 deletions(-) diff --git a/README.md b/README.md index 5e7bd61b..dc15ccda 100644 --- a/README.md +++ b/README.md @@ -48,24 +48,20 @@ Make sure that your CPU supports `AVX2` instruction set. Without it this applica ## Windows 10 and 11 -## Noob mode +### Simple: -1. Download this repo .zip: +1. 
Download this repository .zip: +![image](https://user-images.githubusercontent.com/80409979/232210909-0ce3dc80-ed34-4b32-b828-e124e3df3ff1.png) +2. Extract contents into a folder. +3. Install application by double clicking on `install.bat` file from Windows explorer as normal user. +4. Run application by double clicking on `run.bat` file from Windows explorer as normal user to start the application. + +### Advanced mode: 1. Install [git](https://git-scm.com/download/win). -2. Open powershell by pressing `win + R` buttons on your keyboard then write `powershell` and press `enter`. -3. Navigate to where you want to clone this repository to with `cd` command. For example you want to download it to drive and folder `E:\git-repos` -```bash -cd e: -``` - -```bash -cd git-repos -``` - -The command bellow will create directory `E:\git-repos\gpt4all-ui` and clone the repository there. +2. Open terminal/powershell and navigate to a folder you want to clone this repository. ```bash git clone https://github.com/nomic-ai/gpt4all-ui.git @@ -114,18 +110,19 @@ bash ./run.sh ## MacOS -1. Open terminal/console and install dependencies: +1. Open terminal/console and install `brew`: -`Brew:` ``` $ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" ``` +2. Install dependencies: + ``` brew install git python3 ``` -2. Clone repository: +3. Clone repository: ```bash git clone https://github.com/nomic-ai/gpt4all-ui.git @@ -134,13 +131,13 @@ git clone https://github.com/nomic-ai/gpt4all-ui.git cd gpt4all-ui ``` -3. Run installation: +4. Run installation: ```bash bash ./install.sh ``` -4. Run application: +5. Run application: ```bash bash ./run.sh @@ -180,12 +177,11 @@ docker compose stop After that you can open the application in your browser on http://localhost:9600 - Now you're ready to work! # Supported models You can also refuse to download the model during the install procedure and download it manually. 
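The manual route can be scripted. The sketch below (the `model_dest` helper is our own naming) derives the destination path under the `models/` folder from one of the download links listed below:

```shell
# model_dest URL — map a model download URL to its target path under models/.
model_dest() {
  printf 'models/%s' "${1##*/}"
}

url="https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin"
mkdir -p models
echo "would save to: $(model_dest "$url")"   # prints: would save to: models/gpt4all-lora-quantized-ggml.bin
# curl -L -o "$(model_dest "$url")" "$url"   # uncomment to perform the (large) download
```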
-## For now we support ggml model such as : +## For now we support ggml models that work out of the box (tested on Windows 11 and Ubuntu 22.04.2) such as : - [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) - [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) From b67c726823f5d53c81c354ee1bb7d0f993677c70 Mon Sep 17 00:00:00 2001 From: andzejsp Date: Sat, 15 Apr 2023 14:55:44 +0300 Subject: [PATCH 3/7] more changes and reordering --- README.md | 56 +++++++++++++++++++++---------------------------------- 1 file changed, 21 insertions(+), 35 deletions(-) diff --git a/README.md b/README.md index dc15ccda..6be7497b 100644 --- a/README.md +++ b/README.md @@ -6,11 +6,11 @@ ![GitHub forks](https://img.shields.io/github/forks/nomic-ai/GPT4All-ui) [![Discord](https://img.shields.io/discord/1092918764925882418?color=7289da&label=Discord&logo=discord&logoColor=ffffff)](https://discord.gg/DZ4wsgg4) -This is a Flask web application that provides a chat UI for interacting with llamacpp based chatbots such as GPT4all, vicuna etc... +This is a Flask web application that provides a chat UI for interacting with [llamacpp](https://github.com/ggerganov/llama.cpp) based chatbots such as [GPT4all](https://github.com/nomic-ai/gpt4all), vicuna etc... Follow us on our [Discord server](https://discord.gg/DZ4wsgg4). -## What is GPT4All ? +![image](https://user-images.githubusercontent.com/827993/231911545-750c8293-58e4-4fac-8b34-f5c0d57a2f7d.png) GPT4All is an exceptional language model, designed and developed by Nomic-AI, a proficient company dedicated to natural language processing. The app uses Nomic-AI's advanced library to communicate with the cutting-edge GPT4All model, which operates locally on the user's PC, ensuring seamless and efficient communication. 
@@ -19,24 +19,11 @@ If you are interested in learning more about this groundbreaking project, visit One of the app's impressive features is that it allows users to send messages to the chatbot and receive instantaneous responses in real-time, ensuring a seamless user experience. Additionally, the app facilitates the exportation of the entire chat history in either text or JSON format, providing greater flexibility to the users. It's worth noting that the model has recently been launched, and it's expected to evolve over time, enabling it to become even better in the future. This webui is designed to provide the community with easy and fully localized access to a chatbot that will continue to improve and adapt over time. - -## UI screenshot -### MAIN page -![image](https://user-images.githubusercontent.com/827993/231911545-750c8293-58e4-4fac-8b34-f5c0d57a2f7d.png) -### Settings page -![image](https://user-images.githubusercontent.com/827993/231912018-4e69e0c3-cbef-4dc8-81b3-d977d96cc7de.png) -### Extensions page -The extensions interface is not yet ready but once it will be, any one may build its own plugins and share them with the community. 
-![image](https://user-images.githubusercontent.com/827993/231809762-0dd8127e-0cab-4310-9df3-d1cff89cf589.png) -### Training page -This page is not yet ready, but it will eventually be released to allow you to fine tune your own model and share it if you want -![image](https://user-images.githubusercontent.com/827993/231810125-b39c0672-f748-4311-9523-9b27b8a89dfe.png) - # Features -- Chat with AI -- Create, edit, and share personality -- Audio in and audio out with many options for language and voices (Chromuim web browsers only) +- Chat with locally hosted AI inside a web browser +- Create, edit, and share your AI's personality +- Audio in and audio out with many options for language and voices (only Chrome web browser is supported at this time) - History of discussion with resume functionality - Add new discussion, rename discussion, remove discussion - Export database to json format @@ -181,13 +168,14 @@ Now you're ready to work! # Supported models You can also refuse to download the model during the install procedure and download it manually. 
-## For now we support ggml models that work out of the box (tested on Windows 11 and Ubuntu 22.04.2) such as : + +**For now we support ggml models that work out of the box (tested on Windows 11 and Ubuntu 22.04.2) such as:** - [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) - [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) - [Vicuna 13B rev 1](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/resolve/main/ggml-vicuna-13b-4bit-rev1.bin) -## These models dont work out of the box and need to be converted to the right ggml type: +**These models dont work out of the box and need to be converted to the right ggml type:** - [Vicuna 7B](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit.bin) - [Vicuna 13B q4 v0](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin) @@ -196,6 +184,16 @@ You can also refuse to download the model during the install procedure and downl Just download the model into the `models` folder and start using the tool. +# Build custom personalities and share them + +To build a new personality, create a new file with the name of the personality inside the `personalities` folder. You can look at `gpt4all_chatbot.yaml` file as an example. Then you can fill the fields with the description, the conditionning etc of your personality. Then save the file. + +You can launch the application using the personality in two ways: +- Either you want to change it permanently by putting the name of the personality inside your configuration file +- Or just use the `--personality` or `-p` option to give the personality name to be used. + +If you deem your personality worthy of sharing, you can share the personality by adding it to the [GPT4all personalities](https://github.com/ParisNeo/GPT4All_Personalities) repository. 
Just fork the repo, add your file and do a pull request. + # Advanced Usage If you want more control on your launch, you can activate your environment: @@ -242,26 +240,14 @@ Make sure to adjust the default values and descriptions of the options to match # Update application To latest version -On windows use: +On windows run: ```bash update.bat ``` -On linux or macos use: +On linux or macos run: ```bash bash update.sh ``` - -# Build custom personalities and share them - -To build a new personality, create a new file with the name of the personality inside the personalities folder. You can look at `gpt4all_chatbot.yaml` file as an example. Then you can fill the fields with the description, the conditionning etc of your personality. Then save the file. - -You can launch the application using the personality in two ways: -- Either you want to change it permanently by putting the name of the personality inside your configuration file -- Or just use the `--personality` or `-p` option to give the personality name to be used. - -If you deem your personality worthy of sharing, you can share the personality by adding it to the [GPT4all personalities](https://github.com/ParisNeo/GPT4All_Personalities) repository. Just fork the repo, add your file and do a pull request. - - # Contribute This is an open-source project by the community for the community. Our chatbot is a UI wrapper for Nomic AI's model, which enables natural language processing and machine learning capabilities. @@ -291,7 +277,7 @@ We will review your pull request as soon as possible and provide feedback on any Please note that all contributions are subject to review and approval by our project maintainers. We reserve the right to reject any contribution that does not align with our project goals or standards. 
-## Future Plans +# Future Plans Here are some of the future plans for this project: From 0d918d6fc78ca247c13789bca0c0e26c0dbe77aa Mon Sep 17 00:00:00 2001 From: andzejsp Date: Sat, 15 Apr 2023 15:09:31 +0300 Subject: [PATCH 4/7] fixed links --- README.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index 6be7497b..84ea33e0 100644 --- a/README.md +++ b/README.md @@ -171,16 +171,16 @@ You can also refuse to download the model during the install procedure and downl **For now we support ggml models that work out of the box (tested on Windows 11 and Ubuntu 22.04.2) such as:** -- [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) -- [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) -- [Vicuna 13B rev 1](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/resolve/main/ggml-vicuna-13b-4bit-rev1.bin) +- [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) or visit [repository](https://huggingface.co/ParisNeo/GPT4All) +- [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) or visit [repository](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit) +- [Vicuna 13B rev 1](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/resolve/main/ggml-vicuna-13b-4bit-rev1.bin) or visit [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit) **These models dont work out of the box and need to be converted to the right ggml type:** -- [Vicuna 7B](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit.bin) -- [Vicuna 13B q4 v0](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin) -- [Vicuna 13B q4 v1](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_1.bin) -- [ALPACA 
7B](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/blob/main/ggml-alpaca-7b-q4.bin) +- [Vicuna 7B](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit.bin) or visit [repository](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit) +- [Vicuna 13B q4 v0](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin) or visit [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/) +- [Vicuna 13B q4 v1](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_1.bin) or visit [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/) +- [ALPACA 7B](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/resolve/main/ggml-alpaca-7b-q4.bin) or visit [repository](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/) Just download the model into the `models` folder and start using the tool. From 0bccd4d3112c00b35ee5d4a4485c99a2a8c62a05 Mon Sep 17 00:00:00 2001 From: andzejsp Date: Sat, 15 Apr 2023 15:47:11 +0300 Subject: [PATCH 5/7] Fixed update script --- update.sh | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/update.sh b/update.sh index be1545e8..87c9b95f 100644 --- a/update.sh +++ b/update.sh @@ -1,4 +1,4 @@ - +#!/bin/sh echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH @@ -34,17 +34,16 @@ echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH echo HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -echo Activate the virtual environment +echo "Activate the virtual environment" source env/bin/activate -echo Pull latest version of the code +echo "Pull latest 
version of the code" git pull -echo Download latest personalities -if not exist tmp/personalities git clone https://github.com/ParisNeo/GPT4All_Personalities.git tmp/personalities -cp tmp/personalities/* personalities +if ! test -d ./tmp/personalities; then + git clone https://github.com/ParisNeo/GPT4All_Personalities.git ./tmp/personalities +fi +cp ./tmp/personalities/* ./personalities/ -echo Cleaning tmp folder +echo "Cleaning tmp folder" rm -rf ./tmp - -pause From 20fdc1aa07150bcbf3cedcda80f75c1b9390c44c Mon Sep 17 00:00:00 2001 From: andzejsp Date: Sat, 15 Apr 2023 16:57:23 +0300 Subject: [PATCH 6/7] Added titles to buttons --- static/js/audio.js | 2 ++ static/js/chat.js | 5 +++++ static/js/db_export.js | 2 +- static/js/discussions.js | 7 ++++++- templates/chat.html | 2 +- 5 files changed, 15 insertions(+), 3 deletions(-) diff --git a/static/js/audio.js b/static/js/audio.js index ae69a470..cafe7945 100644 --- a/static/js/audio.js +++ b/static/js/audio.js @@ -82,6 +82,7 @@ if (!userAgent.match(/firefox|fxios/i)) { return; } const audio_out_button = document.createElement("button"); + audio_out_button.title = "Listen to message"; audio_out_button.id = "audio-out-button"; audio_out_button.classList.add("audio_btn",'bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded', "w-10", "h-10"); audio_out_button.innerHTML = "🕪"; @@ -155,6 +156,7 @@ if (!userAgent.match(/firefox|fxios/i)) { if (!found) { const audio_in_button = document.createElement("button"); + audio_in_button.title = "Type with your voice"; audio_in_button.id = "audio_in_tool"; audio_in_button.classList.add("audio_btn"); audio_in_button.innerHTML = "🎤"; diff --git a/static/js/chat.js b/static/js/chat.js index 4dc571b2..75b203dd 100644 --- a/static/js/chat.js +++ b/static/js/chat.js @@ -48,6 +48,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) { const resendImg = document.createElement('img'); resendImg.src = "/static/images/refresh.png"; 
resendImg.classList.add('py-1', 'px-1', 'rounded', 'w-10', 'h-10'); + resendButton.title = "Resend message"; resendButton.appendChild(resendImg) resendButton.addEventListener('click', () => { // get user input and clear input field @@ -145,6 +146,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) { const editImg = document.createElement('img'); editImg.src = "/static/images/edit_discussion.png"; editImg.classList.add('py-1', 'px-1', 'rounded', 'w-10', 'h-10'); + editButton.title = "Edit message"; editButton.appendChild(editImg) editButton.addEventListener('click', () => { @@ -194,6 +196,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) { const deleteImg = document.createElement('img'); deleteImg.src = "/static/images/delete_discussion.png"; deleteImg.classList.add('py-2', 'px-2', 'rounded', 'w-15', 'h-15'); + deleteButton.title = "Delete message"; deleteButton.appendChild(deleteImg) deleteButton.addEventListener('click', () => { const url = `/delete_message?id=${id}`; @@ -209,6 +212,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) { }); const rank_up = document.createElement('button'); rank_up.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded', "w-10", "h-10"); + rank_up.title = "Upvote"; rank_up.style.float = 'right'; // set the float property to right rank_up.style.display = 'inline-block' rank_up.innerHTML = ''; @@ -253,6 +257,7 @@ function addMessage(sender, message, id, rank = 0, can_edit = false) { const rank_down = document.createElement('button'); rank_down.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded', "w-10", "h-10"); + rank_down.title = "Downvote"; rank_down.style.float = 'right'; // set the float property to right rank_down.style.display = 'inline-block' rank_down.innerHTML = ''; diff --git a/static/js/db_export.js b/static/js/db_export.js index 3c350fb2..62254580 100644 
--- a/static/js/db_export.js +++ b/static/js/db_export.js @@ -1,6 +1,6 @@ function db_export(){ const exportButton = document.getElementById('export-button'); - + exportButton.title = "Export database"; exportButton.addEventListener('click', () => { const messages = Array.from(chatWindow.querySelectorAll('.message')).map(messageElement => { const senderElement = messageElement.querySelector('.sender'); diff --git a/static/js/discussions.js b/static/js/discussions.js index b9b9c873..07368cec 100644 --- a/static/js/discussions.js +++ b/static/js/discussions.js @@ -59,6 +59,7 @@ function populate_discussions_list() renameButton.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded',"w-10","h-10"); const renameImg = document.createElement('img'); renameImg.src = "/static/images/edit_discussion.png"; + renameButton.title = "Rename discussion"; renameImg.classList.add('py-2', 'px-2', 'rounded', 'w-15', 'h-15'); renameButton.appendChild(renameImg); @@ -123,6 +124,7 @@ function populate_discussions_list() deleteButton.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-0', 'px-0', 'rounded',"w-10","h-10"); const deleteImg = document.createElement('img'); deleteImg.src = "/static/images/delete_discussion.png"; + deleteButton.title = "Delete discussion"; deleteImg.classList.add('py-2', 'px-2', 'rounded', 'w-15', 'h-15'); deleteButton.addEventListener('click', () => { @@ -155,6 +157,7 @@ function populate_discussions_list() const discussionButton = document.createElement('button'); discussionButton.classList.add('bg-green-500', 'hover:bg-green-700', 'text-white', 'font-bold', 'py-2', 'px-4', 'rounded', 'ml-2', 'w-full'); discussionButton.textContent = discussion.title; + discussionButton.title = "Open discussion"; discussionButton.addEventListener('click', () => { console.log(`Showing messages for discussion ${discussion.id}`); load_discussion(discussion); @@ -177,7 +180,7 @@ function 
populate_discussions_list() function populate_menu(){ // adding export discussion button const exportDiscussionButton = document.querySelector('#export-discussion-button'); - + exportDiscussionButton.title = "Export discussion to a file"; exportDiscussionButton.addEventListener('click', () => { fetch(`/export_discussion`) .then(response => response.text()) @@ -201,7 +204,9 @@ function populate_menu(){ actionBtns.appendChild(exportDiscussionButton); const newDiscussionBtn = document.querySelector('#new-discussion-btn'); + newDiscussionBtn.title = "Create new discussion"; const resetDBButton = document.querySelector('#reset-discussions-btn'); + resetDBButton.title = "Reset all discussions/database"; resetDBButton.addEventListener('click', () => { }); diff --git a/templates/chat.html b/templates/chat.html index 07a9c14f..11c47637 100644 --- a/templates/chat.html +++ b/templates/chat.html @@ -10,7 +10,7 @@
-
+

GPT4All - WEBUI

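The JavaScript hunks in this patch all apply one small pattern: assigning a string to a DOM element's `title` property, which browsers render as a native hover tooltip. A minimal sketch of that pattern (`addTooltip` is a hypothetical helper name, and a plain object stands in for a DOM node so the snippet runs outside a browser):

```javascript
// Browsers render an element's `title` attribute as a hover tooltip,
// so a single assignment is enough -- no extra markup or CSS needed.
// `addTooltip` is a hypothetical helper, not part of this repo.
function addTooltip(element, text) {
  element.title = text;
  return element;
}

// Stand-in for e.g. document.getElementById('export-button'):
const exportButton = { id: 'export-button' };
addTooltip(exportButton, 'Export database');
console.log(exportButton.title); // "Export database"
```

In the browser, the same call would target the real buttons, e.g. `addTooltip(document.querySelector('#export-discussion-button'), 'Export discussion to a file')`.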
From f78304a636946ccfbdbc6e9e23bb1871abceca0a Mon Sep 17 00:00:00 2001 From: Bill Rix <111203201+bill-rix@users.noreply.github.com> Date: Sat, 15 Apr 2023 07:28:27 -0700 Subject: [PATCH 7/7] Update README.md Some spelling and grammar fixes. --- README.md | 50 +++++++++++++++++++++++++------------------------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/README.md b/README.md index 84ea33e0..0b74533a 100644 --- a/README.md +++ b/README.md @@ -16,9 +16,9 @@ GPT4All is an exceptional language model, designed and developed by Nomic-AI, a If you are interested in learning more about this groundbreaking project, visit their [Github repository](https://github.com/nomic-ai/gpt4all), where you can find comprehensive information regarding the app's functionalities and technical details. Moreover, you can delve deeper into the training process and database by going through their detailed Technical report, available for download at [Technical report](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf). -One of the app's impressive features is that it allows users to send messages to the chatbot and receive instantaneous responses in real-time, ensuring a seamless user experience. Additionally, the app facilitates the exportation of the entire chat history in either text or JSON format, providing greater flexibility to the users. +One of the app's impressive features is that it allows users to send messages to the chatbot and receive instantaneous responses in real time, ensuring a seamless user experience. Additionally, the app facilitates the exportation of the entire chat history in either text or JSON format, providing greater flexibility to the users. -It's worth noting that the model has recently been launched, and it's expected to evolve over time, enabling it to become even better in the future. 
This webui is designed to provide the community with easy and fully localized access to a chatbot that will continue to improve and adapt over time. +It's worth noting that the model has recently been launched, and it's expected to evolve over time, enabling it to become even better in the future. This web UI is designed to provide the community with easy and fully localized access to a chatbot that will continue to improve and adapt over time. # Features - Chat with locally hosted AI inside a web browser @@ -31,7 +31,7 @@ It's worth noting that the model has recently been launched, and it's expected t # Installation and running -Make sure that your CPU supports `AVX2` instruction set. Without it this application wont run out of the box. To check your CPU features, please visit the website of your CPU manufacturer for more information and look for `Instruction set extension: AVX2`. +Make sure that your CPU supports the `AVX2` instruction set. Without it, this application won't run out of the box. To check your CPU features, please visit the website of your CPU manufacturer for more information and look for `Instruction set extension: AVX2`. ## Windows 10 and 11 @@ -42,13 +42,13 @@ Make sure that your CPU supports `AVX2` instruction set. Without it this applica ![image](https://user-images.githubusercontent.com/80409979/232210909-0ce3dc80-ed34-4b32-b828-e124e3df3ff1.png) 2. Extract contents into a folder. -3. Install application by double clicking on `install.bat` file from Windows explorer as normal user. -4. Run application by double clicking on `run.bat` file from Windows explorer as normal user to start the application. +3. Install the application by double-clicking the `install.bat` file from Windows Explorer as a normal user. +4. Run the application by double-clicking the `run.bat` file from Windows Explorer as a normal user. ### Advanced mode: 1. Install [git](https://git-scm.com/download/win). -2.
Open terminal/powershell and navigate to a folder you want to clone this repository. +2. Open Terminal/PowerShell and navigate to the folder where you want to clone this repository. ```bash git clone https://github.com/nomic-ai/gpt4all-ui.git @@ -130,12 +130,12 @@ bash ./install.sh bash ./run.sh ``` -On Linux/MacOS, if you have issues, refer more details are presented [here](docs/Linux_Osx_Install.md) +On Linux/MacOS, if you have issues, refer to the details presented [here](docs/Linux_Osx_Install.md) These scripts will create a Python virtual environment and install the required dependencies. They will also download and install the models. ## Docker Compose Make sure to put the models inside the `models` directory. -After that you can simply use docker-compose or podman-compose to build and start the application: +After that, you can simply use docker-compose or podman-compose to build and start the application: Build ```bash @@ -162,20 +162,20 @@ Stop detached (one that runs in background) docker compose stop ``` -After that you can open the application in your browser on http://localhost:9600 +After that, you can open the application in your browser on http://localhost:9600 Now you're ready to work! # Supported models You can also refuse to download the model during the install procedure and download it manually.
-**For now we support ggml models that work out of the box (tested on Windows 11 and Ubuntu 22.04.2) such as:** +**For now, we support ggml models that work "out-of-the-box" (tested on Windows 11 and Ubuntu 22.04.2), such as:** - [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) or visit [repository](https://huggingface.co/ParisNeo/GPT4All) - [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) or visit [repository](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit) - [Vicuna 13B rev 1](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/resolve/main/ggml-vicuna-13b-4bit-rev1.bin) or visit [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit) -**These models dont work out of the box and need to be converted to the right ggml type:** +**These models don't work "out-of-the-box" and need to be converted to the right ggml type:** - [Vicuna 7B](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit.bin) or visit [repository](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit) - [Vicuna 13B q4 v0](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin) or visit [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/) @@ -186,13 +186,13 @@ Just download the model into the `models` folder and start using the tool. # Build custom personalities and share them -To build a new personality, create a new file with the name of the personality inside the `personalities` folder. You can look at `gpt4all_chatbot.yaml` file as an example. Then you can fill the fields with the description, the conditionning etc of your personality. Then save the file. +To build a new personality, create a new file with the name of the personality inside the `personalities` folder. You can look at the `gpt4all_chatbot.yaml` file as an example.
Then you can fill the fields with the description, conditioning, etc. of your personality. Then save the file. You can launch the application using the personality in two ways: -- Either you want to change it permanently by putting the name of the personality inside your configuration file -- Or just use the `--personality` or `-p` option to give the personality name to be used. +- Change it permanently by putting the name of the personality inside your configuration file +- Use the `--personality` or `-p` option to give the personality name to be used -If you deem your personality worthy of sharing, you can share the personality by adding it to the [GPT4all personalities](https://github.com/ParisNeo/GPT4All_Personalities) repository. Just fork the repo, add your file and do a pull request. +If you deem your personality worthy of sharing, you can share it by adding it to the [GPT4all personalities](https://github.com/ParisNeo/GPT4All_Personalities) repository. Just fork the repo, add your file, and do a pull request. # Advanced Usage @@ -215,15 +215,15 @@ To run the Flask server, execute the following command: python app.py [--config CONFIG] [--personality PERSONALITY] [--port PORT] [--host HOST] [--temp TEMP] [--n-predict N_PREDICT] [--top-k TOP_K] [--top-p TOP_P] [--repeat-penalty REPEAT_PENALTY] [--repeat-last-n REPEAT_LAST_N] [--ctx-size CTX_SIZE] ``` -On Linux/MacOS more details are [here](docs/Linux_Osx_Usage.md) +On Linux/MacOS more details can be found [here](docs/Linux_Osx_Usage.md) ## Options -* `--config`: the configuration file to be used. It contains default configurations to be used. The script parameters will override the configurations inside the configuration file. It must be placed in configs folder (default: default.yaml) -* `--personality`: the personality file name. It contains the definition of the pezrsonality of the chatbot. It should be placed in personalities folder.
The default personality is `gpt4all_chatbot.yaml` +* `--config`: the configuration file to be used. It contains default configurations. The script parameters will override the configurations inside the configuration file. It must be placed in the configs folder (default: default.yaml) +* `--personality`: the personality file name. It contains the definition of the personality of the chatbot and should be placed in the personalities folder. The default personality is `gpt4all_chatbot.yaml` * `--model`: the name of the model to be used. The model should be placed in models folder (default: gpt4all-lora-quantized.bin) * `--seed`: the random seed for reproducibility. If fixed, it is possible to reproduce the outputs exactly (default: random) * `--port`: the port on which to run the server (default: 9600) -* `--host`: the host address on which to run the server (default: localhost). To expose application to local network set this to 0.0.0.0. +* `--host`: the host address at which to run the server (default: localhost). To expose the application to the local network, set this to 0.0.0.0. * `--temp`: the sampling temperature for the model (default: 0.1) * `--n-predict`: the number of tokens to predict at a time (default: 128) * `--top-k`: the number of top-k candidates to consider for sampling (default: 40) @@ -232,7 +232,7 @@ On Linux/MacOS more details are [here](docs/Linux_Osx_Usage.md) * `--repeat-last-n`: the number of tokens to use for detecting repeated n-grams (default: 64) * `--ctx-size`: the maximum context size to use for generating responses (default: 2048) -Note: All options are optional, and have default values. +Note: All options are optional and have default values. Once the server is running, open your web browser and navigate to http://localhost:9600 (or http://your host name:your port number if you have selected different values for those) to access the chatbot UI. To use the app, open a web browser and navigate to this URL.
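To make the personality workflow described above concrete, here is a sketch of what a minimal personality file might look like. The field names are guesses inferred from the README's wording (description, conditioning), not a verified schema; the shipped `gpt4all_chatbot.yaml` is the authoritative example:

```yaml
# personalities/pirate.yaml -- hypothetical example; field names are
# inferred from the README, so check gpt4all_chatbot.yaml for the real schema.
name: pirate
description: Answers every question in the voice of a sea captain.
conditioning: >
  You are a salty pirate captain. Answer the user's questions accurately,
  but phrase every reply in pirate slang.
```

Such a file could then be selected for a single run with the `--personality` option described in the options list, or made the default through the configuration file.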
@@ -240,17 +240,17 @@ Make sure to adjust the default values and descriptions of the options to match # Update application To latest version -On windows run: +On Windows, run: ```bash update.bat ``` -On linux or macos run: +On Linux or OS X, run: ```bash bash update.sh ``` # Contribute -This is an open-source project by the community for the community. Our chatbot is a UI wrapper for Nomic AI's model, which enables natural language processing and machine learning capabilities. +This is an open-source project by the community and for the community. Our chatbot is a UI wrapper for Nomic AI's model, which enables natural language processing and machine learning capabilities. We welcome contributions from anyone who is interested in improving our chatbot. Whether you want to report a bug, suggest a feature, or submit a pull request, we encourage you to get involved and help us make our chatbot even better. @@ -281,11 +281,11 @@ Please note that all contributions are subject to review and approval by our pro Here are some of the future plans for this project: -**Enhanced control of chatbot parameters:** We plan to improve the user interface (UI) of the chatbot to allow users to control the parameters of the chatbot such as temperature and other variables. This will give users more control over the chatbot's responses, and allow for a more customized experience. +**Enhanced control of chatbot parameters:** We plan to improve the UI of the chatbot to allow users to control the parameters of the chatbot such as temperature and other variables. This will give users more control over the chatbot's responses, and allow for a more customized experience. **Extension system for plugins:** We are also working on an extension system that will allow developers to create plugins for the chatbot. These plugins will be able to add new features and capabilities to the chatbot, and allow for greater customization of the chatbot's behavior. 
-**Enhanced UI with themes and skins:** Additionally, we plan to enhance the user interface of the chatbot to allow for themes and skins. This will allow users to personalize the appearance of the chatbot, and make it more visually appealing. +**Enhanced UI with themes and skins:** Additionally, we plan to enhance the UI of the chatbot to allow for themes and skins. This will allow users to personalize the appearance of the chatbot and make it more visually appealing. We are excited about these future plans for the project and look forward to implementing them in the near future. Stay tuned for updates!