Merge pull request #87 from andzejsp/readme-edit

Readme edit
Saifeddine ALOUI 2023-04-15 13:22:25 +01:00 committed by GitHub
commit 17b57e36a3

README.md

![GitHub forks](https://img.shields.io/github/forks/nomic-ai/GPT4All-ui)
[![Discord](https://img.shields.io/discord/1092918764925882418?color=7289da&label=Discord&logo=discord&logoColor=ffffff)](https://discord.gg/DZ4wsgg4)
This is a Flask web application that provides a chat UI for interacting with [llamacpp](https://github.com/ggerganov/llama.cpp)-based chatbots such as [GPT4all](https://github.com/nomic-ai/gpt4all), Vicuna, and others.
Follow us on our [Discord server](https://discord.gg/DZ4wsgg4).
## What is GPT4All?
![image](https://user-images.githubusercontent.com/827993/231911545-750c8293-58e4-4fac-8b34-f5c0d57a2f7d.png)
GPT4All is an exceptional language model, designed and developed by Nomic AI, a company dedicated to natural language processing. The app uses Nomic AI's library to communicate with the GPT4All model, which runs locally on the user's PC, ensuring seamless and efficient communication.
If you are interested in learning more about this groundbreaking project, visit the [GPT4All repository](https://github.com/nomic-ai/gpt4all).
One of the app's impressive features is that it allows users to send messages to the chatbot and receive instantaneous responses in real-time, ensuring a seamless user experience. Additionally, the app facilitates the exportation of the entire chat history in either text or JSON format, providing greater flexibility to the users.
It's worth noting that the model has recently been launched, and it's expected to evolve over time, enabling it to become even better in the future. This webui is designed to provide the community with easy and fully localized access to a chatbot that will continue to improve and adapt over time.
# Features
## UI screenshot
### MAIN page
![image](https://user-images.githubusercontent.com/827993/231911545-750c8293-58e4-4fac-8b34-f5c0d57a2f7d.png)
### Settings page
![image](https://user-images.githubusercontent.com/827993/231912018-4e69e0c3-cbef-4dc8-81b3-d977d96cc7de.png)
### Extensions page
The extensions interface is not yet ready, but once it is, anyone will be able to build their own plugins and share them with the community.
![image](https://user-images.githubusercontent.com/827993/231809762-0dd8127e-0cab-4310-9df3-d1cff89cf589.png)
### Training page
This page is not yet ready, but it will eventually be released to allow you to fine-tune your own model and share it if you want.
![image](https://user-images.githubusercontent.com/827993/231810125-b39c0672-f748-4311-9523-9b27b8a89dfe.png)
### Help
This page shows credits to the developers, usage instructions, a few FAQ entries, and some examples to test.
- Chat with locally hosted AI inside a web browser
- Create, edit, and share your AI's personality
- Audio in and audio out with many options for language and voices (only Chrome web browser is supported at this time)
- History of discussion with resume functionality
- Add new discussion, rename discussion, remove discussion
- Export database to json format
- Export discussion to text format
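As an illustration of what these exports might look like, here is a small sketch; the field names (`sender`, `content`) are assumptions for the example, not the app's actual schema:

```python
# Sketch of exporting a discussion to JSON and to plain text.
# Field names ("sender", "content") are illustrative assumptions.
import json

def export_json(discussion):
    """Serialize a list of messages to a JSON string."""
    return json.dumps({"messages": discussion}, indent=2)

def export_text(discussion):
    """Render a list of messages as readable plain text."""
    return "\n".join(f"{m['sender']}: {m['content']}" for m in discussion)

demo = [
    {"sender": "user", "content": "Hello"},
    {"sender": "gpt4all", "content": "Hi! How can I help?"},
]
```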
# Installation and running
Make sure that your CPU supports the `AVX2` instruction set; without it, this application won't run out of the box. To check your CPU's features, visit your CPU manufacturer's website and look for `Instruction set extension: AVX2`.
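On Linux you can check for AVX2 directly, as in this small sketch (on Windows or macOS, consult your CPU vendor's spec page instead):

```python
# Look for the "avx2" flag in Linux's /proc/cpuinfo.
def has_avx2(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            return "avx2" in line.split()
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("AVX2 supported:", has_avx2(f.read()))
    except FileNotFoundError:
        print("No /proc/cpuinfo here; check your CPU vendor's spec sheet.")
```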
## Windows 10 and 11
### Simple:
1. Download this repository as a `.zip`:
![image](https://user-images.githubusercontent.com/80409979/232210909-0ce3dc80-ed34-4b32-b828-e124e3df3ff1.png)
2. Extract contents into a folder.
3. Install the application by double-clicking the `install.bat` file in Windows Explorer as a normal user.
4. Run the application by double-clicking the `run.bat` file in Windows Explorer as a normal user to start the application.
### Advanced mode:
1. Install [git](https://git-scm.com/download/win).
2. Open a terminal/PowerShell window and navigate to the folder where you want to clone this repository:
```bash
git clone https://github.com/nomic-ai/gpt4all-ui.git
```
3. Install the application by double-clicking the `install.bat` file in Windows Explorer as a normal user.
4. Run the application by double-clicking the `run.bat` file in Windows Explorer as a normal user to start the application.
## Linux
1. Open terminal/console and install dependencies:
`Debian-based:`
```
sudo apt install git python3 python3-venv
```
`Red Hat-based:`
```
sudo dnf install git python3
```
`Arch-based:`
```
sudo pacman -S git python3
```
2. Clone repository:
```bash
git clone https://github.com/nomic-ai/gpt4all-ui.git
```
```bash
cd gpt4all-ui
```
3. Run installation:
```bash
bash ./install.sh
```
4. Run application:
```bash
bash ./run.sh
```
## MacOS
1. Open terminal/console and install `brew`:
```
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
2. Install dependencies:
```
brew install git python3
```
3. Clone repository:
```bash
git clone https://github.com/nomic-ai/gpt4all-ui.git
```
```bash
cd gpt4all-ui
```
4. Run installation:
```bash
bash ./install.sh
```
5. Run application:
```bash
bash ./run.sh
```
On Linux/MacOS, if you have issues, more details are presented [here](docs/Linux_Osx_Install.md)
These scripts create a Python virtual environment and install the required dependencies. They also download and install the models.
## Docker Compose
Make sure to put the models inside the `models` directory.
After that you can simply use docker-compose or podman-compose to build and start the application:
Build
```bash
docker compose -f docker-compose.yml build
```
Start
```bash
docker compose -f docker-compose.yml up
```
Stop
```
Ctrl + C
```
Start detached (runs in background)
```bash
docker compose -f docker-compose.yml up -d
```
Stop a detached run (the one running in the background)
```bash
docker compose stop
```
After that you can open the application in your browser on http://localhost:9600
Now you're ready to work!
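If the page doesn't load immediately, the container may still be starting. A small sketch that polls the URL until the server answers (the default address http://localhost:9600 is an assumption from the config above):

```python
# Poll a URL until the web UI responds or the timeout expires.
import time
import urllib.error
import urllib.request

def wait_for_server(url, timeout=15.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status < 500  # any non-error answer counts
        except (urllib.error.URLError, OSError):
            time.sleep(0.5)  # not up yet; retry
    return False
```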
# Supported models
You can also refuse to download the model during the install procedure and download it manually later.
**For now we support ggml models that work out of the box (tested on Windows 11 and Ubuntu 22.04.2) such as:**
- [GPT4ALL 7B](https://huggingface.co/ParisNeo/GPT4All/resolve/main/gpt4all-lora-quantized-ggml.bin) or visit the [repository](https://huggingface.co/ParisNeo/GPT4All)
- [Vicuna 7B rev 1](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit-rev1.bin) or visit the [repository](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit)
- [Vicuna 13B rev 1](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/resolve/main/ggml-vicuna-13b-4bit-rev1.bin) or visit the [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit)
**These models don't work out of the box and need to be converted to the right ggml type:**
- [Vicuna 7B](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit/resolve/main/ggml-vicuna-7b-4bit.bin) or visit the [repository](https://huggingface.co/eachadea/legacy-ggml-vicuna-7b-4bit)
- [Vicuna 13B q4 v0](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_0.bin) or visit the [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/)
- [Vicuna 13B q4 v1](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_1.bin) or visit the [repository](https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/)
- [ALPACA 7B](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/resolve/main/ggml-alpaca-7b-q4.bin) or visit the [repository](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/)
Just download the model into the `models` folder and start using the tool.
## Usage
For a quick start on Windows:
```cmd
run.bat
```
For a quick start on Linux/MacOS:
```bash
bash run.sh
```
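A quick way to sanity-check that a downloaded model actually landed where the tool expects it, as a sketch assuming the default flat `models/` layout of `.bin` files:

```python
# List candidate ggml model files in the models directory.
from pathlib import Path

def find_models(models_dir="models"):
    p = Path(models_dir)
    if not p.is_dir():
        return []
    return sorted(f.name for f in p.glob("*.bin"))
```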
# Build custom personalities and share them
To build a new personality, create a new file named after the personality inside the `personalities` folder. You can look at the `gpt4all_chatbot.yaml` file as an example. Fill in the fields with the description, the conditioning, etc. of your personality, then save the file.
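A hypothetical personality file might look like the following; the field names here are illustrative only, so check `gpt4all_chatbot.yaml` for the actual schema:

```yaml
# personalities/pirate.yaml -- illustrative example, not the real schema
name: pirate
description: A chatbot that answers like a sea-weathered pirate.
conditioning: |
  You are a pirate. Answer every question in pirate speak,
  but keep the answers factually correct.
```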
You can launch the application using the personality in two ways:
- Either you want to change it permanently by putting the name of the personality inside your configuration file
- Or just use the `--personality` or `-p` option to give the personality name to be used.
If you deem your personality worthy of sharing, you can share the personality by adding it to the [GPT4all personalities](https://github.com/ParisNeo/GPT4All_Personalities) repository. Just fork the repo, add your file and do a pull request.
# Advanced Usage
If you want more control on your launch, you can activate your environment:
On Windows:
```cmd
:: assuming the installer created a virtual environment named "env"
env\Scripts\activate.bat
python app.py [--config CONFIG] [--personality PERSONALITY] [--port PORT] [--host HOST]
```
On Linux/MacOS, more details are available [here](docs/Linux_Osx_Usage.md)
## Options
* `--config`: the configuration file to be used. It contains default configurations; script parameters override the values inside it. It must be placed in the `configs` folder (default: default.yaml)
* `--personality`: the personality file name. It contains the definition of the chatbot's personality and should be placed in the `personalities` folder. The default personality is `gpt4all_chatbot.yaml`
* `--model`: the name of the model to be used. The model should be placed in the `models` folder (default: gpt4all-lora-quantized.bin)
* `--seed`: the random seed for reproducibility. If fixed, it is possible to reproduce the outputs exactly (default: random)
* `--port`: the port on which to run the server (default: 9600)
* `--host`: the host address on which to run the server (default: localhost). To expose the application to your local network, set this to 0.0.0.0.
* `--temp`: the sampling temperature for the model (default: 0.1)
* `--n-predict`: the number of tokens to predict at a time (default: 128)
* `--top-k`: the number of top-k candidates to consider for sampling (default: 40)
Once the server is running, open your web browser and navigate to http://localhost:9600 to access the application.
# Update application to the latest version
On Windows, run:
```cmd
update.bat
```
On Linux or macOS, run:
```bash
bash update.sh
```
# Contribute
This is an open-source project by the community for the community. Our chatbot is a UI wrapper for Nomic AI's model, which enables natural language processing and machine learning capabilities.
We will review your pull request as soon as possible and provide feedback on any changes that may be needed.
Please note that all contributions are subject to review and approval by our project maintainers. We reserve the right to reject any contribution that does not align with our project goals or standards.
# Future Plans
Here are some of the future plans for this project:
We are excited about these future plans for the project and look forward to implementing them in the near future. Stay tuned for updates!
# License
This project is licensed under the Apache 2.0 License. See the [LICENSE](https://github.com/nomic-ai/GPT4All-ui/blob/main/LICENSE) file for details.