Gpt4All Web UI


This is a Flask web application that provides a chat UI for interacting with llama.cpp-based chatbots such as GPT4All, Vicuna, etc.

Follow us on our Discord server.

What is GPT4All?

GPT4All is a language model designed and developed by Nomic AI, a company specializing in natural language processing. The app uses Nomic AI's library to communicate with the GPT4All model, which runs locally on the user's PC.

If you are interested in learning more about this project, visit their GitHub repository, where you can find detailed information about the app's functionality and technical details. You can also learn more about the training process and dataset in their technical report.

One of the app's key features is that users can send messages to the chatbot and receive responses in real time. The app also lets you export the entire chat history in either text or JSON format.

The model was launched recently and is expected to evolve over time. This web UI is designed to give the community easy, fully local access to a chatbot that will continue to improve and adapt.

UI screenshots

Main page


Settings page


Extensions page

The extensions interface is not ready yet, but once it is, anyone will be able to build their own plugins and share them with the community.

Training page

This page is not ready yet, but it will eventually be released to let you fine-tune your own model and share it if you want.

Help

This page shows credits to the developers, usage instructions, a few FAQ entries, and some examples to try.

Installation

To install the app, follow these steps:

  1. Clone the GitHub repository:
git clone https://github.com/nomic-ai/gpt4all-ui

Manual setup

Hint: Scroll down for docker-compose setup

  2. Navigate to the project directory:
cd gpt4all-ui
  3. Run the appropriate installation script for your platform:

  • On Windows:
install.bat
  • On Linux:
bash ./install.sh
  • On macOS:
bash ./install-macos.sh

On Linux/macOS, if you have issues, more details are presented here. These scripts will create a Python virtual environment and install the required dependencies. They will also download and install the models.

Now you're ready to work!

Supported models

You can also skip downloading the model during the install procedure and download it manually. For now, we support any ggml model.

Just download the model into the models folder and start using the tool.
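
For example, a minimal sketch of placing a manually downloaded model, assuming a Linux/macOS shell; the URL and file names below are placeholders, not official download locations:

```bash
# Placeholders only: substitute the actual download URL and file name
# of the ggml model you want to use.
cd gpt4all-ui
wget -O models/gpt4all-lora-quantized-ggml.bin "https://example.com/your-ggml-model.bin"

# Or, if the file is already on your disk:
# cp ~/Downloads/your-ggml-model.bin models/
```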

Usage

For beginners on Windows:

run.bat

For beginners on Linux/macOS:

bash run.sh

If you want more control over the launch, you can activate your environment first:

On Windows:

env/Scripts/activate.bat

On Linux/macOS:

source venv/bin/activate

Now you are ready to customize your Bot.

To run the Flask server, execute the following command:

python app.py [--config CONFIG] [--personality PERSONALITY] [--port PORT] [--host HOST] [--temp TEMP] [--n-predict N_PREDICT] [--top-k TOP_K] [--top-p TOP_P] [--repeat-penalty REPEAT_PENALTY] [--repeat-last-n REPEAT_LAST_N] [--ctx-size CTX_SIZE]

On Linux/macOS, more details are available here.

Options

  • --config: the configuration file to use. It contains the default settings; command-line parameters override the values in this file. It must be placed in the configs folder (default: default.yaml)
  • --personality: the personality file name. It contains the definition of the chatbot's personality and should be placed in the personalities folder. The default personality is gpt4all_chatbot.yaml
  • --model: the name of the model to use. The model should be placed in the models folder (default: gpt4all-lora-quantized.bin)
  • --seed: the random seed, for reproducibility. If fixed, it is possible to reproduce the outputs exactly (default: random)
  • --port: the port on which to run the server (default: 9600)
  • --host: the host address on which to run the server (default: localhost)
  • --temp: the sampling temperature for the model (default: 0.1)
  • --n-predict: the number of tokens to predict at a time (default: 128)
  • --top-k: the number of top-k candidates to consider for sampling (default: 40)
  • --top-p: the cumulative probability threshold for top-p sampling (default: 0.90)
  • --repeat-penalty: the penalty to apply for repeated n-grams (default: 1.3)
  • --repeat-last-n: the number of tokens to use for detecting repeated n-grams (default: 64)
  • --ctx-size: the maximum context size to use for generating responses (default: 2048)

Note: All options are optional, and have default values.
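
For example, a hypothetical launch that simply restates a few of the defaults above (every flag shown is optional):

```bash
# Illustrative only: these values repeat the documented defaults.
python app.py --config default.yaml \
              --personality gpt4all_chatbot.yaml \
              --model gpt4all-lora-quantized.bin \
              --host localhost --port 9600 \
              --temp 0.1 --top-k 40 --top-p 0.90 \
              --repeat-penalty 1.3 --repeat-last-n 64 \
              --ctx-size 2048
```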

Once the server is running, open your web browser and navigate to http://localhost:9600 (or http://<host>:<port> if you selected different values) to access the chatbot UI.
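
Optionally, you can check from a terminal that the server is reachable (standard curl, nothing project-specific):

```bash
# Prints the HTTP response headers if the server is up.
curl -I http://localhost:9600
```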

Adjust the option values to match your specific needs.

Docker Compose Setup

Make sure you have the gpt4all-lora-quantized-ggml.bin file inside the models directory. After that, you can simply use docker-compose or podman-compose to build and start the application:

Build

docker-compose -f docker-compose.yml build

Start

docker-compose -f docker-compose.yml up

After that, you can open the application in your browser at http://localhost:9600
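
If you prefer a single step, the standard docker-compose flags below combine build and start and run the stack in the background (a sketch using generic docker-compose options, not anything specific to this project):

```bash
# Build (if needed) and start in detached mode.
docker-compose -f docker-compose.yml up --build -d

# Follow the logs, then stop everything when you are done.
docker-compose -f docker-compose.yml logs -f
docker-compose -f docker-compose.yml down
```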

Update to the latest version

On Windows, use:

update.bat

On Linux or macOS, use:

bash update.sh

Build custom personalities and share them

To build a new personality, create a new file named after the personality inside the personalities folder. You can look at the gpt4all_chatbot.yaml file as an example. Fill in the fields with your personality's description, conditioning, etc., then save the file.
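
A minimal sketch of that workflow, starting from the bundled example; my_assistant.yaml is a hypothetical name, and the actual field names are the ones you find in gpt4all_chatbot.yaml:

```bash
# Start from the bundled example personality and give it your own name.
cp personalities/gpt4all_chatbot.yaml personalities/my_assistant.yaml

# Edit personalities/my_assistant.yaml and adjust its fields
# (description, conditioning text, etc.), then launch with it:
python app.py --personality my_assistant.yaml
```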

You can launch the application using the personality in two ways:

  • Change it permanently by putting the name of the personality in your configuration file, or
  • Use the --personality or -p option to pass the personality name at launch.

If you deem your personality worthy of sharing, you can add it to the GPT4All personalities repository: just fork the repo, add your file, and open a pull request.
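
The sharing workflow itself is plain git; a sketch, where <your-fork-url> stands in for your fork of the personalities repository and my_assistant.yaml for your personality file:

```bash
# Replace <your-fork-url> and the file name with your own values.
git clone <your-fork-url> personalities-fork
cd personalities-fork
cp ../gpt4all-ui/personalities/my_assistant.yaml .
git checkout -b add-my-assistant
git add my_assistant.yaml
git commit -m "Add my_assistant personality"
git push origin add-my-assistant
# Then open a pull request from your fork on GitHub.
```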

Features

  • Chat with AI
  • Create, edit, and share personalities
  • Audio in and audio out with many options for languages and voices
  • Discussion history with resume functionality
  • Add, rename, and remove discussions
  • Export the database to JSON format
  • Export discussions to text format

Contribute

This is an open-source project by the community for the community. Our chatbot is a UI wrapper for Nomic AI's model, which enables natural language processing and machine learning capabilities.

We welcome contributions from anyone who is interested in improving our chatbot. Whether you want to report a bug, suggest a feature, or submit a pull request, we encourage you to get involved and help us make our chatbot even better.

Before contributing, please take a moment to review our code of conduct. We expect all contributors to abide by this code of conduct, which outlines our expectations for respectful communication, collaborative development, and innovative contributions.

Reporting Bugs

If you find a bug or other issue with our chatbot, please report it by opening an issue. Be sure to provide as much detail as possible, including steps to reproduce the issue and any relevant error messages.

Suggesting Features

If you have an idea for a new feature or improvement to our chatbot, we encourage you to open an issue to discuss it. We welcome feedback and ideas from the community and will consider all suggestions that align with our project goals.

Contributing Code

If you want to contribute code to our chatbot, please follow these steps:

  1. Fork the repository and create a new branch for your changes.
  2. Make your changes and ensure that they follow our coding conventions.
  3. Test your changes to ensure that they work as expected.
  4. Submit a pull request with a clear description of your changes and the problem they solve.

We will review your pull request as soon as possible and provide feedback on any necessary changes. We appreciate your contributions and look forward to working with you!

Please note that all contributions are subject to review and approval by our project maintainers. We reserve the right to reject any contribution that does not align with our project goals or standards.

Future Plans

Here are some of the future plans for this project:

Enhanced control of chatbot parameters: We plan to improve the user interface (UI) of the chatbot to allow users to control the parameters of the chatbot such as temperature and other variables. This will give users more control over the chatbot's responses, and allow for a more customized experience.

Extension system for plugins: We are also working on an extension system that will allow developers to create plugins for the chatbot. These plugins will be able to add new features and capabilities to the chatbot, and allow for greater customization of the chatbot's behavior.

Enhanced UI with themes and skins: Additionally, we plan to enhance the user interface of the chatbot to allow for themes and skins. This will allow users to personalize the appearance of the chatbot, and make it more visually appealing.

We are excited about these future plans for the project and look forward to implementing them in the near future. Stay tuned for updates!

License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.