Create README

Create README.md

upgraded readme

upgraded

upgraded

Added an example of a console chat

Upgraded code

upgraded submodule

upgraded example

upgraded console application

Added some logo and readme

upgraded

upgraded

updated

updated

changed logo

upgraded

upgrade

upgraded

upgraded

upgraded version

Added console app

upgraded code and service information

changed documentation title

upgraded code

updated zoo

Upgraded logo

upgraded

Update server_endpoints.md

Update README.md

Update server_endpoints.md

Enhanced code

enhanced work + added training

fixed error in README

upgraded readme

Fixed console problem

enhanced code

Added reference to models

upgraded version

Update README.md

upgraded binding

Update README.md

enhanced server

upgraded console and server

upgraded tool

upgraded

upgraded

Upgraded to new Version

enhanced

updated personalities zoo

personalities_zoo

upgraded readme

Possibility to send files to personalities

Possibility to send files to personalities

upgraded code

bugfix

updated

upgraded

upgraded console

updated readme

version upgrade

Update README.md

Added menu build at startup

change

upgraded code

now you select a personality if not selected

upgraded

upgraded documentation

upgraded documentation

updated

Upgraded

bugfix

now you can build custom personalities

updated. now we can use other personalities

Bugfix

added return

changed colors

added protection

added back to personality installation

bugfix

typo

fixed autogptq

fixed autogptq

gptq

upgraded gptq

changed version

upgraded console

typo

Added send file

updated send file

upgraded personality

upgraded image analysis tool

updated

upgraded version

upgraded tool

updated

gpt4all is now working

version update

upgraded naming scheme

hapen

Upgraded path data

upgraded version

updated

upgraded version

upgraded install procedures

personal path can be changed online

upgraded chatgpt

upgraded

upgraded

updated version

bugfix

upgraded personalities

upgraded version

enhanced

enhanced

update

bugfix

version update

Added reset functionality

Added settings

upgraded

enhanced library

upgraded models

Upgraded

upgraded

rebased

upgraded code

fixed gpt4all

updated version
This commit is contained in:
Saifeddine ALOUI 2023-06-02 12:46:41 +02:00
parent a65750e5fc
commit 61a4f15109
53 changed files with 16863 additions and 0 deletions

21
.gitignore vendored

@ -158,3 +158,24 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Custom stuff
.installed
shared/*
*.ckpt
*.safetensors
models
# rest tests
*.http
# shared resources
shared
src
temp
outputs
# Global path configuration
global_paths_cfg.yaml

16
.gitmodules vendored Normal file

@ -0,0 +1,16 @@
[submodule "lollms/bindings_zoo"]
path = lollms/bindings_zoo
url = https://github.com/ParisNeo/lollms_bindings_zoo.git
branch = main
[submodule "lollms/personalities_zoo"]
path = lollms/personalities_zoo
url = https://github.com/ParisNeo/lollms_personalities_zoo.git
branch = main
[submodule "lollms/bindings_zoo"]
path = lollms/bindings_zoo
url = https://github.com/ParisNeo/lollms_bindings_zoo.git
branch = main
[submodule "lollms/personalities_zoo"]
path = lollms/personalities_zoo
url = https://github.com/ParisNeo/lollms_personalities_zoo.git
branch = main

3
.vscode/settings.json vendored Normal file

@ -0,0 +1,3 @@
{
"ros.distro": "noetic"
}

9
MANIFEST.in Normal file

@ -0,0 +1,9 @@
recursive-include lollms/configs *
recursive-include lollms/bindings_zoo *
recursive-include lollms/personalities_zoo *
global-exclude *.bin
global-exclude *.pyc
global-exclude local_config.yaml
global-exclude .installed
global-exclude .git
global-exclude .gitignore

273
README.md Normal file

@ -0,0 +1,273 @@
# Lord of Large Language Models (LoLLMs)
<div align="center">
<img src="https://github.com/ParisNeo/lollms/blob/main/lollms/assets/logo.png" alt="Logo" width="200" height="200">
</div>
![GitHub license](https://img.shields.io/github/license/ParisNeo/lollms)
![GitHub issues](https://img.shields.io/github/issues/ParisNeo/lollms)
![GitHub stars](https://img.shields.io/github/stars/ParisNeo/lollms)
![GitHub forks](https://img.shields.io/github/forks/ParisNeo/lollms)
[![Discord](https://img.shields.io/discord/1092918764925882418?color=7289da&label=Discord&logo=discord&logoColor=ffffff)](https://discord.gg/4rR282WJb6)
[![Follow me on Twitter](https://img.shields.io/twitter/follow/SpaceNerduino?style=social)](https://twitter.com/SpaceNerduino)
[![Follow Me on YouTube](https://img.shields.io/badge/Follow%20Me%20on-YouTube-red?style=flat&logo=youtube)](https://www.youtube.com/user/Parisneo)
Lord of Large Language Models (LoLLMs) Server is a text generation server based on large language models. It provides a Flask-based API for generating text using various pre-trained language models. This server is designed to be easy to install and use, allowing developers to integrate powerful text generation capabilities into their applications.
## Features
- Fully integrated library with access to bindings, personalities and helper tools.
- Generate text using large language models.
- Supports multiple personalities for generating text with different styles and tones.
- Real-time text generation with WebSocket-based communication.
- RESTful API for listing personalities and adding new personalities.
- Easy integration with various applications and frameworks.
- Possibility to send files to personalities
## Installation
You can install LoLLMs using pip, the Python package manager. Open your terminal or command prompt and run the following command:
```bash
pip install --upgrade lollms
```
Or, if you want the latest version from Git:
```bash
pip install --upgrade git+https://github.com/ParisNeo/lollms.git
```
To configure your environment, simply run the console app:
```bash
lollms-console
```
The first time you run it, you will be prompted to select a binding.
![image](https://github.com/ParisNeo/lollms/assets/827993/2d7f58fe-089d-4d3e-a21a-0609f8e27969)
Once the binding is selected, you have to install at least one model. You have two options:
1- Install from the internet: just give the link to a model on Hugging Face. For example, if you select the default llamacpp python binding (7), you can install this model:
```bash
https://huggingface.co/TheBloke/airoboros-7b-gpt4-GGML/resolve/main/airoboros-7b-gpt4.ggmlv3.q4_1.bin
```
2- Install from a local drive: just give the path to a model on your PC. The model will not be copied; we only create a reference to it. This is useful if you use multiple clients, so you can share your models with other tools.
Now you are ready to use the server.
## Library example
Here is the smallest possible example that lets you use the full potential of the tool with nearly no code:
```python
from lollms.console import Conversation
cv = Conversation(None)
cv.start_conversation()
```
Now you can reimplement the start_conversation method to do the things you want:
```python
from lollms.console import Conversation

class MyConversation(Conversation):
    def __init__(self, cfg=None):
        super().__init__(cfg, show_welcome_message=False)

    def start_conversation(self):
        prompt = "Once upon a time"

        def callback(text, type=None):
            print(text, end="", flush=True)
            return True

        print(prompt, end="", flush=True)
        output = self.safe_generate(prompt, callback=callback)

if __name__ == '__main__':
    cv = MyConversation()
    cv.start_conversation()
```
Or, if you prefer, here is a complete conversation tool written in a few lines:
```python
from lollms.console import Conversation

class MyConversation(Conversation):
    def __init__(self, cfg=None):
        super().__init__(cfg, show_welcome_message=False)

    def start_conversation(self):
        full_discussion = ""
        while True:
            prompt = input("You: ")
            if prompt == "exit":
                return
            if prompt == "menu":
                self.menu.main_menu()
            full_discussion += self.personality.user_message_prefix + prompt + self.personality.link_text
            full_discussion += self.personality.ai_message_prefix

            def callback(text, type=None):
                print(text, end="", flush=True)
                return True

            print(self.personality.name + ": ", end="", flush=True)
            output = self.safe_generate(full_discussion, callback=callback)
            full_discussion += output.strip() + self.personality.link_text
            print()

if __name__ == '__main__':
    cv = MyConversation()
    cv.start_conversation()
```
Here we use the safe_generate method, which does all the context cropping for you, so you can chat forever and never run out of context.
## Socket IO Server Usage
Once installed, you can start the LoLLMs Server using the `lollms-server` command followed by the desired parameters.
```
lollms-server --host <host> --port <port> --config <config_file> --bindings_path <bindings_path> --personalities_path <personalities_path> --models_path <models_path> --binding_name <binding_name> --model_name <model_name> --personality_full_name <personality_full_name>
```
### Parameters
- `--host`: The hostname or IP address to bind the server (default: localhost).
- `--port`: The port number to run the server (default: 9600).
- `--config`: Path to the configuration file (default: None).
- `--bindings_path`: The path to the Bindings folder (default: "./bindings_zoo").
- `--personalities_path`: The path to the personalities folder (default: "./personalities_zoo").
- `--models_path`: The path to the models folder (default: "./models").
- `--binding_name`: The default binding to be used (default: "llama_cpp_official").
- `--model_name`: The default model name (default: "Manticore-13B.ggmlv3.q4_0.bin").
- `--personality_full_name`: The full name of the default personality (default: "personality").
### Examples
Start the server with default settings:
```
lollms-server
```
Start the server on a specific host and port:
```
lollms-server --host 0.0.0.0 --port 5000
```
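You can also combine these parameters. The following sketch simply spells out the documented defaults explicitly; replace the binding and model names with whatever you actually have installed:
```
lollms-server --host localhost --port 9600 --binding_name llama_cpp_official --model_name Manticore-13B.ggmlv3.q4_0.bin
```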
## API Endpoints
### WebSocket Events
- `connect`: Triggered when a client connects to the server.
- `disconnect`: Triggered when a client disconnects from the server.
- `list_personalities`: List all available personalities.
- `add_personality`: Add a new personality to the server.
- `generate_text`: Generate text based on the provided prompt and selected personality.
For more details refer to the [API documentation](doc/server_endpoints.md)
### RESTful API
- `GET /personalities`: List all available personalities.
- `POST /personalities`: Add a new personality to the server.
Here are examples of how to communicate with the LoLLMs Server using JavaScript and Python.
### JavaScript Example
```javascript
// Establish a WebSocket connection with the server
const socket = io.connect('http://localhost:9600');

// Event: When connected to the server
socket.on('connect', () => {
  console.log('Connected to the server');
  // Request the list of available personalities
  socket.emit('list_personalities');
});

// Event: Receive the list of personalities from the server
socket.on('personalities_list', (data) => {
  const personalities = data.personalities;
  console.log('Available Personalities:', personalities);

  // Select a personality and send a text generation request
  const selectedPersonality = personalities[0];
  const prompt = 'Once upon a time...';
  socket.emit('generate_text', { personality: selectedPersonality, prompt: prompt });
});

// Event: Receive the generated text from the server
socket.on('text_generated', (data) => {
  const generatedText = data.text;
  console.log('Generated Text:', generatedText);
  // Do something with the generated text
});

// Event: When disconnected from the server
socket.on('disconnect', () => {
  console.log('Disconnected from the server');
});
```
### Python Example
```python
import socketio

# Create a SocketIO client
sio = socketio.Client()

# Event: When connected to the server
@sio.on('connect')
def on_connect():
    print('Connected to the server')
    # Request the list of available personalities
    sio.emit('list_personalities')

# Event: Receive the list of personalities from the server
@sio.on('personalities_list')
def on_personalities_list(data):
    personalities = data['personalities']
    print('Available Personalities:', personalities)

    # Select a personality and send a text generation request
    selected_personality = personalities[0]
    prompt = 'Once upon a time...'
    sio.emit('generate_text', {'personality': selected_personality, 'prompt': prompt})

# Event: Receive the generated text from the server
@sio.on('text_generated')
def on_text_generated(data):
    generated_text = data['text']
    print('Generated Text:', generated_text)
    # Do something with the generated text

# Event: When disconnected from the server
@sio.on('disconnect')
def on_disconnect():
    print('Disconnected from the server')

# Connect to the server
sio.connect('http://localhost:9600')

# Keep the client running
sio.wait()
```
Make sure to have the necessary dependencies installed for the JavaScript and Python examples. For JavaScript, you need the `socket.io-client` package, and for Python, you need the `python-socketio` package.
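For reference, a typical installation of those client dependencies looks like this (the `[client]` extra mirrors the example client's requirements file and enables the WebSocket transport):
```bash
# Python client dependency
pip install "python-socketio[client]"

# JavaScript client dependency
npm install socket.io-client
```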
## Contributing
Contributions to the LoLLMs Server project are welcome and appreciated. If you would like to contribute, please follow the guidelines outlined in the [CONTRIBUTING.md](https://github.com/ParisNeo/lollms/blob/main/CONTRIBUTING.md) file.
## License
LoLLMs Server is licensed under the Apache 2.0 License. See the [LICENSE](https://github.com/ParisNeo/lollms/blob/main/LICENSE) file for more information.
## Repository
The source code for LoLLMs Server can be found on [GitHub](https://github.com/ParisNeo/lollms).

309
doc/server_endpoints.md Normal file

@ -0,0 +1,309 @@
# Lord Of Large Language Models Socket.io Endpoints Documentation
<img src="https://github.com/ParisNeo/lollms/blob/main/lollms/assets/logo.png" alt="Logo" width="200" height="200">
The server provides several Socket.io endpoints that clients can use to interact with the server. The default URL for the server is `http://localhost:9600`, but it can be changed using the configuration file or launch parameters.
## Endpoints
### `connect`
- Event: `'connect'`
- Description: This event is triggered when a client connects to the server.
- Actions:
- Adds the client to the list of connected clients with a unique session ID.
- Prints a message indicating the client's session ID.
### `disconnect`
- Event: `'disconnect'`
- Description: This event is triggered when a client disconnects from the server.
- Actions:
- Removes the client from the list of connected clients, if it exists.
- Prints a message indicating the client's session ID.
#### `list_available_bindings`
- Event: `'list_available_bindings'`
- Description: This event is triggered when a client requests a list of available bindings.
- Parameters: None
- Actions:
- Initializes an empty list `binding_infs` to store information about each binding.
- Iterates over the files and directories in the `self.bindings_path` directory.
- For each directory in `self.bindings_path`:
- Reads the content of the `binding_card.yaml` file, which contains information about the binding card.
- Reads the content of the `models.yaml` file, which contains information about the models associated with the binding.
- Creates an entry dictionary that includes the binding's name, card information, and model information.
- Appends the entry to the `binding_infs` list.
- Emits a response event `'bindings_list'` to the client containing the list of available bindings and their information (`bindings`) as well as a `success` parameter that is `False` when not successful.
Events generated:
- `'bindings_list'`: Sent to the client as a response to the `'list_available_bindings'` request. It contains the list of available bindings along with their associated information (`binding_infs`).
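A minimal client-side sketch of this exchange using `python-socketio` (the server URL is the default one, and the per-binding field names are assumptions inferred from the description above):
```python
import socketio

sio = socketio.Client()

@sio.on('bindings_list')
def on_bindings_list(data):
    # 'success' and 'bindings' follow the description above; 'name' is assumed per entry
    if data.get('success', True):
        for binding in data['bindings']:
            print(binding.get('name'))
    sio.disconnect()

sio.connect('http://localhost:9600')
sio.emit('list_available_bindings')
sio.wait()
```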
#### `list_available_personalities`
- Event: `'list_available_personalities'`
- Description: This event is triggered when a client requests a list of available personalities.
- Parameters: None
- Actions:
- Retrieves the path to the personalities folder from the server (`self.personalities_path`).
- Initializes an empty dictionary to store the available personalities.
- Iterates over each language folder in the personalities folder.
- Checks if the current item is a directory.
- Initializes an empty dictionary to store the personalities within the language.
- Iterates over each category folder within the language folder.
- Checks if the current item is a directory.
- Initializes an empty list to store the personalities within the category.
- Iterates over each personality folder within the category folder.
- Checks if the current item is a directory.
- Tries to load personality information from the config file (`config.yaml`) within the personality folder.
- Retrieves the name, description, author, and version from the config data.
- Checks if the `scripts` folder exists within the personality folder to determine if the personality has scripts.
- Checks for the existence of a logo file named `logo.gif`, `logo.webp`, `logo.png`, `logo.jpg`, `logo.jpeg`, or `logo.bmp` within the `assets` folder to determine if the personality has a logo.
- Sets the `avatar` field of the personality info based on the available logo file.
- Appends the personality info to the list of personalities within the category.
- Adds the list of personalities to the dictionary of the current category within the language.
- Adds the dictionary of categories to the dictionary of the current language.
- Sends a response to the client containing the dictionary of available personalities.
Events generated:
- `'personalities_list'`: Emits an event to the client with the list of available personalities, categorized by language and category. The event data includes the personality information such as name, description, author, version, presence of scripts, and avatar image file path.
#### `list_available_models`
- Event: `'list_available_models'`
- Description: This event is triggered when a client requests a list of available models.
- Parameters: None (except `self` which refers to the class instance)
- Actions:
- Checks if a binding class is selected. If not, emits an event `'available_models_list'` with a failure response indicating that no binding is selected.
- Retrieves the list of available models using the binding class.
- Processes each model in the list to extract relevant information such as filename, server, image URL, license, owner, owner link, filesize, description, model type, etc.
- Constructs a dictionary representation for each model with the extracted information.
- Appends each model dictionary to the `models` list.
- Emits an event `'available_models_list'` with a success response containing the list of available models to the client.
Events generated:
- `'available_models_list'`: This event is emitted as a response to the client requesting a list of available models. It contains the success status and a list of available models with their details, such as title, icon, license, owner, owner link, description, installation status, file path, filesize, and model type.
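A hedged sketch of a client consuming this endpoint (a binding must already be selected; the `available_models` and `title` keys follow the description above and the example web UI in this commit):
```python
import socketio

sio = socketio.Client()

@sio.on('available_models_list')
def on_models(data):
    if data['success']:
        for model in data['available_models']:
            print(model['title'])
    else:
        print('Model listing failed:', data)
    sio.disconnect()

sio.connect('http://localhost:9600')
sio.emit('list_available_models')
sio.wait()
```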
#### `list_available_personalities_languages`
- Event: `'list_available_personalities_languages'`
- Description: This event is triggered when a client requests a list of available personality languages.
- Actions:
- Attempts to retrieve a list of available personality languages by iterating over the `self.personalities_path` directory.
- Sends a response to the client containing the success status and the list of available personality languages.
Parameters: None
Events:
- `'available_personalities_languages_list'`: This event is emitted as a response to the client after listing the available personality languages.
- Data:
- `'success'` (boolean): Indicates whether the operation was successful or not.
- `'available_personalities_languages'` (list): Contains the available personality languages as a list of strings.
#### `list_available_personalities_categories`
- Event: `'list_available_personalities_categories'`
- Description: This event is triggered when a client requests a list of available personality categories based on a specified language.
- Parameters:
- `data`: A dictionary containing the following parameter:
- `language`: The language for which to retrieve available personality categories.
- Actions:
- Extracts the `language` parameter from the request data.
- Attempts to retrieve the available personality categories for the specified language.
- Emits an event `'available_personalities_categories_list'` to the client.
- If successful, sends a response with a list of available personality categories in the `'available_personalities_categories'` field of the event data.
- If an error occurs, sends a response with an error message in the `'error'` field of the event data.
Events:
- Event: `'available_personalities_categories_list'`
- Description: This event is emitted in response to the `list_available_personalities_categories` event.
- Data:
- If successful:
- `'success'` (boolean): Indicates whether the retrieval of available personality categories was successful.
- `'available_personalities_categories'` (list): A list of available personality categories.
- If an error occurs:
- `'success'` (boolean): Indicates whether an error occurred during the retrieval of available personality categories.
- `'error'` (string): The error message describing the encountered error.
#### `list_available_personalities_names`
- Event: `'list_available_personalities_names'`
- Description: This event is triggered when a client requests a list of available personality names based on the specified language and category.
- Parameters:
- `language` (string): The language for which the available personality names are requested.
- `category` (string): The category for which the available personality names are requested.
- Actions:
- Extracts the `language` and `category` parameters from the request data.
- Retrieves the list of available personalities by iterating over the directory specified by the `language` and `category` parameters.
- Sends a response to the client containing the list of available personality names.
- Event Generated: `'list_available_personalities_names_list'`
- Description: This event is emitted as a response to the `list_available_personalities_names` request, providing the list of available personality names.
- Parameters:
- `success` (bool): Indicates the success or failure of the request.
- `list_available_personalities_names` (list): The list of available personality names.
- `error` (string, optional): If the request fails, this parameter contains the error message.
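The three browsing endpoints above can be chained. Here is a hedged sketch that simply picks the first language and category returned (event and field names follow the descriptions above; error handling is omitted):
```python
import socketio

sio = socketio.Client()
selected = {}  # keeps the chosen language/category across callbacks

@sio.on('available_personalities_languages_list')
def on_languages(data):
    selected['language'] = data['available_personalities_languages'][0]
    sio.emit('list_available_personalities_categories', {'language': selected['language']})

@sio.on('available_personalities_categories_list')
def on_categories(data):
    selected['category'] = data['available_personalities_categories'][0]
    sio.emit('list_available_personalities_names', {'language': selected['language'], 'category': selected['category']})

@sio.on('list_available_personalities_names_list')
def on_names(data):
    print(data['list_available_personalities_names'])
    sio.disconnect()

sio.connect('http://localhost:9600')
sio.emit('list_available_personalities_languages')
sio.wait()
```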
#### `select_binding`
- Event: `'select_binding'`
- Description: This event is triggered when a client selects a binding.
- Parameters:
- `data['binding_name']`: The name of the binding selected by the client.
Actions:
- Creates a deep copy of the `self.config` dictionary and assigns it to `self.cp_config` variable.
- Updates the `"binding_name"` value in `self.cp_config` with the selected binding name obtained from `data['binding_name']`.
- Attempts to build a binding instance using the `self.bindings_path` and `self.cp_config`.
- If successful, updates `self.binding_class` with the created binding instance and updates `self.config` with `self.cp_config`.
- Sends a response to the client indicating the success of the binding selection along with the selected binding name.
- If an exception occurs during the binding creation process, the exception is printed and a response is sent to the client indicating the failure of the binding selection along with the selected binding name and the error message.
Events generated:
- `'select_binding'`: This event is emitted to the client to provide a response regarding the binding selection. It contains the following data:
- `'success'`: A boolean value indicating the success or failure of the binding selection.
- `'binding_name'`: The name of the selected binding.
- If the binding selection fails, it also includes:
- `'error'`: An error message explaining the reason for the failure.
#### `select_model`
- Event: `'select_model'`
- Description: This event is triggered when a client requests to select a model.
- Parameters:
- `data['model_name']` (string): The name of the model to select.
- Actions:
- Extracts the model name from the request data.
- Checks if a binding class is available (`self.binding_class`).
- If no binding class is available, emits a `'select_model'` event with a failure response, indicating that a binding needs to be selected first.
- Returns and exits the function.
- Creates a deep copy of the configuration (`self.config`) and assigns it to `self.cp_config`.
- Sets the `"model_name"` property of `self.cp_config` to the selected model name.
- Tries to create an instance of the binding class (`self.binding_class`) with `self.cp_config`.
- If successful, assigns the created binding instance to `self.current_model`.
- Emits a `'select_model'` event with a success response, indicating that the model selection was successful.
- Returns and exits the function.
- If an exception occurs during model creation, prints the exception and emits a `'select_model'` event with a failure response, indicating that a binding needs to be selected first.
Events generated:
- `'select_model'` (success response):
- Emits to the client a success response indicating that the model selection was successful.
- Parameters:
- `'success'` (boolean): `True` to indicate success.
- `'model_name'` (string): The selected model name.
- `'select_model'` (failure response):
- Emits to the client a failure response indicating that a binding needs to be selected first or an error occurred during model creation.
- Parameters:
- `'success'` (boolean): `False` to indicate failure.
- `'model_name'` (string): The selected model name.
- `'error'` (string): An error message providing additional details.
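Putting the two selection endpoints together, here is a hedged sketch that selects a binding and then a model once the binding is confirmed. The binding and model names are the documented defaults and are assumed to be installed on your machine:
```python
import socketio

sio = socketio.Client()

@sio.on('select_binding')
def on_binding_selected(data):
    if data['success']:
        # Binding is ready; now pick a model (name assumed to exist in your models folder)
        sio.emit('select_model', {'model_name': 'Manticore-13B.ggmlv3.q4_0.bin'})
    else:
        print('Binding selection failed:', data.get('error'))
        sio.disconnect()

@sio.on('select_model')
def on_model_selected(data):
    print('Model selected' if data['success'] else data.get('error'))
    sio.disconnect()

sio.connect('http://localhost:9600')
sio.emit('select_binding', {'binding_name': 'llama_cpp_official'})
sio.wait()
```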
#### `add_personality`
- Event: `'add_personality'`
- Description: This event is triggered when a client requests to add a new personality.
- Parameters:
- `data`: A dictionary containing the following key-value pairs:
- `'path'`: The path to the personality file.
- Actions:
- Extracts the personality path from the `data` dictionary.
- Attempts to create a new `AIPersonality` instance with the provided path.
- Appends the created personality to the `self.personalities` list.
- Appends the personality path to the `self.config["personalities"]` list.
- Saves the updated configuration using `self.config.save_config()`.
- Sends a response to the client indicating the success of the personality addition along with the name and ID of the added personality.
- Events Generated:
- `'personality_added'`: This event is emitted to the client to indicate the successful addition of the personality. The emitted data is a dictionary with the following key-value pairs:
- `'success'`: `True` to indicate success.
- `'name'`: The name of the added personality.
- `'id'`: The ID of the added personality in the `self.personalities` list.
- `'personality_add_failed'`: This event is emitted to the client if an exception occurs during the personality addition. The emitted data is a dictionary with the following key-value pairs:
- `'success'`: `False` to indicate failure.
- `'error'`: A string containing the error message explaining the cause of the failure.
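A short hedged sketch of adding a personality from a client (the path is a placeholder; point it at a personality folder that actually exists on the server side):
```python
import socketio

sio = socketio.Client()

@sio.on('personality_added')
def on_added(data):
    print(f"Added personality '{data['name']}' with id {data['id']}")
    sio.disconnect()

@sio.on('personality_add_failed')
def on_failed(data):
    print('Could not add personality:', data['error'])
    sio.disconnect()

sio.connect('http://localhost:9600')
sio.emit('add_personality', {'path': 'path/to/my_personality'})  # placeholder path
sio.wait()
```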
#### `activate_personality`
- Event: `'activate_personality'`
- Description: This event is triggered when a client requests to activate a personality.
- Actions:
- Extracts the personality ID from the request data.
- Checks if the personality ID is valid (within the range of `self.personalities`).
- Sets the `self.active_personality` to the selected personality.
- Sends a response to the client indicating the success of the personality activation along with the name and ID of the activated personality.
- Updates the default personality ID in `self.config["active_personality_id"]`.
- Saves the updated configuration using `self.config.save_config()`.
- Event Generated:
- `'activate_personality'`: Emits the event to the client with the following data:
- `'success'`: Indicates whether the personality activation was successful (`True` or `False`).
- `'name'`: The name of the activated personality.
- `'id'`: The ID (index) of the activated personality in the `self.personalities` list.
#### `list_active_personalities`
- Event: `'list_active_personalities'`
- Description: This event is triggered when a client requests a list of active personalities.
- Parameters: None
- Actions:
- Retrieves the names of all the active personalities from the `self.personalities` list.
- Sends a response to the client containing the list of active personality names.
- Event Generated: `'active_personalities_list'`
- Event Data:
- `'success'`: A boolean value indicating the success of the operation.
- `'personalities'`: A list of strings representing the names of the active personalities.
Please note that the `'list_active_personalities'` event does not require any parameters when triggering the endpoint. It simply returns the list of active personalities to the client.
#### `activate_personality`
- Event: `'activate_personality'`
- Description: This event is triggered when a client requests to activate a personality.
- Parameters:
- `data['id']` (integer): The ID of the personality to activate.
- Actions:
- Extracts the personality ID from the request data.
- Checks if the personality ID is valid by comparing it with the length of the `self.personalities` list.
- If the personality ID is valid:
- Sets the `self.active_personality` to the personality at the specified ID.
- Sends a response to the client indicating the success of the personality activation, along with the name and ID of the activated personality.
- Updates the `active_personality_id` in the `self.config` object with the activated personality's ID.
- Saves the updated configuration.
- If the personality ID is not valid:
- Sends a response to the client indicating the failure of the personality activation, along with an error message.
Generated Events:
- `'activate_personality'`: This event is emitted to the client after successfully activating a personality.
- Parameters:
- `{'success': True, 'name': self.active_personality, 'id': len(self.personalities) - 1}`:
- `'success'` (boolean): Indicates whether the personality activation was successful.
- `'name'` (string): The name of the activated personality.
- `'id'` (integer): The ID of the activated personality.
- `'personality_add_failed'`: This event is emitted to the client if the personality ID provided is not valid.
- Parameters:
- `{'success': False, 'error': 'Personality ID not valid'}`:
- `'success'` (boolean): Indicates whether the personality activation failed.
- `'error'` (string): The error message indicating the reason for the failure.
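A hedged sketch of activating a mounted personality by its index (id `0` is just an example and assumes at least one personality is loaded):
```python
import socketio

sio = socketio.Client()

@sio.on('activate_personality')
def on_activated(data):
    if data['success']:
        print('Active personality:', data['name'])
    sio.disconnect()

@sio.on('personality_add_failed')
def on_invalid(data):
    # Per the description above, an invalid id is reported through this event
    print(data['error'])
    sio.disconnect()

sio.connect('http://localhost:9600')
sio.emit('activate_personality', {'id': 0})
sio.wait()
```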
#### `generate_text`
- Event: `'generate_text'`
- Description: This event is triggered when a client requests text generation.
- Parameters:
- `data`: A dictionary containing the following fields:
- `prompt` (string): The text prompt for text generation.
- `personality` (integer): The index of the selected personality for conditioning the text generation.
- Actions:
- Retrieves the selected model and client ID from the server.
- Extracts the prompt and selected personality index from the request data.
- Initializes an empty answer list for text chunks.
- Retrieves the full discussion blocks from the client's data.
- Defines a callback function to handle generated text chunks.
- Preprocesses the prompt based on the selected personality's configuration, if applicable.
- Constructs the full discussion text by combining the personality's conditioning, prompt, and AI message prefix.
- Prints the input prompt for debugging purposes.
- If a personality processor is available and has a custom workflow, runs the processor's workflow with the prompt and full discussion text, providing the callback function for text chunk emission.
- If no custom workflow is available, generates text using the selected model with the full discussion text, specifying the number of predictions.
- Appends the generated text to the full discussion blocks.
- Prints a success message for debugging purposes.
- Emits the generated text to the client through the `'text_generated'` event.
Events generated:
- `'text_chunk'`: Generated text chunks are emitted to the client through this event during the text generation process.
- `'text_generated'`: Once the text generation process is complete, the final generated text is emitted to the client through this event.
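A hedged end-to-end sketch that streams the chunks as they arrive and prints the final text (the `chunk` and `text` fields follow the description above and the example clients in this repository):
```python
import socketio

sio = socketio.Client()

@sio.on('text_chunk')
def on_chunk(data):
    print(data['chunk'], end='', flush=True)

@sio.on('text_generated')
def on_done(data):
    print('\nFull text:', data['text'])
    sio.disconnect()

sio.connect('http://localhost:9600')
sio.emit('generate_text', {'prompt': 'Once upon a time...', 'personality': 0})
sio.wait()
```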


@ -0,0 +1,27 @@
from lollms.console import Conversation

class MyConversation(Conversation):
    def __init__(self, cfg=None):
        super().__init__(cfg, show_welcome_message=False)

    def start_conversation(self):
        full_discussion = ""
        while True:
            prompt = input("You: ")
            if prompt == "exit":
                return
            if prompt == "menu":
                self.menu.main_menu()
            full_discussion += self.personality.user_message_prefix + prompt + self.personality.link_text
            full_discussion += self.personality.ai_message_prefix

            def callback(text, type=None):
                print(text, end="", flush=True)
                return True

            print(self.personality.name + ": ", end="", flush=True)
            output = self.safe_generate(full_discussion, callback=callback)
            full_discussion += output.strip() + self.personality.link_text
            print()

if __name__ == '__main__':
    cv = MyConversation()
    cv.start_conversation()


@ -0,0 +1,43 @@
# AIPersonality Server and PyQt Client
This is a Python project that consists of a server and a PyQt client for interacting with the AIPersonality text generation model. The server is built using Flask and Flask-SocketIO, while the client is implemented using PyQt5.
## Server
The server code is located in the file `lllm_server.py`. It sets up a Flask application with Flask-SocketIO to establish a WebSocket connection with clients. The server receives text generation requests from clients, generates text based on the given prompt, and sends the generated text back to the clients.
To run the server, execute the following command:
```bash
python server.py --host localhost --port 9600 --config configs/config.yaml --bindings_path bindings_zoo
```
You can customize the host, port, configuration file, and bindings path by providing appropriate command-line arguments.
## Client
The client code is implemented using PyQt5 and can be found in the file client.py. It provides a graphical user interface (GUI) for interacting with the server. The client connects to the server using WebSocket and allows users to enter a prompt and generate text based on that prompt.
To run the client, execute the following command:
```bash
python client.py
```
The client GUI will appear, and you can enter a prompt in the text area. Click the "Generate Text" button to send the prompt to the server for text generation. The generated text will be displayed in the text area.
Make sure you have the necessary dependencies installed, such as Flask, Flask-SocketIO, Flask-CORS, pyaipersonality, and PyQt5, before running the server and client.
## Dependencies
The project depends on the following Python packages:
- Flask
- Flask-SocketIO
- Flask-CORS
- pyaipersonality
- PyQt5
You can install the dependencies using pip:
```bash
pip install flask flask-socketio flask-cors pyaipersonality pyqt5
```
## License
PyAIPersonality is licensed under the Apache 2.0 license. See the `LICENSE` file for more information.


@ -0,0 +1,4 @@
<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 50 50">
<path d="M 44 1.59375 L 33.5625 12 L 31.3125 9.75 C 28.9695 7.41 25.18675 7.41 22.84375 9.75 L 18.5 14.125 L 17.1875 12.8125 A 1.0001 1.0001 0 0 0 16.375 12.5 A 1.0001 1.0001 0 0 0 15.78125 14.21875 L 35.78125 34.21875 A 1.0001 1.0001 0 1 0 37.1875 32.8125 L 35.875 31.5 L 40.25 27.15625 C 42.594 24.81425 42.592 21.0315 40.25 18.6875 L 40.25 18.65625 L 38 16.40625 L 48.40625 6 L 44 1.59375 z M 13.40625 15.46875 A 1.0001 1.0001 0 0 0 12.8125 17.1875 L 14.125 18.5 L 9.75 22.84375 C 7.406 25.18575 7.408 28.99975 9.75 31.34375 L 12 33.59375 L 1.59375 44 L 6 48.40625 L 16.40625 38 L 18.65625 40.25 C 20.99925 42.59 24.81325 42.59 27.15625 40.25 L 31.5 35.875 L 32.8125 37.1875 A 1.0001 1.0001 0 1 0 34.21875 35.78125 L 14.21875 15.78125 A 1.0001 1.0001 0 0 0 13.5 15.46875 A 1.0001 1.0001 0 0 0 13.40625 15.46875 z"/>
</svg>



@ -0,0 +1,4 @@
<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 50 50">
<path style="text-indent:0;text-align:start;line-height:normal;text-transform:none;block-progression:tb;-inkscape-font-specification:Sans" d="M 43.6875 2 L 38.65625 7.0625 L 36.34375 4.75 C 34.00075 2.41 30.18675 2.41 27.84375 4.75 L 23.03125 9.59375 L 21.71875 8.28125 A 1.0001 1.0001 0 0 0 20.78125 8 A 1.0001 1.0001 0 0 0 20.28125 9.71875 L 25.0625 14.5 L 18.9375 20.65625 L 20.34375 22.0625 L 26.5 15.9375 L 34.0625 23.5 L 27.9375 29.65625 L 29.34375 31.0625 L 35.5 24.9375 L 40.28125 29.71875 A 1.016466 1.016466 0 1 0 41.71875 28.28125 L 40.40625 26.96875 L 45.25 22.15625 C 47.594 19.81425 47.592 16.0315 45.25 13.6875 L 45.25 13.65625 L 42.9375 11.34375 L 48 6.3125 L 43.6875 2 z M 8.90625 19.96875 A 1.0001 1.0001 0 0 0 8.78125 20 A 1.0001 1.0001 0 0 0 8.28125 21.71875 L 9.59375 23.03125 L 4.75 27.84375 C 2.406 30.18575 2.408 33.99975 4.75 36.34375 L 7.0625 38.625 L 2 43.6875 L 6.3125 48 L 11.375 42.9375 L 13.65625 45.25 C 15.99925 47.59 19.81325 47.59 22.15625 45.25 L 26.96875 40.40625 L 28.28125 41.71875 A 1.016466 1.016466 0 1 0 29.71875 40.28125 L 9.71875 20.28125 A 1.0001 1.0001 0 0 0 8.90625 19.96875 z" overflow="visible" font-family="Sans"/>
</svg>



@ -0,0 +1,184 @@
import sys
from PyQt5.QtGui import QIcon
from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot
from PyQt5.QtWidgets import QApplication, QMainWindow, QTextEdit,QHBoxLayout, QLineEdit, QVBoxLayout, QWidget, QToolBar, QAction, QPushButton, QStatusBar, QComboBox
from PyQt5.QtSvg import QSvgWidget
from socketio.client import Client
from socketio.exceptions import ConnectionError
from pathlib import Path
class ServerConnector(QObject):
text_chunk_received = pyqtSignal(str)
text_generated = pyqtSignal(str)
connection_failed = pyqtSignal()
connection_status_changed = pyqtSignal(bool)
personalities_received = pyqtSignal(list)
def __init__(self, parent=None):
super(ServerConnector, self).__init__(parent)
self.socketio = Client()
self.connected = False
self.personalities = []
self.selected_personality_id = 0
self.socketio.on('connect', self.handle_connect)
self.socketio.on('text_chunk', self.handle_text_chunk)
self.socketio.on('text_generated', self.handle_text_generated)
self.socketio.on('active_personalities_list', self.handle_personalities_received)
def handle_connect(self):
self.socketio.emit('connect')
self.list_personalities()
def connect_to_server(self):
if not self.connected:
try:
self.socketio.connect('http://localhost:9600')
self.connected = True
self.connection_status_changed.emit(True)
except ConnectionError:
self.connection_failed.emit()
self.connection_status_changed.emit(False)
def disconnect_from_server(self):
if self.connected:
self.socketio.disconnect()
self.connected = False
self.connection_status_changed.emit(False)
def list_personalities(self):
self.socketio.emit('list_active_personalities')
@pyqtSlot(str)
def generate_text(self, prompt):
if not self.connected:
self.connection_failed.emit()
return
data = {
'client_id': self.socketio.sid,
'prompt': prompt,
'personality': self.selected_personality_id
}
self.socketio.emit('generate_text', data)
def handle_personalities_list(self, data):
personalities = data['personalities']
self.personalities_list_received.emit(personalities)
def handle_text_chunk(self, data):
chunk = data['chunk']
self.text_chunk_received.emit(chunk)
def handle_text_generated(self, data):
text = data['text']
self.text_generated.emit(text)
def handle_personalities_received(self, data):
personalities = data['personalities']
print(f"Received List of personalities:{personalities}")
self.personalities = personalities
self.personalities_received.emit(personalities)
class MainWindow(QMainWindow):
def __init__(self, parent=None):
super(MainWindow, self).__init__(parent)
self.setWindowTitle("AIPersonality Client")
self.user_input_layout = QHBoxLayout()
self.user_text = QLineEdit()
self.text_edit = QTextEdit()
self.toolbar = QToolBar()
self.submit_button = QPushButton("Submit")
self.user_input_layout.addWidget(self.user_text)
self.user_input_layout.addWidget(self.submit_button)
self.statusbar = QStatusBar()
self.personality_combo_box = QComboBox()
self.personality_combo_box.setMinimumWidth(500)
self.connect_action = QAction(QIcon(str(Path(__file__).parent/'assets/connected.svg')), "", self)
self.connect_action.setCheckable(True)
self.connect_action.toggled.connect(self.toggle_connection)
self.toolbar.addAction(self.connect_action)
self.toolbar.addWidget(self.personality_combo_box)
self.addToolBar(self.toolbar)
layout = QVBoxLayout()
layout.addLayout(self.user_input_layout)
layout.addWidget(self.text_edit)
widget = QWidget()
widget.setLayout(layout)
self.setCentralWidget(widget)
self.connector = ServerConnector()
self.connector.text_chunk_received.connect(self.handle_text_chunk)
self.connector.text_generated.connect(self.handle_text_generated)
self.connector.connection_failed.connect(self.handle_connection_failed)
self.connector.connection_status_changed.connect(self.handle_connection_status_changed)
self.connector.personalities_received.connect(self.handle_personalities_received)
self.connector.connect_to_server()
self.submit_button.clicked.connect(self.submit_text)
self.setStatusBar(self.statusbar)
self.update_statusbar()
@pyqtSlot(bool)
def toggle_connection(self, checked):
if checked:
self.connector.connect_to_server()
self.connect_action.setIcon(QIcon(str(Path(__file__).parent/'assets/connected.svg')))
else:
self.connector.disconnect_from_server()
self.connect_action.setIcon(QIcon(str(Path(__file__).parent/'assets/disconnected.svg')))
@pyqtSlot()
def submit_text(self):
prompt = self.user_text.text()
self.selected_personality_id = self.personality_combo_box.currentIndex()
self.text_edit.insertPlainText("User:"+prompt+"\n"+self.connector.personalities[self.selected_personality_id]+":")
self.connector.generate_text(prompt)
@pyqtSlot(str)
def handle_text_chunk(self, chunk):
self.text_edit.insertPlainText(chunk)
@pyqtSlot(str)
def handle_text_generated(self, text):
self.text_edit.append(text)
@pyqtSlot()
def handle_connection_failed(self):
self.text_edit.append("Failed to connect to the server.")
@pyqtSlot(bool)
def handle_connection_status_changed(self, connected):
if connected:
self.statusbar.showMessage("Connected to the server")
else:
self.statusbar.showMessage("Disconnected from the server")
@pyqtSlot(list)
def handle_personalities_received(self, personalities):
print("Received personalities")
self.personality_combo_box.clear()
self.personality_combo_box.addItems(personalities)
def update_statusbar(self):
if self.connector.connected:
self.statusbar.showMessage("Connected to the server")
self.connect_action.setIcon(QIcon(str(Path(__file__).parent/'assets/connected.svg')))
else:
self.statusbar.showMessage("Disconnected from the server")
self.connect_action.setIcon(QIcon(str(Path(__file__).parent/'assets/disconnected.svg')))
if __name__ == '__main__':
app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec_())


@ -0,0 +1,3 @@
Flask_SocketIO==5.3.4
PyQt5==5.15.9
python-socketio[client]


@ -0,0 +1,17 @@
from lollms.console import Conversation

class MyConversation(Conversation):
    def __init__(self, cfg=None):
        super().__init__(cfg, show_welcome_message=False)

    def start_conversation(self):
        prompt = "Once upon a time"

        def callback(text, type=None):
            print(text, end="", flush=True)
            return True

        print(prompt, end="", flush=True)
        output = self.safe_generate(prompt, callback=callback)

if __name__ == '__main__':
    cv = MyConversation()
    cv.start_conversation()


@ -0,0 +1,23 @@
.DS_Store
node_modules
/dist
# local env files
.env.local
.env.*.local
# Log files
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?


@ -0,0 +1,24 @@
# lollms_webui
## Project setup
```
npm install
```
### Compiles and hot-reloads for development
```
npm run serve
```
### Compiles and minifies for production
```
npm run build
```
### Lints and fixes files
```
npm run lint
```
### Customize configuration
See [Configuration Reference](https://cli.vuejs.org/config/).


@ -0,0 +1,5 @@
module.exports = {
  presets: [
    '@vue/cli-plugin-babel/preset'
  ]
}


@ -0,0 +1,19 @@
{
  "compilerOptions": {
    "target": "es5",
    "module": "esnext",
    "baseUrl": "./",
    "moduleResolution": "node",
    "paths": {
      "@/*": [
        "src/*"
      ]
    },
    "lib": [
      "esnext",
      "dom",
      "dom.iterable",
      "scripthost"
    ]
  }
}

File diff suppressed because it is too large


@ -0,0 +1,45 @@
{
"name": "lollms_webui",
"version": "0.1.0",
"private": true,
"scripts": {
"serve": "vue-cli-service serve",
"build": "vue-cli-service build",
"lint": "vue-cli-service lint"
},
"dependencies": {
"core-js": "^3.8.3",
"socket.io-client": "^4.6.2",
"tailwindcss": "^3.3.2",
"vue": "^3.2.13"
},
"devDependencies": {
"@babel/core": "^7.12.16",
"@babel/eslint-parser": "^7.12.16",
"@vue/cli-plugin-babel": "~5.0.0",
"@vue/cli-plugin-eslint": "~5.0.0",
"@vue/cli-service": "~5.0.0",
"eslint": "^7.32.0",
"eslint-plugin-vue": "^8.0.3"
},
"eslintConfig": {
"root": true,
"env": {
"node": true
},
"extends": [
"plugin:vue/vue3-essential",
"eslint:recommended"
],
"parserOptions": {
"parser": "@babel/eslint-parser"
},
"rules": {}
},
"browserslist": [
"> 1%",
"last 2 versions",
"not dead",
"not ie 11"
]
}

Binary file not shown.



@ -0,0 +1,17 @@
<!DOCTYPE html>
<html lang="">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width,initial-scale=1.0">
<link rel="icon" href="<%= BASE_URL %>favicon.ico">
<title><%= htmlWebpackPlugin.options.title %></title>
</head>
<body>
<noscript>
<strong>We're sorry but <%= htmlWebpackPlugin.options.title %> doesn't work properly without JavaScript enabled. Please enable it to continue.</strong>
</noscript>
<div id="app"></div>
<!-- built files will be auto injected -->
</body>
</html>


@ -0,0 +1,124 @@
<template>
<div class="bg-gray-900 text-white min-h-screen p-4">
<h1 class="text-3xl font-bold mb-4">Lord Of Large Language Models</h1>
<div class="mb-4">
<h2 class="text-xl font-bold">Select Binding</h2>
<select v-model="selectedBinding" @change="selectBinding" class="p-2 bg-gray-800 text-white">
<option v-for="binding in bindings" :key="binding.name" :value="binding.name">{{ binding.name }}</option>
</select>
</div>
<div v-if="selectedBinding" class="mb-4">
<h2 class="text-xl font-bold">Select Model</h2>
<select v-model="selectedModel" @change="selectModel" class="p-2 bg-gray-800 text-white">
<option v-for="model in models" :key="model.title" :value="model.title">{{ model.title }}</option>
</select>
</div>
<div v-if="selectedModel" class="mb-4">
<h2 class="text-xl font-bold">Select Personality</h2>
<select v-model="selectedPersonality" @change="selectPersonality" class="p-2 bg-gray-800 text-white">
<option v-for="personality in personalities" :key="personality.name" :value="personality.name">{{ personality.name }}</option>
</select>
</div>
<div>
<h2 class="text-xl font-bold">Chat</h2>
<div class="mb-4">
<div v-for="message in chatMessages" :key="message.id" class="text-white">
<strong>{{ message.sender }}:</strong> {{ message.text }}
</div>
</div>
<div class="flex">
<input type="text" v-model="inputMessage" @keydown.enter="sendMessage" placeholder="Type your message" class="p-2 flex-grow bg-gray-800 text-white mr-2">
<button @click="sendMessage" class="p-2 bg-blue-500 text-white">Send</button>
</div>
</div>
</div>
</template>
<style src="./assets/css/app.css"></style>
<script>
import io from 'socket.io-client';
// Import Tailwind CSS styles
import 'tailwindcss/tailwind.css';
export default {
data() {
return {
socket: null,
bindings: [],
models: [],
personalities: [],
selectedBinding: '',
selectedModel: '',
selectedPersonality: '',
chatMessages: [],
inputMessage: '',
};
},
created() {
this.socket = io('http://localhost:9600');
this.socket.on('connect', () => {
console.log('Connected to server');
this.socket.emit('list_available_bindings');
this.socket.emit('list_available_models');
this.socket.emit('list_available_personalities');
});
// Handle the event emitted when the select_binding is sent
this.socket.on('select_binding', (data) => {
console.log('Received:', data);
if(data["success"]){
console.log('Binding selected:', data);
this.socket.emit('list_available_models');
}
// You can perform any additional actions or update data properties as needed
});
// Handle the event emitted when the select_binding is sent
this.socket.on('select_model', (data) => {
console.log('Received:', data);
if(data["success"]){
console.log('Model selected:', data);
}
// You can perform any additional actions or update data properties as needed
});
this.socket.on('bindings_list', (bindings) => {
this.bindings = bindings["bindings"];
console.log(this.bindings)
});
this.socket.on('available_models_list', (models) => {
if(models["success"]){
this.models = models["available_models"];
}
console.log(this.models)
});
this.socket.on('personalities_list', (personalities) => {
this.personalities = personalities;
});
this.socket.on('text_chunk', (message) => {
this.chatMessages.push(message.chunk);
});
},
methods: {
selectBinding() {
this.socket.emit('select_binding', { binding_name: this.selectedBinding });
},
selectModel() {
this.socket.emit('select_model', { model_name: this.selectedModel });
},
selectPersonality() {
this.socket.emit('activate_personality', { personality_name: this.selectedPersonality });
},
sendMessage() {
const message = {
text: this.inputMessage,
sender: 'User',
};
this.chatMessages.push(message);
this.socket.emit('generate_text', {prompt:message.text, personality:0});
this.inputMessage = '';
},
},
};
</script>


@ -0,0 +1,3 @@
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';


@ -0,0 +1,3 @@
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';

Binary file not shown.



@ -0,0 +1,58 @@
<template>
<div class="hello">
<h1>{{ msg }}</h1>
<p>
For a guide and recipes on how to configure / customize this project,<br>
check out the
<a href="https://cli.vuejs.org" target="_blank" rel="noopener">vue-cli documentation</a>.
</p>
<h3>Installed CLI Plugins</h3>
<ul>
<li><a href="https://github.com/vuejs/vue-cli/tree/dev/packages/%40vue/cli-plugin-babel" target="_blank" rel="noopener">babel</a></li>
<li><a href="https://github.com/vuejs/vue-cli/tree/dev/packages/%40vue/cli-plugin-eslint" target="_blank" rel="noopener">eslint</a></li>
</ul>
<h3>Essential Links</h3>
<ul>
<li><a href="https://vuejs.org" target="_blank" rel="noopener">Core Docs</a></li>
<li><a href="https://forum.vuejs.org" target="_blank" rel="noopener">Forum</a></li>
<li><a href="https://chat.vuejs.org" target="_blank" rel="noopener">Community Chat</a></li>
<li><a href="https://twitter.com/vuejs" target="_blank" rel="noopener">Twitter</a></li>
<li><a href="https://news.vuejs.org" target="_blank" rel="noopener">News</a></li>
</ul>
<h3>Ecosystem</h3>
<ul>
<li><a href="https://router.vuejs.org" target="_blank" rel="noopener">vue-router</a></li>
<li><a href="https://vuex.vuejs.org" target="_blank" rel="noopener">vuex</a></li>
<li><a href="https://github.com/vuejs/vue-devtools#vue-devtools" target="_blank" rel="noopener">vue-devtools</a></li>
<li><a href="https://vue-loader.vuejs.org" target="_blank" rel="noopener">vue-loader</a></li>
<li><a href="https://github.com/vuejs/awesome-vue" target="_blank" rel="noopener">awesome-vue</a></li>
</ul>
</div>
</template>
<script>
export default {
name: 'HelloWorld',
props: {
msg: String
}
}
</script>
<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
h3 {
margin: 40px 0 0;
}
ul {
list-style-type: none;
padding: 0;
}
li {
display: inline-block;
margin: 0 10px;
}
a {
color: #42b983;
}
</style>


@ -0,0 +1,6 @@
import { createApp } from 'vue'
import App from './App.vue'
import '@/assets/css/app.css';
import './assets/css/tailwind.css';
createApp(App).mount('#app')


@ -0,0 +1,14 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
  purge: [
    './src/**/*.vue',
    './src/**/*.html',
    // Add any other paths to your Vue components and templates here
  ],
  content: [],
  theme: {
    extend: {},
  },
  plugins: [],
}


@ -0,0 +1,12 @@
const { defineConfig } = require('@vue/cli-service')
module.exports = defineConfig({
  transpileDependencies: true,
  css: {
    loaderOptions: {
      css: {
        // Import the tailwind.css file
        import: 'assets/css/tailwind.css'
      }
    }
  }
})

70
lollms/__init__.py Normal file

@ -0,0 +1,70 @@
__author__ = "ParisNeo"
__github__ = "https://github.com/ParisNeo/lollms"
__copyright__ = "Copyright 2023, "
__license__ = "Apache 2.0"
from lollms.binding import LLMBinding, LOLLMSConfig
from lollms.personality import AIPersonality, MSG_TYPE
from lollms.paths import LollmsPaths
#from lollms.binding import LLMBinding
import importlib
from pathlib import Path
class BindingBuilder:
def build_binding(self, bindings_path: Path, cfg: LOLLMSConfig, force_reinstall=False)->LLMBinding:
binding_path = Path(bindings_path) / cfg["binding_name"]
# first find out if there is a requirements.txt file
install_file_name = "install.py"
install_script_path = binding_path / install_file_name
if install_script_path.exists():
module_name = install_file_name[:-3] # Remove the ".py" extension
module_spec = importlib.util.spec_from_file_location(module_name, str(install_script_path))
module = importlib.util.module_from_spec(module_spec)
module_spec.loader.exec_module(module)
if hasattr(module, "Install"):
module.Install(cfg, force_reinstall=force_reinstall)
# define the full absolute path to the module
absolute_path = binding_path.resolve()
# infer the module name from the file path
module_name = binding_path.stem
# use importlib to load the module from the file path
loader = importlib.machinery.SourceFileLoader(module_name, str(absolute_path / "__init__.py"))
binding_module = loader.load_module()
binding_class = getattr(binding_module, binding_module.binding_name)
return binding_class
class ModelBuilder:
def __init__(self, binding_class:LLMBinding, config:LOLLMSConfig):
self.binding_class = binding_class
self.model = None
self.build_model(config)
def build_model(self, cfg: LOLLMSConfig):
self.model = self.binding_class(cfg)
def get_model(self):
return self.model
class PersonalityBuilder:
def __init__(self, lollms_paths:LollmsPaths, config:LOLLMSConfig, model:LLMBinding):
self.config = config
self.lollms_paths = lollms_paths
self.model = model
def build_personality(self, force_reinstall=False):
if len(self.config["personalities"][self.config["active_personality_id"]].split("/"))==3:
self.personality = AIPersonality(self.lollms_paths, self.lollms_paths.personalities_zoo_path / self.config["personalities"][self.config["active_personality_id"]], self.model, force_reinstall= force_reinstall)
else:
self.personality = AIPersonality(self.lollms_paths, self.config["personalities"][self.config["active_personality_id"]], self.model, is_relative_path=False, force_reinstall= force_reinstall)
return self.personality
def get_personality(self):
return self.personality

BIN
lollms/assets/logo.png Normal file

Binary file not shown.


293
lollms/binding.py Normal file

@ -0,0 +1,293 @@
######
# Project : GPT4ALL-UI
# File : binding.py
# Author : ParisNeo with the help of the community
# Supported by Nomic-AI
# license : Apache 2.0
# Description :
# This is an interface class for GPT4All-ui bindings.
######
from pathlib import Path
from typing import Callable
from lollms.helpers import BaseConfig, ASCIIColors
from lollms.paths import LollmsPaths
import inspect
import yaml
import sys
from tqdm import tqdm
import urllib.request
import importlib
import shutil
__author__ = "parisneo"
__github__ = "https://github.com/ParisNeo/lollms_bindings_zoo"
__copyright__ = "Copyright 2023, "
__license__ = "Apache 2.0"