mirror of
https://github.com/ParisNeo/lollms.git
synced 2024-12-22 14:02:27 +00:00
Create README
Create README.md upgraded readme upgraded upgraded Added an example of a console chat Upgraded code upgraded submodule upgraded example upgraded console application Added some logo and readme upgraded upgraded updated updated changed logo upgraded upgrade upgraded upgraded upgraded version Added console app upgraded code and service information changed documentation title upgraded code updated zoo Upgraded logo upgradded Update server_endpoints.md Update README.md Update server_endpoints.md Enhanced code enhanced work + added training fixed error in README upgraded readme Fixed console problem enhanced code Added reference to models upgraded version Update README.md upgraded binding Update README.md enhanced server upgraded console and server upgraded tool upgraded upgraded Upgraded to new Version enhanced updated personalities zoo personalities_zoo upgraded readme Possibility to send files to personalities Possibility to send files to personalities upgraded code bugfix updated upgraded upgraded console updated readme version upgrade Update README.md Added menu build at startup change upgraded code now you select a personality of not selected upgraded upgraded documentation upgraded documentation updated Upgraded bugfix now you can build custom personalities updated. 
now we can use other personalities Bugfix added return changed colors added protection added back to personality installation bugfix typo fixed autogptq fixed autogptq gptq upgraded gptq changed version upgraded console typo Added send file updated send file upgraded personality upgraded image analysis tool updated upgraded version upgraded tool updated gpt4all is now working version update upgraded naming scheme hapen Upgraded path data upgraded version updated upgraded version upgraded install procedures personal path can be changed online upgraded chatgpt upgraded upgraded updated version bugfix upgraded personalities upgraded version enhanced enhanced update bugfix version update Added reset functionality Added settings upgraded enhanced library upgraded models Upgraded upgraded rebased upgraded code fixed gpt4all updated version
This commit is contained in:
parent
a65750e5fc
commit
61a4f15109
21
.gitignore
vendored
@@ -158,3 +158,24 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Custom stuff
.installed

shared/*
*.ckpt
*.safetensors

models

# rest tests
*.http

# shared resources
shared
src
temp
outputs

# Global path configuration
global_paths_cfg.yaml
16
.gitmodules
vendored
Normal file
@@ -0,0 +1,16 @@
[submodule "lollms/bindings_zoo"]
	path = lollms/bindings_zoo
	url = https://github.com/ParisNeo/lollms_bindings_zoo.git
	branch = main
[submodule "lollms/personalities_zoo"]
	path = lollms/personalities_zoo
	url = https://github.com/ParisNeo/lollms_personalities_zoo.git
	branch = main
[submodule "lollms/bindings_zoo"]
	path = lollms/bindings_zoo
	url = https://github.com/ParisNeo/lollms_bindings_zoo.git
	branch = main
[submodule "lollms/personalities_zoo"]
	path = lollms/personalities_zoo
	url = https://github.com/ParisNeo/lollms_personalities_zoo.git
	branch = main
3
.vscode/settings.json
vendored
Normal file
@@ -0,0 +1,3 @@
{
    "ros.distro": "noetic"
}
9
MANIFEST.in
Normal file
@@ -0,0 +1,9 @@
recursive-include lollms/configs *
recursive-include lollms/bindings_zoo *
recursive-include lollms/personalities_zoo *
global-exclude *.bin
global-exclude *.pyc
global-exclude local_config.yaml
global-exclude .installed
global-exclude .git
global-exclude .gitignore
273
README.md
Normal file
@@ -0,0 +1,273 @@
# Lord of Large Language Models (LoLLMs)

<div align="center">
  <img src="https://github.com/ParisNeo/lollms/blob/main/lollms/assets/logo.png" alt="Logo" width="200" height="200">
</div>

![GitHub license](https://img.shields.io/github/license/ParisNeo/lollms)
![GitHub issues](https://img.shields.io/github/issues/ParisNeo/lollms)
![GitHub stars](https://img.shields.io/github/stars/ParisNeo/lollms)
![GitHub forks](https://img.shields.io/github/forks/ParisNeo/lollms)
[![Discord](https://img.shields.io/discord/1092918764925882418?color=7289da&label=Discord&logo=discord&logoColor=ffffff)](https://discord.gg/4rR282WJb6)
[![Follow me on Twitter](https://img.shields.io/twitter/follow/SpaceNerduino?style=social)](https://twitter.com/SpaceNerduino)
[![Follow Me on YouTube](https://img.shields.io/badge/Follow%20Me%20on-YouTube-red?style=flat&logo=youtube)](https://www.youtube.com/user/Parisneo)

Lord of Large Language Models (LoLLMs) Server is a text generation server based on large language models. It provides a Flask-based API for generating text using various pre-trained language models. This server is designed to be easy to install and use, allowing developers to integrate powerful text generation capabilities into their applications.
## Features

- Fully integrated library with access to bindings, personalities, and helper tools.
- Generate text using large language models.
- Supports multiple personalities for generating text with different styles and tones.
- Real-time text generation with WebSocket-based communication.
- RESTful API for listing personalities and adding new personalities.
- Easy integration with various applications and frameworks.
- Possibility to send files to personalities.
## Installation

You can install LoLLMs using pip, the Python package manager. Open your terminal or command prompt and run the following command:

```bash
pip install --upgrade lollms
```

Or, if you want the latest version directly from git:

```bash
pip install --upgrade git+https://github.com/ParisNeo/lollms.git
```

To configure your environment, simply run the console app:

```bash
lollms-console
```
The first time you run it, you will be prompted to select a binding.

![image](https://github.com/ParisNeo/lollms/assets/827993/2d7f58fe-089d-4d3e-a21a-0609f8e27969)

Once the binding is selected, you have to install at least one model. You have two options:

1. Install from the internet: just give the link to a model on Hugging Face. For example, if you select the default llamacpp python binding (7), you can install this model:

```bash
https://huggingface.co/TheBloke/airoboros-7b-gpt4-GGML/resolve/main/airoboros-7b-gpt4.ggmlv3.q4_1.bin
```

2. Install from a local drive: just give the path to a model on your PC. The model will not be copied; only a reference to it is created. This is useful if you use multiple clients, as it lets you share your models with other tools.

Now you are ready to use the server.
## Library example

Here is the smallest possible example that lets you use the full potential of the tool with nearly no code:

```python
from lollms.console import Conversation

cv = Conversation(None)
cv.start_conversation()
```
Now you can override the `start_conversation` method to do whatever you want:

```python
from lollms.console import Conversation


class MyConversation(Conversation):
    def __init__(self, cfg=None):
        super().__init__(cfg, show_welcome_message=False)

    def start_conversation(self):
        prompt = "Once upon a time"

        def callback(text, type=None):
            print(text, end="", flush=True)
            return True

        print(prompt, end="", flush=True)
        output = self.safe_generate(prompt, callback=callback)


if __name__ == '__main__':
    cv = MyConversation()
    cv.start_conversation()
```
Or, if you prefer, here is a full conversation tool written in a few lines:

```python
from lollms.console import Conversation


class MyConversation(Conversation):
    def __init__(self, cfg=None):
        super().__init__(cfg, show_welcome_message=False)

    def start_conversation(self):
        full_discussion = ""
        while True:
            prompt = input("You: ")
            if prompt == "exit":
                return
            if prompt == "menu":
                self.menu.main_menu()
            full_discussion += self.personality.user_message_prefix + prompt + self.personality.link_text
            full_discussion += self.personality.ai_message_prefix

            def callback(text, type=None):
                print(text, end="", flush=True)
                return True

            print(self.personality.name + ": ", end="", flush=True)
            output = self.safe_generate(full_discussion, callback=callback)
            full_discussion += output.strip() + self.personality.link_text
            print()


if __name__ == '__main__':
    cv = MyConversation()
    cv.start_conversation()
```

Here we use the `safe_generate` method, which does all the context cropping for you, so you can chat forever and never run out of context.
## Socket IO Server Usage

Once installed, you can start the LoLLMs Server using the `lollms-server` command followed by the desired parameters.

```bash
lollms-server --host <host> --port <port> --config <config_file> --bindings_path <bindings_path> --personalities_path <personalities_path> --models_path <models_path> --binding_name <binding_name> --model_name <model_name> --personality_full_name <personality_full_name>
```
### Parameters

- `--host`: The hostname or IP address to bind the server to (default: localhost).
- `--port`: The port number to run the server on (default: 9600).
- `--config`: Path to the configuration file (default: None).
- `--bindings_path`: The path to the bindings folder (default: "./bindings_zoo").
- `--personalities_path`: The path to the personalities folder (default: "./personalities_zoo").
- `--models_path`: The path to the models folder (default: "./models").
- `--binding_name`: The default binding to be used (default: "llama_cpp_official").
- `--model_name`: The default model name (default: "Manticore-13B.ggmlv3.q4_0.bin").
- `--personality_full_name`: The full name of the default personality (default: "personality").
### Examples

Start the server with default settings:

```bash
lollms-server
```

Start the server on a specific host and port:

```bash
lollms-server --host 0.0.0.0 --port 5000
```
## API Endpoints

### WebSocket Events

- `connect`: Triggered when a client connects to the server.
- `disconnect`: Triggered when a client disconnects from the server.
- `list_personalities`: List all available personalities.
- `add_personality`: Add a new personality to the server.
- `generate_text`: Generate text based on the provided prompt and selected personality.

For more details, refer to the [API documentation](doc/server_endpoints.md).
### RESTful API

- `GET /personalities`: List all available personalities.
- `POST /personalities`: Add a new personality to the server.
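As a rough sketch, these REST endpoints can be called from Python. This assumes the default server address; the `{"path": ...}` request body mirrors the Socket.io `add_personality` event but is an assumption here, since the source does not specify the POST schema:

```python
import json
from urllib.parse import urljoin

BASE_URL = "http://localhost:9600"  # default server address


def personalities_url(base=BASE_URL):
    # Full URL of the personalities endpoint
    return urljoin(base, "/personalities")


def build_add_personality_payload(path):
    # JSON body for POST /personalities; the exact schema is an
    # assumption, modeled on the Socket.io 'add_personality' event
    return json.dumps({"path": path})


def run_rest_demo():
    # Live usage (requires the `requests` package and a running server);
    # defined but not called so the sketch stays self-contained.
    import requests
    print(requests.get(personalities_url()).json())
    requests.post(
        personalities_url(),
        data=build_add_personality_payload("personalities/english/generic/lollms"),
        headers={"Content-Type": "application/json"},
    )
```

The personality path used above is purely illustrative.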
Here are examples of how to communicate with the LoLLMs Server using JavaScript and Python.
### JavaScript Example

```javascript
// Establish a WebSocket connection with the server
const socket = io.connect('http://localhost:9600');

// Event: When connected to the server
socket.on('connect', () => {
  console.log('Connected to the server');

  // Request the list of available personalities
  socket.emit('list_personalities');
});

// Event: Receive the list of personalities from the server
socket.on('personalities_list', (data) => {
  const personalities = data.personalities;
  console.log('Available Personalities:', personalities);

  // Select a personality and send a text generation request
  const selectedPersonality = personalities[0];
  const prompt = 'Once upon a time...';
  socket.emit('generate_text', { personality: selectedPersonality, prompt: prompt });
});

// Event: Receive the generated text from the server
socket.on('text_generated', (data) => {
  const generatedText = data.text;
  console.log('Generated Text:', generatedText);

  // Do something with the generated text
});

// Event: When disconnected from the server
socket.on('disconnect', () => {
  console.log('Disconnected from the server');
});
```
### Python Example

```python
import socketio

# Create a SocketIO client
sio = socketio.Client()


# Event: When connected to the server
@sio.on('connect')
def on_connect():
    print('Connected to the server')

    # Request the list of available personalities
    sio.emit('list_personalities')


# Event: Receive the list of personalities from the server
@sio.on('personalities_list')
def on_personalities_list(data):
    personalities = data['personalities']
    print('Available Personalities:', personalities)

    # Select a personality and send a text generation request
    selected_personality = personalities[0]
    prompt = 'Once upon a time...'
    sio.emit('generate_text', {'personality': selected_personality, 'prompt': prompt})


# Event: Receive the generated text from the server
@sio.on('text_generated')
def on_text_generated(data):
    generated_text = data['text']
    print('Generated Text:', generated_text)

    # Do something with the generated text


# Event: When disconnected from the server
@sio.on('disconnect')
def on_disconnect():
    print('Disconnected from the server')


# Connect to the server
sio.connect('http://localhost:9600')

# Keep the client running
sio.wait()
```
Make sure to have the necessary dependencies installed for the JavaScript and Python examples. For JavaScript, you need the `socket.io-client` package, and for Python, you need the `python-socketio` package.

## Contributing

Contributions to the LoLLMs Server project are welcome and appreciated. If you would like to contribute, please follow the guidelines outlined in the [CONTRIBUTING.md](https://github.com/ParisNeo/lollms/blob/main/CONTRIBUTING.md) file.

## License

LoLLMs Server is licensed under the Apache 2.0 License. See the [LICENSE](https://github.com/ParisNeo/lollms/blob/main/LICENSE) file for more information.

## Repository

The source code for LoLLMs Server can be found on [GitHub](https://github.com/ParisNeo/lollms).
309
doc/server_endpoints.md
Normal file
@@ -0,0 +1,309 @@
# Lord Of Large Language Models Socket.io Endpoints Documentation

<img src="https://github.com/ParisNeo/lollms/blob/main/lollms/assets/logo.png" alt="Logo" width="200" height="200">

The server provides several Socket.io endpoints that clients can use to interact with the server. The default URL for the server is `http://localhost:9600`, but it can be changed using the configuration file or launch parameters.

## Endpoints
### `connect`
- Event: `'connect'`
- Description: This event is triggered when a client connects to the server.
- Actions:
  - Adds the client to the list of connected clients with a unique session ID.
  - Prints a message indicating the client's session ID.

### `disconnect`
- Event: `'disconnect'`
- Description: This event is triggered when a client disconnects from the server.
- Actions:
  - Removes the client from the list of connected clients, if it exists.
  - Prints a message indicating the client's session ID.
#### `list_available_bindings`
- Event: `'list_available_bindings'`
- Description: This event is triggered when a client requests a list of available bindings.
- Parameters: None
- Actions:
  - Initializes an empty list `binding_infs` to store information about each binding.
  - Iterates over the files and directories in the `self.bindings_path` directory.
  - For each directory in `self.bindings_path`:
    - Reads the content of the `binding_card.yaml` file, which contains information about the binding card.
    - Reads the content of the `models.yaml` file, which contains information about the models associated with the binding.
    - Creates an entry dictionary that includes the binding's name, card information, and model information.
    - Appends the entry to the `binding_infs` list.
  - Emits a response event `'bindings_list'` to the client containing the list of available bindings and their information (`bindings`), as well as a `success` parameter that is `False` when not successful.

Events generated:
- `'bindings_list'`: Sent to the client as a response to the `'list_available_bindings'` request. It contains the list of available bindings along with their associated information (`binding_infs`).
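A minimal client-side sketch of this round trip, assuming the `python-socketio` package and the payload shape described above (the `"name"` entry field is an assumption based on the entry description):

```python
def summarize_bindings(payload):
    # Pure helper: reduce a 'bindings_list' payload to binding names.
    if not payload.get("success", True):
        return []
    return [entry["name"] for entry in payload.get("bindings", [])]


def run_bindings_client(url="http://localhost:9600"):
    # Live usage (requires python-socketio and a running lollms-server);
    # defined here but not called, so the sketch stays self-contained.
    import socketio
    sio = socketio.Client()

    @sio.on("bindings_list")
    def on_bindings_list(data):
        print("Bindings:", summarize_bindings(data))
        sio.disconnect()

    sio.connect(url)
    sio.emit("list_available_bindings")
    sio.wait()
```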
#### `list_available_personalities`
- Event: `'list_available_personalities'`
- Description: This event is triggered when a client requests a list of available personalities.
- Parameters: None
- Actions:
  - Retrieves the path to the personalities folder from the server (`self.personalities_path`).
  - Initializes an empty dictionary to store the available personalities.
  - Iterates over each language folder in the personalities folder.
    - Checks if the current item is a directory.
    - Initializes an empty dictionary to store the personalities within the language.
    - Iterates over each category folder within the language folder.
      - Checks if the current item is a directory.
      - Initializes an empty list to store the personalities within the category.
      - Iterates over each personality folder within the category folder.
        - Checks if the current item is a directory.
        - Tries to load personality information from the config file (`config.yaml`) within the personality folder.
        - Retrieves the name, description, author, and version from the config data.
        - Checks if the `scripts` folder exists within the personality folder to determine if the personality has scripts.
        - Checks for the existence of a logo file named `logo.gif`, `logo.webp`, `logo.png`, `logo.jpg`, `logo.jpeg`, or `logo.bmp` within the `assets` folder to determine if the personality has a logo.
        - Sets the `avatar` field of the personality info based on the available logo file.
        - Appends the personality info to the list of personalities within the category.
      - Adds the list of personalities to the dictionary of the current category within the language.
    - Adds the dictionary of categories to the dictionary of the current language.
  - Sends a response to the client containing the dictionary of available personalities.

Events generated:
- `'personalities_list'`: Emits an event to the client with the list of available personalities, categorized by language and category. The event data includes personality information such as name, description, author, version, presence of scripts, and avatar image file path.
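The nested dictionary emitted by `'personalities_list'` can be walked like this (a sketch assuming the language → category → personality-list shape described above; the sample data is hypothetical):

```python
def flatten_personalities(tree):
    # Turn {language: {category: [personality_info, ...]}} into
    # (language, category, name) tuples for easy display.
    rows = []
    for language, categories in tree.items():
        for category, personalities in categories.items():
            for info in personalities:
                rows.append((language, category, info["name"]))
    return rows


# Hypothetical example of the payload shape:
sample = {"english": {"generic": [{"name": "lollms", "author": "ParisNeo"}]}}
```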
#### `list_available_models`
- Event: `'list_available_models'`
- Description: This event is triggered when a client requests a list of available models.
- Parameters: None
- Actions:
  - Checks if a binding class is selected. If not, emits an event `'available_models_list'` with a failure response indicating that no binding is selected.
  - Retrieves the list of available models using the binding class.
  - Processes each model in the list to extract relevant information such as filename, server, image URL, license, owner, owner link, filesize, description, model type, etc.
  - Constructs a dictionary representation for each model with the extracted information.
  - Appends each model dictionary to the `models` list.
  - Emits an event `'available_models_list'` with a success response containing the list of available models to the client.

Events generated:
- `'available_models_list'`: This event is emitted as a response to the client requesting a list of available models. It contains the success status and a list of available models with their details, such as title, icon, license, owner, owner link, description, installation status, file path, filesize, and model type.
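A small helper can filter that response down to installed models. The `"models"`, `"title"`, and `"isInstalled"` field names are assumptions based on the description above, not confirmed by the source:

```python
def installed_model_titles(payload):
    # Keep only installed models from an 'available_models_list' payload.
    # Field names here are assumptions based on the prose description.
    if not payload.get("success"):
        return []
    return [m["title"] for m in payload.get("models", []) if m.get("isInstalled")]
```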
#### `list_available_personalities_languages`
- Event: `'list_available_personalities_languages'`
- Description: This event is triggered when a client requests a list of available personality languages.
- Parameters: None
- Actions:
  - Attempts to retrieve a list of available personality languages by iterating over the `self.personalities_path` directory.
  - Sends a response to the client containing the success status and the list of available personality languages.

Events generated:
- `'available_personalities_languages_list'`: This event is emitted as a response to the client after listing the available personality languages.
  - Data:
    - `'success'` (boolean): Indicates whether the operation was successful or not.
    - `'available_personalities_languages'` (list): Contains the available personality languages as a list of strings.
#### `list_available_personalities_categories`
- Event: `'list_available_personalities_categories'`
- Description: This event is triggered when a client requests a list of available personality categories based on a specified language.
- Parameters:
  - `data`: A dictionary containing the following parameter:
    - `language`: The language for which to retrieve available personality categories.
- Actions:
  - Extracts the `language` parameter from the request data.
  - Attempts to retrieve the available personality categories for the specified language.
  - Emits an event `'available_personalities_categories_list'` to the client.
    - If successful, sends a response with a list of available personality categories in the `'available_personalities_categories'` field of the event data.
    - If an error occurs, sends a response with an error message in the `'error'` field of the event data.

Events generated:
- `'available_personalities_categories_list'`
  - Description: This event is emitted in response to the `list_available_personalities_categories` event.
  - Data:
    - If successful:
      - `'success'` (boolean): Indicates whether the retrieval of available personality categories was successful.
      - `'available_personalities_categories'` (list): A list of available personality categories.
    - If an error occurs:
      - `'success'` (boolean): Indicates that an error occurred during the retrieval of available personality categories.
      - `'error'` (string): The error message describing the encountered error.
#### `list_available_personalities_names`
- Event: `'list_available_personalities_names'`
- Description: This event is triggered when a client requests a list of available personality names based on the specified language and category.
- Parameters:
  - `language` (string): The language for which the available personality names are requested.
  - `category` (string): The category for which the available personality names are requested.
- Actions:
  - Extracts the `language` and `category` parameters from the request data.
  - Retrieves the list of available personalities by iterating over the directory specified by the `language` and `category` parameters.
  - Sends a response to the client containing the list of available personality names.

Events generated:
- `'list_available_personalities_names_list'`
  - Description: This event is emitted as a response to the `list_available_personalities_names` request, providing the list of available personality names.
  - Parameters:
    - `success` (bool): Indicates the success or failure of the request.
    - `list_available_personalities_names` (list): The list of available personality names.
    - `error` (string, optional): If the request fails, this parameter contains the error message.
#### `select_binding`
- Event: `'select_binding'`
- Description: This event is triggered when a client selects a binding.
- Parameters:
  - `data['binding_name']`: The name of the binding selected by the client.
- Actions:
  - Creates a deep copy of the `self.config` dictionary and assigns it to the `self.cp_config` variable.
  - Updates the `"binding_name"` value in `self.cp_config` with the selected binding name obtained from `data['binding_name']`.
  - Attempts to build a binding instance using `self.bindings_path` and `self.cp_config`.
  - If successful, updates `self.binding_class` with the created binding instance and updates `self.config` with `self.cp_config`.
  - Sends a response to the client indicating the success of the binding selection along with the selected binding name.
  - If an exception occurs during the binding creation process, the exception is printed and a response is sent to the client indicating the failure of the binding selection, along with the selected binding name and the error message.

Events generated:
- `'select_binding'`: This event is emitted to the client to provide a response regarding the binding selection. It contains the following data:
  - `'success'`: A boolean value indicating the success or failure of the binding selection.
  - `'binding_name'`: The name of the selected binding.
  - If the binding selection fails, it also includes:
    - `'error'`: An error message explaining the reason for the failure.
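The deep-copy-then-commit pattern described above can be sketched as follows. This is illustrative, not the server's actual code; the real server also builds the binding instance before committing the candidate config:

```python
import copy


def apply_binding_selection(config, binding_name):
    # Deep-copy the config so the original stays untouched until the
    # binding is successfully built, then set the new binding name.
    cp_config = copy.deepcopy(config)
    cp_config["binding_name"] = binding_name
    return cp_config
```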
#### `select_model`
- Event: `'select_model'`
- Description: This event is triggered when a client requests to select a model.
- Parameters:
  - `data['model_name']` (string): The name of the model to select.
- Actions:
  - Extracts the model name from the request data.
  - Checks if a binding class is available (`self.binding_class`).
    - If no binding class is available, emits a `'select_model'` event with a failure response indicating that a binding needs to be selected first, then returns.
  - Creates a deep copy of the configuration (`self.config`) and assigns it to `self.cp_config`.
  - Sets the `"model_name"` property of `self.cp_config` to the selected model name.
  - Tries to create an instance of the binding class (`self.binding_class`) with `self.cp_config`.
    - If successful, assigns the created binding instance to `self.current_model` and emits a `'select_model'` event with a success response, then returns.
    - If an exception occurs during model creation, prints the exception and emits a `'select_model'` event with a failure response.

Events generated:
- `'select_model'` (success response):
  - Emits to the client a success response indicating that the model selection was successful.
  - Parameters:
    - `'success'` (boolean): `True` to indicate success.
    - `'model_name'` (string): The selected model name.
- `'select_model'` (failure response):
  - Emits to the client a failure response indicating that a binding needs to be selected first or that an error occurred during model creation.
  - Parameters:
    - `'success'` (boolean): `False` to indicate failure.
    - `'model_name'` (string): The selected model name.
    - `'error'` (string): An error message providing additional details.
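On the client side, the two response shapes can be told apart with a small helper (a sketch based only on the fields listed above):

```python
def handle_select_model_response(data):
    # Return the model name on success; raise with the server's error
    # message on failure.
    if data.get("success"):
        return data["model_name"]
    raise RuntimeError(data.get("error", "model selection failed"))
```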
#### `add_personality`

- Event: `'add_personality'`
- Description: This event is triggered when a client requests to add a new personality.
- Parameters:
  - `data`: A dictionary containing the following key-value pairs:
    - `'path'`: The path to the personality file.
- Actions:
  - Extracts the personality path from the `data` dictionary.
  - Attempts to create a new `AIPersonality` instance with the provided path.
  - Appends the created personality to the `self.personalities` list.
  - Appends the personality path to the `self.config["personalities"]` list.
  - Saves the updated configuration using `self.config.save_config()`.
  - Sends a response to the client indicating the success of the personality addition, along with the name and ID of the added personality.
- Events generated:
  - `'personality_added'`: Emitted to the client to indicate the successful addition of the personality. The emitted data is a dictionary with the following key-value pairs:
    - `'success'`: `True` to indicate success.
    - `'name'`: The name of the added personality.
    - `'id'`: The ID of the added personality in the `self.personalities` list.
  - `'personality_add_failed'`: Emitted to the client if an exception occurs during the personality addition. The emitted data is a dictionary with the following key-value pairs:
    - `'success'`: `False` to indicate failure.
    - `'error'`: A string containing the error message explaining the cause of the failure.

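A minimal client-side sketch for this endpoint, again using `python-socketio` (the server URL and the personality path are placeholders; the event and key names follow the description above):

```python
def build_add_personality_payload(path):
    """Payload for 'add_personality'; the server reads data['path']."""
    return {"path": path}

def on_personality_added(data):
    return f"Added personality '{data['name']}' with id {data['id']}"

def on_personality_add_failed(data):
    return f"Could not add personality: {data['error']}"

def main(url="http://localhost:9600", personality_path="path/to/personality"):
    import socketio  # deferred import keeps the helpers above dependency-free
    sio = socketio.Client()
    sio.on("personality_added", lambda d: print(on_personality_added(d)))
    sio.on("personality_add_failed", lambda d: print(on_personality_add_failed(d)))
    sio.connect(url)
    sio.emit("add_personality", build_add_personality_payload(personality_path))
    sio.wait()

if __name__ == "__main__":
    main()
```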
#### `list_active_personalities`

- Event: `'list_active_personalities'`
- Description: This event is triggered when a client requests the list of active personalities.
- Parameters: None
- Actions:
  - Retrieves the names of all active personalities from the `self.personalities` list.
  - Sends a response to the client containing the list of active personality names.
- Event generated: `'active_personalities_list'`
  - Event data:
    - `'success'`: A boolean value indicating the success of the operation.
    - `'personalities'`: A list of strings representing the names of the active personalities.

Note that the `'list_active_personalities'` event does not require any parameters. It simply returns the list of active personalities to the client.

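The exchange above can be sketched as follows with `python-socketio` (the server URL is a placeholder; the event names and reply keys follow the description):

```python
def parse_active_personalities(data):
    """Extract personality names from an 'active_personalities_list' reply."""
    if not data.get("success", True):
        return []
    return list(data.get("personalities", []))

def main(url="http://localhost:9600"):
    import socketio  # deferred import; the parser above needs no dependencies

    sio = socketio.Client()

    @sio.on("active_personalities_list")
    def handle(data):
        # Print each personality with its index, which doubles as its ID
        for i, name in enumerate(parse_active_personalities(data)):
            print(f"{i}: {name}")
        sio.disconnect()

    sio.connect(url)
    sio.emit("list_active_personalities")  # no parameters required
    sio.wait()

if __name__ == "__main__":
    main()
```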
#### `activate_personality`

- Event: `'activate_personality'`
- Description: This event is triggered when a client requests to activate a personality.
- Parameters:
  - `data['id']` (integer): The ID of the personality to activate.
- Actions:
  - Extracts the personality ID from the request data.
  - Checks that the personality ID is valid by comparing it with the length of the `self.personalities` list.
  - If the personality ID is valid:
    - Sets `self.active_personality` to the personality at the specified ID.
    - Sends a response to the client indicating the success of the personality activation, along with the name and ID of the activated personality.
    - Updates the `active_personality_id` in the `self.config` object with the activated personality's ID.
    - Saves the updated configuration.
  - If the personality ID is not valid:
    - Sends a response to the client indicating the failure of the personality activation, along with an error message.

Events generated:

- `'activate_personality'`: Emitted to the client after successfully activating a personality.
  - Parameters:
    - `'success'` (boolean): Indicates whether the personality activation was successful.
    - `'name'` (string): The name of the activated personality.
    - `'id'` (integer): The ID of the activated personality.
- `'personality_add_failed'`: Emitted to the client if the provided personality ID is not valid.
  - Parameters:
    - `'success'` (boolean): `False`, indicating that the activation failed.
    - `'error'` (string): The error message indicating the reason for the failure, e.g. `'Personality ID not valid'`.

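A hedged client-side sketch for this endpoint (server URL and personality ID are placeholders; note that a failed activation comes back on the `'personality_add_failed'` event, as described above):

```python
def build_activate_payload(personality_id):
    """Payload for 'activate_personality'; the server reads data['id']."""
    return {"id": int(personality_id)}

def describe_activation(data):
    """Summarize either an 'activate_personality' or a 'personality_add_failed' reply."""
    if data.get("success"):
        return f"Activated '{data['name']}' (id {data['id']})"
    return f"Activation failed: {data.get('error', 'Personality ID not valid')}"

def main(url="http://localhost:9600", personality_id=0):
    import socketio  # deferred import keeps the helpers above dependency-free
    sio = socketio.Client()
    sio.on("activate_personality", lambda d: print(describe_activation(d)))
    sio.on("personality_add_failed", lambda d: print(describe_activation(d)))
    sio.connect(url)
    sio.emit("activate_personality", build_activate_payload(personality_id))
    sio.wait()

if __name__ == "__main__":
    main()
```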
#### `generate_text`

- Event: `'generate_text'`
- Description: This event is triggered when a client requests text generation.
- Parameters:
  - `data`: A dictionary containing the following fields:
    - `prompt` (string): The text prompt for text generation.
    - `personality` (integer): The index of the selected personality used to condition the text generation.
- Actions:
  - Retrieves the selected model and client ID from the server.
  - Extracts the prompt and the selected personality index from the request data.
  - Initializes an empty answer list for text chunks.
  - Retrieves the full discussion blocks from the client's data.
  - Defines a callback function to handle generated text chunks.
  - Preprocesses the prompt based on the selected personality's configuration, if applicable.
  - Constructs the full discussion text by combining the personality's conditioning, the prompt, and the AI message prefix.
  - Prints the input prompt for debugging purposes.
  - If a personality processor is available and has a custom workflow, runs the processor's workflow with the prompt and the full discussion text, passing the callback function for text-chunk emission.
  - If no custom workflow is available, generates text using the selected model with the full discussion text, specifying the number of predictions.
  - Appends the generated text to the full discussion blocks.
  - Prints a success message for debugging purposes.
  - Emits the generated text to the client through the `'text_generated'` event.

Events generated:

- `'text_chunk'`: Generated text chunks are emitted to the client through this event during the text generation process.
- `'text_generated'`: Once the text generation process is complete, the final generated text is emitted to the client through this event.
#### `examples/chat_forever/console.py` (new file)

```python
from lollms.console import Conversation


class MyConversation(Conversation):
    def __init__(self, cfg=None):
        super().__init__(cfg, show_welcome_message=False)

    def start_conversation(self):
        full_discussion = ""
        while True:
            prompt = input("You: ")
            if prompt == "exit":
                return
            if prompt == "menu":
                self.menu.main_menu()
            full_discussion += self.personality.user_message_prefix + prompt + self.personality.link_text
            full_discussion += self.personality.ai_message_prefix

            def callback(text, type=None):
                print(text, end="", flush=True)
                return True

            print(self.personality.name + ": ", end="", flush=True)
            output = self.safe_generate(full_discussion, callback=callback)
            full_discussion += output.strip() + self.personality.link_text
            print()


if __name__ == '__main__':
    cv = MyConversation()
    cv.start_conversation()
```
#### `examples/lllm_qt_client/README.md` (new file)

# AIPersonality Server and PyQt Client

This is a Python project that consists of a server and a PyQt client for interacting with the AIPersonality text generation model. The server is built using Flask and Flask-SocketIO, while the client is implemented using PyQt5.

## Server

The server code is located in the file `lllm_server.py`. It sets up a Flask application with Flask-SocketIO to establish a WebSocket connection with clients. The server receives text generation requests from clients, generates text based on the given prompt, and sends the generated text back to the clients.

To run the server, execute the following command:

```bash
python server.py --host localhost --port 9600 --config configs/config.yaml --bindings_path bindings_zoo
```

You can customize the host, port, configuration file, and bindings path by providing the appropriate command-line arguments.

## Client

The client code is implemented using PyQt5 and can be found in the file `client.py`. It provides a graphical user interface (GUI) for interacting with the server. The client connects to the server using WebSocket and allows users to enter a prompt and generate text based on that prompt.

To run the client, execute the following command:

```bash
python client.py
```

The client GUI will appear, and you can enter a prompt in the text area. Click the "Generate Text" button to send the prompt to the server for text generation. The generated text will be displayed in the text area.

Make sure you have the necessary dependencies installed, such as Flask, Flask-SocketIO, Flask-CORS, pyaipersonality, and PyQt5, before running the server and client.

## Dependencies

The project depends on the following Python packages:

- Flask
- Flask-SocketIO
- Flask-CORS
- pyaipersonality
- PyQt5

You can install the dependencies using pip:

```bash
pip install flask flask-socketio flask-cors pyaipersonality pyqt5
```

# License

PyAIPersonality is licensed under the Apache 2.0 license. See the `LICENSE` file for more information.
#### `examples/lllm_qt_client/assets/connected.svg` (new file, 913 B)

```xml
<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 50 50">
<path d="M 44 1.59375 L 33.5625 12 L 31.3125 9.75 C 28.9695 7.41 25.18675 7.41 22.84375 9.75 L 18.5 14.125 L 17.1875 12.8125 A 1.0001 1.0001 0 0 0 16.375 12.5 A 1.0001 1.0001 0 0 0 15.78125 14.21875 L 35.78125 34.21875 A 1.0001 1.0001 0 1 0 37.1875 32.8125 L 35.875 31.5 L 40.25 27.15625 C 42.594 24.81425 42.592 21.0315 40.25 18.6875 L 40.25 18.65625 L 38 16.40625 L 48.40625 6 L 44 1.59375 z M 13.40625 15.46875 A 1.0001 1.0001 0 0 0 12.8125 17.1875 L 14.125 18.5 L 9.75 22.84375 C 7.406 25.18575 7.408 28.99975 9.75 31.34375 L 12 33.59375 L 1.59375 44 L 6 48.40625 L 16.40625 38 L 18.65625 40.25 C 20.99925 42.59 24.81325 42.59 27.15625 40.25 L 31.5 35.875 L 32.8125 37.1875 A 1.0001 1.0001 0 1 0 34.21875 35.78125 L 14.21875 15.78125 A 1.0001 1.0001 0 0 0 13.5 15.46875 A 1.0001 1.0001 0 0 0 13.40625 15.46875 z"/>
</svg>
```
#### `examples/lllm_qt_client/assets/disconnected.svg` (new file, 1.2 KiB)

```xml
<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 50 50">
<path style="text-indent:0;text-align:start;line-height:normal;text-transform:none;block-progression:tb;-inkscape-font-specification:Sans" d="M 43.6875 2 L 38.65625 7.0625 L 36.34375 4.75 C 34.00075 2.41 30.18675 2.41 27.84375 4.75 L 23.03125 9.59375 L 21.71875 8.28125 A 1.0001 1.0001 0 0 0 20.78125 8 A 1.0001 1.0001 0 0 0 20.28125 9.71875 L 25.0625 14.5 L 18.9375 20.65625 L 20.34375 22.0625 L 26.5 15.9375 L 34.0625 23.5 L 27.9375 29.65625 L 29.34375 31.0625 L 35.5 24.9375 L 40.28125 29.71875 A 1.016466 1.016466 0 1 0 41.71875 28.28125 L 40.40625 26.96875 L 45.25 22.15625 C 47.594 19.81425 47.592 16.0315 45.25 13.6875 L 45.25 13.65625 L 42.9375 11.34375 L 48 6.3125 L 43.6875 2 z M 8.90625 19.96875 A 1.0001 1.0001 0 0 0 8.78125 20 A 1.0001 1.0001 0 0 0 8.28125 21.71875 L 9.59375 23.03125 L 4.75 27.84375 C 2.406 30.18575 2.408 33.99975 4.75 36.34375 L 7.0625 38.625 L 2 43.6875 L 6.3125 48 L 11.375 42.9375 L 13.65625 45.25 C 15.99925 47.59 19.81325 47.59 22.15625 45.25 L 26.96875 40.40625 L 28.28125 41.71875 A 1.016466 1.016466 0 1 0 29.71875 40.28125 L 9.71875 20.28125 A 1.0001 1.0001 0 0 0 8.90625 19.96875 z" overflow="visible" font-family="Sans"/>
</svg>
```
#### `examples/lllm_qt_client/lllm_qt_client.py` (new file)

```python
import sys
from PyQt5.QtGui import QIcon
from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot
from PyQt5.QtWidgets import QApplication, QMainWindow, QTextEdit, QHBoxLayout, QLineEdit, QVBoxLayout, QWidget, QToolBar, QAction, QPushButton, QStatusBar, QComboBox
from PyQt5.QtSvg import QSvgWidget
from socketio.client import Client
from socketio.exceptions import ConnectionError
from pathlib import Path


class ServerConnector(QObject):
    text_chunk_received = pyqtSignal(str)
    text_generated = pyqtSignal(str)
    connection_failed = pyqtSignal()
    connection_status_changed = pyqtSignal(bool)
    personalities_received = pyqtSignal(list)

    def __init__(self, parent=None):
        super(ServerConnector, self).__init__(parent)
        self.socketio = Client()
        self.connected = False
        self.personalities = []
        self.selected_personality_id = 0

        self.socketio.on('connect', self.handle_connect)
        self.socketio.on('text_chunk', self.handle_text_chunk)
        self.socketio.on('text_generated', self.handle_text_generated)
        self.socketio.on('active_personalities_list', self.handle_personalities_received)

    def handle_connect(self):
        # Request the list of active personalities as soon as the connection is up
        self.list_personalities()

    def connect_to_server(self):
        if not self.connected:
            try:
                self.socketio.connect('http://localhost:9600')
                self.connected = True
                self.connection_status_changed.emit(True)
            except ConnectionError:
                self.connection_failed.emit()
                self.connection_status_changed.emit(False)

    def disconnect_from_server(self):
        if self.connected:
            self.socketio.disconnect()
            self.connected = False
            self.connection_status_changed.emit(False)

    def list_personalities(self):
        self.socketio.emit('list_active_personalities')

    @pyqtSlot(str)
    def generate_text(self, prompt):
        if not self.connected:
            self.connection_failed.emit()
            return

        data = {
            'client_id': self.socketio.sid,
            'prompt': prompt,
            'personality': self.selected_personality_id
        }
        self.socketio.emit('generate_text', data)

    def handle_personalities_list(self, data):
        personalities = data['personalities']
        self.personalities_received.emit(personalities)

    def handle_text_chunk(self, data):
        chunk = data['chunk']
        self.text_chunk_received.emit(chunk)

    def handle_text_generated(self, data):
        text = data['text']
        self.text_generated.emit(text)

    def handle_personalities_received(self, data):
        personalities = data['personalities']
        print(f"Received list of personalities: {personalities}")
        self.personalities = personalities
        self.personalities_received.emit(personalities)


class MainWindow(QMainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        self.setWindowTitle("AIPersonality Client")

        self.user_input_layout = QHBoxLayout()
        self.user_text = QLineEdit()
        self.text_edit = QTextEdit()
        self.toolbar = QToolBar()
        self.submit_button = QPushButton("Submit")
        self.user_input_layout.addWidget(self.user_text)
        self.user_input_layout.addWidget(self.submit_button)

        self.statusbar = QStatusBar()
        self.personality_combo_box = QComboBox()
        self.personality_combo_box.setMinimumWidth(500)

        self.connect_action = QAction(QIcon(str(Path(__file__).parent/'assets/connected.svg')), "", self)
        self.connect_action.setCheckable(True)
        self.connect_action.toggled.connect(self.toggle_connection)

        self.toolbar.addAction(self.connect_action)
        self.toolbar.addWidget(self.personality_combo_box)
        self.addToolBar(self.toolbar)

        layout = QVBoxLayout()
        layout.addLayout(self.user_input_layout)
        layout.addWidget(self.text_edit)

        widget = QWidget()
        widget.setLayout(layout)
        self.setCentralWidget(widget)

        self.connector = ServerConnector()
        self.connector.text_chunk_received.connect(self.handle_text_chunk)
        self.connector.text_generated.connect(self.handle_text_generated)
        self.connector.connection_failed.connect(self.handle_connection_failed)
        self.connector.connection_status_changed.connect(self.handle_connection_status_changed)
        self.connector.personalities_received.connect(self.handle_personalities_received)
        self.connector.connect_to_server()

        self.submit_button.clicked.connect(self.submit_text)

        self.setStatusBar(self.statusbar)
        self.update_statusbar()

    @pyqtSlot(bool)
    def toggle_connection(self, checked):
        if checked:
            self.connector.connect_to_server()
            self.connect_action.setIcon(QIcon(str(Path(__file__).parent/'assets/connected.svg')))
        else:
            self.connector.disconnect_from_server()
            self.connect_action.setIcon(QIcon(str(Path(__file__).parent/'assets/disconnected.svg')))

    @pyqtSlot()
    def submit_text(self):
        prompt = self.user_text.text()
        # Keep the connector's personality selection in sync with the combo box
        self.connector.selected_personality_id = self.personality_combo_box.currentIndex()
        self.text_edit.insertPlainText("User:" + prompt + "\n" + self.connector.personalities[self.connector.selected_personality_id] + ":")
        self.connector.generate_text(prompt)

    @pyqtSlot(str)
    def handle_text_chunk(self, chunk):
        self.text_edit.insertPlainText(chunk)

    @pyqtSlot(str)
    def handle_text_generated(self, text):
        self.text_edit.append(text)

    @pyqtSlot()
    def handle_connection_failed(self):
        self.text_edit.append("Failed to connect to the server.")

    @pyqtSlot(bool)
    def handle_connection_status_changed(self, connected):
        if connected:
            self.statusbar.showMessage("Connected to the server")
        else:
            self.statusbar.showMessage("Disconnected from the server")

    @pyqtSlot(list)
    def handle_personalities_received(self, personalities):
        print("Received personalities")
        self.personality_combo_box.clear()
        self.personality_combo_box.addItems(personalities)

    def update_statusbar(self):
        if self.connector.connected:
            self.statusbar.showMessage("Connected to the server")
            self.connect_action.setIcon(QIcon(str(Path(__file__).parent/'assets/connected.svg')))
        else:
            self.statusbar.showMessage("Disconnected from the server")
            self.connect_action.setIcon(QIcon(str(Path(__file__).parent/'assets/disconnected.svg')))


if __name__ == '__main__':
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())
```
#### `examples/lllm_qt_client/requirements.txt` (new file)

```text
Flask_SocketIO==5.3.4
PyQt5==5.15.9
python-socketio[client]
```
#### `examples/simple_story/console.py` (new file)

```python
from lollms.console import Conversation


class MyConversation(Conversation):
    def __init__(self, cfg=None):
        super().__init__(cfg, show_welcome_message=False)

    def start_conversation(self):
        prompt = "Once upon a time"

        def callback(text, type=None):
            print(text, end="", flush=True)
            return True

        print(prompt, end="", flush=True)
        output = self.safe_generate(prompt, callback=callback)


if __name__ == '__main__':
    cv = MyConversation()
    cv.start_conversation()
```
#### `examples/vujs_web_ui/lollms_webui/.gitignore` (new file)

```text
.DS_Store
node_modules
/dist


# local env files
.env.local
.env.*.local

# Log files
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*

# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
```
#### `examples/vujs_web_ui/lollms_webui/README.md` (new file)

# lollms_webui

## Project setup
```
npm install
```

### Compiles and hot-reloads for development
```
npm run serve
```

### Compiles and minifies for production
```
npm run build
```

### Lints and fixes files
```
npm run lint
```

### Customize configuration
See [Configuration Reference](https://cli.vuejs.org/config/).
#### `examples/vujs_web_ui/lollms_webui/babel.config.js` (new file)

```js
module.exports = {
  presets: [
    '@vue/cli-plugin-babel/preset'
  ]
}
```
#### `examples/vujs_web_ui/lollms_webui/jsconfig.json` (new file)

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "esnext",
    "baseUrl": "./",
    "moduleResolution": "node",
    "paths": {
      "@/*": [
        "src/*"
      ]
    },
    "lib": [
      "esnext",
      "dom",
      "dom.iterable",
      "scripthost"
    ]
  }
}
```
#### `examples/vujs_web_ui/lollms_webui/package-lock.json` (generated file, 11,920 lines; contents omitted)
#### `examples/vujs_web_ui/lollms_webui/package.json` (new file)

```json
{
  "name": "lollms_webui",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "serve": "vue-cli-service serve",
    "build": "vue-cli-service build",
    "lint": "vue-cli-service lint"
  },
  "dependencies": {
    "core-js": "^3.8.3",
    "socket.io-client": "^4.6.2",
    "tailwindcss": "^3.3.2",
    "vue": "^3.2.13"
  },
  "devDependencies": {
    "@babel/core": "^7.12.16",
    "@babel/eslint-parser": "^7.12.16",
    "@vue/cli-plugin-babel": "~5.0.0",
    "@vue/cli-plugin-eslint": "~5.0.0",
    "@vue/cli-service": "~5.0.0",
    "eslint": "^7.32.0",
    "eslint-plugin-vue": "^8.0.3"
  },
  "eslintConfig": {
    "root": true,
    "env": {
      "node": true
    },
    "extends": [
      "plugin:vue/vue3-essential",
      "eslint:recommended"
    ],
    "parserOptions": {
      "parser": "@babel/eslint-parser"
    },
    "rules": {}
  },
  "browserslist": [
    "> 1%",
    "last 2 versions",
    "not dead",
    "not ie 11"
  ]
}
```
#### `examples/vujs_web_ui/lollms_webui/public/favicon.ico` (binary file, 4.2 KiB; not shown)
#### `examples/vujs_web_ui/lollms_webui/public/index.html` (new file)

```html
<!DOCTYPE html>
<html lang="">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <link rel="icon" href="<%= BASE_URL %>favicon.ico">
    <title><%= htmlWebpackPlugin.options.title %></title>
  </head>
  <body>
    <noscript>
      <strong>We're sorry but <%= htmlWebpackPlugin.options.title %> doesn't work properly without JavaScript enabled. Please enable it to continue.</strong>
    </noscript>
    <div id="app"></div>
    <!-- built files will be auto injected -->
  </body>
</html>
```
124
examples/vujs_web_ui/lollms_webui/src/App.vue
Normal file
124
examples/vujs_web_ui/lollms_webui/src/App.vue
Normal file
@ -0,0 +1,124 @@
|
|||||||
|
<template>
|
||||||
|
<div class="bg-gray-900 text-white min-h-screen p-4">
|
||||||
|
<h1 class="text-3xl font-bold mb-4">Lord Of Large Language Models</h1>
|
||||||
|
<div class="mb-4">
|
||||||
|
<h2 class="text-xl font-bold">Select Binding</h2>
|
||||||
|
<select v-model="selectedBinding" @change="selectBinding" class="p-2 bg-gray-800 text-white">
|
||||||
|
<option v-for="binding in bindings" :key="binding.name" :value="binding.name">{{ binding.name }}</option>
|
||||||
|
</select>
|
||||||
|
</div>
|
||||||
|
<div v-if="selectedBinding" class="mb-4">
|
||||||
|
<h2 class="text-xl font-bold">Select Model</h2>
|
||||||
|
<select v-model="selectedModel" @change="selectModel" class="p-2 bg-gray-800 text-white">
|
||||||
|
<option v-for="model in models" :key="model.title" :value="model.title">{{ model.title }}</option>
|
||||||
|
</select>
|
||||||
|
</div>
|
||||||
|
<div v-if="selectedModel" class="mb-4">
|
||||||
|
<h2 class="text-xl font-bold">Select Personality</h2>
|
||||||
|
<select v-model="selectedPersonality" @change="selectPersonality" class="p-2 bg-gray-800 text-white">
|
||||||
|
<option v-for="personality in personalities" :key="personality.name" :value="personality.name">{{ personality.name }}</option>
|
||||||
|
</select>
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<h2 class="text-xl font-bold">Chat</h2>
|
||||||
|
<div class="mb-4">
|
||||||
|
<div v-for="message in chatMessages" :key="message.id" class="text-white">
|
||||||
|
<strong>{{ message.sender }}:</strong> {{ message.text }}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<div class="flex">
|
||||||
|
<input type="text" v-model="inputMessage" @keydown.enter="sendMessage" placeholder="Type your message" class="p-2 flex-grow bg-gray-800 text-white mr-2">
|
||||||
|
        <button @click="sendMessage" class="p-2 bg-blue-500 text-white">Send</button>
      </div>
    </div>
  </div>
</template>

<style src="./assets/css/app.css"></style>

<script>
import io from 'socket.io-client';
// Import Tailwind CSS styles
import 'tailwindcss/tailwind.css';

export default {
  data() {
    return {
      socket: null,
      bindings: [],
      models: [],
      personalities: [],
      selectedBinding: '',
      selectedModel: '',
      selectedPersonality: '',
      chatMessages: [],
      inputMessage: '',
    };
  },
  created() {
    this.socket = io('http://localhost:9600');
    this.socket.on('connect', () => {
      console.log('Connected to server');
      this.socket.emit('list_available_bindings');
      this.socket.emit('list_available_models');
      this.socket.emit('list_available_personalities');
    });
    // Handle the acknowledgment emitted after select_binding is sent
    this.socket.on('select_binding', (data) => {
      console.log('Received:', data);
      if (data["success"]) {
        console.log('Binding selected:', data);
        this.socket.emit('list_available_models');
      }
      // You can perform any additional actions or update data properties as needed
    });
    // Handle the acknowledgment emitted after select_model is sent
    this.socket.on('select_model', (data) => {
      console.log('Received:', data);
      if (data["success"]) {
        console.log('Model selected:', data);
      }
      // You can perform any additional actions or update data properties as needed
    });

    this.socket.on('bindings_list', (bindings) => {
      this.bindings = bindings["bindings"];
      console.log(this.bindings);
    });

    this.socket.on('available_models_list', (models) => {
      if (models["success"]) {
        this.models = models["available_models"];
      }
      console.log(this.models);
    });

    this.socket.on('personalities_list', (personalities) => {
      this.personalities = personalities;
    });

    this.socket.on('text_chunk', (message) => {
      this.chatMessages.push(message.chunk);
    });
  },
  methods: {
    selectBinding() {
      this.socket.emit('select_binding', { binding_name: this.selectedBinding });
    },
    selectModel() {
      this.socket.emit('select_model', { model_name: this.selectedModel });
    },
    selectPersonality() {
      this.socket.emit('activate_personality', { personality_name: this.selectedPersonality });
    },
    sendMessage() {
      const message = {
        text: this.inputMessage,
        sender: 'User',
      };
      this.chatMessages.push(message);
      this.socket.emit('generate_text', { prompt: message.text, personality: 0 });
      this.inputMessage = '';
    },
  },
};
</script>

3  examples/vujs_web_ui/lollms_webui/src/assets/css/app.css  Normal file
@@ -0,0 +1,3 @@
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';

BIN  examples/vujs_web_ui/lollms_webui/src/assets/logo.png  Normal file
Binary file not shown.
After Width: | Height: | Size: 6.7 KiB
@@ -0,0 +1,58 @@
<template>
  <div class="hello">
    <h1>{{ msg }}</h1>
    <p>
      For a guide and recipes on how to configure / customize this project,<br>
      check out the
      <a href="https://cli.vuejs.org" target="_blank" rel="noopener">vue-cli documentation</a>.
    </p>
    <h3>Installed CLI Plugins</h3>
    <ul>
      <li><a href="https://github.com/vuejs/vue-cli/tree/dev/packages/%40vue/cli-plugin-babel" target="_blank" rel="noopener">babel</a></li>
      <li><a href="https://github.com/vuejs/vue-cli/tree/dev/packages/%40vue/cli-plugin-eslint" target="_blank" rel="noopener">eslint</a></li>
    </ul>
    <h3>Essential Links</h3>
    <ul>
      <li><a href="https://vuejs.org" target="_blank" rel="noopener">Core Docs</a></li>
      <li><a href="https://forum.vuejs.org" target="_blank" rel="noopener">Forum</a></li>
      <li><a href="https://chat.vuejs.org" target="_blank" rel="noopener">Community Chat</a></li>
      <li><a href="https://twitter.com/vuejs" target="_blank" rel="noopener">Twitter</a></li>
      <li><a href="https://news.vuejs.org" target="_blank" rel="noopener">News</a></li>
    </ul>
    <h3>Ecosystem</h3>
    <ul>
      <li><a href="https://router.vuejs.org" target="_blank" rel="noopener">vue-router</a></li>
      <li><a href="https://vuex.vuejs.org" target="_blank" rel="noopener">vuex</a></li>
      <li><a href="https://github.com/vuejs/vue-devtools#vue-devtools" target="_blank" rel="noopener">vue-devtools</a></li>
      <li><a href="https://vue-loader.vuejs.org" target="_blank" rel="noopener">vue-loader</a></li>
      <li><a href="https://github.com/vuejs/awesome-vue" target="_blank" rel="noopener">awesome-vue</a></li>
    </ul>
  </div>
</template>

<script>
export default {
  name: 'HelloWorld',
  props: {
    msg: String
  }
}
</script>

<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
h3 {
  margin: 40px 0 0;
}
ul {
  list-style-type: none;
  padding: 0;
}
li {
  display: inline-block;
  margin: 0 10px;
}
a {
  color: #42b983;
}
</style>
6  examples/vujs_web_ui/lollms_webui/src/main.js  Normal file
@@ -0,0 +1,6 @@
import { createApp } from 'vue'
import App from './App.vue'
import '@/assets/css/app.css';
import './assets/css/tailwind.css';

createApp(App).mount('#app')
14  examples/vujs_web_ui/lollms_webui/tailwind.config.js  Normal file
@@ -0,0 +1,14 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
  purge: [
    './src/**/*.vue',
    './src/**/*.html',
    // Add any other paths to your Vue components and templates here
  ],
  content: [],
  theme: {
    extend: {},
  },
  plugins: [],
}
12  examples/vujs_web_ui/lollms_webui/vue.config.js  Normal file
@@ -0,0 +1,12 @@
const { defineConfig } = require('@vue/cli-service')
module.exports = defineConfig({
  transpileDependencies: true,
  css: {
    loaderOptions: {
      css: {
        // Import the tailwind.css file
        import: 'assets/css/tailwind.css'
      }
    }
  }
})
70  lollms/__init__.py  Normal file
@@ -0,0 +1,70 @@

__author__ = "ParisNeo"
__github__ = "https://github.com/ParisNeo/lollms"
__copyright__ = "Copyright 2023, "
__license__ = "Apache 2.0"

from lollms.binding import LLMBinding, LOLLMSConfig
from lollms.personality import AIPersonality, MSG_TYPE
from lollms.paths import LollmsPaths
#from lollms.binding import LLMBinding
import importlib
from pathlib import Path


class BindingBuilder:
    def build_binding(self, bindings_path: Path, cfg: LOLLMSConfig, force_reinstall=False) -> LLMBinding:
        binding_path = Path(bindings_path) / cfg["binding_name"]
        # First run the binding's install script if one is present
        install_file_name = "install.py"
        install_script_path = binding_path / install_file_name
        if install_script_path.exists():
            module_name = install_file_name[:-3]  # Remove the ".py" extension
            module_spec = importlib.util.spec_from_file_location(module_name, str(install_script_path))
            module = importlib.util.module_from_spec(module_spec)
            module_spec.loader.exec_module(module)
            if hasattr(module, "Install"):
                module.Install(cfg, force_reinstall=force_reinstall)
        # Define the full absolute path to the module
        absolute_path = binding_path.resolve()
        # Infer the module name from the file path
        module_name = binding_path.stem
        # Use importlib to load the module from the file path
        loader = importlib.machinery.SourceFileLoader(module_name, str(absolute_path / "__init__.py"))
        binding_module = loader.load_module()
        binding_class = getattr(binding_module, binding_module.binding_name)
        return binding_class


class ModelBuilder:
    def __init__(self, binding_class: LLMBinding, config: LOLLMSConfig):
        self.binding_class = binding_class
        self.model = None
        self.build_model(config)

    def build_model(self, cfg: LOLLMSConfig):
        self.model = self.binding_class(cfg)

    def get_model(self):
        return self.model


class PersonalityBuilder:
    def __init__(self, lollms_paths: LollmsPaths, config: LOLLMSConfig, model: LLMBinding):
        self.config = config
        self.lollms_paths = lollms_paths
        self.model = model

    def build_personality(self, force_reinstall=False):
        if len(self.config["personalities"][self.config["active_personality_id"]].split("/")) == 3:
            self.personality = AIPersonality(self.lollms_paths, self.lollms_paths.personalities_zoo_path / self.config["personalities"][self.config["active_personality_id"]], self.model, force_reinstall=force_reinstall)
        else:
            self.personality = AIPersonality(self.lollms_paths, self.config["personalities"][self.config["active_personality_id"]], self.model, is_relative_path=False, force_reinstall=force_reinstall)
        return self.personality

    def get_personality(self):
        return self.personality
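The install-script hook in `BindingBuilder` boils down to loading a Python file by path with `importlib` and invoking an `Install` class if one is defined. Here is a minimal, self-contained sketch of that pattern; the stand-in `install.py` content is invented for the demo and is not a real lollms binding script:

```python
import importlib.util
import tempfile
from pathlib import Path

# Write a tiny stand-in "install.py" like the ones each binding ships with.
with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / "install.py"
    script.write_text(
        "class Install:\n"
        "    def __init__(self, cfg=None, force_reinstall=False):\n"
        "        self.done = True\n"
    )

    # Same pattern as BindingBuilder: load the module from its file path...
    module_name = script.stem  # "install"
    spec = importlib.util.spec_from_file_location(module_name, str(script))
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

    # ...then look for an Install class and run it.
    if hasattr(module, "Install"):
        installer = module.Install(None, force_reinstall=False)
        print(installer.done)  # prints True
```

The same load-by-path idiom appears again below in `LLMBinding.install_binding`.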
BIN  lollms/assets/logo.png  Normal file
Binary file not shown.
After Width: | Height: | Size: 459 KiB
293  lollms/binding.py  Normal file
@@ -0,0 +1,293 @@
######
# Project : GPT4ALL-UI
# File : binding.py
# Author : ParisNeo with the help of the community
# Supported by Nomic-AI
# license : Apache 2.0
# Description :
# This is an interface class for GPT4All-ui bindings.
######
from pathlib import Path
from typing import Callable
from lollms.helpers import BaseConfig, ASCIIColors
from lollms.paths import LollmsPaths

import inspect
import yaml
import sys
from tqdm import tqdm
import urllib.request
import importlib
import shutil


__author__ = "parisneo"
__github__ = "https://github.com/ParisNeo/lollms_bindings_zoo"
__copyright__ = "Copyright 2023, "
__license__ = "Apache 2.0"


DEFAULT_CONFIG = {
    # =================== Lord Of Large Language Models Configuration file ===========================
    "version": 5,
    "binding_name": "llama_cpp_official",
    "model_name": "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin",

    # Host information
    "host": "localhost",
    "port": 9600,

    # Generation parameters
    "seed": -1,
    "n_predict": 1024,
    "ctx_size": 2048,
    "temperature": 0.9,
    "top_k": 50,
    "top_p": 0.95,
    "repeat_last_n": 40,
    "repeat_penalty": 1.2,

    "n_threads": 8,

    # Personality parameters
    "personalities": ["english/generic/lollms"],
    "active_personality_id": 0,
    "override_personality_model_parameters": False,  # if True, the personality parameters are overridden by those of the configuration (may affect personality behaviour)

    "user_name": "user",
}


class LOLLMSConfig(BaseConfig):
    def __init__(self, file_path=None, lollms_paths: LollmsPaths = None):
        super().__init__(["file_path", "config", "lollms_paths"])

        if file_path:
            self.file_path = Path(file_path)
        else:
            self.file_path = None

        if file_path is not None:
            self.load_config(file_path)
        else:
            self.config = DEFAULT_CONFIG.copy()

        if lollms_paths is None:
            self.lollms_paths = LollmsPaths()
        else:
            self.lollms_paths = lollms_paths

    @staticmethod
    def autoload(lollms_paths, config_path: str = None):
        # Configuration loading part
        original_cfg_path = lollms_paths.default_cfg_path
        if config_path is None:
            local = lollms_paths.personal_configuration_path / "local_config.yaml"
            if not local.exists():
                shutil.copy(original_cfg_path, local)
            cfg_path = local
        else:
            cfg_path = config_path

        if cfg_path.exists():
            original_config = LOLLMSConfig(original_cfg_path, lollms_paths)
            config = LOLLMSConfig(cfg_path, lollms_paths)
            if "version" not in config or int(config["version"]) < int(original_config["version"]):
                # Upgrade old configuration files to the new format
                ASCIIColors.error("Configuration file is very old.\nReplacing with default configuration")
                _, added, removed = config.sync_cfg(original_config)
                print(f"Added entries : {added}, removed entries:{removed}")
                config.save_config(cfg_path)
        else:
            config = LOLLMSConfig()
        return config

    def sync_cfg(self, default_config):
        """Syncs a configuration with the default configuration

        Args:
            default_config (LOLLMSConfig): The reference default configuration

        Returns:
            tuple: the synced configuration, the list of added entries and the list of removed entries
        """
        added_entries = []
        removed_entries = []

        # Ensure all fields from default_config exist in config
        for key, value in default_config.config.items():
            if key not in self:
                self[key] = value
                added_entries.append(key)

        # Remove fields from config that don't exist in default_config
        for key in list(self.config.keys()):
            if key not in default_config.config:
                del self.config[key]
                removed_entries.append(key)

        self["version"] = default_config["version"]

        return self, added_entries, removed_entries

    def get_model_path_infos(self):
        return f"personal_models_path: {self.lollms_paths.personal_models_path}\nBinding name:{self.binding_name}\nModel name:{self.model_name}"

    def get_personality_path_infos(self):
        return f"personalities_zoo_path: {self.lollms_paths.personalities_zoo_path}\nPersonalities:{self.personalities}\nActive personality id:{self.active_personality_id}"

    def get_model_full_path(self):
        try:
            return self.lollms_paths.personal_models_path / self.binding_name / self.model_name
        except Exception:
            return None

    def check_model_existance(self):
        try:
            model_path = self.lollms_paths.personal_models_path / self.binding_name / self.model_name
            return model_path.exists()
        except Exception as ex:
            print(f"Exception in checking model existence: {ex}")
            return False

    def download_model(self, url, binding, callback=None):
        folder_path = self.lollms_paths.personal_models_path / self.binding_name
        model_name = url.split("/")[-1]
        model_full_path = (folder_path / model_name)
        if binding is not None and hasattr(binding, 'download_model'):
            binding.download_model(url, model_full_path, callback)
        else:
            # Check if the file already exists in the folder
            if model_full_path.exists():
                print("File already exists in folder")
            else:
                # Create the folder if it doesn't exist
                folder_path.mkdir(parents=True, exist_ok=True)
                progress_bar = tqdm(total=None, unit="B", unit_scale=True, desc=f"Downloading {url.split('/')[-1]}")
                # Define a reporthook callback for urlretrieve
                def report_progress(block_num, block_size, total_size):
                    progress_bar.total = total_size
                    progress_bar.update(block_size)
                # Download the file from the URL to the folder
                try:
                    urllib.request.urlretrieve(url, folder_path / url.split("/")[-1], reporthook=report_progress if callback is None else callback)
                    print("File downloaded successfully!")
                except Exception as e:
                    print("Error downloading file:", e)
                    sys.exit(1)

    def reference_model(self, path):
        path = str(path).replace("\\", "/")
        folder_path = self.lollms_paths.personal_models_path / self.binding_name
        model_name = path.split("/")[-1] + ".reference"
        model_full_path = (folder_path / model_name)

        # Check if the file already exists in the folder
        if model_full_path.exists():
            print("File already exists in folder")
        else:
            # Create the folder if it doesn't exist
            folder_path.mkdir(parents=True, exist_ok=True)
            with open(model_full_path, "w") as f:
                f.write(path)
            print("Reference created. Please make sure you don't delete the file, or the link will break.")


class BindingInstaller:
    def __init__(self, config: LOLLMSConfig) -> None:
        self.config = config


class LLMBinding:

    file_extension = '*.bin'
    binding_path = Path(__file__).parent

    def __init__(self, config: LOLLMSConfig, inline: bool) -> None:
        self.config = config
        self.inline = inline

    def load_config_file(self, path):
        """
        Load the content of a local_config.yaml file.

        The function reads the content of the local_config.yaml file and returns it as a Python dictionary.

        Args:
            path: The path to the configuration file.

        Returns:
            dict: A dictionary containing the loaded data from the local_config.yaml file.
        """
        with open(path, 'r') as file:
            data = yaml.safe_load(file)
        return data

    def generate(self,
                 prompt: str,
                 n_predict: int = 128,
                 callback: Callable[[str], None] = None,
                 verbose: bool = False,
                 **gpt_params):
        """Generates text out of a prompt.
        This should be implemented by the child class.

        Args:
            prompt (str): The prompt to use for generation
            n_predict (int, optional): Number of tokens to predict. Defaults to 128.
            callback (Callable[[str], None], optional): A callback function that is called every time a new text element is generated. Defaults to None.
            verbose (bool, optional): If True, the code will print detailed information about the generation process. Defaults to False.
        """
        pass

    def tokenize(self, prompt: str):
        """
        Tokenizes the given prompt using the model's tokenizer.

        Args:
            prompt (str): The input prompt to be tokenized.

        Returns:
            list: A list of tokens representing the tokenized prompt.
        """
        return prompt.split(" ")

    def detokenize(self, tokens_list: list):
        """
        Detokenizes the given list of tokens using the model's tokenizer.

        Args:
            tokens_list (list): A list of tokens to be detokenized.

        Returns:
            str: The detokenized text as a string.
        """
        return " ".join(tokens_list)

    @staticmethod
    def list_models(config: dict, root_path="."):
        """Lists the models for this binding
        """
        root_path = Path(root_path)
        models_dir = (root_path / 'models') / config["binding_name"]  # replace with the actual path to the models folder
        return [f.name for f in models_dir.glob(LLMBinding.file_extension)]

    @staticmethod
    def install_binding(binding_path, config: LOLLMSConfig):
        install_file_name = "install.py"
        install_script_path = binding_path / install_file_name
        if install_script_path.exists():
            module_name = install_file_name[:-3]  # Remove the ".py" extension
            module_spec = importlib.util.spec_from_file_location(module_name, str(install_script_path))
            module = importlib.util.module_from_spec(module_spec)
            module_spec.loader.exec_module(module)
            if hasattr(module, "Install"):
                module.Install(config)

    # To be implemented by children
    # @staticmethod
    # def get_available_models():
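A child binding mainly has to override `generate()`. The toy class below (an invented `EchoBinding`, not a real lollms binding) sketches the expected contract under that assumption: stream each chunk through the optional callback and accumulate the full text. Its `tokenize`/`detokenize` mirror the whitespace defaults of `LLMBinding` above.

```python
from typing import Callable

class EchoBinding:
    """Toy stand-in for an LLMBinding subclass: echoes the prompt back."""

    def tokenize(self, prompt: str):
        # Same naive whitespace tokenizer as the LLMBinding default
        return prompt.split(" ")

    def detokenize(self, tokens_list: list):
        return " ".join(tokens_list)

    def generate(self, prompt: str, n_predict: int = 128,
                 callback: Callable[[str], None] = None, **gpt_params) -> str:
        output = ""
        for tok in self.tokenize(prompt)[:n_predict]:
            chunk = tok + " "
            output += chunk
            if callback is not None:
                callback(chunk)  # stream each piece as it is "generated"
        return output.strip()

binding = EchoBinding()
chunks = []
result = binding.generate("hello from lollms", callback=chunks.append)
print(result)       # prints: hello from lollms
print(len(chunks))  # prints: 3 (one callback call per streamed chunk)
```

The server's `text_chunk` socket event seen in the Vue example earlier is fed by exactly this kind of per-chunk callback.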
1  lollms/bindings_zoo  Submodule
@@ -0,0 +1 @@
Subproject commit 3e3f2904d97368ca57ce7f382c629d39b439de23

1  lollms/configs/.gitignore  vendored  Normal file
@@ -0,0 +1 @@
local_config.yaml
28  lollms/configs/config.yaml  Normal file
@@ -0,0 +1,28 @@
# =================== Lord Of Large Language Models Configuration file ===========================
version: 6
binding_name: llama_cpp_official
model_name: null

# Host information
host: localhost
port: 9600

# Generation parameters
seed: -1
n_predict: 1024
ctx_size: 2048
temperature: 0.9
top_k: 50
top_p: 0.95
repeat_last_n: 40
repeat_penalty: 1.2

n_threads: 8

# Personality parameters
personalities: ["english/generic/lollms"]
active_personality_id: 0
override_personality_model_parameters: false # if true, the personality parameters are overridden by those of the configuration (may affect personality behaviour)

user_name: user
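This file bumps the config version to 6 while `DEFAULT_CONFIG` in binding.py still says 5; the gap is what triggers `LOLLMSConfig.sync_cfg` on old user configs. A stand-alone sketch of that merge logic, using plain dicts instead of the `LOLLMSConfig` wrapper:

```python
def sync_cfg(config: dict, default: dict):
    """Add missing default keys, drop stale keys, and adopt the default version."""
    added = [k for k in default if k not in config]
    removed = [k for k in list(config) if k not in default]
    for k in added:
        config[k] = default[k]
    for k in removed:
        del config[k]
    config["version"] = default["version"]
    return config, added, removed

old = {"version": 5, "seed": -1, "obsolete_key": 1}        # hypothetical user config
default = {"version": 6, "seed": -1, "n_predict": 1024}    # hypothetical defaults
cfg, added, removed = sync_cfg(old, default)
print(added, removed)  # prints: ['n_predict'] ['obsolete_key']
```

User-set values for keys that still exist (here `seed`) survive the sync; only missing and obsolete entries change.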
563  lollms/console.py  Normal file
@@ -0,0 +1,563 @@
from lollms.personality import AIPersonality, MSG_TYPE
from lollms.binding import LOLLMSConfig, LLMBinding
from lollms.helpers import ASCIIColors
from lollms.paths import LollmsPaths
import shutil
import yaml
from pathlib import Path
import sys
import pkg_resources
import argparse
from tqdm import tqdm
from lollms import BindingBuilder, ModelBuilder, PersonalityBuilder


class MainMenu:
    def __init__(self, conversation):
        self.binding_infs = []
        self.conversation = conversation

    def show_logo(self):
        print(f"{ASCIIColors.color_bright_yellow}")
        print("█ █ █ █▄ ▄█▄ ▄█ ")
        print("█ ▄▀▀▄ █ █ █ ▀ ▀ █ ▄▀▀▄ ")
        print("█ █ █ █ █ █ █ ▀▄▄ ")
        print("█▄▄▄▄ ▀▄▄▀ █▄▄▄▄▄ █▄▄▄▄ █ █ ▄▄▄▀ ")
        print(f"{ASCIIColors.color_reset}")
        print(f"{ASCIIColors.color_red}Version: {ASCIIColors.color_green}{pkg_resources.get_distribution('lollms').version}")
        print(f"{ASCIIColors.color_red}By : {ASCIIColors.color_green}ParisNeo")
        print(f"{ASCIIColors.color_reset}")

    def show_commands_list(self):
        print()
        print("Commands:")
        print(f" {ASCIIColors.color_red}├{ASCIIColors.color_reset} menu: shows main menu")
        print(f" {ASCIIColors.color_red}├{ASCIIColors.color_reset} help: shows this info")
        print(f" {ASCIIColors.color_red}├{ASCIIColors.color_reset} reset: resets the context")
        print(f" {ASCIIColors.color_red}├{ASCIIColors.color_reset} <empty prompt>: forces the model to continue generating")
        print(f" {ASCIIColors.color_red}├{ASCIIColors.color_reset} context_infos: current context size and space left before cropping")
        print(f" {ASCIIColors.color_red}├{ASCIIColors.color_reset} start_log: starts logging the discussion to a text file")
        print(f" {ASCIIColors.color_red}├{ASCIIColors.color_reset} stop_log: stops logging the discussion to a text file")
        print(f" {ASCIIColors.color_red}├{ASCIIColors.color_reset} send_file: uploads a file to the AI")
        print(f" {ASCIIColors.color_red}└{ASCIIColors.color_reset} exit: exits the console")

    def show_menu(self, options):
        print("Menu:")
        for index, option in enumerate(options):
            print(f"{ASCIIColors.color_green}{index + 1} -{ASCIIColors.color_reset} {option}")
        choice = input("Enter your choice: ")
        return int(choice) if choice.isdigit() else -1
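Every menu below funnels through `show_menu`, which returns a 1-based index or -1 on non-numeric input. A stand-alone version of that logic; the `read_input` parameter is an added hook (not in the original method) so it can be driven without a terminal:

```python
def show_menu(options, read_input=input):
    """Print numbered options; return the 1-based choice, or -1 if not a number."""
    for index, option in enumerate(options):
        print(f"{index + 1} - {option}")
    choice = read_input("Enter your choice: ")
    return int(choice) if choice.isdigit() else -1

# Simulated input instead of a real terminal:
print(show_menu(["Select Binding", "Back"], read_input=lambda _: "2"))     # prints 2
print(show_menu(["Select Binding", "Back"], read_input=lambda _: "oops"))  # prints -1
```

Note that out-of-range numbers (e.g. "9") are returned as-is; each caller is responsible for range-checking, which is why the selectors below compare the choice against `len(...)`.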
def select_binding(self):
|
||||||
|
bindings_list = []
|
||||||
|
print()
|
||||||
|
print(f"{ASCIIColors.color_green}Current binding: {ASCIIColors.color_reset}{self.conversation.config['binding_name']}")
|
||||||
|
for p in self.conversation.lollms_paths.bindings_zoo_path.iterdir():
|
||||||
|
if p.is_dir():
|
||||||
|
with open(p/"binding_card.yaml", "r") as f:
|
||||||
|
card = yaml.safe_load(f)
|
||||||
|
with open(p/"models.yaml", "r") as f:
|
||||||
|
models = yaml.safe_load(f)
|
||||||
|
entry=f"{card['name']} (by {card['author']})"
|
||||||
|
bindings_list.append(entry)
|
||||||
|
entry={
|
||||||
|
"name":p.name,
|
||||||
|
"card":card,
|
||||||
|
"models":models
|
||||||
|
}
|
||||||
|
self.binding_infs.append(entry)
|
||||||
|
bindings_list += ["Back"]
|
||||||
|
choice = self.show_menu(bindings_list)
|
||||||
|
if 1 <= choice <= len(bindings_list)-1:
|
||||||
|
print(f"You selected binding: {ASCIIColors.color_green}{self.binding_infs[choice - 1]['name']}{ASCIIColors.color_reset}")
|
||||||
|
self.conversation.config['binding_name']=self.binding_infs[choice - 1]['name']
|
||||||
|
self.conversation.load_binding()
|
||||||
|
self.conversation.config.save_config()
|
||||||
|
elif choice <= len(bindings_list):
|
||||||
|
return
|
||||||
|
else:
|
||||||
|
print("Invalid choice!")
|
||||||
|
|
||||||
|
def select_model(self):
|
||||||
|
print()
|
||||||
|
print(f"{ASCIIColors.color_green}Current model: {ASCIIColors.color_reset}{self.conversation.config['model_name']}")
|
||||||
|
models_dir:Path = (self.conversation.lollms_paths.personal_models_path/self.conversation.config['binding_name'])
|
||||||
|
models_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
models_list = [m.name for m in models_dir.iterdir() if m.name.lower() not in [".ds_dtore","thumb.db"]] + ["Install model", "Change binding", "Back"]
|
||||||
|
choice = self.show_menu(models_list)
|
||||||
|
if 1 <= choice <= len(models_list)-3:
|
||||||
|
print(f"You selected model: {ASCIIColors.color_green}{models_list[choice - 1]}{ASCIIColors.color_reset}")
|
||||||
|
self.conversation.config['model_name']=models_list[choice - 1]
|
||||||
|
self.conversation.load_model()
|
||||||
|
self.conversation.config.save_config()
|
||||||
|
elif choice <= len(models_list)-2:
|
||||||
|
self.install_model()
|
||||||
|
elif choice <= len(models_list)-1:
|
||||||
|
self.select_binding()
|
||||||
|
self.select_model()
|
||||||
|
elif choice <= len(models_list):
|
||||||
|
return
|
||||||
|
else:
|
||||||
|
print("Invalid choice!")
|
||||||
|
|
||||||
|
def install_model(self):
|
||||||
|
models_list = ["Install model from internet","Install model from local file","Back"]
|
||||||
|
choice = self.show_menu(models_list)
|
||||||
|
if 1 <= choice <= len(models_list)-2:
|
||||||
|
url = input("Give a URL to the model to be downloaded :")
|
||||||
|
def progress_callback(blocks, block_size, total_size):
|
||||||
|
tqdm_bar.total=total_size
|
||||||
|
tqdm_bar.update(block_size)
|
||||||
|
|
||||||
|
# Usage example
|
||||||
|
with tqdm(total=100, unit="%", desc="Download Progress", ncols=80) as tqdm_bar:
|
||||||
|
self.conversation.config.download_model(url,self.conversation.binding_class, progress_callback)
|
||||||
|
self.select_model()
|
||||||
|
elif choice <= len(models_list)-1:
|
||||||
|
path = Path(input("Give a path to the model to be used on your PC:"))
|
||||||
|
if path.exists():
|
||||||
|
self.conversation.config.reference_model(path)
|
||||||
|
self.select_model()
|
||||||
|
elif choice <= len(models_list):
|
||||||
|
return
|
||||||
|
else:
|
||||||
|
print("Invalid choice!")
|
||||||
|
|
||||||
|
def select_personality(self):
|
||||||
|
print()
|
||||||
|
print(f"{ASCIIColors.color_green}Current personality: {ASCIIColors.color_reset}{self.conversation.config['personalities'][self.conversation.config['active_personality_id']]}")
|
||||||
|
personality_languages = [p.stem for p in self.conversation.lollms_paths.personalities_zoo_path.iterdir() if p.is_dir()] + ["Back"]
|
||||||
|
print("Select language")
|
||||||
|
choice = self.show_menu(personality_languages)
|
||||||
|
if 1 <= choice <= len(personality_languages)-1:
|
||||||
|
language = personality_languages[choice - 1]
|
||||||
|
print(f"You selected language: {ASCIIColors.color_green}{language}{ASCIIColors.color_reset}")
|
||||||
|
personality_categories = [p.stem for p in (self.conversation.lollms_paths.personalities_zoo_path/language).iterdir() if p.is_dir()]+["Back"]
|
||||||
|
print("Select category")
|
||||||
|
choice = self.show_menu(personality_categories)
|
||||||
|
if 1 <= choice <= len(personality_categories):
|
||||||
|
category = personality_categories[choice - 1]
|
||||||
|
print(f"You selected category: {ASCIIColors.color_green}{category}{ASCIIColors.color_reset}")
|
||||||
|
|
||||||
|
personality_names = [p.stem for p in (self.conversation.lollms_paths.personalities_zoo_path/language/category).iterdir() if p.is_dir()]+["Back"]
|
||||||
|
print("Select personality")
|
||||||
|
choice = self.show_menu(personality_names)
|
||||||
|
if 1 <= choice <= len(personality_names)-1:
|
||||||
|
name = personality_names[choice - 1]
|
||||||
|
print(f"You selected personality: {ASCIIColors.color_green}{name}{ASCIIColors.color_reset}")
|
||||||
|
self.conversation.config["personalities"]=[f"{language}/{category}/{name}"]
|
||||||
|
self.conversation.load_personality()
|
||||||
|
self.conversation.config.save_config()
|
||||||
|
print("Personality saved successfully!")
|
||||||
|
elif 1 <= choice <= len(personality_names):
|
||||||
|
return
|
||||||
|
else:
|
||||||
|
print("Invalid choice!")
|
||||||
|
elif 1 <= choice <= len(personality_categories):
|
||||||
|
return
|
||||||
|
else:
|
||||||
|
print("Invalid choice!")
|
||||||
|
elif 1 <= choice <= len(personality_languages):
|
||||||
|
return
|
||||||
|
else:
|
||||||
|
print("Invalid choice!")
|
||||||
|
|
||||||
|
def reinstall_binding(self):
|
||||||
|
conversation = self.conversation
|
||||||
|
try:
|
||||||
|
conversation.binding_class = BindingBuilder().build_binding(conversation.lollms_paths.bindings_zoo_path, conversation.config, force_reinstall=True)
|
||||||
|
except Exception as ex:
|
||||||
|
print(ex)
|
||||||
|
print(f"Couldn't find binding. Please verify your configuration file at {conversation.config.file_path} or use the next menu to select a valid binding")
|
||||||
|
self.select_binding()
|
||||||
|
|
||||||
|
    def reinstall_personality(self):
        conversation = self.conversation
        try:
            conversation.personality = PersonalityBuilder(conversation.lollms_paths, conversation.config, conversation.model).build_personality(force_reinstall=True)
        except Exception as ex:
            ASCIIColors.error(f"Couldn't load personality. Please verify your configuration file at {conversation.configuration_path} or use the next menu to select a valid personality")
            ASCIIColors.error(f"Personality returned this exception: {ex}")
            ASCIIColors.error(f"{conversation.config.get_personality_path_infos()}")
            print("Please select a valid personality or install a new one from a url")
            self.select_personality()

    def main_menu(self):
        while True:
            print("\nMain Menu:")
            print(f"{ASCIIColors.color_green}1 -{ASCIIColors.color_reset} Select Binding")
            print(f"{ASCIIColors.color_green}2 -{ASCIIColors.color_reset} Select Model")
            print(f"{ASCIIColors.color_green}3 -{ASCIIColors.color_reset} Select Personality")
            print(f"{ASCIIColors.color_green}4 -{ASCIIColors.color_reset} Reinstall Binding")
            print(f"{ASCIIColors.color_green}5 -{ASCIIColors.color_reset} Reinstall Personality")
            print(f"{ASCIIColors.color_green}0 -{ASCIIColors.color_reset} Exit")
            choice = input("Enter your choice: ").strip()
            if choice == "1":
                self.select_binding()
            elif choice == "2":
                self.select_model()
            elif choice == "3":
                self.select_personality()
            elif choice == "4":
                self.reinstall_binding()
            elif choice == "5":
                self.reinstall_personality()
            elif choice == "0":
                print("Back to main app...")
                break
            else:
                print("Invalid choice! Try again.")

class Conversation:
    def __init__(
            self,
            configuration_path:str|Path=None,
            show_logo:bool=True,
            show_commands_list:bool=False,
            show_personality_infos:bool=True,
            show_model_infos:bool=True,
            show_welcome_message:bool=True
        ):
        # Force it to be a path
        self.configuration_path = configuration_path
        self.is_logging = False
        self.log_file_path = ""

        self.bot_says = ""
        # Get paths
        self.lollms_paths = LollmsPaths.find_paths(force_local=False)

        # Build menu
        self.menu = MainMenu(self)

        # Load the configuration
        self.config = LOLLMSConfig.autoload(self.lollms_paths, configuration_path)

        # Load binding
        self.load_binding()

        # Load model
        self.load_model()

        # Load personality
        try:
            self.load_personality()
        except Exception as ex:
            print(f"No personality selected. Please select one from the zoo. {ex}")
            self.menu.select_personality()

        if show_logo:
            self.menu.show_logo()
        if show_commands_list:
            self.menu.show_commands_list()

        if show_personality_infos:
            print()
            print(f"{ASCIIColors.color_green}Current personality : {ASCIIColors.color_reset}{self.personality}")
            print(f"{ASCIIColors.color_green}Version : {ASCIIColors.color_reset}{self.personality.version}")
            print(f"{ASCIIColors.color_green}Author : {ASCIIColors.color_reset}{self.personality.author}")
            print(f"{ASCIIColors.color_green}Description : {ASCIIColors.color_reset}{self.personality.personality_description}")
            print()

        if show_model_infos:
            print()
            print(f"{ASCIIColors.color_green}Current binding : {ASCIIColors.color_reset}{self.config['binding_name']}")
            print(f"{ASCIIColors.color_green}Current model : {ASCIIColors.color_reset}{self.config['model_name']}")
            print(f"{ASCIIColors.color_green}Personal data path : {ASCIIColors.color_reset}{self.lollms_paths.personal_path}")
            print()

        # If there is a disclaimer, show it
        if self.personality.disclaimer != "":
            print(f"\n{ASCIIColors.color_red}Disclaimer")
            print(self.personality.disclaimer)
            print(f"{ASCIIColors.color_reset}")

        if show_welcome_message and self.personality.welcome_message:
            print(self.personality.name+": ", end="")
            print(self.personality.welcome_message)

    def ask_override_file(self):
        user_input = input("Would you like to override the existing file? (Y/N): ")
        user_input = user_input.lower()
        if user_input == "y" or user_input == "yes":
            print("File will be overridden.")
            return True
        elif user_input == "n" or user_input == "no":
            print("File will not be overridden.")
            return False
        else:
            print("Invalid input. Please enter 'Y' or 'N'.")
            # Prompt the user again until the input is valid
            return self.ask_override_file()

    def start_log(self, file_name):
        if Path(file_name).is_absolute():
            self.log_file_path = Path(file_name)
        else:
            home_dir = Path.home()/"Documents/lollms/logs"
            home_dir.mkdir(parents=True, exist_ok=True)
            self.log_file_path = home_dir/file_name
        if self.log_file_path.exists():
            if not self.ask_override_file():
                print("Canceled")
                return
        try:
            with open(self.log_file_path, "w") as f:
                self.header = f"""------------------------
Log file for lollms discussion
Participating personalities:
{self.config['personalities']}
------------------------
"""
                f.write(self.header)
            self.is_logging = True
            return True
        except Exception:
            return False

    def log(self, text, append=False):
        try:
            with open(self.log_file_path, "a" if append else "w") as f:
                if append:
                    f.write(text)
                else:
                    f.write(self.header + self.personality.personality_conditioning + text)
            return True
        except Exception:
            return False

    def stop_log(self):
        self.is_logging = False


    def load_binding(self):
        if self.config.binding_name is None:
            print("No binding selected")
            print("Please select a valid binding or install a new one from a url")
            self.menu.select_binding()
        else:
            try:
                self.binding_class = BindingBuilder().build_binding(self.lollms_paths.bindings_zoo_path, self.config)
            except Exception as ex:
                print(ex)
                print(f"Couldn't find binding. Please verify your configuration file at {self.configuration_path} or use the next menu to select a valid binding")
                self.menu.select_binding()

    def load_model(self):
        try:
            self.model = ModelBuilder(self.binding_class, self.config).get_model()
        except Exception as ex:
            ASCIIColors.error(f"Couldn't load model. Please verify your configuration file at {self.configuration_path} or use the next menu to select a valid model")
            ASCIIColors.error(f"Binding returned this exception: {ex}")
            ASCIIColors.error(f"{self.config.get_model_path_infos()}")
            print("Please select a valid model or install a new one from a url")
            self.menu.select_model()

    def load_personality(self):
        try:
            self.personality = PersonalityBuilder(self.lollms_paths, self.config, self.model).build_personality()
        except Exception as ex:
            ASCIIColors.error(f"Couldn't load personality. Please verify your configuration file at {self.configuration_path} or use the next menu to select a valid personality")
            ASCIIColors.error(f"Personality returned this exception: {ex}")
            ASCIIColors.error(f"{self.config.get_personality_path_infos()}")
            print("Please select a valid personality or install a new one from a url")
            self.menu.select_personality()
        self.cond_tk = self.personality.model.tokenize(self.personality.personality_conditioning)
        self.n_cond_tk = len(self.cond_tk)

    def reset_context(self):
        if self.personality.include_welcome_message_in_disucssion:
            full_discussion = (
                self.personality.ai_message_prefix +
                self.personality.welcome_message +
                self.personality.link_text
            )
        else:
            full_discussion = ""
        return full_discussion

    def safe_generate(self, full_discussion:str, n_predict=None, callback=None):
        """Generate text without overflowing the model's context window.

        Args:
            full_discussion (str): A prompt or a long discussion to use for generation.
            n_predict (int, optional): Maximum number of tokens to generate. Defaults to the personality's setting.
            callback (callable, optional): A callback to call for each received token. Defaults to None.

        Returns:
            str: Model output.
        """
        if n_predict is None:
            n_predict = self.personality.model_n_predicts
        tk = self.personality.model.tokenize(full_discussion)
        n_tokens = len(tk)
        # Keep only the most recent tokens that fit in the context window
        # after reserving room for the conditioning text
        fd = self.personality.model.detokenize(tk[-min(self.config.ctx_size-self.n_cond_tk, n_tokens):])
        self.bot_says = ""
        output = self.personality.model.generate(self.personality.personality_conditioning+fd, n_predict=n_predict, callback=callback)
        return output
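The context-window clamp in `safe_generate` can be exercised in isolation. The sketch below uses a toy whitespace tokenizer in place of the binding's real tokenize/detokenize pair; the function name and sizes are illustrative, not part of the lollms API:

```python
def truncate_to_context(text, ctx_size, n_cond_tk):
    """Keep only the most recent tokens that fit in ctx_size - n_cond_tk."""
    tokens = text.split()  # toy tokenizer: one token per word
    budget = ctx_size - n_cond_tk
    # Same slice as in safe_generate: the last min(budget, n_tokens) tokens survive
    kept = tokens[-min(budget, len(tokens)):]
    return " ".join(kept)

# With a 6-token window and 2 conditioning tokens, only the last 4 words remain
print(truncate_to_context("a b c d e f g h", ctx_size=6, n_cond_tk=2))
```

This mirrors why long discussions are silently trimmed from the front: the conditioning text is always re-prepended, so it must be budgeted for up front.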

    def remove_text_from_string(self, string, text_to_find):
        """
        Removes everything from the first occurrence of the specified text in the string (case-insensitive).

        Parameters:
            string (str): The original string.
            text_to_find (str): The text to find in the string.

        Returns:
            str: The updated string.
        """
        index = string.lower().find(text_to_find.lower())
        if index != -1:
            string = string[:index]
        return string
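The streaming callback relies on this helper to cut hallucinated turns at the first detected antiprompt. A standalone copy of the same logic, with an illustrative antiprompt string:

```python
def remove_text_from_string(string, text_to_find):
    """Cut the string at the first case-insensitive occurrence of text_to_find."""
    index = string.lower().find(text_to_find.lower())
    if index != -1:
        string = string[:index]
    return string

# The model started speaking as the user; everything from the antiprompt on is dropped
print(remove_text_from_string("Sure, here you go!### Human: thanks", "### human"))
```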

    def start_conversation(self):
        full_discussion = self.reset_context()
        while True:
            try:
                prompt = input(f"{ASCIIColors.color_green}You: {ASCIIColors.color_reset}")
                if prompt == "exit":
                    return
                if prompt == "menu":
                    self.menu.main_menu()
                    continue
                if prompt == "reset":
                    full_discussion = self.reset_context()
                    print(f"{ASCIIColors.color_red}Context reset issued{ASCIIColors.color_reset}")
                    continue
                if prompt == "start_log":
                    fp = input("Please enter a log file path (ex: log.txt): ")
                    self.start_log(fp)
                    print(f"{ASCIIColors.color_red}Started logging to : {self.log_file_path}{ASCIIColors.color_reset}")
                    continue
                if prompt == "stop_log":
                    self.stop_log()
                    print(f"{ASCIIColors.color_red}Log stopped{ASCIIColors.color_reset}")
                    continue
                if prompt == "send_file":
                    if self.personality.processor is None:
                        print(f"{ASCIIColors.color_red}This personality doesn't support file reception{ASCIIColors.color_reset}")
                        continue
                    fp = input("Please enter a file path: ")
                    # Remove surrounding double quotes if present
                    if fp.startswith('"') and fp.endswith('"'):
                        fp = fp[1:-1]
                    if self.personality.processor.add_file(fp):
                        print(f"{ASCIIColors.color_green}File imported{ASCIIColors.color_reset}")
                    else:
                        print(f"{ASCIIColors.color_red}Couldn't load file{ASCIIColors.color_reset}")
                    continue

                if prompt == "context_infos":
                    tokens = self.personality.model.tokenize(full_discussion)
                    print(f"{ASCIIColors.color_green}Current context has {len(tokens)} tokens / {self.config.ctx_size}{ASCIIColors.color_reset}")
                    continue

                if prompt != '':
                    if self.personality.processor is not None and self.personality.processor_cfg["process_model_input"]:
                        preprocessed_prompt = self.personality.processor.process_model_input(prompt)
                    else:
                        preprocessed_prompt = prompt

                    if self.personality.processor is not None and self.personality.processor_cfg["custom_workflow"]:
                        full_discussion += (
                            self.personality.user_message_prefix +
                            preprocessed_prompt
                        )
                    else:
                        full_discussion += (
                            self.personality.user_message_prefix +
                            preprocessed_prompt +
                            self.personality.link_text +
                            self.personality.ai_message_prefix
                        )

                    def callback(text, type:MSG_TYPE=None):
                        if type == MSG_TYPE.MSG_TYPE_CHUNK:
                            # Restore the default stdout before printing
                            sys.stdout = sys.__stdout__
                            print(text, end="", flush=True)
                            bot_says = self.bot_says + text
                            antiprompt = self.personality.detect_antiprompt(bot_says)
                            if antiprompt:
                                self.bot_says = self.remove_text_from_string(bot_says, antiprompt)
                                print("Detected hallucination")
                                return False
                            else:
                                self.bot_says = bot_says
                        return True

                    tk = self.personality.model.tokenize(full_discussion)
                    n_tokens = len(tk)
                    fd = self.personality.model.detokenize(tk[-min(self.config.ctx_size-self.n_cond_tk, n_tokens):])

                    print(f"{ASCIIColors.color_red}{self.personality.name}:{ASCIIColors.color_reset}", end='', flush=True)
                    if self.personality.processor is not None and self.personality.processor_cfg["custom_workflow"]:
                        output = self.personality.processor.run_workflow(prompt, previous_discussion_text=self.personality.personality_conditioning+fd, callback=callback)
                        print(output)
                    else:
                        output = self.personality.model.generate(self.personality.personality_conditioning+fd, n_predict=self.personality.model_n_predicts, callback=callback)
                    full_discussion += output.strip()
                    print()

                    if self.personality.processor is not None and self.personality.processor_cfg["process_model_output"]:
                        output = self.personality.processor.process_model_output(output)

                    self.log(full_discussion)

            except KeyboardInterrupt:
                print("Keyboard interrupt detected.\nBye")
                break

        print("Done")
        print(f"{self.personality}")

def main():
    # Create the argument parser
    parser = argparse.ArgumentParser(description='App Description')

    # Add the configuration path argument
    parser.add_argument('--configuration_path', default=None,
                        help='Path to the configuration file')

    parser.add_argument('--reset_personal_path', action='store_true', help='Reset the personal path')
    parser.add_argument('--reset_config', action='store_true', help='Reset the configurations')

    # Parse the command-line arguments
    args = parser.parse_args()

    if args.reset_personal_path:
        LollmsPaths.reset_configs()

    if args.reset_config:
        cfg_path = LollmsPaths.find_paths().personal_configuration_path / "local_config.yaml"
        try:
            cfg_path.unlink()
            ASCIIColors.success("LOLLMS configuration reset successfully")
        except Exception:
            ASCIIColors.error("Couldn't reset LOLLMS configuration")

    configuration_path = args.configuration_path

    conversation = Conversation(configuration_path=configuration_path, show_commands_list=True)
    conversation.start_conversation()


if __name__ == "__main__":
    main()
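The entry point wires two store-true reset flags and one optional path before building the conversation. A minimal self-contained sketch of the same argparse pattern (handlers omitted; the sample command line is illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description='App Description')
parser.add_argument('--configuration_path', default=None,
                    help='Path to the configuration file')
parser.add_argument('--reset_personal_path', action='store_true', help='Reset the personal path')
parser.add_argument('--reset_config', action='store_true', help='Reset the configurations')

# Parse a sample command line instead of sys.argv
args = parser.parse_args(['--reset_config'])
print(args.reset_config, args.reset_personal_path, args.configuration_path)
```

`store_true` flags default to `False` and need no value, which is why `main()` can test `args.reset_config` directly.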

# lollms/helpers.py

from pathlib import Path
import yaml


class ASCIIColors:
    # Reset
    color_reset = '\u001b[0m'

    # Regular colors
    color_black = '\u001b[30m'
    color_red = '\u001b[31m'
    color_green = '\u001b[32m'
    color_yellow = '\u001b[33m'
    color_blue = '\u001b[34m'
    color_magenta = '\u001b[35m'
    color_cyan = '\u001b[36m'
    color_white = '\u001b[37m'
    color_orange = '\u001b[38;5;202m'

    # Bright colors
    color_bright_black = '\u001b[30;1m'
    color_bright_red = '\u001b[31;1m'
    color_bright_green = '\u001b[32;1m'
    color_bright_yellow = '\u001b[33;1m'
    color_bright_blue = '\u001b[34;1m'
    color_bright_magenta = '\u001b[35;1m'
    color_bright_cyan = '\u001b[36;1m'
    color_bright_white = '\u001b[37;1m'
    color_bright_orange = '\u001b[38;5;208m'

    @staticmethod
    def print(text, color=color_bright_red):
        print(f"{color}{text}{ASCIIColors.color_reset}")

    @staticmethod
    def warning(text):
        print(f"{ASCIIColors.color_bright_orange}{text}{ASCIIColors.color_reset}")

    @staticmethod
    def error(text):
        print(f"{ASCIIColors.color_bright_red}{text}{ASCIIColors.color_reset}")

    @staticmethod
    def success(text):
        print(f"{ASCIIColors.color_green}{text}{ASCIIColors.color_reset}")

    @staticmethod
    def info(text):
        print(f"{ASCIIColors.color_blue}{text}{ASCIIColors.color_reset}")

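Each `ASCIIColors` helper simply wraps the text in an ANSI escape sequence and appends the reset code so later output keeps the terminal's default color. A self-contained sketch of the pattern (standalone names; returns the string instead of printing so it can be inspected):

```python
RESET = '\u001b[0m'
GREEN = '\u001b[32m'

def success(text):
    # Wrap the text in the green escape code and reset the color afterwards
    return f"{GREEN}{text}{RESET}"

print(success("model loaded"))
```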
class BaseConfig():
    def __init__(self, exceptional_keys=None, config=None):
        # Avoid a mutable default argument for the key list
        self.exceptional_keys = exceptional_keys if exceptional_keys is not None else []
        self.config = config

    def to_dict(self):
        return self.config

    def __getitem__(self, key):
        if self.config is None:
            raise ValueError("No configuration loaded.")
        return self.config[key]

    def __getattr__(self, key):
        if key == "exceptional_keys":
            return super().__getattribute__(key)
        if key in self.exceptional_keys + ["config"] or key.startswith("__"):
            return super().__getattribute__(key)
        else:
            if self.config is None:
                raise ValueError("No configuration loaded.")
            return self.config[key]

    def __setattr__(self, key, value):
        if key == "exceptional_keys":
            return super().__setattr__(key, value)
        if key in self.exceptional_keys + ["config"] or key.startswith("__"):
            super().__setattr__(key, value)
        else:
            if self.config is None:
                raise ValueError("No configuration loaded.")
            self.config[key] = value

    def __setitem__(self, key, value):
        if self.config is None:
            raise ValueError("No configuration loaded.")
        self.config[key] = value

    def __contains__(self, item):
        if self.config is None:
            raise ValueError("No configuration loaded.")
        return item in self.config

    def load_config(self, file_path:Path=None):
        if file_path is None:
            file_path = self.file_path
        with open(file_path, 'r', encoding='utf-8') as stream:
            self.config = yaml.safe_load(stream)

    def save_config(self, file_path:Path=None):
        if file_path is None:
            file_path = self.file_path
        if self.config is None:
            raise ValueError("No configuration loaded.")
        with open(file_path, "w") as f:
            yaml.dump(self.config, f)

# lollms/langchain_integration.py

"""Wrapper around llama.cpp."""
|
||||||
|
import logging
|
||||||
|
from typing import Any, Dict, Generator, List, Optional
|
||||||
|
|
||||||
|
from pydantic import Field, root_validator
|
||||||
|
|
||||||
|
from langchain.callbacks.manager import CallbackManagerForLLMRun
|
||||||
|
from langchain.llms.base import LLM
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
class LLMModel(LLM):
|
||||||
|
"""Wrapper around the llama.cpp model.
|
||||||
|
|
||||||
|
To use, you should have the llama-cpp-python library installed, and provide the
|
||||||
|
path to the Llama model as a named parameter to the constructor.
|
||||||
|
Check out: https://github.com/abetlen/llama-cpp-python
|
||||||
|
|
||||||
|
Example:
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
|
from langchain.llms import LlamaCppEmbeddings
|
||||||
|
llm = LlamaCppEmbeddings(model_path="/path/to/llama/model")
|
||||||
|
"""
|
||||||
|
|
||||||
|
client: Any #: :meta private:
|
||||||
|
model_path: str
|
||||||
|
"""The path to the Llama model file."""
|
||||||
|
|
||||||
|
lora_base: Optional[str] = None
|
||||||
|
"""The path to the Llama LoRA base model."""
|
||||||
|
|
||||||
|
lora_path: Optional[str] = None
|
||||||
|
"""The path to the Llama LoRA. If None, no LoRa is loaded."""
|
||||||
|
|
||||||
|
n_ctx: int = Field(512, alias="n_ctx")
|
||||||
|
"""Token context window."""
|
||||||
|
|
||||||
|
n_parts: int = Field(-1, alias="n_parts")
|
||||||
|
"""Number of parts to split the model into.
|
||||||
|
If -1, the number of parts is automatically determined."""
|
||||||
|
|
||||||
|
seed: int = Field(-1, alias="seed")
|
||||||
|
"""Seed. If -1, a random seed is used."""
|
||||||
|
|
||||||
|
f16_kv: bool = Field(True, alias="f16_kv")
|
||||||
|
"""Use half-precision for key/value cache."""
|
||||||
|
|
||||||
|
logits_all: bool = Field(False, alias="logits_all")
|
||||||
|
"""Return logits for all tokens, not just the last token."""
|
||||||
|
|
||||||
|
vocab_only: bool = Field(False, alias="vocab_only")
|
||||||
|
"""Only load the vocabulary, no weights."""
|
||||||
|
|
||||||
|
use_mlock: bool = Field(False, alias="use_mlock")
|
||||||
|
"""Force system to keep model in RAM."""
|
||||||
|
|
||||||
|
n_threads: Optional[int] = Field(None, alias="n_threads")
|
||||||
|
"""Number of threads to use.
|
||||||
|
If None, the number of threads is automatically determined."""
|
||||||
|
|
||||||
|
n_batch: Optional[int] = Field(8, alias="n_batch")
|
||||||
|
"""Number of tokens to process in parallel.
|
||||||
|
Should be a number between 1 and n_ctx."""
|
||||||
|
|
||||||
|
n_gpu_layers: Optional[int] = Field(None, alias="n_gpu_layers")
|
||||||
|
"""Number of layers to be loaded into gpu memory. Default None."""
|
||||||
|
|
||||||
|
suffix: Optional[str] = Field(None)
|
||||||
|
"""A suffix to append to the generated text. If None, no suffix is appended."""
|
||||||
|
|
||||||
|
max_tokens: Optional[int] = 256
|
||||||
|
"""The maximum number of tokens to generate."""
|
||||||
|
|
||||||
|
temperature: Optional[float] = 0.8
|
||||||
|
"""The temperature to use for sampling."""
|
||||||
|
|
||||||
|
top_p: Optional[float] = 0.95
|
||||||
|
"""The top-p value to use for sampling."""
|
||||||
|
|
||||||
|
logprobs: Optional[int] = Field(None)
|
||||||
|
"""The number of logprobs to return. If None, no logprobs are returned."""
|
||||||
|
|
||||||
|
echo: Optional[bool] = False
|
||||||
|
"""Whether to echo the prompt."""
|
||||||
|
|
||||||
|
stop: Optional[List[str]] = []
|
||||||
|
"""A list of strings to stop generation when encountered."""
|
||||||
|
|
||||||
|
repeat_penalty: Optional[float] = 1.1
|
||||||
|
"""The penalty to apply to repeated tokens."""
|
||||||
|
|
||||||
|
top_k: Optional[int] = 40
|
||||||
|
"""The top-k value to use for sampling."""
|
||||||
|
|
||||||
|
last_n_tokens_size: Optional[int] = 64
|
||||||
|
"""The number of tokens to look back when applying the repeat_penalty."""
|
||||||
|
|
||||||
|
use_mmap: Optional[bool] = True
|
||||||
|
"""Whether to keep the model loaded in RAM"""
|
||||||
|
|
||||||
|
streaming: bool = True
|
||||||
|
"""Whether to stream the results, token by token."""
|
||||||
|
|
||||||
|
    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate the model and collect its loading parameters."""
        model = values["model"]
        model_param_names = [
            "lora_path",
            "lora_base",
            "n_ctx",
            "n_parts",
            "seed",
            "f16_kv",
            "logits_all",
            "vocab_only",
            "use_mlock",
            "n_threads",
            "n_batch",
            "use_mmap",
            "last_n_tokens_size",
        ]
        model_params = {k: values[k] for k in model_param_names}
        # For backwards compatibility, only include if non-null.
        if values["n_gpu_layers"] is not None:
            model_params["n_gpu_layers"] = values["n_gpu_layers"]

        values["client"] = model

        return values

    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling llama_cpp."""
        return {
            "suffix": self.suffix,
            "max_tokens": self.max_tokens,
            "temperature": self.temperature,
            "top_p": self.top_p,
            "logprobs": self.logprobs,
            "echo": self.echo,
            "stop_sequences": self.stop,  # key here is convention among LLM classes
            "repeat_penalty": self.repeat_penalty,
            "top_k": self.top_k,
        }

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        """Get the identifying parameters."""
        return {**{"model_path": self.model_path}, **self._default_params}

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "lollms_generic_llm"

    def _get_parameters(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Performs a sanity check, preparing parameters in the format needed by llama_cpp.

        Args:
            stop (Optional[List[str]]): List of stop sequences for llama_cpp.

        Returns:
            Dictionary containing the combined parameters.
        """
        # Raise error if stop sequences are in both input and default params
        if self.stop and stop is not None:
            raise ValueError("`stop` found in both the input and default params.")

        params = self._default_params

        # llama_cpp expects the "stop" key, not "stop_sequences", so we remove it:
        params.pop("stop_sequences")

        # then set it as configured, or default to an empty list:
        params["stop"] = self.stop or stop or []

        return params

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        """Call the model and return the output.

        Args:
            prompt: The prompt to use for generation.
            stop: A list of strings to stop generation when encountered.

        Returns:
            The generated text.

        Example:
            .. code-block:: python

                from langchain.llms import LlamaCpp
                llm = LlamaCpp(model_path="/path/to/local/llama/model.bin")
                llm("This is a prompt.")
        """
        if self.streaming:
            # If streaming is enabled, we use the stream
            # method that yields results as they are generated
            # and return the combined strings from the first choice's text:
            combined_text_output = ""
            for token in self.stream(prompt=prompt, stop=stop, run_manager=run_manager):
                combined_text_output += token["choices"][0]["text"]
            return combined_text_output
        else:
            params = self._get_parameters(stop)
            result = self.client(prompt=prompt, **params)
            return result["choices"][0]["text"]

    def stream(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> Generator[Dict, None, None]:
        """Yields result objects as they are generated in real time.

        BETA: this is a beta feature while we figure out the right abstraction.
        Once that happens, this interface could change.

        It also calls the callback manager's on_llm_new_token event with
        similar parameters to the OpenAI LLM class method of the same name.

        Args:
            prompt: The prompt to pass into the model.
            stop: Optional list of stop words to use when generating.

        Returns:
            A generator representing the stream of tokens being generated.

        Yields:
            Dictionary-like objects containing a string token and metadata.
            See llama-cpp-python docs and below for more.

        Example:
            .. code-block:: python

                from langchain.llms import LlamaCpp
                llm = LlamaCpp(
                    model_path="/path/to/local/model.bin",
                    temperature = 0.5
                )
                for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'",
                        stop=["'","\n"]):
                    result = chunk["choices"][0]
                    print(result["text"], end='', flush=True)
        """
        params = self._get_parameters(stop)
        result = self.client(prompt=prompt, stream=True, **params)
        for chunk in result:
            token = chunk["choices"][0]["text"]
            log_probs = chunk["choices"][0].get("logprobs", None)
            if run_manager:
                run_manager.on_llm_new_token(
                    token=token, verbose=self.verbose, log_probs=log_probs
                )
            yield chunk
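When `streaming` is enabled, `_call` concatenates the `text` field of each streamed chunk. The sketch below stands a fake generator in for the llama-cpp client; only the chunk shape `{"choices": [{"text": ...}]}` is taken from the code above, the pieces themselves are illustrative:

```python
def fake_client(prompt, stream=True):
    # Yield llama-cpp style chunks: {"choices": [{"text": ...}]}
    for piece in ["Hello", ", ", "world"]:
        yield {"choices": [{"text": piece}]}

# Same accumulation loop as the streaming branch of _call
combined = ""
for chunk in fake_client("hi"):
    combined += chunk["choices"][0]["text"]
print(combined)
```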

# lollms/paths.py

from pathlib import Path
import shutil
from lollms.helpers import ASCIIColors
from lollms.helpers import BaseConfig

lollms_path = Path(__file__).parent
lollms_default_cfg_path = lollms_path / "configs/config.yaml"
lollms_bindings_zoo_path = lollms_path / "bindings_zoo"
lollms_personalities_zoo_path = lollms_path / "personalities_zoo"


# Now we specify the personal folders
class LollmsPaths:
    def __init__(self, lollms_path=None, personal_path=None, custom_default_cfg_path=None):
        if lollms_path is None:
            lollms_path = Path(__file__).parent
        else:
            lollms_path = Path(lollms_path)
        if personal_path is None:
            personal_path = Path.home() / "Documents/lollms"
        else:
            personal_path = Path(personal_path)

        if custom_default_cfg_path is not None:
            self.default_cfg_path = Path(custom_default_cfg_path)
        else:
            self.default_cfg_path = lollms_path / "configs/config.yaml"
        self.bindings_zoo_path = lollms_path / "bindings_zoo"
        self.personalities_zoo_path = lollms_path / "personalities_zoo"

        self.personal_path = personal_path
        self.personal_configuration_path = personal_path / "configs"
        self.personal_models_path = personal_path / "models"

        self.create_directories()
        self.copy_default_config()

    def change_personal_path(self, path):
        self.personal_path = path

    def create_directories(self):
        self.personal_path.mkdir(parents=True, exist_ok=True)
        self.personal_configuration_path.mkdir(parents=True, exist_ok=True)
        self.personal_models_path.mkdir(parents=True, exist_ok=True)

    def copy_default_config(self):
        local_config_path = self.personal_configuration_path / "local_config.yaml"
        if not local_config_path.exists():
            shutil.copy(self.default_cfg_path, str(local_config_path))

    @staticmethod
    def find_paths(force_local=False, custom_default_cfg_path=None):
        lollms_path = Path(__file__).parent
        global_paths_cfg = Path("./global_paths_cfg.yaml")
        if global_paths_cfg.exists():
            try:
                cfg = BaseConfig()
                cfg.load_config(global_paths_cfg)
                lollms_path = cfg.lollms_path
                lollms_personal_path = cfg.lollms_personal_path
                return LollmsPaths(lollms_path, lollms_personal_path, custom_default_cfg_path=custom_default_cfg_path)
|
||||||
|
except Exception as ex:
|
||||||
|
print(f"{ASCIIColors.color_red}Global paths configuration file found but seems to be corrupted{ASCIIColors.color_reset}")
|
||||||
|
print("Couldn't find your personal data path!")
|
||||||
|
cfg.lollms_path = Path(__file__).parent
|
||||||
|
cfg.lollms_personal_path = input("Please specify the folder where your configuration files, your models and your custom personalities need to be stored:")
|
||||||
|
cfg.save_config(global_paths_cfg)
|
||||||
|
lollms_path = cfg.lollms_path
|
||||||
|
lollms_personal_path = cfg.lollms_personal_path
|
||||||
|
return LollmsPaths(lollms_path, lollms_personal_path, custom_default_cfg_path=custom_default_cfg_path)
|
||||||
|
else:
|
||||||
|
# if the app is not forcing a specific path, then try to find out if the default installed library has specified a default path
|
||||||
|
global_paths_cfg = lollms_path/"global_paths_cfg.yaml"
|
||||||
|
if global_paths_cfg.exists():
|
||||||
|
try:
|
||||||
|
cfg = BaseConfig()
|
||||||
|
cfg.load_config(global_paths_cfg)
|
||||||
|
lollms_path = cfg.lollms_path
|
||||||
|
lollms_personal_path = cfg.lollms_personal_path
|
||||||
|
return LollmsPaths(lollms_path, lollms_personal_path, custom_default_cfg_path=custom_default_cfg_path)
|
||||||
|
except Exception as ex:
|
||||||
|
print(f"{ASCIIColors.color_red}Global paths configuration file found but seems to be corrupted{ASCIIColors.color_reset}")
|
||||||
|
print("Couldn't find your personal data path!")
|
||||||
|
cfg.lollms_path = Path(__file__).parent
|
||||||
|
cfg.lollms_personal_path = input("Please specify the folder where your configuration files, your models and your custom personalities need to be stored:")
|
||||||
|
cfg.save_config(global_paths_cfg)
|
||||||
|
lollms_path = cfg.lollms_path
|
||||||
|
lollms_personal_path = cfg.lollms_personal_path
|
||||||
|
return LollmsPaths(lollms_path, lollms_personal_path, custom_default_cfg_path=custom_default_cfg_path)
|
||||||
|
else: # First time
|
||||||
|
print(f"{ASCIIColors.color_green}Welcome! It seems this is your first use of the new lollms app.{ASCIIColors.color_reset}")
|
||||||
|
print(f"To make it clear where your data are stored, we now give the user the choice where to put its data.")
|
||||||
|
print(f"This allows you to mutualize models which are heavy, between multiple lollms compatible apps.")
|
||||||
|
print(f"You can change this at any tome using the lollms-update_path script or by simply change the content of the global_paths_cfg.yaml file.")
|
||||||
|
print(f"Please provide a folder to store your configurations files, your models and your personal data (database, custom personalities etc).")
|
||||||
|
cfg = BaseConfig(config={
|
||||||
|
"lollms_path":str(Path(__file__).parent),
|
||||||
|
"lollms_personal_path":str(Path.home()/"Documents/lollms")
|
||||||
|
})
|
||||||
|
|
||||||
|
cfg.lollms_personal_path = input(f"Folder path: ({cfg.lollms_personal_path}):")
|
||||||
|
if cfg.lollms_personal_path=="":
|
||||||
|
cfg.lollms_personal_path = str(Path.home()/"Documents/lollms")
|
||||||
|
|
||||||
|
print(f"Selected: {cfg.lollms_personal_path}")
|
||||||
|
pp= Path(cfg.lollms_personal_path)
|
||||||
|
if not pp.exists():
|
||||||
|
try:
|
||||||
|
pp.mkdir(parents=True)
|
||||||
|
except:
|
||||||
|
print(f"{ASCIIColors.color_red}It seams there is an error in the path you rovided{ASCIIColors.color_reset}")
|
||||||
|
return None
|
||||||
|
if force_local:
|
||||||
|
global_paths_cfg = Path("./global_paths_cfg.yaml")
|
||||||
|
else:
|
||||||
|
global_paths_cfg = lollms_path/"global_paths_cfg.yaml"
|
||||||
|
cfg.save_config(global_paths_cfg)
|
||||||
|
|
||||||
|
return LollmsPaths(cfg.lollms_path, cfg.lollms_personal_path, custom_default_cfg_path=custom_default_cfg_path)
|
||||||
|
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def reset_configs():
|
||||||
|
lollms_path = Path(__file__).parent
|
||||||
|
global_paths_cfg = Path("./global_paths_cfg.yaml")
|
||||||
|
if global_paths_cfg.exists():
|
||||||
|
ASCIIColors.error("Resetting local settings")
|
||||||
|
global_paths_cfg.unlink()
|
||||||
|
return
|
||||||
|
global_paths_cfg = lollms_path/"global_paths_cfg.yaml"
|
||||||
|
if global_paths_cfg.exists():
|
||||||
|
ASCIIColors.error("Resetting global settings")
|
||||||
|
global_paths_cfg.unlink()
|
||||||
|
|
||||||
|
|
||||||
|
# Try to find out if the application has a global paths config
|
||||||
|
# If the application has a local configuration file that points us to the paths configuration then load it
|
||||||
|
"""
|
||||||
|
global_paths_cfg = Path("./global_paths_cfg.yaml")
|
||||||
|
if global_paths_cfg.exists():
|
||||||
|
cfg = BaseConfig()
|
||||||
|
cfg.load_config(global_paths_cfg)
|
||||||
|
try:
|
||||||
|
lollms_personal_path = cfg.global_path
|
||||||
|
except Exception as ex:
|
||||||
|
print("Couldn't find your global path!")
|
||||||
|
cfg.global_path = input("Please specify the folder where your configuration files, your models and your custom personalities need to be stored:")
|
||||||
|
lollms_personal_path = cfg.global_path
|
||||||
|
cfg.save_config(global_paths_cfg)
|
||||||
|
else:
|
||||||
|
# if the app is not forcing a specific path, then try to find out if the default installed library has specified a default path
|
||||||
|
global_paths_cfg = lollms_path/"global_paths_cfg.yaml"
|
||||||
|
if global_paths_cfg.exists():
|
||||||
|
cfg = BaseConfig()
|
||||||
|
cfg.load_config(global_paths_cfg)
|
||||||
|
try:
|
||||||
|
lollms_personal_path = cfg.global_path
|
||||||
|
except Exception as ex:
|
||||||
|
print("Couldn't find your global path!")
|
||||||
|
cfg.global_path = input("Please specify the folder where your configuration files, your models and your custom personalities need to be stored:")
|
||||||
|
lollms_personal_path = cfg.global_path
|
||||||
|
cfg.save_config(global_paths_cfg)
|
||||||
|
|
||||||
|
lollms_personal_path = Path.home()/"Documents/lollms"
|
||||||
|
|
||||||
|
lollms_personal_configuration_path = lollms_personal_path/"configs"
|
||||||
|
lollms_personal_models_path = lollms_personal_path/"models"
|
||||||
|
|
||||||
|
lollms_personal_path.mkdir(parents=True, exist_ok=True)
|
||||||
|
lollms_personal_configuration_path.mkdir(parents=True, exist_ok=True)
|
||||||
|
lollms_personal_models_path.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
if not(lollms_personal_configuration_path/"local_config.yaml").exists():
|
||||||
|
shutil.copy(lollms_path / "configs/config.yaml", str(lollms_personal_configuration_path/"local_config.yaml"))
|
||||||
|
|
||||||
|
|
||||||
|
"""
|
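The directory layout that `LollmsPaths` builds (a `configs` and a `models` folder under the personal path, with the default config copied in on first run) can be exercised in isolation. This is a minimal sketch under a temporary directory; `build_personal_tree` is a hypothetical stand-in for `create_directories` plus `copy_default_config`, not part of the library.

```python
import shutil
import tempfile
from pathlib import Path


def build_personal_tree(personal_path: Path, default_cfg: Path) -> Path:
    # Mirror LollmsPaths.create_directories and copy_default_config
    (personal_path / "configs").mkdir(parents=True, exist_ok=True)
    (personal_path / "models").mkdir(parents=True, exist_ok=True)
    local_cfg = personal_path / "configs" / "local_config.yaml"
    if not local_cfg.exists():
        shutil.copy(default_cfg, str(local_cfg))
    return local_cfg


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    default_cfg = root / "config.yaml"
    default_cfg.write_text("version: 1\n")
    local = build_personal_tree(root / "personal", default_cfg)
    print(local.read_text().strip())  # version: 1
```

Because the copy is guarded by `exists()`, re-running the setup never overwrites a user's edited `local_config.yaml`.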
lollms/personalities_zoo  (submodule)
@@ -0,0 +1 @@
Subproject commit e20c0687ce98f4ed80cf6e1bc7ddcc144be83154
lollms/personality.py  (new file, +989)
@@ -0,0 +1,989 @@
from datetime import datetime
from pathlib import Path
from lollms.paths import LollmsPaths
from lollms.binding import LLMBinding

import pkg_resources
from PIL import Image
from typing import Optional, List
import re
import importlib.util
import shutil
import subprocess
import yaml
from enum import Enum


class MSG_TYPE(Enum):
    MSG_TYPE_CHUNK = 0
    MSG_TYPE_FULL = 1
    MSG_TYPE_META = 2
    MSG_TYPE_REF = 3
    MSG_TYPE_CODE = 4
    MSG_TYPE_UI = 5


class APScript:
    """
    Template class for implementing personality processor classes in the APScript framework.

    This class provides a basic structure and placeholder methods for processing model inputs and outputs.
    Personality-specific processor classes should inherit from this class and override the necessary methods.

    Methods:
        __init__():
            Initializes the APScript object.

        run_workflow(prompt, previous_discussion_text="", callback=None):
            Runs the workflow for processing the model input and output.

        process_model_input(text):
            Processes the model input.

        process_model_output(text):
            Processes the model output.

    Attributes:
        None

    Usage:
    ```
    # Create a personality-specific processor class that inherits from APScript
    class MyPersonalityProcessor(APScript):
        def __init__(self):
            super().__init__()

        def process_model_input(self, text):
            # Implement the desired behavior for processing the model input
            # and return the processed model input

        def process_model_output(self, text):
            # Implement the desired behavior for processing the model output
            # and return the processed model output

    # Create an instance of the personality processor
    my_processor = MyPersonalityProcessor()

    # Run the workflow on a prompt
    my_processor.run_workflow("Enter your input: ")
    ```
    """
    def __init__(self, personality=None) -> None:
        self.files = []
        self.personality = personality

    def install_personality(self, personality_path, force_reinstall=False):
        install_file_name = "install.py"
        install_script_path = personality_path / "scripts" / install_file_name
        if install_script_path.exists():
            module_name = install_file_name[:-3]  # Remove the ".py" extension
            module_spec = importlib.util.spec_from_file_location(module_name, str(install_script_path))
            module = importlib.util.module_from_spec(module_spec)
            module_spec.loader.exec_module(module)
            if hasattr(module, "Install"):
                module.Install(self.personality, force_reinstall=force_reinstall)

    def add_file(self, path):
        self.files.append(path)
        return True

    def remove_file(self, path):
        self.files.remove(path)

    def load_config_file(self, path):
        """
        Load the content of a local_config.yaml file.

        The function reads the content of the given YAML file and returns it as a Python dictionary.

        Args:
            path: Path to the local_config.yaml file.

        Returns:
            dict: A dictionary containing the loaded data from the local_config.yaml file.
        """
        with open(path, 'r') as file:
            data = yaml.safe_load(file)
        return data

    def remove_text_from_string(self, string, text_to_find):
        """
        Removes everything from the first occurrence of the specified text in the string (case-insensitive).

        Parameters:
            string (str): The original string.
            text_to_find (str): The text to find in the string.

        Returns:
            str: The updated string.
        """
        index = string.lower().find(text_to_find.lower())

        if index != -1:
            string = string[:index]

        return string

    def process(self, text: str, message_type: MSG_TYPE):
        bot_says = self.bot_says + text
        antiprompt = self.personality.detect_antiprompt(bot_says)
        if antiprompt:
            self.bot_says = self.remove_text_from_string(bot_says, antiprompt)
            print("Detected hallucination")
            return False
        else:
            self.bot_says = bot_says
            return True

    def generate(self, prompt, max_size):
        self.bot_says = ""
        return self.personality.model.generate(
            prompt,
            max_size,
            self.process,
            temperature=self.personality.model_temperature,
            top_k=self.personality.model_top_k,
            top_p=self.personality.model_top_p,
            repeat_penalty=self.personality.model_repeat_penalty,
        ).strip()
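The antiprompt handling in `process` hinges on the case-insensitive truncation done by `remove_text_from_string`: as soon as a marker like `## Human` appears in the stream, everything from it onward is cut. The same logic can be seen in isolation (the free function `cut_at_antiprompt` here is illustrative, not part of the class):

```python
def cut_at_antiprompt(text: str, antiprompt: str) -> str:
    # Case-insensitive version of APScript.remove_text_from_string:
    # keep only what precedes the first occurrence of the marker.
    index = text.lower().find(antiprompt.lower())
    return text[:index] if index != -1 else text


print(cut_at_antiprompt("Sure, here you go.\n## Human: thanks", "## human"))
```

Returning `False` from `process` after a cut is what signals the binding to stop generating further tokens.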
    def run_workflow(self, prompt: str, previous_discussion_text: str = "", callback=None):
        """
        Runs the workflow for processing the model input and output.

        This method should be called to execute the processing workflow.

        Args:
            prompt (str): The input prompt for the model.
            previous_discussion_text (str, optional): The text of the previous discussion. Default is an empty string.
            callback (function, optional): A callback invoked as output is produced.

        Returns:
            None
        """
        return None

    def process_model_input(self, text: str):
        """
        Process the model input.

        This method should be overridden in the personality-specific processor class to define
        the desired behavior for processing the model input.

        Args:
            text (str): The model input text.

        Returns:
            Any: The processed model input.
        """
        return None

    def process_model_output(self, text: str):
        """
        Process the model output.

        This method should be overridden in the personality-specific processor class to define
        the desired behavior for processing the model output.

        Args:
            text (str): The model output text.

        Returns:
            Any: The processed model output.
        """
        return None
def is_package_installed(package_name):
    try:
        pkg_resources.get_distribution(package_name)
        return True
    except pkg_resources.DistributionNotFound:
        return False


def install_package(package_name):
    try:
        # Check if the package is already installed
        __import__(package_name)
        print(f"{package_name} is already installed.")
    except ImportError:
        print(f"{package_name} is not installed. Installing...")

        # Install the package using pip
        subprocess.check_call(["pip", "install", package_name])

        print(f"{package_name} has been successfully installed.")
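`is_package_installed` relies on `pkg_resources`, which newer setuptools releases deprecate. An equivalent check can be written with the standard library's `importlib.metadata` (the name `is_installed` is a hypothetical alternative, not part of this diff):

```python
from importlib import metadata


def is_installed(package_name: str) -> bool:
    # Equivalent to is_package_installed above: a package is considered
    # installed if its distribution metadata can be resolved.
    try:
        metadata.version(package_name)
        return True
    except metadata.PackageNotFoundError:
        return False


print(is_installed("surely-not-a-real-package-name"))  # False
```

Note that `__import__(package_name)` in `install_package` can disagree with this check when the distribution name differs from the import name (e.g. `PyYAML` vs `yaml`).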
class AIPersonality:

    # Extra
    Conditionning_commands = {
        "date_time": datetime.now().strftime("%A, %B %d, %Y %I:%M:%S %p"),  # Replaces {{date_time}} with the current date and time
        "date": datetime.now().strftime("%A, %B %d, %Y"),  # Replaces {{date}} with the current date
        "time": datetime.now().strftime("%H:%M:%S"),  # Replaces {{time}} with the current time
    }

    def __init__(
            self,
            lollms_paths: LollmsPaths,
            personality_package_path: str | Path = None,
            model: LLMBinding = None,
            run_scripts=True,
            is_relative_path=True,
            force_reinstall=False
    ):
        """
        Initialize an AIPersonality instance.

        Parameters:
            personality_package_path (str or Path): The path to the folder containing the personality package.

        Raises:
            ValueError: If the provided path is not a folder or does not contain a config.yaml file.
        """
        self.lollms_paths = lollms_paths
        self.model = model

        self.files = []

        self.force_reinstall = force_reinstall

        # First set up a default personality
        # Version
        self._version = pkg_resources.get_distribution('lollms').version

        self.run_scripts = run_scripts

        # General information
        self._author: str = "ParisNeo"
        self._name: str = "lollms"
        self._user_name: str = "user"
        self._language: str = "english"
        self._category: str = "General"

        # Conditioning
        self._personality_description: str = "This personality is a helpful and kind AI ready to help you solve your problems"
        self._personality_conditioning: str = """## Instructions:
lollms (Lord of LLMs) is a smart and helpful assistant built by the computer geek ParisNeo.
It is compatible with many bindings to LLM models such as llama, gpt4all, gptj, autogptq etc.
It can discuss with humans and assist them on many subjects.
It runs locally on your machine. No need to connect to the internet.
It answers questions with precise details.
Its performance depends on the underlying model size and training.
Try to answer with as much detail as you can.
Date: {{date}}
"""
        self._welcome_message: str = "Welcome! I am lollms (Lord of LLMs), a free and open assistant built by ParisNeo. What can I do for you today?"
        self._include_welcome_message_in_disucssion: bool = True
        self._user_message_prefix: str = "## Human: "
        self._link_text: str = "\n"
        self._ai_message_prefix: str = "## lollms:"
        self._anti_prompts: list = ["## Human", "## lollms", "##Human", "##Assistant", "##lollms"]

        # Extra
        self._dependencies: List[str] = []

        # Disclaimer
        self._disclaimer: str = ""

        # Default model parameters
        self._model_temperature: float = 0.8  # higher: more creative, lower: more deterministic
        self._model_n_predicts: int = 2048  # higher: generates more words, lower: generates fewer
        self._model_top_k: int = 50
        self._model_top_p: float = 0.95
        self._model_repeat_penalty: float = 1.3
        self._model_repeat_last_n: int = 40

        self._processor_cfg: dict = {}

        self._logo: Optional[Image.Image] = None
        self._processor = None

        if personality_package_path is None:
            self.config = {}
            self.assets_list = []
            self.personality_package_path = None
            return
        else:
            if is_relative_path:
                self.personality_package_path = self.lollms_paths.personalities_zoo_path / personality_package_path
            else:
                self.personality_package_path = Path(personality_package_path)

            # Validate that the path exists
            if not self.personality_package_path.exists():
                raise ValueError("The provided path does not exist.")

            # Validate that the path format is OK, with at least a config.yaml file present in the folder
            if not self.personality_package_path.is_dir():
                raise ValueError("The provided path is not a folder.")

            # Open and store the personality
            self.load_personality(personality_package_path)

    def __str__(self):
        return f"{self.language}/{self.category}/{self.name}"
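The `Conditionning_commands` table maps placeholder names to their current values, so that markers like `{{date}}` in the conditioning text can be substituted. The replacement helper itself is not part of this diff; a hypothetical `apply_commands` sketch of how the table could be applied:

```python
from datetime import datetime

commands = {
    # Longer keys first, so {{date_time}} is not clobbered by the {{date}} pass
    "date_time": datetime.now().strftime("%A, %B %d, %Y %I:%M:%S %p"),
    "date": datetime.now().strftime("%A, %B %d, %Y"),
    "time": datetime.now().strftime("%H:%M:%S"),
}


def apply_commands(text: str, table: dict) -> str:
    # Replace each {{key}} marker with its value from the table
    for key, value in table.items():
        text = text.replace("{{" + key + "}}", value)
    return text


print(apply_commands("Date: {{date}}", commands))
```

Ordering matters with plain `str.replace`: substituting `{{date}}` before `{{date_time}}` would corrupt the longer marker, which is why the table lists `date_time` first.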
    def load_personality(self, package_path=None):
        """
        Load personality parameters from a YAML configuration file.

        Args:
            package_path (str or Path): The path to the package directory.

        Raises:
            ValueError: If the configuration file does not exist.
        """
        if package_path is None:
            package_path = self.personality_package_path
        else:
            package_path = Path(package_path)

        # Verify that there is at least a configuration file
        config_file = package_path / "config.yaml"
        if not config_file.exists():
            raise ValueError(f"The provided folder {package_path} does not contain a config.yaml file.")

        with open(config_file, "r") as f:
            config = yaml.safe_load(f)

        secret_file = package_path / "secret.yaml"
        if secret_file.exists():
            with open(secret_file, "r") as f:
                self._secret_cfg = yaml.safe_load(f)
        else:
            self._secret_cfg = None

        # Load parameters from the configuration file
        self._version = config.get("version", self._version)
        self._author = config.get("author", self._author)
        self._name = config.get("name", self._name)
        self._user_name = config.get("user_name", self._user_name)
        self._language = config.get("language", self._language)
        self._category = config.get("category", self._category)
        self._personality_description = config.get("personality_description", self._personality_description)
        self._personality_conditioning = config.get("personality_conditioning", self._personality_conditioning)
        self._welcome_message = config.get("welcome_message", self._welcome_message)
        self._include_welcome_message_in_disucssion = config.get("include_welcome_message_in_disucssion", self._include_welcome_message_in_disucssion)

        self._user_message_prefix = config.get("user_message_prefix", self._user_message_prefix)
        self._link_text = config.get("link_text", self._link_text)
        self._ai_message_prefix = config.get("ai_message_prefix", self._ai_message_prefix)
        self._anti_prompts = config.get("anti_prompts", self._anti_prompts)
        self._dependencies = config.get("dependencies", self._dependencies)
        self._disclaimer = config.get("disclaimer", self._disclaimer)
        self._model_temperature = config.get("model_temperature", self._model_temperature)
        self._model_n_predicts = config.get("model_n_predicts", self._model_n_predicts)
        self._model_top_k = config.get("model_top_k", self._model_top_k)
        self._model_top_p = config.get("model_top_p", self._model_top_p)
        self._model_repeat_penalty = config.get("model_repeat_penalty", self._model_repeat_penalty)
        self._model_repeat_last_n = config.get("model_repeat_last_n", self._model_repeat_last_n)

        # Script parameters (for example keys to connect to a search engine or any other usage)
        self._processor_cfg = config.get("processor_cfg", self._processor_cfg)

        # Set the package path
        self.personality_package_path = package_path

        # Check for a logo file
        self.logo_path = self.personality_package_path / "assets" / "logo.png"
        if self.logo_path.is_file():
            self._logo = Image.open(self.logo_path)

        # Get the assets folder path
        self.assets_path = self.personality_package_path / "assets"
        # Get the scripts folder path
        self.scripts_path = self.personality_package_path / "scripts"

        # If they do not exist, recreate them
        self.assets_path.mkdir(parents=True, exist_ok=True)
        self.scripts_path.mkdir(parents=True, exist_ok=True)

        if self.run_scripts:
            # If it has an install script, execute it
            install_file_name = "install.py"
            self.install_script_path = self.scripts_path / install_file_name
            if self.install_script_path.exists():
                module_name = install_file_name[:-3]  # Remove the ".py" extension
                module_spec = importlib.util.spec_from_file_location(module_name, str(self.install_script_path))
                module = importlib.util.module_from_spec(module_spec)
                module_spec.loader.exec_module(module)
                if hasattr(module, "Install"):
                    self._install = module.Install(self, force_reinstall=self.force_reinstall)
                else:
                    self._install = None

            # Install requirements
            for entry in self._dependencies:
                if not is_package_installed(entry):
                    install_package(entry)

            # Search for any processor code
            processor_file_name = "processor.py"
            self.processor_script_path = self.scripts_path / processor_file_name
            if self.processor_script_path.exists():
                module_name = processor_file_name[:-3]  # Remove the ".py" extension
                module_spec = importlib.util.spec_from_file_location(module_name, str(self.processor_script_path))
                module = importlib.util.module_from_spec(module_spec)
                module_spec.loader.exec_module(module)
                if hasattr(module, "Processor"):
                    self._processor = module.Processor(self)
                else:
                    self._processor = None
            else:
                self._processor = None
        # Get a list of all files in the assets folder
        contents = [str(file) for file in self.assets_path.iterdir() if file.is_file()]

        self._assets_list = contents
        return config
    def save_personality(self, package_path=None):
        """
        Save the personality parameters to a YAML configuration file.

        Args:
            package_path (str or Path): The path to the package directory.
        """
        if package_path is None:
            package_path = self.personality_package_path
        else:
            package_path = Path(package_path)

        # Build output paths
        config_file = package_path / "config.yaml"
        assets_folder = package_path / "assets"

        # Create the assets folder if it doesn't exist
        if not assets_folder.exists():
            assets_folder.mkdir(exist_ok=True, parents=True)

        # Create the configuration dictionary
        config = {
            "author": self._author,
            "version": self._version,
            "name": self._name,
            "user_name": self._user_name,
            "language": self._language,
            "category": self._category,
            "personality_description": self._personality_description,
            "personality_conditioning": self._personality_conditioning,
            "welcome_message": self._welcome_message,
            "include_welcome_message_in_disucssion": self._include_welcome_message_in_disucssion,
            "user_message_prefix": self._user_message_prefix,
            "link_text": self._link_text,
            "ai_message_prefix": self._ai_message_prefix,
            "anti_prompts": self._anti_prompts,
            "dependencies": self._dependencies,
            "disclaimer": self._disclaimer,
            "model_temperature": self._model_temperature,
            "model_n_predicts": self._model_n_predicts,
            "model_top_k": self._model_top_k,
            "model_top_p": self._model_top_p,
            "model_repeat_penalty": self._model_repeat_penalty,
            "model_repeat_last_n": self._model_repeat_last_n
        }

        # Save the configuration to the YAML file
        with open(config_file, "w") as f:
            yaml.dump(config, f)
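The save/load pattern above (serialize a config dict, read it back, and fall back to defaults with `dict.get` for any missing key) can be sketched dependency-free, with the stdlib `json` module standing in for YAML:

```python
import json
import tempfile
from pathlib import Path

# A partial config, as a personality's config.yaml may omit many keys
config = {"name": "lollms", "model_temperature": 0.8}

with tempfile.TemporaryDirectory() as tmp:
    cfg_file = Path(tmp) / "config.json"
    cfg_file.write_text(json.dumps(config))
    loaded = json.loads(cfg_file.read_text())

# Missing keys fall back to defaults, as in load_personality
name = loaded.get("name", "default")
top_k = loaded.get("model_top_k", 50)
print(name, top_k)  # lollms 50
```

This is why `load_personality` never fails on sparse configs: every parameter defaults to the value set in `__init__`.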
    def as_dict(self):
        """
        Convert the personality parameters to a dictionary.

        Returns:
            dict: The personality parameters as a dictionary.
        """
        return {
            "author": self._author,
            "version": self._version,
            "name": self._name,
            "user_name": self._user_name,
            "language": self._language,
            "category": self._category,
            "personality_description": self._personality_description,
            "personality_conditioning": self._personality_conditioning,
            "welcome_message": self._welcome_message,
            "include_welcome_message_in_disucssion": self._include_welcome_message_in_disucssion,
            "user_message_prefix": self._user_message_prefix,
            "link_text": self._link_text,
            "ai_message_prefix": self._ai_message_prefix,
            "anti_prompts": self._anti_prompts,
            "dependencies": self._dependencies,
            "disclaimer": self._disclaimer,
            "model_temperature": self._model_temperature,
            "model_n_predicts": self._model_n_predicts,
            "model_top_k": self._model_top_k,
            "model_top_p": self._model_top_p,
            "model_repeat_penalty": self._model_repeat_penalty,
            "model_repeat_last_n": self._model_repeat_last_n,
            "assets_list": self._assets_list
        }
    # ========================================== Properties ===========================================
    @property
    def logo(self):
        """
        Get the personality logo.

        Returns:
            PIL.Image.Image: The personality logo as a Pillow Image object, or None if no logo is set.
        """
        if hasattr(self, '_logo'):
            return self._logo
        else:
            return None

    @property
    def version(self):
        """Get the version of the package."""
        return self._version

    @version.setter
    def version(self, value):
        """Set the version of the package."""
        self._version = value

    @property
    def author(self):
        """Get the author of the package."""
        return self._author

    @author.setter
    def author(self, value):
        """Set the author of the package."""
        self._author = value

    @property
    def name(self) -> str:
        """Get the name."""
        return self._name

    @name.setter
    def name(self, value: str):
        """Set the name."""
        self._name = value

    @property
    def user_name(self) -> str:
        """Get the user name."""
        return self._user_name

    @user_name.setter
    def user_name(self, value: str):
        """Set the user name."""
        self._user_name = value

    @property
    def language(self) -> str:
        """Get the language."""
        return self._language

    @language.setter
    def language(self, value: str):
        """Set the language."""
        self._language = value

    @property
    def category(self) -> str:
        """Get the category."""
        return self._category

    @category.setter
    def category(self, value: str):
        """Set the category."""
        self._category = value

    @property
    def personality_description(self) -> str:
        """
        Getter for the personality description.

        Returns:
            str: The personality description of the AI assistant.
        """
        return self._personality_description

    @personality_description.setter
    def personality_description(self, description: str):
        """
        Setter for the personality description.

        Args:
            description (str): The new personality description for the AI assistant.
        """
        self._personality_description = description

    @property
    def personality_conditioning(self) -> str:
        """
        Getter for the personality conditioning.

        Returns:
            str: The personality conditioning of the AI assistant.
        """
        return self.replace_keys(self._personality_conditioning, self.Conditionning_commands)

    @personality_conditioning.setter
    def personality_conditioning(self, conditioning: str):
        """
        Setter for the personality conditioning.

        Args:
            conditioning (str): The new personality conditioning for the AI assistant.
        """
        self._personality_conditioning = conditioning

    @property
    def welcome_message(self) -> str:
        """
        Getter for the welcome message.

        Returns:
            str: The welcome message of the AI assistant.
        """
        return self.replace_keys(self._welcome_message, self.Conditionning_commands)

    @welcome_message.setter
    def welcome_message(self, message: str):
        """
        Setter for the welcome message.

        Args:
            message (str): The new welcome message for the AI assistant.
        """
        self._welcome_message = message

    @property
    def include_welcome_message_in_disucssion(self) -> bool:
        """
        Getter for the include-welcome-message-in-discussion flag.

        Returns:
            bool: Whether to add the welcome message to the discussion or not.
        """
        return self._include_welcome_message_in_disucssion

    @include_welcome_message_in_disucssion.setter
    def include_welcome_message_in_disucssion(self, value: bool):
        """
        Setter for the include-welcome-message-in-discussion flag.

        Args:
            value (bool): Whether to add the welcome message to the discussion or not.
        """
        self._include_welcome_message_in_disucssion = value
    @property
    def user_message_prefix(self) -> str:
        """
        Getter for the user message prefix.

        Returns:
            str: The user message prefix of the AI assistant.
        """
        return self._user_message_prefix

    @user_message_prefix.setter
    def user_message_prefix(self, prefix: str):
        """
        Setter for the user message prefix.

        Args:
            prefix (str): The new user message prefix for the AI assistant.
        """
        self._user_message_prefix = prefix

    @property
    def link_text(self) -> str:
        """
        Getter for the link text.

        Returns:
            str: The link text of the AI assistant.
        """
        return self._link_text

    @link_text.setter
    def link_text(self, text: str):
        """
        Setter for the link text.

        Args:
            text (str): The new link text for the AI assistant.
        """
        self._link_text = text

    @property
    def ai_message_prefix(self):
        """
        Get the AI message prefix.

        Returns:
            str: The AI message prefix.
        """
        return self._ai_message_prefix

    @ai_message_prefix.setter
    def ai_message_prefix(self, prefix):
        """
        Set the AI message prefix.

        Args:
            prefix (str): The AI message prefix to set.
        """
        self._ai_message_prefix = prefix

    @property
    def anti_prompts(self):
        """
        Get the anti-prompts list.

        Returns:
            list: The anti-prompts list.
        """
        return self._anti_prompts

    @anti_prompts.setter
    def anti_prompts(self, prompts):
        """
        Set the anti-prompts list.

        Args:
            prompts (list): The anti-prompts list to set.
        """
        self._anti_prompts = prompts

    @property
    def dependencies(self) -> List[str]:
        """Getter method for the dependencies attribute.

        Returns:
            List[str]: The list of dependencies.
        """
        return self._dependencies

    @dependencies.setter
    def dependencies(self, dependencies: List[str]):
        """Setter method for the dependencies attribute.

        Args:
            dependencies (List[str]): The list of dependencies.
        """
        self._dependencies = dependencies

    @property
    def disclaimer(self) -> str:
        """Getter method for the disclaimer attribute.

        Returns:
            str: The disclaimer text.
        """
        return self._disclaimer

    @disclaimer.setter
    def disclaimer(self, disclaimer: str):
        """Setter method for the disclaimer attribute.

        Args:
            disclaimer (str): The disclaimer text.
        """
        self._disclaimer = disclaimer

    @property
    def model_temperature(self) -> float:
        """Get the model's temperature."""
        return self._model_temperature

    @model_temperature.setter
    def model_temperature(self, value: float):
        """Set the model's temperature.

        Args:
            value (float): The new temperature value.
        """
        self._model_temperature = value

    @property
    def model_n_predicts(self) -> int:
        """Get the number of predictions the model generates."""
        return self._model_n_predicts

    @model_n_predicts.setter
    def model_n_predicts(self, value: int):
        """Set the number of predictions the model generates.

        Args:
            value (int): The new number of predictions value.
        """
        self._model_n_predicts = value

    @property
    def model_top_k(self) -> int:
        """Get the model's top-k value."""
        return self._model_top_k

    @model_top_k.setter
    def model_top_k(self, value: int):
        """Set the model's top-k value.

        Args:
            value (int): The new top-k value.
        """
        self._model_top_k = value

    @property
    def model_top_p(self) -> float:
        """Get the model's top-p value."""
        return self._model_top_p

    @model_top_p.setter
    def model_top_p(self, value: float):
        """Set the model's top-p value.

        Args:
            value (float): The new top-p value.
        """
        self._model_top_p = value

    @property
    def model_repeat_penalty(self) -> float:
        """Get the model's repeat penalty value."""
        return self._model_repeat_penalty

    @model_repeat_penalty.setter
    def model_repeat_penalty(self, value: float):
        """Set the model's repeat penalty value.

        Args:
            value (float): The new repeat penalty value.
        """
        self._model_repeat_penalty = value

    @property
    def model_repeat_last_n(self) -> int:
        """Get the number of words to consider for repeat penalty."""
        return self._model_repeat_last_n

    @model_repeat_last_n.setter
    def model_repeat_last_n(self, value: int):
        """Set the number of words to consider for repeat penalty.

        Args:
            value (int): The new number of words value.
        """
        self._model_repeat_last_n = value
    @property
    def assets_list(self) -> list:
        """Get the list of asset paths attached to this personality."""
        return self._assets_list

    @assets_list.setter
    def assets_list(self, value: list):
        """Set the list of asset paths attached to this personality.

        Args:
            value (list): The new assets list.
        """
        self._assets_list = value

    @property
    def processor(self) -> APScript:
        """Get the personality's custom processor (APScript), if any."""
        return self._processor

    @processor.setter
    def processor(self, value: APScript):
        """Set the personality's custom processor (APScript).

        Args:
            value (APScript): The new processor.
        """
        self._processor = value

    @property
    def processor_cfg(self) -> dict:
        """Get the processor configuration."""
        return self._processor_cfg

    @processor_cfg.setter
    def processor_cfg(self, value: dict):
        """Set the processor configuration.

        Args:
            value (dict): The new processor configuration.
        """
        self._processor_cfg = value
    # ========================================== Helper methods ==========================================
    def detect_antiprompt(self, text: str) -> str:
        """
        Detects if any of the antiprompts in self.anti_prompts are present in the given text.
        Used for the hallucination suppression system.

        Args:
            text (str): The text to check for antiprompts.

        Returns:
            str: The first antiprompt found in the text (lowercased, matched ignoring case),
                or None if no antiprompt is found.
        """
        for prompt in self.anti_prompts:
            if prompt.lower() in text.lower():
                return prompt.lower()
        return None

    # Helper functions
    @staticmethod
    def replace_keys(input_string, replacements):
        """
        Replaces all occurrences of keys in the input string with their corresponding
        values from the replacements dictionary.

        Args:
            input_string (str): The input string to replace keys in.
            replacements (dict): A dictionary of key-value pairs, where the key is the
                string to be replaced and the value is the replacement string.

        Returns:
            str: The input string with all occurrences of keys replaced by their
                corresponding values.
        """
        # The pattern matches "{{key}}" and captures "key" in a group.
        # The "\w+" matches one or more word characters (letters, digits, or underscore).
        pattern = r"\{\{(\w+)\}\}"

        def replace(match):
            key = match.group(1)
            return replacements.get(key, match.group(0))

        output_string = re.sub(pattern, replace, input_string)
        return output_string
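The two helpers above are self-contained enough to exercise in isolation. A minimal standalone sketch of the same logic (reimplemented here for the demo rather than imported from lollms; the sample conditioning string and anti-prompt markers are invented):

```python
import re

def replace_keys(input_string, replacements):
    # Substitute every "{{key}}" with replacements[key],
    # leaving unknown keys untouched, as the class helper does.
    pattern = r"\{\{(\w+)\}\}"
    return re.sub(pattern, lambda m: replacements.get(m.group(1), m.group(0)), input_string)

def detect_antiprompt(text, anti_prompts):
    # Return the first anti-prompt found in the text (case-insensitive), else None.
    for prompt in anti_prompts:
        if prompt.lower() in text.lower():
            return prompt.lower()
    return None

conditioning = "You are {{name}}, answering in {{language}}."
print(replace_keys(conditioning, {"name": "lollms", "language": "English"}))
# Unknown keys are preserved verbatim:
print(replace_keys("Hello {{unknown_key}}", {}))
# Hallucination suppression: detect when the model starts speaking for the user.
print(detect_antiprompt("sure!### User: hi", ["### User:", "### Assistant:"]))
```

Returning the lowercased matched marker (rather than a plain bool) lets the caller cut the generated text at the exact anti-prompt that fired.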
lollms/server.py (new file, 457 lines)
@@ -0,0 +1,457 @@
from flask import Flask, render_template, request
from flask_socketio import SocketIO, emit
from flask_cors import CORS
from lollms.personality import AIPersonality, MSG_TYPE
from lollms.binding import LOLLMSConfig, LLMBinding
from lollms.helpers import ASCIIColors
from lollms.console import MainMenu
from lollms.paths import LollmsPaths
from lollms import BindingBuilder, ModelBuilder, PersonalityBuilder
from typing import List, Tuple
import importlib
from pathlib import Path
import argparse
import logging
import shutil
import yaml
import copy

class LoLLMsServer:
    def __init__(self):
        self.app = Flask("LoLLMsServer_Server")
        #self.app.config['SECRET_KEY'] = 'lollmssecret'
        CORS(self.app)  # Enable CORS for all routes
        self.socketio = SocketIO(self.app, cors_allowed_origins='*')
        self.clients = {}
        self.current_binding = None
        self.current_model = None
        self.personalities = []
        self.answer = ['']

        self.lollms_paths = LollmsPaths.find_paths(force_local=False)
        self.menu = MainMenu(self)

        # Set log level to warning
        self.app.logger.setLevel(logging.WARNING)
        # Configure a custom logger for Flask-SocketIO
        self.socketio_log = logging.getLogger('socketio')
        self.socketio_log.setLevel(logging.WARNING)
        self.socketio_log.addHandler(logging.StreamHandler())

        self.initialize_routes()
        self.run()
    def load_binding(self):
        if self.config.binding_name is None:
            print("No binding selected")
            print("Please select a valid binding or install a new one from a url")
            self.menu.select_binding()
            # cfg.download_model(url)
        else:
            try:
                self.binding_class = BindingBuilder().build_binding(self.lollms_paths.bindings_zoo_path, self.config)
            except Exception as ex:
                print(ex)
                print(f"Couldn't find binding. Please verify your configuration file at {self.config.file_path} or use the next menu to select a valid binding")
                self.menu.select_binding()

    def load_model(self):
        try:
            self.model = ModelBuilder(self.binding_class, self.config).get_model()
        except Exception as ex:
            ASCIIColors.error("Couldn't load model.")
            ASCIIColors.error(f"Binding returned this exception : {ex}")
            ASCIIColors.error(f"{self.config.get_model_path_infos()}")
            print("Please select a valid model or install a new one from a url")
            self.menu.select_model()

    def load_personality(self):
        try:
            self.personality = PersonalityBuilder(self.lollms_paths, self.config, self.model).build_personality()
        except Exception as ex:
            ASCIIColors.error("Couldn't load personality.")
            ASCIIColors.error(f"Personality returned this exception : {ex}")
            ASCIIColors.error(f"{self.config.get_personality_path_infos()}")
            print("Please select a valid personality or install a new one from a url")
            self.menu.select_model()
        self.cond_tk = self.personality.model.tokenize(self.personality.personality_conditioning)
        self.n_cond_tk = len(self.cond_tk)
    def initialize_routes(self):
        @self.socketio.on('connect')
        def handle_connect():
            client_id = request.sid
            self.clients[client_id] = {"namespace": request.namespace, "full_discussion_blocks": []}
            ASCIIColors.success(f'Client connected with session ID: {client_id}')

        @self.socketio.on('disconnect')
        def handle_disconnect():
            client_id = request.sid
            if client_id in self.clients:
                del self.clients[client_id]
            print(f'Client disconnected with session ID: {client_id}')

        @self.socketio.on('list_available_bindings')
        def handle_list_bindings():
            binding_infs = []
            for p in self.bindings_path.iterdir():
                if p.is_dir():
                    with open(p/"binding_card.yaml", "r") as f:
                        card = yaml.safe_load(f)
                    with open(p/"models.yaml", "r") as f:
                        models = yaml.safe_load(f)
                    entry = {
                        "name": p.name,
                        "card": card,
                        "models": models
                    }
                    binding_infs.append(entry)

            emit('bindings_list', {'success': True, 'bindings': binding_infs}, room=request.sid)

        @self.socketio.on('list_available_personalities')
        def handle_list_available_personalities():
            personalities_folder = self.personalities_path
            personalities = {}
            for language_folder in personalities_folder.iterdir():
                if language_folder.is_dir():
                    personalities[language_folder.name] = {}
                    for category_folder in language_folder.iterdir():
                        if category_folder.is_dir():
                            personalities[language_folder.name][category_folder.name] = []
                            for personality_folder in category_folder.iterdir():
                                if personality_folder.is_dir():
                                    try:
                                        personality_info = {"folder": personality_folder.stem}
                                        config_path = personality_folder / 'config.yaml'
                                        with open(config_path) as config_file:
                                            config_data = yaml.load(config_file, Loader=yaml.FullLoader)
                                            personality_info['name'] = config_data.get('name', "No Name")
                                            personality_info['description'] = config_data.get('personality_description', "")
                                            personality_info['author'] = config_data.get('author', 'ParisNeo')
                                            personality_info['version'] = config_data.get('version', '1.0.0')
                                        scripts_path = personality_folder / 'scripts'
                                        personality_info['has_scripts'] = scripts_path.is_dir()
                                        assets_path = personality_folder / 'assets'
                                        gif_logo_path = assets_path / 'logo.gif'
                                        webp_logo_path = assets_path / 'logo.webp'
                                        png_logo_path = assets_path / 'logo.png'
                                        jpg_logo_path = assets_path / 'logo.jpg'
                                        jpeg_logo_path = assets_path / 'logo.jpeg'
                                        bmp_logo_path = assets_path / 'logo.bmp'

                                        personality_info['has_logo'] = png_logo_path.is_file() or gif_logo_path.is_file()

                                        if gif_logo_path.exists():
                                            personality_info['avatar'] = str(gif_logo_path).replace("\\", "/")
                                        elif webp_logo_path.exists():
                                            personality_info['avatar'] = str(webp_logo_path).replace("\\", "/")
                                        elif png_logo_path.exists():
                                            personality_info['avatar'] = str(png_logo_path).replace("\\", "/")
                                        elif jpg_logo_path.exists():
                                            personality_info['avatar'] = str(jpg_logo_path).replace("\\", "/")
                                        elif jpeg_logo_path.exists():
                                            personality_info['avatar'] = str(jpeg_logo_path).replace("\\", "/")
                                        elif bmp_logo_path.exists():
                                            personality_info['avatar'] = str(bmp_logo_path).replace("\\", "/")
                                        else:
                                            personality_info['avatar'] = ""
                                        personalities[language_folder.name][category_folder.name].append(personality_info)
                                    except Exception as ex:
                                        print(f"Couldn't load personality from {personality_folder} [{ex}]")
            emit('personalities_list', {'personalities': personalities}, room=request.sid)
        @self.socketio.on('list_available_models')
        def handle_list_available_models():
            """List the models made available by the currently selected binding."""
            if self.binding_class is None:
                emit('available_models_list', {'success': False, 'error': "No binding selected"}, room=request.sid)
                return  # without a binding there is no model list to build
            model_list = self.binding_class.get_available_models()

            models = []
            for model in model_list:
                try:
                    filename = model.get('filename', "")
                    server = model.get('server', "")
                    image_url = model.get("icon", '/images/default_model.png')
                    license = model.get("license", 'unknown')
                    owner = model.get("owner", 'unknown')
                    owner_link = model.get("owner_link", 'https://github.com/ParisNeo')
                    filesize = int(model.get('filesize', 0))
                    description = model.get('description', "")
                    model_type = model.get("model_type", "")
                    if server.endswith("/"):
                        path = f'{server}(unknown)'
                    else:
                        path = f'{server}/(unknown)'
                    local_path = self.models_path/f'{self.config["binding_name"]}/(unknown)'
                    is_installed = local_path.exists() or model_type.lower() == "api"
                    models.append({
                        'title': filename,
                        'icon': image_url,  # Replace with the path to the model icon
                        'license': license,
                        'owner': owner,
                        'owner_link': owner_link,
                        'description': description,
                        'isInstalled': is_installed,
                        'path': path,
                        'filesize': filesize,
                        'model_type': model_type
                    })
                except Exception as ex:
                    print("#################################")
                    print(ex)
                    print("#################################")
                    print(f"Problem with model : {model}")
            emit('available_models_list', {'success': True, 'available_models': models}, room=request.sid)
        @self.socketio.on('list_available_personalities_languages')
        def handle_list_available_personalities_languages():
            try:
                # emit folder names (Path objects are not JSON serializable)
                languages = [l.name for l in self.personalities_path.iterdir()]
                emit('available_personalities_languages_list', {'success': True, 'available_personalities_languages': languages})
            except Exception as ex:
                emit('available_personalities_languages_list', {'success': False, 'error': str(ex)})

        @self.socketio.on('list_available_personalities_categories')
        def handle_list_available_personalities_categories(data):
            try:
                language = data["language"]
                categories = [l.name for l in (self.personalities_path/language).iterdir()]
                emit('available_personalities_categories_list', {'success': True, 'available_personalities_categories': categories})
            except Exception as ex:
                emit('available_personalities_categories_list', {'success': False, 'error': str(ex)})

        @self.socketio.on('list_available_personalities_names')
        def handle_list_available_personalities_names(data):
            try:
                language = data["language"]
                category = data["category"]
                personalities = [l.name for l in (self.personalities_path/language/category).iterdir()]
                emit('list_available_personalities_names_list', {'success': True, 'list_available_personalities_names': personalities})
            except Exception as ex:
                emit('list_available_personalities_names_list', {'success': False, 'error': str(ex)})
        @self.socketio.on('select_binding')
        def handle_select_binding(data):
            self.cp_config = copy.deepcopy(self.config)
            self.cp_config["binding_name"] = data['binding_name']
            try:
                self.binding_class = self.build_binding(self.bindings_path, self.cp_config)
                self.config = self.cp_config
                emit('select_binding', {'success': True, 'binding_name': self.cp_config["binding_name"]}, room=request.sid)
            except Exception as ex:
                print(ex)
                emit('select_binding', {'success': False, 'binding_name': self.cp_config["binding_name"], 'error': f"Couldn't load binding:\n{ex}"}, room=request.sid)

        @self.socketio.on('select_model')
        def handle_select_model(data):
            model_name = data['model_name']
            if self.binding_class is None:
                emit('select_model', {'success': False, 'model_name': model_name, 'error': "Please select a binding first"}, room=request.sid)
                return
            self.cp_config = copy.deepcopy(self.config)
            self.cp_config["model_name"] = data['model_name']
            try:
                self.current_model = self.binding_class(self.cp_config)
                emit('select_model', {'success': True, 'model_name': model_name}, room=request.sid)
            except Exception as ex:
                print(ex)
                emit('select_model', {'success': False, 'model_name': model_name, 'error': f"Couldn't load model:\n{ex}"}, room=request.sid)
        @self.socketio.on('add_personality')
        def handle_add_personality(data):
            personality_path = data['path']
            try:
                personality = AIPersonality(self.lollms_paths, personality_path)
                self.personalities.append(personality)
                self.config["personalities"].append(personality_path)
                emit('personality_added', {'success': True, 'name': personality.name, 'id': len(self.personalities)-1}, room=request.sid)
                self.config.save_config()
            except Exception as e:
                error_message = str(e)
                emit('personality_add_failed', {'success': False, 'error': error_message}, room=request.sid)

        @self.socketio.on('list_active_personalities')
        def handle_list_active_personalities():
            personality_names = [p.name for p in self.personalities]
            emit('active_personalities_list', {'success': True, 'personalities': personality_names}, room=request.sid)

        @self.socketio.on('activate_personality')
        def handle_activate_personality(data):
            personality_id = data['id']
            if personality_id < len(self.personalities):
                self.active_personality = self.personalities[personality_id]
                # report the activated personality's name and its id
                emit('activate_personality', {'success': True, 'name': self.active_personality.name, 'id': personality_id}, room=request.sid)
                self.config["active_personality_id"] = personality_id
                self.config.save_config()
            else:
                emit('personality_add_failed', {'success': False, 'error': "Personality ID not valid"}, room=request.sid)
@self.socketio.on('generate_text')
|
||||||
|
def handle_generate_text(data):
|
||||||
|
model = self.current_model
|
||||||
|
client_id = request.sid
|
||||||
|
prompt = data['prompt']
|
||||||
|
personality: AIPersonality = self.personalities[data['personality']]
|
||||||
|
personality.model = model
|
||||||
|
cond_tk = personality.model.tokenize(personality.personality_conditioning)
|
||||||
|
n_cond_tk = len(cond_tk)
|
||||||
|
# Placeholder code for text generation
|
||||||
|
# Replace this with your actual text generation logic
|
||||||
|
print(f"Text generation requested by client: {client_id}")
|
||||||
|
|
||||||
|
self.answer[0] = ''
|
||||||
|
full_discussion_blocks = self.clients[client_id]["full_discussion_blocks"]
|
||||||
|
|
||||||
|
if prompt != '':
|
||||||
|
if personality.processor is not None and personality.processor_cfg["process_model_input"]:
|
||||||
|
preprocessed_prompt = personality.processor.process_model_input(prompt)
|
||||||
|
else:
|
||||||
|
preprocessed_prompt = prompt
|
||||||
|
|
||||||
|
if personality.processor is not None and personality.processor_cfg["custom_workflow"]:
|
||||||
|
full_discussion_blocks.append(personality.user_message_prefix)
|
||||||
|
                full_discussion_blocks.append(preprocessed_prompt)
            else:
                full_discussion_blocks.append(personality.user_message_prefix)
                full_discussion_blocks.append(preprocessed_prompt)
                full_discussion_blocks.append(personality.link_text)
                full_discussion_blocks.append(personality.ai_message_prefix)
        else:
            print(output.strip(), end="", flush=True)

        full_discussion = personality.personality_conditioning + ''.join(full_discussion_blocks)

        def callback(text, message_type: MSG_TYPE):
            if message_type == MSG_TYPE.MSG_TYPE_CHUNK:
                self.answer[0] = self.answer[0] + text
                emit('text_chunk', {'chunk': text}, room=client_id)
            return True

        tk = personality.model.tokenize(full_discussion)
        n_tokens = len(tk)
        fd = personality.model.detokenize(tk[-min(self.config.ctx_size - n_cond_tk, n_tokens):])

        if personality.processor is not None and personality.processor_cfg["custom_workflow"]:
            print("processing...", end="", flush=True)
            generated_text = personality.processor.run_workflow(prompt, previous_discussion_text=personality.personality_conditioning + fd, callback=callback)
            print(generated_text)
        else:
            print("generating...", end="", flush=True)
            generated_text = personality.model.generate(personality.personality_conditioning + fd, n_predict=personality.model_n_predicts, callback=callback)

        if personality.processor is not None and personality.processor_cfg["process_model_output"]:
            generated_text = personality.processor.process_model_output(generated_text)

        full_discussion_blocks.append(generated_text.strip())
        print(f"{ASCIIColors.color_green}ok{ASCIIColors.color_reset}", end="", flush=True)

        # Emit the generated text to the client
        emit('text_generated', {'text': generated_text}, room=client_id)

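The generation path above keeps only the last `ctx_size - n_cond_tk` tokens of the discussion before prompting the model. A minimal sketch of that sliding-window truncation, using a hypothetical whitespace tokenizer in place of the binding's real one:

```python
# Sliding-window context truncation, as done before generation above.
# The whitespace tokenizer is a stand-in for the binding's real tokenizer.
def tokenize(text):
    return text.split()

def detokenize(tokens):
    return " ".join(tokens)

def truncate_discussion(full_discussion, ctx_size, n_cond_tk):
    tk = tokenize(full_discussion)
    n_tokens = len(tk)
    # keep at most ctx_size - n_cond_tk of the most recent tokens
    return detokenize(tk[-min(ctx_size - n_cond_tk, n_tokens):])

discussion = "one two three four five six seven eight"
print(truncate_discussion(discussion, ctx_size=6, n_cond_tk=2))  # keeps the last 4 tokens
```

The conditioning text is prepended afterwards, which is why its token count is reserved out of the window up front.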
    def build_binding(self, bindings_path: Path, cfg: LOLLMSConfig) -> LLMBinding:
        binding_path = Path(bindings_path) / cfg["binding_name"]
        # First find out if there is an install script (install.py) and run it
        install_file_name = "install.py"
        install_script_path = binding_path / install_file_name
        if install_script_path.exists():
            module_name = install_file_name[:-3]  # Remove the ".py" extension
            module_spec = importlib.util.spec_from_file_location(module_name, str(install_script_path))
            module = importlib.util.module_from_spec(module_spec)
            module_spec.loader.exec_module(module)
            if hasattr(module, "Install"):
                module.Install(self.config)

        # Define the full absolute path to the module
        absolute_path = binding_path.resolve()
        # Infer the module name from the folder path
        module_name = binding_path.stem
        # Use importlib to load the module from the file path
        loader = importlib.machinery.SourceFileLoader(module_name, str(absolute_path / "__init__.py"))
        binding_module = loader.load_module()
        binding_class = getattr(binding_module, binding_module.binding_name)
        return binding_class

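`build_binding` above loads a binding package by executing its `__init__.py` and then looking up the class named by the module's own `binding_name` attribute. A self-contained sketch of the same pattern (a hypothetical plugin folder created on the fly; shown with `spec_from_file_location`, the non-deprecated equivalent of `SourceFileLoader.load_module`):

```python
import importlib.util
import tempfile
from pathlib import Path

# Create a throwaway "binding" package with an __init__.py, then load it by path.
plugin_dir = Path(tempfile.mkdtemp()) / "demo_binding"
plugin_dir.mkdir()
(plugin_dir / "__init__.py").write_text(
    "binding_name = 'DemoBinding'\n"
    "class DemoBinding:\n"
    "    def __init__(self, config):\n"
    "        self.config = config\n"
)

spec = importlib.util.spec_from_file_location(plugin_dir.stem, str(plugin_dir / "__init__.py"))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# Same lookup as build_binding: the module declares its own entry class by name.
binding_class = getattr(module, module.binding_name)
print(binding_class.__name__)  # DemoBinding
```

This indirection is what lets a bindings zoo add new backends without the server importing them statically.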
    def run(self, host="localhost", port="9600"):
        parser = argparse.ArgumentParser()
        parser.add_argument('--host', '-hst', default=host, help='Host name')
        parser.add_argument('--port', '-prt', default=port, help='Port number')

        parser.add_argument('--config', '-cfg', default=None, help='Path to the configuration file')
        parser.add_argument('--bindings_path', '-bp', default=str(self.lollms_paths.bindings_zoo_path),
                            help='The path to the Bindings folder')
        parser.add_argument('--personalities_path', '-pp',
                            default=str(self.lollms_paths.personalities_zoo_path),
                            help='The path to the personalities folder')
        parser.add_argument('--models_path', '-mp', default=str(self.lollms_paths.personal_models_path),
                            help='The path to the models folder')

        parser.add_argument('--binding_name', '-b', default="llama_cpp_official",
                            help='Binding to be used by default')
        parser.add_argument('--model_name', '-m', default=None,
                            help='Model name')
        parser.add_argument('--personality_full_name', '-p', default="personality",
                            help='Personality path relative to the personalities folder (language/category/name)')

        args = parser.parse_args()

        # Configuration loading part
        self.config = LOLLMSConfig.autoload(self.lollms_paths, args.config)

        if args.binding_name:
            self.config.binding_name = args.binding_name

        if args.model_name:
            self.config.model_name = args.model_name

        # Recover bindings, personalities and models paths
        self.personalities_path = Path(args.personalities_path)
        self.bindings_path = Path(args.bindings_path)
        self.models_path = Path(args.models_path)
        if self.config.binding_name is None:
            self.menu.select_binding()
        else:
            self.binding_class = self.build_binding(self.bindings_path, self.config)
        if self.config.model_name is None:
            self.menu.select_model()
        else:
            try:
                self.current_model = self.binding_class(self.config)
            except Exception as ex:
                print(f"{ASCIIColors.color_red}Couldn't load model. Please select a valid model{ASCIIColors.color_reset}")
                print(f"{ASCIIColors.color_red}{ex}{ASCIIColors.color_reset}")
                self.menu.select_model()

        for p in self.config.personalities:
            personality = AIPersonality(self.lollms_paths, self.lollms_paths.personalities_zoo_path / p, self.current_model)
            self.personalities.append(personality)

        self.active_personality = self.personalities[self.config.active_personality_id]

        self.menu.show_logo()
        print(f"{ASCIIColors.color_red}Current personality : {ASCIIColors.color_reset}{self.active_personality}")
        print("running...")

        self.socketio.run(self.app, host=args.host, port=args.port)


def main():
    LoLLMsServer()


if __name__ == '__main__':
    main()

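`run()` above resolves every server setting from CLI flags whose defaults come from the loaded paths and configuration. A reduced sketch of that argparse pattern, showing how the defaults flow through when no flags are passed (flag set shortened for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--host', '-hst', default="localhost", help='Host name')
parser.add_argument('--port', '-prt', default="9600", help='Port number')
parser.add_argument('--binding_name', '-b', default="llama_cpp_official",
                    help='Binding to be used by default')

# With no CLI arguments, the declared defaults flow straight into the config.
args = parser.parse_args([])
print(args.host, args.port, args.binding_name)
```

Because `--binding_name` has a truthy default, `run()`'s `if args.binding_name:` branch always overwrites the stored config unless the flag handling is changed; worth keeping in mind when editing the config by hand.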
lollms/settings.py  (new file)
@@ -0,0 +1,265 @@
from lollms.personality import AIPersonality, MSG_TYPE
from lollms.binding import LOLLMSConfig, LLMBinding
from lollms.helpers import ASCIIColors
from lollms.paths import LollmsPaths
import shutil
import yaml
import importlib
from pathlib import Path
import sys
import pkg_resources
import argparse
from tqdm import tqdm
from lollms import BindingBuilder, ModelBuilder, PersonalityBuilder
from lollms.console import MainMenu

class Settings:
    def __init__(
                    self,
                    configuration_path: str | Path = None,
                    show_logo: bool = True,
                    show_commands_list: bool = False,
                    show_personality_infos: bool = True,
                    show_model_infos: bool = True,
                    show_welcome_message: bool = True
                ):

        self.is_logging = False
        self.log_file_path = ""

        self.bot_says = ""
        # Get paths
        self.lollms_paths = LollmsPaths.find_paths(force_local=False)

        # Build menu
        self.menu = MainMenu(self)

        # Change configuration
        original = self.lollms_paths.default_cfg_path
        if configuration_path is None:
            local = self.lollms_paths.personal_configuration_path / "local_config.yaml"
        else:
            # Force it to be a path
            local = Path(configuration_path)

        if not local.exists():
            shutil.copy(original, local)
        self.cfg_path = local

        self.config = LOLLMSConfig(self.cfg_path)
        # Load binding
        self.load_binding()

        # Load model
        self.load_model()
        # cfg.binding_name = llm_binding.binding_folder_name
        # cfg.model_name = model_name

        # Load personality
        try:
            self.load_personality()
        except Exception as ex:
            print(f"No personality selected. Please select one from the zoo. {ex}")
            self.menu.select_personality()

        if show_logo:
            self.menu.show_logo()
        if show_commands_list:
            self.menu.show_commands_list()

        if show_personality_infos:
            print()
            print(f"{ASCIIColors.color_green}Current personality : {ASCIIColors.color_reset}{self.personality}")
            print(f"{ASCIIColors.color_green}Version : {ASCIIColors.color_reset}{self.personality.version}")
            print(f"{ASCIIColors.color_green}Author : {ASCIIColors.color_reset}{self.personality.author}")
            print(f"{ASCIIColors.color_green}Description : {ASCIIColors.color_reset}{self.personality.personality_description}")
            print()

        if show_model_infos:
            print()
            print(f"{ASCIIColors.color_green}Current binding : {ASCIIColors.color_reset}{self.config['binding_name']}")
            print(f"{ASCIIColors.color_green}Current model : {ASCIIColors.color_reset}{self.config['model_name']}")
            print()

        # If there is a disclaimer, show it
        if self.personality.disclaimer != "":
            print(f"\n{ASCIIColors.color_red}Disclaimer")
            print(self.personality.disclaimer)
            print(f"{ASCIIColors.color_reset}")

        if show_welcome_message and self.personality.welcome_message:
            print(self.personality.name + ": ", end="")
            print(self.personality.welcome_message)

        self.menu.main_menu()

    def ask_override_file(self):
        user_input = input("Would you like to override the existing file? (Y/N): ")
        user_input = user_input.lower()
        if user_input == "y" or user_input == "yes":
            print("File will be overridden.")
            return True
        elif user_input == "n" or user_input == "no":
            print("File will not be overridden.")
            return False
        else:
            print("Invalid input. Please enter 'Y' or 'N'.")
            # Call the function again recursively to prompt the user for valid input
            return self.ask_override_file()

    def start_log(self, file_name):
        if Path(file_name).is_absolute():
            self.log_file_path = Path(file_name)
        else:
            home_dir = Path.home() / "Documents/lollms/logs"
            home_dir.mkdir(parents=True, exist_ok=True)
            self.log_file_path = home_dir / file_name
        if self.log_file_path.exists():
            if not self.ask_override_file():
                print("Canceled")
                return
        try:
            with open(self.log_file_path, "w") as f:
                self.header = f"""------------------------
Log file for lollms discussion
Participating personalities:
{self.config['personalities']}
------------------------
"""
                f.write(self.header)
            self.is_logging = True
            return True
        except Exception:
            return False

    def log(self, text, append=False):
        try:
            with open(self.log_file_path, "a" if append else "w") as f:
                if append:
                    f.write(text)
                else:
                    f.write(self.header + self.personality.personality_conditioning + text)
            return True
        except Exception:
            return False

    def stop_log(self):
        self.is_logging = False

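The `start_log`/`log` pair above writes a header when the log is created, then either appends raw text or rewrites header plus conditioning plus text. A standalone sketch of that write-once/append-later flow (temporary file and hypothetical header contents):

```python
import tempfile
from pathlib import Path

log_path = Path(tempfile.mkdtemp()) / "discussion.log"
header = "------------------------\nLog file for lollms discussion\n------------------------\n"

# start_log: create the file and write the header exactly once
with open(log_path, "w") as f:
    f.write(header)

# log(..., append=True): later entries are appended after the header
with open(log_path, "a") as f:
    f.write("user: hello\n")

content = log_path.read_text()
print(content.startswith("---"), "user: hello" in content)
```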
    def load_binding(self):
        if self.config.binding_name is None:
            print("No binding selected")
            print("Please select a valid binding or install a new one from a url")
            self.menu.select_binding()
            # cfg.download_model(url)
        else:
            try:
                self.binding_class = BindingBuilder().build_binding(self.lollms_paths.bindings_zoo_path, self.config)
            except Exception as ex:
                print(ex)
                print(f"Couldn't find binding. Please verify your configuration file at {self.cfg_path} or use the next menu to select a valid binding")
                self.menu.select_binding()

    def load_model(self):
        try:
            self.model = ModelBuilder(self.binding_class, self.config).get_model()
        except Exception as ex:
            ASCIIColors.error(f"Couldn't load model. Please verify your configuration file at {self.cfg_path} or use the next menu to select a valid model")
            ASCIIColors.error(f"Binding returned this exception : {ex}")
            ASCIIColors.error(f"{self.config.get_model_path_infos()}")
            print("Please select a valid model or install a new one from a url")
            self.menu.select_model()

    def load_personality(self):
        try:
            self.personality = PersonalityBuilder(self.lollms_paths, self.config, self.model).build_personality()
        except Exception as ex:
            ASCIIColors.error(f"Couldn't load personality. Please verify your configuration file at {self.cfg_path} or use the next menu to select a valid personality")
            ASCIIColors.error(f"Binding returned this exception : {ex}")
            ASCIIColors.error(f"{self.config.get_personality_path_infos()}")
            print("Please select a valid personality or install a new one from a url")
            self.menu.select_personality()
        self.cond_tk = self.personality.model.tokenize(self.personality.personality_conditioning)
        self.n_cond_tk = len(self.cond_tk)

    def reset_context(self):
        if self.personality.include_welcome_message_in_disucssion:
            full_discussion = (
                self.personality.ai_message_prefix +
                self.personality.welcome_message +
                self.personality.link_text
            )
        else:
            full_discussion = ""
        return full_discussion

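`reset_context` above seeds a fresh discussion with the AI message prefix, the welcome message, and the link text. A tiny sketch of that assembly with hypothetical prefix values (the real strings come from the personality's yaml):

```python
# Hypothetical personality fields for illustration only.
ai_message_prefix = "### Assistant:\n"
welcome_message = "Hello! How can I help you today?"
link_text = "\n"
include_welcome_message_in_discussion = True

if include_welcome_message_in_discussion:
    full_discussion = ai_message_prefix + welcome_message + link_text
else:
    full_discussion = ""

print(repr(full_discussion))
```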
    def safe_generate(self, full_discussion: str, n_predict=None, callback=None):
        """safe_generate

        Args:
            full_discussion (string): A prompt or a long discussion to use for generation
            n_predict (int, optional): Maximum number of tokens to generate. Defaults to the personality's model_n_predicts.
            callback (_type_, optional): A callback to call for each received token. Defaults to None.

        Returns:
            str: Model output
        """
        if n_predict is None:
            n_predict = self.personality.model_n_predicts
        tk = self.personality.model.tokenize(full_discussion)
        n_tokens = len(tk)
        fd = self.personality.model.detokenize(tk[-min(self.config.ctx_size - self.n_cond_tk, n_tokens):])
        self.bot_says = ""
        output = self.personality.model.generate(self.personality.personality_conditioning + fd, n_predict=n_predict, callback=callback)
        return output

    def remove_text_from_string(self, string, text_to_find):
        """
        Removes everything from the first occurrence of the specified text in the string (case-insensitive).

        Parameters:
            string (str): The original string.
            text_to_find (str): The text to find in the string.

        Returns:
            str: The updated string.
        """
        index = string.lower().find(text_to_find.lower())

        if index != -1:
            string = string[:index]

        return string

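A quick check of the truncation helper above, reimplemented standalone (the method carries no state, so it ports directly); this is the typical use: cutting a model's output at the point where it starts hallucinating the next speaker's turn:

```python
def remove_text_from_string(string, text_to_find):
    # Cut everything from the first case-insensitive occurrence onwards.
    index = string.lower().find(text_to_find.lower())
    if index != -1:
        string = string[:index]
    return string

print(remove_text_from_string("Answer text ### User: next turn", "### user:"))  # "Answer text "
```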
def main():
    # Create the argument parser
    parser = argparse.ArgumentParser(description='App Description')

    # Add the configuration path argument
    parser.add_argument('--configuration_path', default=None,
                        help='Path to the configuration file')

    parser.add_argument('--reset_personal_path', action='store_true', help='Reset the personal path')
    parser.add_argument('--reset_config', action='store_true', help='Reset the configurations')

    # Parse the command-line arguments
    args = parser.parse_args()

    if args.reset_personal_path:
        LollmsPaths.reset_configs()

    if args.reset_config:
        cfg_path = LollmsPaths.find_paths().personal_configuration_path / "local_config.yaml"
        try:
            cfg_path.unlink()
            ASCIIColors.success("LOLLMS configuration reset successfully")
        except Exception:
            ASCIIColors.error("Couldn't reset LOLLMS configuration")

    configuration_path = args.configuration_path

    Settings(configuration_path=configuration_path, show_commands_list=True)


if __name__ == "__main__":
    main()

requirements.txt  (new file)
@@ -0,0 +1,7 @@
tqdm
pyyaml
Pillow
flask
flask_socketio
flask-cors
simple-websocket

requirements_dev.txt  (new file)
@@ -0,0 +1,7 @@
tqdm
pyyaml
Pillow
flask
flask_socketio
flask-cors
simple-websocket

setup.py  (new file)
@@ -0,0 +1,52 @@
from pathlib import Path
from typing import Union

import setuptools

with open("README.md", "r") as fh:
    long_description = fh.read()


def read_requirements(path: Union[str, Path]):
    with open(path, "r") as file:
        return file.read().splitlines()


requirements = read_requirements("requirements.txt")
requirements_dev = read_requirements("requirements_dev.txt")


def get_all_files(path):
    path = Path(path)
    file_list = []
    excluded_names = {"__pycache__", "local_config.yaml", ".installed", ".git", ".gitignore"}
    for file_path in path.rglob('*'):
        if file_path.is_file():
            if file_path.name not in excluded_names and file_path.suffix != ".pyc":
                file_list.append("/".join(str(file_path).replace("\\", "/").split("/")[1:]))
    return file_list


setuptools.setup(
    name="lollms",
    version="1.1.60",
    author="Saifeddine ALOUI",
    author_email="aloui.saifeddine@gmail.com",
    description="A python library for AI personality definition",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/ParisNeo/lollms",
    packages=setuptools.find_packages(),
    include_package_data=True,
    install_requires=requirements,
    entry_points={
        'console_scripts': [
            'lollms-server = lollms.server:main',
            'lollms-console = lollms.console:main',
            'lollms-settings = lollms.settings:main',
        ],
    },
    extras_require={"dev": requirements_dev},
    classifiers=[
        "Programming Language :: Python :: 3.8",
        "License :: OSI Approved :: Apache Software License",
        "Operating System :: OS Independent",
    ],
)

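setup.py above registers three `console_scripts` entry points of the form `name = module:function`. A sketch of how such an entry-point string is resolved into a script name, a module path, and a callable attribute (parsing only; the lollms modules themselves are not imported here):

```python
# An entry point like 'lollms-settings = lollms.settings:main' splits into a
# script name, a module path, and an attribute that pip turns into a launcher.
entry = 'lollms-settings = lollms.settings:main'

script_name, target = [part.strip() for part in entry.split('=', 1)]
module_path, attr = target.split(':')

print(script_name, module_path, attr)
```

At install time pip generates a `lollms-settings` executable that imports `lollms.settings` and calls its `main()`.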
train/.gitignore  (new file, vendored)
@@ -0,0 +1,2 @@
output
!output/.keep

train/configs/deepspeed/ds_config.yaml  (new file)
@@ -0,0 +1,48 @@
{
  "train_batch_size": "auto",
  "gradient_accumulation_steps": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "fp16": {
    "enabled": "auto",
    "min_loss_scale": 1,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "initial_scale_power": 32
  },
  "bf16": {
    "enabled": "auto"
  },
  "gradient_clipping": 1,
  "zero_optimization": {
    "stage": 2,
    "offload_param": {
      "device": "none"
    },
    "offload_optimizer": {
      "device": "none"
    },
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "contiguous_gradients": true
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": [
        0.9,
        0.999
      ],
      "eps": 1e-08
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": 0,
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto",
      "warmup_type": "linear"
    }
  }
}

train/configs/train/finetune.yaml  (new file)
@@ -0,0 +1,29 @@
# model/tokenizer
model_name: # add model here
tokenizer_name: # add model here
gradient_checkpointing: true
save_name: # CHANGE

# dataset
streaming: false
num_proc: 64
dataset_path: # update
max_length: 1024
batch_size: 32

# train dynamics
lr: 5.0e-5
eval_every: 800
eval_steps: 100
save_every: 800
output_dir: # CHANGE
checkpoint: null
lora: false
warmup_steps: 100
num_epochs: 2

# logging
wandb: true
wandb_entity: # update
wandb_project_name: # update
seed: 42

train/configs/train/finetune_lora.yaml  (new file)
@@ -0,0 +1,31 @@
# model/tokenizer
model_name: # update
tokenizer_name: # update
gradient_checkpointing: false
save_name: # CHANGE

# dataset
streaming: false
num_proc: 64
dataset_path: # CHANGE
max_length: 1024
batch_size: 4

# train dynamics
lr: 5.0e-5
min_lr: 0
weight_decay: 0.0
eval_every: 2000
eval_steps: 100
save_every: 2000
output_dir: # CHANGE
checkpoint: null
lora: true
warmup_steps: 100
num_epochs: 2

# logging
wandb: true
wandb_entity: # update
wandb_project_name: # update
seed: 42

train/configs/train/finetune_lora_ airoboros-7b-gpt4.yaml  (new file)
@@ -0,0 +1,31 @@
# model/tokenizer
model_name: jondurbin/airoboros-7b-gpt4 # update
tokenizer_name: jondurbin/airoboros-7b-gpt4 # update
gradient_checkpointing: false
save_name: parisneo-7b_gpt42_lora # CHANGE

# dataset
streaming: false
num_proc: 64
dataset_path: # CHANGE
max_length: 1024
batch_size: 4

# train dynamics
lr: 5.0e-5
min_lr: 0
weight_decay: 0.0
eval_every: 2000
eval_steps: 100
save_every: 2000
output_dir: output # CHANGE
checkpoint: null
lora: true
warmup_steps: 100
num_epochs: 2

# logging
wandb: false # update if you want to use weights and biases
wandb_entity: # update
wandb_project_name: # update
seed: 42

train/requirements.txt  (new file)
@@ -0,0 +1,15 @@
accelerate
datasets
torchmetrics
evaluate
transformers>=4.28.0
wandb
pip
peft
nodelist-inflator
deepspeed
sentencepiece
jsonlines
nomic
scikit-learn
matplotlib

train/train.py  (new file)
@@ -0,0 +1,233 @@
import os
from transformers import AutoModelForCausalLM, AutoTokenizer, get_scheduler, LlamaForCausalLM
import torch
from torch.optim import AdamW
from argparse import ArgumentParser
from read import read_config
from accelerate import Accelerator
from accelerate.utils import DummyScheduler, DummyOptim, set_seed
from peft import get_peft_model, LoraConfig, TaskType
from data import load_data
from torchmetrics import MeanMetric
from tqdm import tqdm
import wandb

torch.backends.cuda.matmul.allow_tf32 = True

def format_metrics(metrics, split, prefix=""):
    log = f"[{split}]" + prefix
    log += " ".join([f"{key}: {value:.4f}" for key, value in metrics.items()])

    return log


def evaluate(model, val_dataloader):
    model.eval()
    val_loss = MeanMetric(nan_strategy="error").to(model.device)

    with torch.no_grad():
        for batch in tqdm(val_dataloader):
            loss = model(**batch).loss

            # `accelerator` is resolved from module scope (created in __main__)
            loss_values = accelerator.gather_for_metrics({"loss": loss.detach()})

            val_loss.update(loss_values["loss"])

    return val_loss

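`evaluate` above averages per-batch losses with torchmetrics' `MeanMetric`. The same running mean can be sketched in plain Python (no NaN handling, which the real metric enforces via `nan_strategy="error"`):

```python
class RunningMean:
    # Minimal stand-in for torchmetrics.MeanMetric: update() accumulates,
    # compute() returns the mean, reset() clears the state.
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value):
        self.total += value
        self.count += 1

    def compute(self):
        return self.total / self.count

    def reset(self):
        self.total, self.count = 0.0, 0

m = RunningMean()
for loss in [2.0, 4.0, 6.0]:
    m.update(loss)
print(m.compute())  # 4.0
```

In the distributed setting, `accelerator.gather_for_metrics` first collects each process's loss so every rank updates the metric with the full batch's values.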
def train(accelerator, config):
    set_seed(config['seed'])

    accelerator.print(config)
    accelerator.print(f"Using {accelerator.num_processes} GPUs")

    tokenizer = AutoTokenizer.from_pretrained(config['tokenizer_name'], model_max_length=config['max_length'])
    # if no pad token, set it to eos
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    with accelerator.main_process_first():
        train_dataloader, val_dataloader = load_data(config, tokenizer)

    checkpoint = config["gradient_checkpointing"]
    model = AutoModelForCausalLM.from_pretrained(config["model_name"],
                                                 use_cache=False if checkpoint else True,
                                                 trust_remote_code=True)
    if checkpoint:
        model.gradient_checkpointing_enable()

    if config["lora"]:
        peft_config = LoraConfig(
            # should r be configurable?
            task_type=TaskType.CAUSAL_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
        )
        model = get_peft_model(model, peft_config)
        model.print_trainable_parameters()

    optimizer_cls = (
        AdamW
        if accelerator.state.deepspeed_plugin is None
        or "optimizer" not in accelerator.state.deepspeed_plugin.deepspeed_config
        else DummyOptim
    )

    # karpathy doesn't decay embedding, maybe we should exclude
    # https://github.com/karpathy/minGPT/commit/bbbdac74fa9b2e55574d70056163ffbae42310c1#diff-2075fa9c224b395be5bda85544dd36572b59c76c54562819eadadbf268602834R157s
    optimizer = optimizer_cls(model.parameters(), lr=config["lr"], weight_decay=config["weight_decay"])

    # default to no gradient accumulation when DeepSpeed doesn't provide a value
    gradient_accumulation_steps = 1
    if accelerator.state.deepspeed_plugin is not None:
        gradient_accumulation_steps = accelerator.state.deepspeed_plugin.deepspeed_config[
            "gradient_accumulation_steps"
        ]

    # decay to min_lr instead of 0
    lr_ratio = config["min_lr"] / config["lr"]
    accelerator.print(f"Len of train_dataloader: {len(train_dataloader)}")
    total_num_steps = (len(train_dataloader) / gradient_accumulation_steps) * config["num_epochs"]
    # instead of decaying to zero, decay to ratio of min_lr / lr
    total_num_steps += int(total_num_steps * lr_ratio) + config["warmup_steps"]
    accelerator.print(f"Total training steps: {total_num_steps}")

    # Creates Dummy Scheduler if `scheduler` was specified in the config file else creates `args.lr_scheduler_type` Scheduler
    if (
        accelerator.state.deepspeed_plugin is None
        or "scheduler" not in accelerator.state.deepspeed_plugin.deepspeed_config
    ):
        scheduler = get_scheduler(
            name="cosine",
            optimizer=optimizer,
            num_warmup_steps=config["warmup_steps"] * accelerator.num_processes,
            num_training_steps=total_num_steps,
        )
    else:
        scheduler = DummyScheduler(
            optimizer, total_num_steps=config["warmup_steps"], warmup_num_steps=config["warmup_steps"]
        )

    model, optimizer, train_dataloader, val_dataloader, scheduler = accelerator.prepare(
        model, optimizer, train_dataloader, val_dataloader, scheduler
    )

    # setup for saving training states in case of preemption
    accelerator.register_for_checkpointing(scheduler)

    if config["checkpoint"]:
        accelerator.load_state(config["checkpoint"])
        accelerator.print(f"Resumed from checkpoint: {config['checkpoint']}")
        path = os.path.basename(config["checkpoint"])
        training_difference = os.path.splitext(path)[0]
        resume_step = int(training_difference.replace("step_", ""))
        accelerator.skip_first_batches(train_dataloader, resume_step)
        accelerator.print(f"Resuming from step {resume_step}")

    # log gradients
    if accelerator.is_main_process and config["wandb"]:
        wandb.watch(model, log_freq=config["log_grads_every"], log="all")

    for epoch in range(config["num_epochs"]):
        train_loss = MeanMetric(nan_strategy="error").to(model.device)
        for step, batch in enumerate(tqdm(train_dataloader)):
            model.train()
            outputs = model(**batch)
            loss = outputs.loss

            # gather loss before backprop in case of gradient accumulation
            loss_values = accelerator.gather_for_metrics({"loss": loss.detach().float()})
            train_loss.update(loss_values["loss"])

            loss = loss / gradient_accumulation_steps
            accelerator.backward(loss)

            # log LR in case something weird happens
            if step > 0 and step % (config["eval_every"] // 10) == 0:
                if config["wandb"]:
                    curr_step = step + epoch * len(train_dataloader)
                    accelerator.log({"lr": scheduler.get_last_lr()[0]}, step=curr_step)

            if (step + 1) % gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
                optimizer.step()
                scheduler.step()
                optimizer.zero_grad()

            if step > 0 and step % config["save_every"] == 0:
                curr_step = step + epoch * len(train_dataloader)
                accelerator.save_state(f"{config['output_dir']}/step_{curr_step}")

            if step > 0 and (step % config["eval_every"] == 0 or step == len(train_dataloader) - 1):
                val_loss = evaluate(model, val_dataloader)

                log_train = {
                    "train_loss": train_loss.compute()
                }
                log_val = {
                    "val_loss": val_loss.compute()
                }

                if config["wandb"]:
                    curr_step = step + epoch * len(train_dataloader)
                    accelerator.log({**log_train, **log_val}, step=curr_step)

                accelerator.print(f"Current LR: {scheduler.get_last_lr()[0]}")
                accelerator.print(format_metrics(log_train, "train", f" step {step} "))
                accelerator.print(format_metrics(log_val, "val", f" step {step} "))

                train_loss.reset()

        accelerator.print(f"Epoch {epoch} finished")
        accelerator.print("Pushing to HF hub")
        accelerator.wait_for_everyone()
        unwrapped_model = accelerator.unwrap_model(model)
        try:
            if accelerator.is_main_process:
                unwrapped_model.push_to_hub(config["save_name"] + f"-epoch_{epoch}", private=True)

        except Exception as e:
            accelerator.print(e)
            accelerator.print("Failed to push to hub")

        unwrapped_model.save_pretrained(
            f"{config['output_dir']}/epoch_{epoch}",
            is_main_process=accelerator.is_main_process,
            save_function=accelerator.save,
            state_dict=accelerator.get_state_dict(model),
        )

    accelerator.wait_for_everyone()
    unwrapped_model = accelerator.unwrap_model(model)
    unwrapped_model.save_pretrained(
        f"{config['output_dir']}/final",
        is_main_process=accelerator.is_main_process,
        save_function=accelerator.save,
        state_dict=accelerator.get_state_dict(model),
    )

    accelerator.end_training()

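The training loop in train() steps the optimizer every `gradient_accumulation_steps` batches, plus once at the very last batch so no accumulated gradients are dropped. Checking which step indices fire under that condition:

```python
def optimizer_step_indices(num_batches, gradient_accumulation_steps):
    # Mirrors the loop's condition:
    # (step + 1) % gradient_accumulation_steps == 0 or step == num_batches - 1
    return [
        step for step in range(num_batches)
        if (step + 1) % gradient_accumulation_steps == 0 or step == num_batches - 1
    ]

print(optimizer_step_indices(10, 4))  # [3, 7, 9]
```

The trailing step 9 is the end-of-epoch flush: only two gradients were accumulated there, which slightly changes the effective batch size of the final update.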
if __name__ == "__main__":
    # parse arguments by reading in a config
    parser = ArgumentParser()
    parser.add_argument("--config", type=str, default="config.yaml")

    args = parser.parse_args()

    config = read_config(args.config)

    if config["wandb"]:
        accelerator = Accelerator(log_with="wandb")
        accelerator.init_trackers(
            project_name=config["wandb_project_name"],
            config=config,
            init_kwargs={"wandb": {"entity": config["wandb_entity"]}},
        )
    else:
        accelerator = Accelerator()

    train(accelerator, config=config)
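`total_num_steps` in train() stretches the scheduler so the cosine decay effectively stops at `min_lr` rather than zero. Working that arithmetic through with hypothetical numbers (values below are illustrative, not from any config in this repo):

```python
# Hypothetical values: 1000 batches, accumulation of 4, 2 epochs,
# lr 5e-5 decaying to min_lr 1e-5, 100 warmup steps.
len_train_dataloader = 1000
gradient_accumulation_steps = 4
num_epochs = 2
lr, min_lr, warmup_steps = 5.0e-5, 1.0e-5, 100

lr_ratio = min_lr / lr                                                              # 0.2
total_num_steps = (len_train_dataloader / gradient_accumulation_steps) * num_epochs  # 500.0
# extend the schedule so decay stops at the min_lr / lr point of the curve
total_num_steps += int(total_num_steps * lr_ratio) + warmup_steps                    # 500 + 100 + 100
print(total_num_steps)  # 700.0
```

Since the loop never runs those extra 200 steps, training ends partway down the cosine curve, right around `min_lr`.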