mirror of https://github.com/ParisNeo/lollms-webui.git, synced 2024-12-21 13:17:47 +00:00
# Building a LoLLMs Binding - Developer Guide

## Introduction

LoLLMs (Lord of Large Language Models) is a framework for interfacing with various language models. A binding is a connector that allows LoLLMs to interact with a specific model or API.

## Table of Contents

1. Basic Structure
2. Essential Components
3. Binding Types
4. Step-by-Step Guide
5. Advanced Features
6. Best Practices
## 1. Basic Structure

A LoLLMs binding consists of the following files:

```
binding_name/
├── __init__.py
├── binding.py
├── config.yaml
├── logo.png
└── README.md
```
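The layout above can be scaffolded from a shell; a minimal sketch (`binding_name` is a placeholder for your binding's folder name):

```shell
# Create the binding skeleton; replace binding_name with your binding's name.
mkdir -p binding_name
touch binding_name/__init__.py \
      binding_name/binding.py \
      binding_name/config.yaml \
      binding_name/README.md
# logo.png is the binding's icon; copy a real image into place.
ls binding_name
```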
## 2. Essential Components

### 2.1 Base Class

All bindings must inherit from `LLMBinding`:

```python
from lollms.binding import LLMBinding, LOLLMSConfig, BindingType
from lollms.paths import LollmsPaths
from lollms.config import BaseConfig, TypedConfig, ConfigTemplate, InstallOption

class YourBinding(LLMBinding):
    def __init__(self,
                 config: LOLLMSConfig,
                 lollms_paths: LollmsPaths = None,
                 installation_option: InstallOption = InstallOption.INSTALL_IF_NECESSARY,
                 lollmsCom=None) -> None:
        # Your initialization code
        ...
```
### 2.2 Configuration

Define your binding's configuration using `TypedConfig`:

```python
binding_config = TypedConfig(
    ConfigTemplate([
        {"name": "api_key", "type": "str", "value": "", "help": "API key"},
        {"name": "temperature", "type": "float", "value": 0.7, "min": 0.0, "max": 1.0},
        # Add more configuration parameters
    ]),
    BaseConfig(config={})
)
```
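`TypedConfig` exposes each template entry as an attribute on the binding's config (e.g. `self.binding_config.api_key`). As a rough, self-contained illustration of that behavior, here is a minimal stand-in — `MiniTypedConfig` is invented for this sketch and is not the real lollms class:

```python
# Minimal stand-in mimicking the attribute-style access a TypedConfig-like
# object gives a binding. The real lollms TypedConfig is much richer.
class MiniTypedConfig:
    def __init__(self, template):
        # template: list of {"name", "type", "value", ...} dicts
        self._entries = {e["name"]: e for e in template}

    def __getattr__(self, name):
        try:
            return self._entries[name]["value"]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name.startswith("_"):
            super().__setattr__(name, value)
        else:
            self._entries[name]["value"] = value

cfg = MiniTypedConfig([
    {"name": "api_key", "type": "str", "value": "", "help": "API key"},
    {"name": "temperature", "type": "float", "value": 0.7, "min": 0.0, "max": 1.0},
])
print(cfg.temperature)  # 0.7
cfg.api_key = "sk-test"
print(cfg.api_key)      # sk-test
```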
## 3. Binding Types

LoLLMs supports different binding types:

```python
class BindingType:
    TEXT = "text"              # Text only
    TEXT_IMAGE = "text_image"  # Text + image input
    MULTIMODAL = "multimodal"  # Multiple input/output modalities
```
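Code that drives a binding can branch on this type, for instance to decide whether image attachments should be routed to `generate_with_images`. A small illustrative sketch — the `supports_images` helper is invented here, not part of the lollms API:

```python
# String values mirror the BindingType constants listed above.
class BindingType:
    TEXT = "text"              # Text only
    TEXT_IMAGE = "text_image"  # Text + image input
    MULTIMODAL = "multimodal"  # Multiple input/output modalities

def supports_images(binding_type: str) -> bool:
    """True if the binding can accept image inputs."""
    return binding_type in (BindingType.TEXT_IMAGE, BindingType.MULTIMODAL)

print(supports_images(BindingType.TEXT))        # False
print(supports_images(BindingType.TEXT_IMAGE))  # True
```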
## 4. Step-by-Step Guide

### 4.1 Create Basic Structure

```python
class YourBinding(LLMBinding):
    def __init__(self, config, lollms_paths=None, installation_option=InstallOption.INSTALL_IF_NECESSARY, lollmsCom=None):
        binding_config = TypedConfig(
            ConfigTemplate([
                # Your config parameters
            ]),
            BaseConfig(config={})
        )

        super().__init__(
            Path(__file__).parent,
            lollms_paths,
            config,
            binding_config,
            installation_option,
            supported_file_extensions=[''],
            lollmsCom=lollmsCom
        )
```
### 4.2 Implement Required Methods

```python
def build_model(self, model_name=None):
    """Build or initialize the model"""
    super().build_model(model_name)
    # Your model initialization code
    return self

def generate(self,
             prompt: str,
             n_predict: int = 128,
             callback: Callable[[str], None] = None,
             verbose: bool = False,
             **gpt_params) -> str:
    """Generate text from prompt"""
    # Your generation code

def tokenize(self, prompt: str):
    """Tokenize text"""
    # Your tokenization code

def detokenize(self, tokens_list: list):
    """Detokenize tokens"""
    # Your detokenization code

def list_models(self):
    """List available models"""
    # Return a list of model names

def get_available_models(self, app: LoLLMsCom = None):
    """Get detailed model information"""
    # Return a list of model details
```
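For remote API bindings with no local tokenizer, a common fallback is a rough approximation. This self-contained sketch uses whitespace splitting (real bindings should prefer the model's own tokenizer when one is available):

```python
def tokenize(prompt: str) -> list:
    """Rough token approximation: whitespace splitting."""
    return prompt.split()

def detokenize(tokens_list: list) -> str:
    """Rejoin whitespace-split tokens."""
    return " ".join(tokens_list)

text = "Building a LoLLMs binding"
tokens = tokenize(text)
print(tokens)                      # ['Building', 'a', 'LoLLMs', 'binding']
print(detokenize(tokens) == text)  # True
```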
### 4.3 Installation Support

```python
def install(self):
    """Install required packages"""
    super().install()
    PackageManager.install_package("your-required-package")
```
## 5. Advanced Features

### 5.1 Image Support

For bindings that support images:

```python
def generate_with_images(self,
                         prompt: str,
                         images: list = [],
                         n_predict: int = 128,
                         callback: Callable[[str, int, dict], bool] = None,
                         verbose: bool = False,
                         **gpt_params):
    """Generate text from prompt and images"""
    # Your image processing code
```
### 5.2 Embedding Support

For models that support embeddings:

```python
def embed(self, text):
    """Compute text embedding"""
    # Your embedding code
    return embedding_vector
```
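When the backing API offers no embedding endpoint, `embed` can still return something deterministic for testing. Below is a hedged, self-contained stand-in (a hashed bag-of-words vector, not a real semantic embedding); real bindings should call the model's embedding endpoint instead:

```python
import hashlib
import math

def embed(text: str, dim: int = 8) -> list:
    """Deterministic hashed bag-of-words vector, L2-normalized.

    A placeholder for a real model embedding, useful only for plumbing tests.
    """
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.sha256(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0  # avoid dividing by zero
    return [v / norm for v in vec]

v = embed("hello world")
print(len(v))  # 8
```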
## 6. Best Practices

### 6.1 Error Handling

Always implement proper error handling:

```python
def generate(self, prompt, **kwargs):
    try:
        # Your generation code
        ...
    except Exception as ex:
        trace_exception(ex)
        self.error(ex)
        return ""
```
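To see the pattern end to end without lollms installed, here is a runnable demo with stand-ins for `trace_exception` and `self.error` (both are lollms helpers in the real code):

```python
import traceback

def trace_exception(ex):
    # Stand-in for lollms' trace_exception helper: print the traceback.
    traceback.print_exception(type(ex), ex, ex.__traceback__)

class DemoBinding:
    def error(self, msg):
        # Stand-in for the binding's error() status method.
        print(f"[ERROR] {msg}")

    def generate(self, prompt, **kwargs):
        try:
            # Simulated failure inside the generation code
            raise RuntimeError("model backend unavailable")
        except Exception as ex:
            trace_exception(ex)
            self.error(ex)
            return ""

result = DemoBinding().generate("hi")
print(repr(result))  # '' — the caller gets an empty string instead of a crash
```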
### 6.2 Configuration Validation

Validate configuration in `settings_updated`:

```python
def settings_updated(self):
    if not self.binding_config.api_key:
        self.error("API key not set!")
    else:
        self.build_model()
```
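A self-contained sketch of this validation flow; `DemoBinding` and its fields are stand-ins for the real binding and its `binding_config`:

```python
class DemoBinding:
    def __init__(self):
        self.api_key = ""          # stands in for self.binding_config.api_key
        self.model_built = False

    def error(self, msg):
        print(f"[ERROR] {msg}")

    def build_model(self):
        self.model_built = True

    def settings_updated(self):
        # Refuse to (re)build the model until the required key is present.
        if not self.api_key:
            self.error("API key not set!")
        else:
            self.build_model()

b = DemoBinding()
b.settings_updated()      # rejected: prints the error, model stays unbuilt
b.api_key = "sk-demo"
b.settings_updated()      # accepted: the model is (re)built
print(b.model_built)      # True
```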
### 6.3 Documentation

Always include:

- README.md with usage instructions
- Docstrings for methods
- Configuration parameter descriptions
- Requirements and dependencies
### 6.4 Status Updates

Use the provided methods for status updates:

```python
self.info("Information message")
self.warning("Warning message")
self.error("Error message")
self.success("Success message")
```
## Example config.yaml

```yaml
name: YourBinding
author: Your Name
version: 1.0.0
description: Description of your binding
url: https://github.com/yourusername/your-binding
license: Apache 2.0
```
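LoLLMs loads this metadata with a YAML parser; since the file is flat key/value pairs, a quick dependency-free peek looks like this (production code should use a real YAML parser such as PyYAML):

```python
# Flat key/value parse of the example metadata above.
sample = """\
name: YourBinding
author: Your Name
version: 1.0.0
description: Description of your binding
url: https://github.com/yourusername/your-binding
license: Apache 2.0
"""

metadata = {}
for line in sample.splitlines():
    key, _, value = line.partition(": ")  # split on the first ": "
    metadata[key] = value

print(metadata["name"], metadata["version"])  # YourBinding 1.0.0
```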
# Callbacks in LoLLMs Bindings

## Introduction

Callbacks are crucial in LoLLMs because they enable streaming text generation, letting the UI update in real time and giving the caller control over the generation process.

## Callback Types

```python
from lollms.types import MSG_OPERATION_TYPE

class MSG_OPERATION_TYPE(Enum):
    # Conditioning
    MSG_OPERATION_TYPE_ADD_CHUNK = 0    # Add a chunk to the current message
    MSG_OPERATION_TYPE_SET_CONTENT = 1  # Set the content of the current message
    MSG_OPERATION_TYPE_SET_CONTENT_INVISIBLE_TO_AI = 2    # Set the content of the current message, invisible to the AI
    MSG_OPERATION_TYPE_SET_CONTENT_INVISIBLE_TO_USER = 3  # Set the content of the current message, invisible to the user

    # Information
    MSG_OPERATION_TYPE_EXCEPTION = 4  # An exception occurred
    MSG_OPERATION_TYPE_WARNING = 5    # A warning occurred
    MSG_OPERATION_TYPE_INFO = 6       # Information to be shown to the user

    # Steps
    MSG_OPERATION_TYPE_STEP = 7           # An instant step (a step that needs no execution time)
    MSG_OPERATION_TYPE_STEP_START = 8     # A step has started (the text explains the step performed by the personality)
    MSG_OPERATION_TYPE_STEP_PROGRESS = 9  # Progress value (the text contains a percentage that the receiver can parse)
    MSG_OPERATION_TYPE_STEP_END_SUCCESS = 10  # A step has finished successfully (the text explains the step)
    MSG_OPERATION_TYPE_STEP_END_FAILURE = 11  # A step has finished with a failure (the text explains the step)

    # Extra
    MSG_OPERATION_TYPE_JSON_INFOS = 12  # JSON output summarizing the generation process, used by personalities like chain of thoughts and tree of thoughts
    MSG_OPERATION_TYPE_REF = 13         # References (in the form [text](path))
    MSG_OPERATION_TYPE_CODE = 14        # JavaScript code to execute
    MSG_OPERATION_TYPE_UI = 15          # A Vue.js component to show

    # Commands
    MSG_OPERATION_TYPE_NEW_MESSAGE = 16       # A new message
    MSG_OPERATION_TYPE_FINISHED_MESSAGE = 17  # End of the current message
```
## Implementation Examples

### 1. Basic Callback Usage

```python
def generate(self,
             prompt: str,
             n_predict: int = 128,
             callback: Callable[[str, MSG_OPERATION_TYPE], bool] = None,
             verbose: bool = False,
             **gpt_params) -> str:
    """
    Generate text with callback support

    Args:
        prompt: Input text
        n_predict: Max tokens to generate
        callback: Function called for each generated chunk
        verbose: Enable verbose output
    """
    output = ""
    try:
        # Example streaming response
        for chunk in model.stream_generate(prompt):
            if callback is not None:
                # Call the callback with the chunk and operation type.
                # If the callback returns False, stop generation.
                if not callback(chunk, MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK):
                    break
            output += chunk
    except Exception as ex:
        trace_exception(ex)
        self.error(ex)

    return output
```
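The contract above can be exercised without a real model. In this runnable sketch, `fake_stream_generate` stands in for `model.stream_generate`, and the enum is trimmed to the one member the demo needs:

```python
from enum import Enum

class MSG_OPERATION_TYPE(Enum):
    MSG_OPERATION_TYPE_ADD_CHUNK = 0

def fake_stream_generate(prompt):
    # Stand-in for model.stream_generate: yields fixed chunks.
    yield from ["Hello", " ", "world", "!", " Goodbye."]

def generate(prompt, callback=None):
    output = ""
    for chunk in fake_stream_generate(prompt):
        if callback is not None:
            # If the callback returns False, stop generation early.
            if not callback(chunk, MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK):
                break
        output += chunk
    return output

# A callback that stops generation after three chunks:
received = []
def stop_after_three(chunk, op_type):
    received.append(chunk)
    return len(received) < 3

print(generate("hi"))                    # Hello world! Goodbye.
print(generate("hi", stop_after_three))  # truncated output
```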
### 2. Advanced Callback Usage

```python
def generate_with_images(self,
                         prompt: str,
                         images: list = [],
                         n_predict: int = 128,
                         callback: Callable[[str, MSG_OPERATION_TYPE, dict], bool] = None,
                         verbose: bool = False,
                         **gpt_params):
    """Generate text with images and advanced callback usage"""
    output = ""
    try:
        # Process the response stream
        for chunk in model.stream_response():
            # Add a new chunk
            if chunk.type == 'text':
                if callback is not None:
                    # Send the chunk with metadata
                    metadata = {
                        'token_count': len(self.tokenize(chunk.text)),
                        'finish_reason': chunk.finish_reason
                    }
                    if not callback(chunk.text,
                                    MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK,
                                    metadata):
                        break
                output += chunk.text

            # Replace the last chunk (e.g., for word corrections)
            elif chunk.type == 'correction':
                if callback is not None:
                    if not callback(chunk.text,
                                    MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_REPLACE_LAST):
                        break
                output = output[:-len(chunk.previous)] + chunk.text

            # Add a new line
            elif chunk.type == 'newline':
                if callback is not None:
                    if not callback("\n",
                                    MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_NEW_LINE):
                        break
                output += "\n"
    except Exception as ex:
        trace_exception(ex)
        self.error(ex)

    return output
```
### 3. Callback with Progress Updates

```python
def generate(self, prompt: str, n_predict: int = 128, callback=None, **kwargs):
    output = ""
    tokens_generated = 0

    try:
        for chunk in model.stream_generate(prompt):
            tokens_generated += len(self.tokenize(chunk))

            if callback is not None:
                # Include progress information
                metadata = {
                    'progress': tokens_generated / n_predict,
                    'tokens_generated': tokens_generated,
                    'max_tokens': n_predict
                }

                if not callback(chunk,
                                MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK,
                                metadata):
                    break

            output += chunk

            # Check the token limit
            if tokens_generated >= n_predict:
                break
    except Exception as ex:
        trace_exception(ex)
        self.error(ex)

    return output
```
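A runnable version of the progress pattern, with stand-ins for the model stream (fixed chunks) and the tokenizer (whitespace splitting):

```python
def tokenize(text):
    # Whitespace stand-in for the binding's tokenizer.
    return text.split()

def generate(prompt, n_predict=4, callback=None):
    output = ""
    tokens_generated = 0
    for chunk in ["alpha ", "beta ", "gamma ", "delta ", "epsilon "]:
        tokens_generated += len(tokenize(chunk))
        if callback is not None:
            metadata = {
                "progress": tokens_generated / n_predict,
                "tokens_generated": tokens_generated,
                "max_tokens": n_predict,
            }
            # "ADD_CHUNK" is a placeholder for
            # MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK.
            if not callback(chunk, "ADD_CHUNK", metadata):
                break
        output += chunk
        if tokens_generated >= n_predict:
            break  # respect the token limit
    return output

progress_log = []
def progress_callback(chunk, op_type, metadata=None):
    if metadata and "progress" in metadata:
        progress_log.append(metadata["progress"])
    return True

generate("hi", callback=progress_callback)
print(progress_log)  # [0.25, 0.5, 0.75, 1.0]
```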
## Best Practices for Callbacks

1. **Always Check the Callback Return Value**
```python
if callback is not None:
    if not callback(chunk, MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK):
        break  # Stop generation if the callback returns False
```
2. **Handle Different Operation Types**
```python
# Add new content
callback(chunk, MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK)

# Replace the last chunk
callback(corrected_text, MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_REPLACE_LAST)

# Add a new line
callback("\n", MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_NEW_LINE)
```
3. **Include Useful Metadata**
```python
metadata = {
    'progress': current_tokens / max_tokens,
    'temperature': temperature,
    'token_count': token_count,
    'finish_reason': finish_reason
}
callback(chunk, operation_type, metadata)
```
4. **Error Handling in Callbacks**
```python
try:
    if callback is not None:
        if not callback(chunk, MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK):
            break
except Exception as ex:
    self.error(f"Callback error: {ex}")
    # Continue or break depending on your needs
```
5. **Respect Token Limits**
```python
token_count = len(self.tokenize(output))
if token_count >= n_predict:
    if callback is not None:
        callback("", MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK,
                 {'finish_reason': 'length'})
    break
```
## Common Use Cases

1. **Progress Display**
```python
def progress_callback(chunk, op_type, metadata=None):
    if metadata and 'progress' in metadata:
        print(f"Progress: {metadata['progress']*100:.2f}%")
    return True
```
2. **Token Counting**
```python
def token_callback(chunk, op_type, metadata=None):
    if metadata and 'token_count' in metadata:
        print(f"Tokens generated: {metadata['token_count']}")
    return True
```
3. **UI Updates**
```python
def ui_callback(chunk, op_type, metadata=None):
    if op_type == MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_ADD_CHUNK:
        update_ui_text(chunk)
    elif op_type == MSG_OPERATION_TYPE.MSG_OPERATION_TYPE_NEW_LINE:
        update_ui_newline()
    return True
```
Remember that callbacks are essential for:

- Real-time text streaming
- Progress monitoring
- User interaction
- Generation control
- UI updates

Using callbacks effectively makes your binding more interactive and user-friendly.
## Conclusion

Building a LoLLMs binding requires:

1. Implementing the base interface
2. Proper configuration management
3. Error handling
4. Documentation
5. Following best practices

For more examples, check the official LoLLMs bindings repository.

Remember to test your binding thoroughly and maintain compatibility with the LoLLMs framework's conventions and interfaces.
## Support

For help or questions about binding development:

- Visit the LoLLMs GitHub repository
- Join the community discussion
- Check existing bindings for examples

Happy binding development!