# LoLLMS Function Calls Documentation

## Overview

The LoLLMS Function Calls system allows Large Language Models (LLMs) to interact with external functions and tools. This system enables the AI to perform tasks beyond text generation, such as retrieving real-time information, performing calculations, or interacting with external APIs.

## Key Concepts

- Function Zoo: A directory containing all available functions
- Mounted Functions: Functions currently enabled for use by the LLM
- Function Call Format: A special JSON format for invoking functions
- Processing Types:
  - Direct Execution (`needs_processing=False`)
  - AI Interpretation (`needs_processing=True`)
## System Architecture

```mermaid
graph TD
    A[User Prompt] --> B[Prompt Enhancement]
    B --> C[LLM Processing]
    C --> D{Function Call Detected?}
    D -->|Yes| E[Function Execution]
    D -->|No| F[Direct Response]
    E --> G{needs_processing?}
    G -->|Yes| H[AI Interpretation]
    G -->|No| I[Direct Output]
    H --> J[Final Response]
    I --> J
    F --> J
```
## Function Call Lifecycle

1. Prompt Enhancement: The system adds function descriptions to the user prompt
2. LLM Processing: The AI generates a response, potentially including function calls
3. Function Detection: The system extracts and validates function calls
4. Execution: Selected functions are executed with the provided parameters
5. Result Processing: Output is either shown directly or processed by the AI
6. Final Response: The combined output is presented to the user
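The detection-and-execution steps of this lifecycle can be sketched as a minimal dispatcher. This is a simplified illustration, not the actual lollms-webui implementation: the registry, the `get_time` entry, and the `process_llm_output` function are all hypothetical, and only the `<lollms_function_call>` tag format comes from this document.

```python
import json
import re

# Hypothetical registry; in LoLLMS the mounted functions come from the function zoo.
FUNCTION_REGISTRY = {
    "get_time": lambda **kwargs: {"time": "12:00", "status": "success"},
}

CALL_PATTERN = re.compile(
    r"<lollms_function_call>\s*(\{.*?\})\s*</lollms_function_call>",
    re.DOTALL,
)

def process_llm_output(text):
    """Extract every function call block from LLM output, run it, collect results."""
    results = []
    for match in CALL_PATTERN.finditer(text):
        call = json.loads(match.group(1))
        func = FUNCTION_REGISTRY.get(call["function_name"])
        if func is None:
            results.append({"status": "error",
                            "message": f"unknown function {call['function_name']}"})
            continue
        output = func(**call.get("parameters", {}))
        # With needs_processing=True, `output` would be handed back to the LLM
        # for interpretation before reaching the user (AI Interpretation step).
        results.append(output)
    return results
```

For example, feeding the string `<lollms_function_call>{"function_name": "get_time", "parameters": {}, "needs_processing": false}</lollms_function_call>` through `process_llm_output` executes the registered callable and returns its result dictionary.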
## Creating New Functions

### Step 1: Create Function Directory

- Navigate to the functions zoo:

  ```bash
  cd {lollms_functions_zoo_path}
  ```

- Create a new directory:

  ```bash
  mkdir my_function
  cd my_function
  ```
### Step 2: Create Configuration File (config.yaml)

```yaml
name: my_function
description: Brief description of what the function does
parameters:
  param1:
    type: string
    description: Description of parameter
  param2:
    type: number
    description: Another parameter
returns:
  result1:
    type: string
    description: Description of return value
examples:
  - "Example usage 1"
  - "Example usage 2"
needs_processing: true # or false
author: Your Name
version: 1.0.0
```
### Step 3: Implement Function Logic (function.py)

```python
class MyFunction:
    def __init__(self, lollmsElfServer):
        self.lollmsElfServer = lollmsElfServer

    def run(self, **kwargs):
        """
        Main function logic.

        Returns: Dictionary containing results
        """
        # Access parameters
        param1 = kwargs.get('param1')
        param2 = kwargs.get('param2')

        # Your logic here
        result = perform_operation(param1, param2)

        return {
            "result1": result,
            "status": "success"
        }
```
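A runnable variant of this skeleton, filling the placeholder logic with a trivial addition (the class name and the stubbed `None` server handle are illustrative, not part of LoLLMS):

```python
class AddFunction:
    """Toy function: adds two numbers, following the skeleton above."""

    def __init__(self, lollmsElfServer):
        # The server handle is unused in this toy example.
        self.lollmsElfServer = lollmsElfServer

    def run(self, **kwargs):
        param1 = kwargs.get('param1')
        param2 = kwargs.get('param2')
        if param1 is None or param2 is None:
            # Validate inputs before processing, as the Best Practices section advises.
            return {"status": "error", "message": "param1 and param2 are required"}
        return {"result1": param1 + param2, "status": "success"}
```

Calling `AddFunction(None).run(param1=2, param2=3)` returns `{"result1": 5, "status": "success"}`, while a missing parameter yields an error dictionary instead of raising.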
### Step 4: Mount the Function

- Add the function to the mounted functions list in the configuration, or
- Use the API endpoint:

  ```http
  POST /mount_function_call
  { "client_id": "your_client_id", "function_name": "my_function" }
  ```
## Function Call Format

### AI-Generated Format

```
<lollms_function_call>
{
  "function_name": "function_name",
  "parameters": {
    "param1": "value1",
    "param2": "value2"
  },
  "needs_processing": true/false
}
</lollms_function_call>
```
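Before dispatch, a parser might reject payloads missing the required fields. A sketch (the function name and error messages are assumptions; only the field names come from the format above):

```python
import json

REQUIRED_KEYS = {"function_name", "parameters", "needs_processing"}

def parse_function_call(raw):
    """Parse one call payload and reject anything missing the required fields."""
    call = json.loads(raw)
    missing = REQUIRED_KEYS - call.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not isinstance(call["parameters"], dict):
        raise ValueError("parameters must be a JSON object")
    if not isinstance(call["needs_processing"], bool):
        raise ValueError("needs_processing must be a boolean")
    return call
```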
### Response Format

```json
{
  "function_name": "function_name",
  "parameters": {
    "param1": "value1",
    "param2": "value2"
  },
  "results": {
    "result1": "value1",
    "result2": "value2"
  },
  "status": "success/error",
  "message": "Additional information"
}
```
## API Endpoints

- List Available Functions:

  ```http
  GET /list_function_calls
  ```

- List Mounted Functions:

  ```http
  GET /list_mounted_function_calls
  ```

- Mount Function:

  ```http
  POST /mount_function_call
  { "client_id": "your_client_id", "function_name": "function_name" }
  ```

- Unmount Function:

  ```http
  POST /unmount_function_call
  { "client_id": "your_client_id", "function_name": "function_name" }
  ```
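A thin client for the mount/unmount endpoints might look like the sketch below. The class name is hypothetical, and the HTTP transport is injected rather than hard-coded so the sketch stays testable; in practice you would pass something like `requests.post`.

```python
class FunctionCallClient:
    """Minimal client sketch for the endpoints listed above."""

    def __init__(self, base_url, client_id, post):
        # post: callable(url, json_payload) -> parsed response dict
        self.base_url = base_url.rstrip("/")
        self.client_id = client_id
        self.post = post

    def _payload(self, function_name):
        return {"client_id": self.client_id, "function_name": function_name}

    def mount(self, function_name):
        return self.post(f"{self.base_url}/mount_function_call",
                         self._payload(function_name))

    def unmount(self, function_name):
        return self.post(f"{self.base_url}/unmount_function_call",
                         self._payload(function_name))
```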
## Best Practices
- Error Handling: Implement robust error handling in your function
- Parameter Validation: Validate all inputs before processing
- Security: Sanitize all inputs and outputs
- Documentation: Provide clear examples and descriptions
- Versioning: Maintain version numbers for compatibility
- Performance: Optimize for quick execution
- Idempotency: Make functions repeatable without side effects
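The parameter-validation practice can be implemented against the types declared in `config.yaml`. A sketch, using a plain dict in place of the parsed YAML (the type mapping below is an assumed, minimal one):

```python
# Maps config.yaml type names to Python types (assumed minimal mapping).
TYPE_MAP = {"string": str, "number": (int, float), "boolean": bool}

def validate_parameters(spec, kwargs):
    """Check supplied parameters against the declared spec; return error strings."""
    errors = []
    for name, value in kwargs.items():
        if name not in spec:
            errors.append(f"unexpected parameter: {name}")
            continue
        expected = TYPE_MAP.get(spec[name]["type"])
        if expected and not isinstance(value, expected):
            errors.append(f"{name}: expected {spec[name]['type']}")
    return errors
```

Running the validator early lets `run()` return a clean error dictionary instead of raising deep inside the function logic.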
## Example: Weather Function

### config.yaml

```yaml
name: get_weather
description: Get current weather information
parameters:
  location:
    type: string
    description: City name or coordinates
  unit:
    type: string
    enum: [celsius, fahrenheit]
    default: celsius
returns:
  temperature:
    type: number
  condition:
    type: string
examples:
  - "What's the weather in Paris?"
  - "How's the weather today?"
needs_processing: true
author: Weather Inc.
version: 1.2.0
```
### function.py

```python
import requests

class WeatherFunction:
    def __init__(self, lollmsElfServer):
        self.lollmsElfServer = lollmsElfServer
        self.api_key = "your_api_key"

    def run(self, **kwargs):
        location = kwargs.get('location', 'Paris')
        unit = kwargs.get('unit', 'celsius')
        try:
            # Let requests encode the query string, fail fast on a dead server,
            # and surface HTTP errors instead of parsing an error page as JSON.
            response = requests.get(
                "https://api.weatherapi.com/v1/current.json",
                params={"key": self.api_key, "q": location},
                timeout=10,
            )
            response.raise_for_status()
            data = response.json()
            return {
                "temperature": data['current']['temp_c'] if unit == 'celsius' else data['current']['temp_f'],
                "condition": data['current']['condition']['text'],
                "location": location,
                "unit": unit,
                "status": "success"
            }
        except Exception as e:
            return {
                "status": "error",
                "message": str(e)
            }
```
This documentation provides a comprehensive guide to understanding, creating, and managing function calls in the LoLLMS system. Follow these guidelines to extend the capabilities of your LLM with custom functions.