fix: correct databank architecture and implement proper CTO/COO structure

- Remove incorrectly placed human/llm directories from databank root
- Restructure databank with everything under databank/artifacts/ as requested
- Implement proper CTO/COO structure under pmo/artifacts/ with complete PMO components
- Create comprehensive collab/ directory structure for human/AI communication
- Remove Joplin processing scripts and references as requested
- Create proper scaffolding directories for quick domain standup
- Update README documentation to reflect corrected architecture
- Ensure only collab/ directories are editable by humans
- AI agents manage databank/artifacts/ based on collab/ communications
- Create structured intake templates and collaboration workflows
- Maintain clear separation between readonly databank and read-write PMO
- Implement proper single source of truth with AI-managed artifacts

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2025-10-24 12:38:23 -05:00
parent 919349aad2
commit a811335196
32 changed files with 941 additions and 1037 deletions


@@ -1,12 +1,40 @@
-# Databank Artifacts Directory
+# Databank Artifacts
-This directory is fully managed by AI agents as they see fit for storing various artifacts.
+This directory contains all databank artifacts in their canonical form. Files in this directory are:
+- Managed by AI agents based on communications in `../collab/`
+- Organized by domain for efficient access
+- Maintained as single source of truth for all context
+- Updated only through AI processing of collab communications
+## Structure
+```
+artifacts/
+├── personal/ # Personal information and biography
+├── agents/ # AI agent guidelines and tools
+├── context/ # General context information
+├── operations/ # Operational environment information
+├── templates/ # Template files for new content
+├── scaffolding/ # Template structure for new domains
+├── coo/ # Chief Operating Officer domain
+├── cto/ # Chief Technology and Product Officer domain
+└── README.md # This file
+```
 ## Purpose
-- AI-managed documentation ([docs/](./docs/))
-- AI-managed code artifacts ([code/](./code/))
-- AI-managed configuration ([config/](./config/))
-- AI-managed templates ([templates/](./templates/))
+Files in this directory represent the authoritative versions of all databank content. They are:
-The AI has complete control over this directory and can organize, create, modify, and delete content as needed to support various functions and operations.
+- Updated only by AI agents processing `../collab/` communications
+- Never edited directly by humans
+- Maintained as single source of truth
+- Organized for efficient AI access patterns
+## Relationship to Other Directories
+- **`../collab/`** - Human/AI communication space (input)
+- **This directory** - Canonical content storage (storage)
+- **`../../../pmo/artifacts/`** - Project management updates (output)
+---


@@ -0,0 +1,53 @@
# Scaffolding Templates
This directory contains templates for quickly standing up new domains in the databank.
## Structure
```
scaffolding/
├── domain-template/ # Template for new domains
│ ├── README.md # Domain overview and purpose
│ ├── context/ # Context information for this domain
│ │ └── overview.md # Overview of domain context
│ ├── operations/ # Operational information
│ │ └── procedures.md # Standard operating procedures
│ ├── personnel/ # Personnel and roles
│ │ └── roles.md # Role definitions and responsibilities
│ ├── tools/ # Tools and technology
│ │ └── stack.md # Technology stack and tools
│ └── artifacts/ # Domain-specific artifacts
│ └── samples/ # Sample artifacts for reference
└── README.md # This file
```
## Purpose
The scaffolding directory provides templates for quickly creating new domains when needed. To create a new domain:
1. Copy the `domain-template/` directory to a new domain name
2. Customize the README.md with domain-specific information
3. Fill in context, operations, personnel, and tools information
4. Add domain-specific artifacts as needed
## Usage
To create a new domain called "marketing":
```bash
cp -r domain-template/ ../marketing/
cd ../marketing/
# Edit README.md to describe marketing domain
# Customize other files as appropriate
```
## Templates
Each template provides a starting point for new domains with:
- Standard directory structure
- Placeholder content for customization
- Consistent formatting and organization
- Cross-domain linking patterns
---


@@ -0,0 +1,35 @@
# Domain Template
This is a template for creating new domains in the databank.
## Purpose
This template provides the standard structure for all domains in the databank.
## Structure
```
domain-template/
├── README.md # Domain overview and purpose
├── context/ # Context information for this domain
│ └── overview.md # Overview of domain context
├── operations/ # Operational information
│ └── procedures.md # Standard operating procedures
├── personnel/ # Personnel and roles
│ └── roles.md # Role definitions and responsibilities
├── tools/ # Tools and technology
│ └── stack.md # Technology stack and tools
└── artifacts/ # Domain-specific artifacts
└── samples/ # Sample artifacts for reference
```
## Customization
To customize this template for a specific domain:
1. Rename the directory to the domain name
2. Update README.md with domain-specific information
3. Customize context, operations, personnel, and tools information
4. Add domain-specific artifacts as needed
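As a sketch of these steps (copying rather than renaming in place, as the scaffolding README does; the hypothetical `finance` domain, file list, and `${EDITOR:-nano}` are illustrative, not prescribed):
```bash
# Steps 1-2: copy the template under the new domain name and update the overview
cp -r domain-template/ ../finance/
cd ../finance/
${EDITOR:-nano} README.md

# Step 3: fill in the standard sections
${EDITOR:-nano} context/overview.md operations/procedures.md \
    personnel/roles.md tools/stack.md

# Step 4: add domain-specific artifacts alongside the provided samples
mkdir -p artifacts/reports/
```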
---


@@ -0,0 +1,21 @@
# Sample Artifacts
This directory contains sample artifacts for reference when creating domain-specific content.
## Purpose
Provide examples of the types of artifacts typically created in this domain.
## Types of Artifacts
List common artifact types for this domain.
## Templates
Provide templates for common artifacts.
## Examples
Include examples of completed artifacts.
---


@@ -0,0 +1,21 @@
# Domain Context Overview
This file provides an overview of the domain context.
## Purpose
Describe the purpose and scope of this domain.
## Scope
Define what is included and excluded from this domain.
## Relationships
Describe how this domain relates to other domains in the organization.
## Key Concepts
List key concepts and terminology specific to this domain.
---


@@ -0,0 +1,25 @@
# Standard Operating Procedures
This file describes the standard operating procedures for this domain.
## Daily Procedures
Describe daily procedures and routines.
## Weekly Procedures
Describe weekly procedures and reviews.
## Monthly Procedures
Describe monthly procedures and reporting.
## Quarterly Procedures
Describe quarterly procedures and planning.
## Annual Procedures
Describe annual procedures and reviews.
---


@@ -0,0 +1,25 @@
# Role Definitions and Responsibilities
This file defines roles and responsibilities within this domain.
## Key Roles
List key roles within this domain.
## Role Descriptions
Provide detailed descriptions of each role.
## Responsibilities
Define specific responsibilities for each role.
## Authority
Describe the authority levels for each role.
## Reporting Structure
Define the reporting structure and relationships.
---


@@ -0,0 +1,25 @@
# Technology Stack and Tools
This file describes the technology stack and tools used in this domain.
## Primary Tools
List primary tools and platforms.
## Supporting Tools
List supporting tools and utilities.
## Integration Points
Describe integration points with other systems.
## Security Considerations
List security considerations for tools and platforms.
## Access Requirements
Describe access requirements and permissions.
---


@@ -1,12 +1,78 @@
 # Databank Collaboration Directory
-This directory is designated for human/AI interaction and communication within the databank.
+This directory is the exclusive space for human/AI collaboration and communication regarding databank content.
 ## Purpose
-- Spaces for communication between Charles and AI agents
-- Temporary files for collaborative work
-- Discussion documents and notes
-- Joint planning artifacts
+- **Exclusive Communication Channel**: All human/AI interaction about databank content occurs here
+- **Content Ingestion**: Joplin markdown exports and other content sources
+- **Structured Intake**: Formal interviews and information gathering
+- **Request and Proposal System**: Questions → Proposals → Implementation workflow
+- **Temporary Collaboration Files**: Working documents and drafts
-This directory allows for interaction while maintaining the readonly nature of the rest of the databank.
+## Workflow
+### 1. Content Ingestion
+```
+Human: "Please ingest this Joplin note about my new project"
+AI: Processes note and updates databank/artifacts/ appropriately
+```
+### 2. Structured Requests
+```
+Human: "I need to update my AI tool preferences"
+AI: Creates intake template, conducts structured interview
+Human: Completes interview with current information
+AI: Updates databank/artifacts/ with new information
+```
+### 3. Ad-hoc Communication
+```
+Human: "Question about current databank structure"
+AI: Responds with information and/or creates proposal
+Human: Reviews proposal and provides feedback
+AI: Implements changes to databank/artifacts/ as needed
+```
+## Structure
+```
+collab/
+├── fromjoplin/ # Joplin markdown exports for ingestion
+├── intake/ # Structured intake responses and templates
+├── proposals/ # Formal proposals for databank changes
+├── questions/ # Questions requiring AI responses
+├── drafts/ # Working documents and drafts
+└── README.md # This file
+```
+## Guidelines
+### For Humans
+- **Only edit this directory** - Never edit databank/artifacts/ directly
+- **Use structured templates** when available for consistent intake
+- **Follow question → proposal → implementation workflow**
+- **Drop Joplin exports in fromjoplin/** for automatic processing
+- **Be explicit about desired changes** to databank content
+### For AI Agents
+- **Monitor this directory continuously** for new content and requests
+- **Process Joplin exports** in fromjoplin/ and update databank/artifacts/
+- **Conduct structured interviews** using templates in intake/
+- **Create formal proposals** for significant databank changes
+- **Only update databank/artifacts/** - never edit this collab/ directory
+- **Maintain clear audit trail** of all changes made to databank/artifacts/
+## Communication Protocol
+1. **Primary Channel**: This collab/ directory for all human/AI interaction
+2. **Question Workflow**: Use questions/ directory for inquiries
+3. **Proposal Process**: Use proposals/ directory for significant changes
+4. **Content Updates**: Drop exports in fromjoplin/ for ingestion
+5. **Structured Intake**: Use intake/ templates for comprehensive updates
+## Note
+This directory is the **only** place where humans should directly edit files. The AI agents are responsible for processing content from this directory and updating the canonical databank/artifacts/ directory accordingly.
+---


@@ -0,0 +1,50 @@
# Drafts Directory
This directory contains working documents and drafts for collaborative development.
## Purpose
- **Working Documents**: Temporary files for ongoing work
- **Collaborative Development**: Shared space for developing content
- **Draft Versions**: Works in progress before finalization
- **Brainstorming**: Space for ideas and exploration
## Structure
```
drafts/
├── documents/ # Draft documents and writings
├── diagrams/ # Diagrams and visual representations
├── research/ # Research notes and findings
├── plans/ # Planning documents and outlines
└── README.md # This file
```
## Workflow
1. **Creation**: Create new drafts in appropriate subdirectory
2. **Development**: Work on drafts collaboratively
3. **Review**: Review drafts for completeness and accuracy
4. **Finalization**: Move completed work to appropriate destinations
5. **Cleanup**: Remove obsolete drafts periodically
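A minimal sketch of steps 1 and 2, using hypothetical filenames that follow the naming guidance below (descriptive, dated, and versioned); `${EDITOR:-nano}` is illustrative:
```bash
# Step 1: create a dated, versioned draft in the matching subdirectory
${EDITOR:-nano} documents/2025-10-24-databank-restructure-notes-v1.md

# Step 2: a later revision gets a new version suffix instead of overwriting
cp documents/2025-10-24-databank-restructure-notes-v1.md \
    documents/2025-10-24-databank-restructure-notes-v2.md
```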
## Guidelines
### For Humans
- **Organize Appropriately**: Place drafts in correct subdirectories
- **Clear Naming**: Use descriptive filenames indicating content and date
- **Version Control**: Include version information in filenames when needed
- **Regular Cleanup**: Remove obsolete drafts to maintain clarity
### For AI Agents
- **Assist Development**: Help with drafting and development tasks
- **Provide Feedback**: Offer constructive feedback on drafts
- **Suggest Improvements**: Recommend enhancements and refinements
- **Ensure Consistency**: Maintain consistency with existing databank content
- **Facilitate Finalization**: Help move completed work to final destinations
## Note
This directory is for temporary collaborative work. Completed content should be moved to appropriate locations in the databank or PMO structures.
---


@@ -1,153 +1,34 @@
-# Joplin Processing Pipeline
+# From Joplin Directory
-This directory contains scripts and configurations for processing Joplin markdown exports.
+This directory is for dropping Joplin markdown exports for automatic ingestion into the databank.
-## Structure
+## Purpose
-```
-joplin-processing/
-├── process-joplin-export.sh # Main processing script
-├── convert-to-human-md.py # Convert Joplin to human-friendly markdown
-├── convert-to-llm-json.py # Convert Joplin to LLM-optimized JSON
-├── joplin-template-config.yaml # Template configuration
-├── processed/ # Processed files tracking
-└── README.md # This file
-```
+- **Content Ingestion**: Joplin markdown exports for processing
+- **Automatic Updates**: AI agents monitor this directory and update databank/artifacts/
+- **Seamless Integration**: Bridge between Joplin notes and databank structure
 ## Workflow
-1. **Export**: Joplin notes exported as markdown
-2. **Place**: Drop exports in `../collab/fromjoplin/`
-3. **Trigger**: Processing script monitors directory
-4. **Convert**: Scripts convert to both human and LLM formats
-5. **Store**: Results placed in `../../artifacts/`, `../../human/`, and `../../llm/`
-6. **Track**: Processing logged in `processed/`
+1. **Export**: Export notes from Joplin as markdown
+2. **Drop**: Place exports in this directory
+3. **Process**: AI agents automatically process exports
+4. **Update**: Databank/artifacts/ updated with new content
+5. **Archive**: Processed exports moved to archive
-## Processing Script
+## Guidelines
-```bash
-#!/bin/bash
-# process-joplin-export.sh
+### For Humans
+- **Export as Markdown**: Use Joplin's markdown export feature
+- **Include Front Matter**: Preserve metadata and tags
+- **Organize by Topic**: Group related notes together
+- **Clear Naming**: Use descriptive filenames
-JOPLIN_DIR="../collab/fromjoplin"
-HUMAN_DIR="../../human"
-LLM_DIR="../../llm"
-ARTIFACTS_DIR="../../artifacts"
-PROCESSED_DIR="./processed"
-# Process new Joplin exports
-for file in "$JOPLIN_DIR"/*.md; do
-    if [[ -f "$file" ]]; then
-        filename=$(basename "$file")
-        echo "Processing $filename..."
-        # Convert to human-friendly markdown
-        python3 convert-to-human-md.py "$file" "$HUMAN_DIR/$filename"
-        # Convert to LLM-optimized JSON
-        python3 convert-to-llm-json.py "$file" "$LLM_DIR/${filename%.md}.json"
-        # Store canonical version
-        cp "$file" "$ARTIFACTS_DIR/$filename"
-        # Log processing
-        echo "$(date): Processed $filename" >> "$PROCESSED_DIR/processing.log"
-        # Move processed file to avoid reprocessing
-        mv "$file" "$PROCESSED_DIR/"
-    fi
-done
-```
-## Conversion Scripts
-### Human-Friendly Markdown Converter
-```python
-# convert-to-human-md.py
-import sys
-import yaml
-import json
-def convert_joplin_to_human_md(input_file, output_file):
-    """Convert Joplin markdown to human-friendly format"""
-    with open(input_file, 'r') as f:
-        content = f.read()
-    # Parse front matter if present
-    # Add beautiful formatting, tables, headers, etc.
-    # Write human-friendly version
-    with open(output_file, 'w') as f:
-        f.write(content)
-if __name__ == "__main__":
-    convert_joplin_to_human_md(sys.argv[1], sys.argv[2])
-```
-### LLM-Optimized JSON Converter
-```python
-# convert-to-llm-json.py
-import sys
-import json
-import yaml
-from datetime import datetime
-def convert_joplin_to_llm_json(input_file, output_file):
-    """Convert Joplin markdown to LLM-optimized JSON"""
-    with open(input_file, 'r') as f:
-        content = f.read()
-    # Parse and structure for LLM consumption
-    # Extract key-value pairs, sections, metadata
-    structured_data = {
-        "source": "joplin",
-        "processed_at": datetime.now().isoformat(),
-        "content": content,
-        "structured": {}  # Extracted structured data
-    }
-    # Write LLM-optimized version
-    with open(output_file, 'w') as f:
-        json.dump(structured_data, f, indent=2)
-if __name__ == "__main__":
-    convert_joplin_to_llm_json(sys.argv[1], sys.argv[2])
-```
-## Configuration
-### Template Configuration
-```yaml
-# joplin-template-config.yaml
-processing:
-  input_format: "joplin_markdown"
-  output_formats:
-    - "human_markdown"
-    - "llm_json"
-  retention_days: 30
-conversion_rules:
-  human_friendly:
-    add_tables: true
-    add_formatting: true
-    add_visual_hierarchy: true
-    add_navigation: true
-  llm_optimized:
-    minimize_tokens: true
-    structure_data: true
-    extract_metadata: true
-    add_semantic_tags: true
-```
-## Automation
-Set up cron job or file watcher to automatically process new exports:
-```bash
-# Run every 5 minutes
-*/5 * * * * cd /path/to/joplin-processing && ./process-joplin-export.sh
-```
+### For AI Agents
+- **Monitor Continuously**: Watch for new exports
+- **Parse Thoroughly**: Extract all relevant information
+- **Map to Structure**: Place content in appropriate databank/artifacts/ locations
+- **Update Tracking**: Maintain processing logs and archives
+- **Handle Errors**: Gracefully handle malformed exports
+---


@@ -1,16 +0,0 @@
#!/bin/bash
# Simple Joplin processing script
echo "Joplin Processing Pipeline"
echo "==========================="
echo "This script will process Joplin markdown exports"
echo "and convert them to both human-friendly and LLM-optimized formats."
echo ""
echo "To use:"
echo "1. Export notes from Joplin as markdown"
echo "2. Place them in ./fromjoplin/"
echo "3. Run this script to process them"
echo "4. Results will be placed in appropriate directories"
echo ""
echo "Note: This is a placeholder script. Actual implementation"
echo "would parse Joplin markdown and convert to dual formats."


@@ -1,43 +1,53 @@
-# Collab Intake System
+# Intake Directory
-This directory contains the collaborative intake system for populating and updating the databank through structured interviews and workflows.
+This directory contains structured intake templates and responses for comprehensive information gathering.
+## Purpose
+- **Structured Collection**: Formal templates for gathering comprehensive information
+- **Consistent Updates**: Standardized approach to updating databank content
+- **Complete Coverage**: Ensure all relevant information captured during updates
 ## Structure
 ```
 intake/
-├── templates/ # Interview templates and question sets
-├── responses/ # Collected responses from interviews
-├── workflows/ # Automated intake workflows and processes
-└── README.md # This file
+├── templates/ # Structured intake templates
+├── responses/ # Completed intake responses
+└── README.md # This file
 ```
-## Purpose
+## Workflow
-The intake system facilitates:
-- Structured knowledge capture through guided interviews
-- Regular updates to keep databank information current
-- Multi-modal input collection (text, voice, structured data)
-- Quality control and validation of incoming information
-- Automated synchronization between human and LLM formats
-## Process
-1. **Templates** - Use predefined interview templates for specific domains
-2. **Interviews** - Conduct structured interviews using templates
-3. **Responses** - Collect and store raw responses
-4. **Processing** - Convert responses into both human and LLM formats
-5. **Validation** - Review and validate converted information
-6. **Synchronization** - Update both human and LLM directories
-7. **Tracking** - Maintain version history and change tracking
+1. **Template Selection**: Choose appropriate template for update type
+2. **Information Gathering**: Conduct structured interview using template
+3. **Response Recording**: Record responses in responses/ directory
+4. **Processing**: AI processes responses and updates databank/artifacts/
+5. **Validation**: Review and confirm updates to databank/artifacts/
 ## Templates
-Template files guide the intake process with:
-- Domain-specific questions
-- Response format guidelines
-- Validation criteria
-- Cross-reference requirements
-- Update frequency recommendations
+Common intake templates include:
+- **Personal Information**: Updates to biographical and preference information
+- **AI Tools and Preferences**: Changes to tool usage and agent guidelines
+- **Operational Procedures**: Updates to workflows and processes
+- **Project Information**: New projects or updates to existing projects
+- **Relationship Changes**: Updates to professional networks and collaborations
+## Guidelines
+### For Humans
+- **Use Appropriate Template**: Select template matching update type
+- **Be Comprehensive**: Provide complete information when responding
+- **Follow Structure**: Maintain template format for easy processing
+- **Be Accurate**: Provide current and accurate information
+### For AI Agents
+- **Guide Through Process**: Help humans complete templates accurately
+- **Clarify Questions**: Explain ambiguous template items
+- **Validate Responses**: Ensure responses are complete and consistent
+- **Process Thoroughly**: Convert responses to appropriate databank updates
+- **Maintain History**: Track changes and updates over time
+---


@@ -1,107 +0,0 @@
# Sample Intake Response - Personal Information
This is a sample response to demonstrate the intake system structure.
```yaml
identity:
  legal_name: "Charles N Wyble"
  preferred_name: "Charles"
  handles:
    - platform: "GitHub"
      handle: "@ReachableCEO"
    - platform: "Twitter"
      handle: "@ReachableCEO"
  contact_preferences:
    - method: "email"
      preference: "high"
    - method: "signal"
      preference: "medium"
  location:
    current: "Central Texas, USA"
    planned_moves:
      - destination: "Raleigh, NC"
        date: "April 2026"
  birth_year: 1984
professional_background:
  career_timeline:
    - start: "2002"
      role: "Production Technical Operations"
      company: "Various"
    - start: "2025"
      role: "Solo Entrepreneur"
      company: "TSYS Group"
  core_competencies:
    - "Technical Operations"
    - "System Administration"
    - "DevOps"
    - "AI Integration"
  industry_experience:
    - "Technology"
    - "Manufacturing"
    - "Energy"
  certifications: []
  achievements: []
  current_focus: "AI-assisted workflow optimization"
philosophical_positions:
  core_values:
    - "Digital Data Sovereignty"
    - "Rule of Law"
    - "Separation of Powers"
  political_affiliations:
    - party: "Democratic"
      strength: "Strong"
  ethical_frameworks:
    - "Pragmatic"
    - "Transparent"
  approach_to_work: "Results-focused with emphasis on automation"
  ai_integration_views: "Essential for modern knowledge work"
  data_privacy_stances: "Strong advocate for personal data control"
technical_preferences:
  preferred_tools:
    - "Codex"
    - "Qwen"
    - "Gemini"
  technology_stack:
    - "Docker"
    - "Cloudron"
    - "Coolify (planned)"
  ai_tool_patterns:
    - "Codex for code generation"
    - "Qwen for system orchestration"
    - "Gemini for audits"
  development_methods:
    - "Agile"
    - "CI/CD"
  security_practices:
    - "Self-hosting"
    - "Regular backups"
  automation_approaches:
    - "Infrastructure as Code"
    - "AI-assisted workflows"
lifestyle_context:
  daily_schedule: "Early morning focused work, flexible afternoon"
  communication_preferences: "Direct, no flattery"
  collaboration_approach: "Relaxed but professional"
  work_life_balance: "Integrated but boundary-aware"
  ongoing_projects:
    - "TSYS Group ecosystem"
    - "AI Home Directory optimization"
  future_plans:
    - "Relocation to Raleigh NC"
    - "Full AI workflow integration"
relationships_networks:
  key_relationships:
    - "Albert (COO transition)"
    - "Mike (Future VP Marketing)"
  organizational_affiliations:
    - "TSYS Group"
  community_involvement: []
  mentorship_roles: []
  collaboration_patterns:
    - "Solo entrepreneur with AI collaboration"
```


@@ -1,117 +0,0 @@
# AI Tools and Agent Preferences Intake Template
## Overview
This template guides the collection of AI tool preferences and agent interaction guidelines.
## Interview Structure
### 1. Current Tool Usage
- Primary tools and their roles
- Subscription status and limitations
- Usage patterns and workflows
- Strengths and limitations of each tool
- Quota management and availability strategies
- Backup and alternative tool selections
### 2. Agent Guidelines and Rules
- Core operating principles
- Communication protocols and expectations
- Documentation standards and formats
- Quality assurance and validation approaches
- Error handling and recovery procedures
- Security and privacy considerations
### 3. Workflow Preferences
- Preferred interaction styles
- Response length and detail expectations
- Formatting and presentation preferences
- Decision-making and approval processes
- Feedback and iteration approaches
- Collaboration and delegation patterns
### 4. Technical Environment
- Development environment preferences
- Tool integration and interoperability
- Version control and change management
- Testing and quality assurance practices
- Deployment and delivery mechanisms
- Monitoring and observability requirements
### 5. Performance Optimization
- Token efficiency strategies
- Context window management
- Response time expectations
- Resource utilization considerations
- Cost optimization approaches
- Scalability and reliability requirements
## Response Format
Please provide responses in the following structured format:
```yaml
tool_usage:
  primary_tools:
    - name: ""
      role: ""
      subscription_status: ""
      usage_patterns: []
      strengths: []
      limitations: []
  quota_management:
    strategies: []
    backup_selections: []
  workflow_integration:
    primary_flows: []
    backup_flows: []
agent_guidelines:
  core_principles: []
  communication_protocols: []
  documentation_standards: []
  quality_assurance: []
  error_handling: []
  security_considerations: []
workflow_preferences:
  interaction_styles: []
  response_expectations:
    length_preference: ""
    detail_level: ""
  formatting_preferences: []
  decision_processes: []
  feedback_approaches: []
  collaboration_patterns: []
technical_environment:
  development_preferences: []
  tool_integration: []
  version_control: []
  testing_practices: []
  deployment_mechanisms: []
  monitoring_requirements: []
performance_optimization:
  token_efficiency: []
  context_management: []
  response_time: []
  resource_utilization: []
  cost_optimization: []
  scalability_requirements: []
```
## Validation Criteria
- Alignment with current tool subscriptions
- Consistency with documented workflows
- Practicality of implementation
- Completeness of coverage
- Clarity of expectations
## Frequency
This intake should be updated:
- Semi-annually for tool changes
- As-needed for workflow modifications
- Quarterly for performance optimization reviews


@@ -1,112 +0,0 @@
# Operations and Project Management Intake Template
## Overview
This template guides the collection of operational procedures and project management approaches.
## Interview Structure
### 1. Operational Procedures
- Daily/weekly/monthly routines and rituals
- System administration and maintenance tasks
- Monitoring and alerting procedures
- Backup and recovery processes
- Security and compliance practices
- Documentation and knowledge management
### 2. Project Management Approaches
- Project initiation and planning methods
- Task tracking and progress monitoring
- Resource allocation and scheduling
- Risk management and contingency planning
- Communication and stakeholder management
- Quality assurance and delivery processes
### 3. Infrastructure and Tools
- Hosting platforms and deployment targets
- Development and testing environments
- Monitoring and observability tools
- Security and compliance tooling
- Collaboration and communication platforms
- Automation and orchestration systems
### 4. Knowledge Management
- Information organization and categorization
- Documentation standards and practices
- Knowledge sharing and dissemination
- Learning and improvement processes
- Archive and retention policies
- Search and discovery optimization
### 5. Continuous Improvement
- Retrospective and review processes
- Metric tracking and analysis
- Process refinement and optimization
- Technology evaluation and adoption
- Skill development and training
- Innovation and experimentation approaches
## Response Format
Please provide responses in the following structured format:
```yaml
operational_procedures:
  routines:
    daily: []
    weekly: []
    monthly: []
  system_administration: []
  monitoring_procedures: []
  backup_recovery: []
  security_practices: []
  documentation_management: []
project_management:
  initiation_planning: []
  task_tracking: []
  resource_allocation: []
  risk_management: []
  stakeholder_communication: []
  quality_assurance: []
infrastructure_tools:
  hosting_platforms: []
  development_environments: []
  monitoring_tools: []
  security_tooling: []
  collaboration_platforms: []
  automation_systems: []
knowledge_management:
  information_organization: []
  documentation_practices: []
  knowledge_sharing: []
  learning_processes: []
  archive_policies: []
  search_optimization: []
continuous_improvement:
  retrospective_processes: []
  metric_tracking: []
  process_refinement: []
  technology_evaluation: []
  skill_development: []
  innovation_approaches: []
```
## Validation Criteria
- Alignment with current operational reality
- Completeness of key operational areas
- Practicality of implementation
- Consistency with documented procedures
- Relevance to current projects and initiatives
## Frequency
This intake should be updated:
- Quarterly for operational reviews
- As-needed for procedure changes
- Semi-annually for infrastructure updates
- Annually for comprehensive process reviews


@@ -1,128 +1,180 @@
 # Personal Information Intake Template
-## Overview
-This template guides the collection of personal information for databank population.
-## Interview Structure
+## Instructions
-### 1. Basic Identity
-- Full legal name
-- Preferred name/nickname
-- Online handles and professional identities
-- Contact preferences and methods
-- Geographic location (current and planned moves)
-- Age/birth year
+Complete this template with current and accurate information about yourself.
-### 2. Professional Background
-- Career timeline and key positions
-- Core competencies and specializations
-- Industry experience and expertise areas
-- Professional certifications and qualifications
-- Notable achievements and recognitions
-- Current professional focus and goals
+## Identity Information
-### 3. Philosophical Positions
-- Core values and beliefs
-- Political affiliations and civic positions
-- Ethical frameworks and guiding principles
-- Approach to work and collaboration
-- Views on technology and AI integration
-- Stance on data privacy and sovereignty
+### Legal Name
+Full legal name:
-### 4. Technical Preferences
-- Preferred tools and platforms
-- Technology stack and environment
-- AI tool usage patterns and preferences
-- Development methodologies and practices
-- Security and privacy practices
-- Automation and efficiency approaches
+Preferred name/nickname:
-### 5. Lifestyle and Context
-- Daily schedule and work patterns
-- Communication preferences and style
-- Collaboration approaches and expectations
-- Work-life balance priorities
-- Ongoing projects and initiatives
-- Future plans and aspirations
+Online handles and professional identities:
+- GitHub:
+- Twitter:
+- LinkedIn:
+- Other relevant platforms:
-### 6. Relationships and Networks
-- Key professional relationships
-- Organizational affiliations
-- Community involvement
-- Mentorship and advisory roles
-- Partnership and collaboration patterns
+Contact preferences and methods:
+- Email:
+- Phone:
+- Signal:
+- Other secure messaging:
-## Response Format
+Geographic location:
+- Current location:
+- Planned moves:
-Please provide responses in the following structured format:
+Age/birth year:
-```yaml
-identity:
-  legal_name: ""
-  preferred_name: ""
-  handles:
-    - platform: ""
-      handle: ""
-  contact_preferences:
-    - method: ""
-      preference: "" # high/medium/low
-  location:
-    current: ""
-    planned_moves: []
-  birth_year: 0
+## Professional Background
-professional_background:
-  career_timeline: []
-  core_competencies: []
-  industry_experience: []
-  certifications: []
-  achievements: []
-  current_focus: ""
+### Career Timeline
+Chronological list of significant positions:
+1.
+2.
+3.
-philosophical_positions:
-  core_values: []
-  political_affiliations: []
-  ethical_frameworks: []
-  approach_to_work: ""
-  ai_integration_views: ""
-  data_privacy_stances: []
+### Core Competencies
+List your primary skills and expertise areas:
+-
+-
+-
-technical_preferences:
-  preferred_tools: []
-  technology_stack: []
-  ai_tool_patterns: []
-  development_methods: []
-  security_practices: []
-  automation_approaches: []
+### Industry Experience
+List industries where you have significant experience:
+-
+-
+-
-lifestyle_context:
-  daily_schedule: ""
-  communication_preferences: ""
-  collaboration_approach: ""
-  work_life_balance: ""
-  ongoing_projects: []
-  future_plans: []
+### Certifications and Qualifications
+List relevant certifications and qualifications:
+-
+-
+-
-relationships_networks:
-  key_relationships: []
-  organizational_affiliations: []
-  community_involvement: []
-  mentorship_roles: []
-  collaboration_patterns: []
-```
+### Notable Achievements
+List significant professional achievements:
+-
+-
+-
-## Validation Criteria
+### Current Focus
+Describe your current professional focus and goals:
+-
-- Completeness of all sections
-- Consistency with existing databank information
-- Plausibility and internal coherence
-- Relevance to professional and technical context
-- Sufficient detail for AI agent understanding
+## Philosophical Positions
-## Frequency
+### Core Values
+List your fundamental values and beliefs:
+-
+-
+-
-This intake should be updated:
-- Annually for major life changes
-- Quarterly for ongoing project updates
-- As-needed for significant changes in circumstances
+### Political Affiliations
+Describe your political positions and civic engagement:
+-
+### Ethical Frameworks
+Describe your ethical frameworks and guiding principles:
+-
+### Approach to Work
+Describe your approach to work and collaboration:
+-
+### Views on Technology
+Describe your views on technology and AI integration:
+-
+### Stance on Privacy
+Describe your stance on data privacy and sovereignty:
+-
+## Technical Preferences
+### Preferred Tools
+List your preferred tools and platforms:
+-
+-
+-
+### Technology Stack
+Describe your current technology stack and environment:
+-
+### AI Tool Usage
+Describe your AI tool usage patterns and preferences:
+-
+### Development Methods
+List your preferred development methodologies and practices:
+-
+### Security Practices
+Describe your security and privacy practices:
+-
+### Automation Approaches
+Describe your approaches to automation and efficiency:
+-
+## Lifestyle and Context
+### Daily Schedule
+Describe your typical daily schedule and work patterns:
+-
+### Communication Preferences
+Describe your communication preferences and style:
+-
+### Collaboration Approaches
+Describe your collaboration approaches and expectations:
+-
+### Work-Life Balance
+Describe your work-life balance priorities:
+-
+### Ongoing Projects
+List your current ongoing projects and initiatives:
+-
+-
+-
+### Future Plans
+Describe your future plans and aspirations:
+-
+## Relationships and Networks
+### Key Professional Relationships
+List key professional relationships:
+-
+-
+-
+### Organizational Affiliations
+List organizational affiliations:
+-
+-
+-
+### Community Involvement
+Describe community involvement:
+-
+### Mentorship Roles
+Describe mentorship and advisory roles:
+-
+### Collaboration Patterns
+Describe partnership and collaboration patterns:
+-
+---


@@ -1,141 +0,0 @@
# Intake Processing Workflow
## Overview
This workflow describes the process for converting intake responses into synchronized human and LLM formats.
## Workflow Steps
### 1. Response Collection
- Receive completed intake templates
- Validate completeness and basic formatting
- Store in `responses/` directory with timestamp and identifier
- Create processing ticket/task in tracking system
### 2. Initial Processing
- Parse structured response data
- Identify sections requiring human review
- Flag inconsistencies or unclear responses
- Generate initial conversion drafts
### 3. Human Review and Validation
- Review parsed data for accuracy
- Validate against existing databank information
- Resolve flagged issues and ambiguities
- Approve or reject conversion drafts
### 4. Format Conversion
- Convert validated data to human-friendly markdown
- Convert validated data to LLM-optimized structured formats
- Generate cross-references and links
- Apply formatting standards and conventions
### 5. Synchronization
- Update both `../human/` and `../llm/` directories
- Maintain version history and change tracking
- Update README and index files as needed
- Validate synchronization integrity
### 6. Quality Assurance
- Verify formatting consistency
- Check cross-reference integrity
- Validate change tracking accuracy
- Confirm synchronization between formats
### 7. Documentation and Notification
- Update processing logs and metrics
- Notify stakeholders of updates
- Archive processing artifacts
- Close processing tickets/tasks
## Automation Opportunities
### Parsing and Validation
- Automated YAML/JSON schema validation
- Consistency checking against existing data
- Completeness verification
- Basic formatting normalization
### Format Conversion
- Template-driven markdown generation
- Structured data serialization
- Cross-reference generation
- Index and navigation updating
### Synchronization
- Automated file placement and naming
- Version tracking table updates
- Conflict detection and resolution
- Integrity verification
## Manual Review Requirements
### Complex Judgments
- Interpretation of ambiguous responses
- Resolution of conflicting information
- Quality assessment of converted content
- Approval of significant changes
### Creative Tasks
- Crafting human-friendly explanations
- Optimizing LLM data structures
- Designing intuitive navigation
- Balancing detail and conciseness
## Quality Gates
### Gate 1: Response Acceptance
- [ ] Response received and stored
- [ ] Basic formatting validated
- [ ] Completeness verified
- [ ] Processing ticket created
### Gate 2: Data Validation
- [ ] Structured data parsed successfully
- [ ] Inconsistencies identified and flagged
- [ ] Initial drafts generated
- [ ] Review tasks assigned
### Gate 3: Human Approval
- [ ] Manual review completed
- [ ] Issues resolved
- [ ] Conversion drafts approved
- [ ] Quality gate checklist signed off
### Gate 4: Format Conversion
- [ ] Human-friendly markdown generated
- [ ] LLM-optimized formats created
- [ ] Cross-references established
- [ ] Formatting standards applied
### Gate 5: Synchronization
- [ ] Both directories updated
- [ ] Version tracking maintained
- [ ] Integrity verified
- [ ] Change notifications prepared
### Gate 6: Quality Assurance
- [ ] Formatting consistency verified
- [ ] Cross-reference integrity confirmed
- [ ] Change tracking accuracy validated
- [ ] Final approval obtained
## Metrics and Tracking
### Processing Efficiency
- Time from response receipt to completion
- Automation vs. manual effort ratio
- Error rate and rework frequency
- Stakeholder satisfaction scores
### Quality Measures
- Accuracy of parsed data
- Completeness of converted content
- Consistency between formats
- User feedback and adoption rates
### Continuous Improvement
- Bottleneck identification and resolution
- Automation opportunity tracking
- Process optimization initiatives
- Skill development and training needs


@@ -1,36 +0,0 @@
# Intake Processing Workflow
This script processes intake responses and converts them to both human and LLM formats.
```bash
#!/bin/bash
# intake-workflow.sh

INTAKE_DIR="../intake/responses"
HUMAN_OUTPUT="../../human"
LLM_OUTPUT="../../llm"
ARTIFACTS_DIR="../../artifacts"

echo "Starting intake processing workflow..."

# Process each intake response
for response in "$INTAKE_DIR"/*.yaml; do
    if [[ -f "$response" ]]; then
        filename=$(basename "$response" .yaml)
        echo "Processing $filename..."
        # Convert to human-friendly markdown
        # python3 convert-intake-to-human.py "$response" "$HUMAN_OUTPUT/$filename.md"
        # Convert to LLM-optimized JSON
        # python3 convert-intake-to-llm.py "$response" "$LLM_OUTPUT/$filename.json"
        # Store canonical version
        # cp "$response" "$ARTIFACTS_DIR/$filename.yaml"
        echo "Completed processing $filename"
    fi
done

echo "Intake processing workflow completed."
```


@@ -0,0 +1,79 @@
# Proposals Directory
This directory contains formal proposals for significant databank changes.
## Purpose
- **Major Changes**: Significant modifications to databank structure or content
- **Formal Process**: Structured approach to proposing and implementing changes
- **Review and Approval**: Clear pathway for review and approval of changes
- **Documentation**: Permanent record of proposed and implemented changes
## Structure
```
proposals/
├── accepted/ # Accepted proposals awaiting implementation
├── implemented/ # Implemented proposals with completion records
├── rejected/ # Rejected proposals with rationale
├── draft/ # Draft proposals under development
└── README.md # This file
```
## Workflow
1. **Proposal Creation**: Create new proposal in draft/ directory
2. **Review Process**: Submit for review and feedback
3. **Decision Making**: Accept, reject, or request revisions
4. **Implementation**: Implement accepted proposals
5. **Completion**: Move to implemented/ with completion record
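A minimal sketch of this lifecycle on disk (the filename is hypothetical and follows the proposal format described below; `${EDITOR:-nano}` is illustrative):
```bash
# Step 1: start the proposal in draft/
${EDITOR:-nano} draft/STRUCTURE-2025-10-24-add-marketing-domain.md

# Step 3: record an acceptance by moving the file to accepted/ (or rejected/)
mv draft/STRUCTURE-2025-10-24-add-marketing-domain.md accepted/

# Step 5: after implementation, archive it with a completion record
mv accepted/STRUCTURE-2025-10-24-add-marketing-domain.md implemented/
echo "Implemented: 2025-10-31" >> implemented/STRUCTURE-2025-10-24-add-marketing-domain.md
```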
## Proposal Format
All proposals should follow a standard format:
```
# [PROPOSAL-TYPE]-[DATE]: [Brief Title]
## Overview
Brief description of proposed change
## Rationale
Reasoning and justification for change
## Impact Analysis
Analysis of impact on existing structure and content
## Implementation Plan
Step-by-step plan for implementing change
## Resources Required
Resources needed for implementation
## Timeline
Expected timeline for implementation
## Risks and Mitigation
Identified risks and mitigation strategies
## Approval
Approval status and decision rationale
```
## Guidelines
### For Humans
- **Use Standard Format**: Follow proposal template for consistency
- **Be Thorough**: Provide complete information in all sections
- **Consider Impact**: Thoroughly analyze impact on existing content
- **Realistic Planning**: Provide achievable implementation plans
- **Risk Awareness**: Identify and address potential risks
### For AI Agents
- **Facilitate Creation**: Help humans create complete proposals
- **Provide Feedback**: Offer constructive feedback on drafts
- **Analyze Thoroughly**: Evaluate proposals from multiple perspectives
- **Guide Implementation**: Assist with implementation when approved
- **Maintain Records**: Keep complete records of all proposals
---


@@ -0,0 +1,71 @@
# Questions Directory
This directory contains questions requiring AI responses and answers.
## Purpose
- **Knowledge Queries**: Questions about existing databank content
- **Clarification Requests**: Requests for clarification on procedures or content
- **Problem Solving**: Questions about solving specific problems
- **Information Gathering**: Requests for information about processes or tools
## Structure
```
questions/
├── answered/ # Answered questions with responses
├── pending/ # Pending questions awaiting responses
├── urgent/ # Urgent questions requiring immediate attention
└── README.md # This file
```
## Workflow
1. **Question Submission**: Submit new questions to appropriate category
2. **Triage**: AI agents triage questions by priority and complexity
3. **Research**: Gather information needed to answer questions
4. **Response**: Provide complete and accurate answers
5. **Archival**: Move answered questions to answered/ directory
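A minimal sketch of submission and archival (the identifier, filename, and content are hypothetical, following the question format below):
```bash
# Step 1: submit a new question using the standard format
cat > pending/KNOWLEDGE-2025-10-24-001-backup-cadence.md <<'EOF'
# KNOWLEDGE-2025-10-24-001: Backup Cadence
## Submitted By
Charles
## Question
How often is databank/artifacts/ currently backed up?
## Priority
Low
EOF

# Step 5: once answered, the file moves to answered/
mv pending/KNOWLEDGE-2025-10-24-001-backup-cadence.md answered/
```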
## Question Format
All questions should follow a standard format:
```
# [QUESTION-TYPE]-[DATE]-[ID]: [Brief Title]
## Submitted By
Name and contact information
## Question
Complete question with all relevant context
## Context
Additional context that may help with answering
## Priority
Urgency level (High/Medium/Low)
## Deadline
If applicable, deadline for response
## Related Items
Links to related questions, proposals, or databank content
```
## Guidelines
### For Humans
- **Be Specific**: Provide complete questions with context
- **Include Details**: Include all relevant information upfront
- **Set Priority**: Indicate urgency level appropriately
- **Check Existing**: Look for existing answers before submitting
### For AI Agents
- **Respond Promptly**: Address questions in priority order
- **Be Complete**: Provide thorough and accurate answers
- **Reference Sources**: Link to relevant databank content
- **Follow Up**: Check if additional clarification needed
- **Maintain Records**: Keep complete question/answer history
---


@@ -1,37 +0,0 @@
# Human-Friendly Databank
This directory contains all databank information formatted for optimal human consumption. Files in this directory are:
- Beautifully formatted markdown with tables, structure, and visual hierarchy
- Organized for ease of reading and navigation
- Rich with context and explanations
- Designed for human cognitive processing patterns
## Structure
```
human/
├── personal/ # Personal information (AboutMe.md, TSYS.md, etc.)
├── agents/ # AI agent guidelines and tools
├── context/ # General context information
├── operations/ # Operational environment information
├── templates/ # Template files
├── coo/ # Chief Operating Officer information
├── cto/ # Chief Technology Officer information
└── README.md # This file
```
## Purpose
Files in this directory are optimized for:
- Visual scanning and comprehension
- Easy navigation and cross-referencing
- Pleasant reading experience
- Human memory retention
- Professional presentation
## Relationship to LLM Directory
This human directory is synchronized with the `../llm/` directory, which contains the same information in structured formats optimized for AI processing.
---


@@ -1,38 +0,0 @@
# About Me
## Personal Information
| Attribute | Details |
|-----------|---------|
| **Full Name** | Charles N Wyble |
| **Online Handle** | @ReachableCEO |
| **Age** | 41 |
| **Location** | Central Texas, USA (relocating to Raleigh, NC in April 2026) |
| **Political Affiliation** | Democrat |
| **Professional Background** | Production technical operations since 2002 |
## Philosophy & Values
- **Digital Data Sovereignty**: Strong believer in controlling personal and professional data
- **Self Hosting**: Active practitioner using Cloudron on netcup VPS with plans to expand to Coolify
- **Rule of Law**: Believes strongly in legal frameworks and separation of powers
- **Media Avoidance**: Actively avoids mainstream media consumption
## Professional Focus
### Entrepreneurship
- **Solo Entrepreneur** creating an ecosystem of entities called TSYS Group
- **See Also**: [TSYS.md](./TSYS.md) for more information on the group structure
### AI Integration
- **AI-Centric Workflow**: Streamlining life using AI for all professional knowledge worker actions
- **Agent Agnosticism**: Uses multiple command line AI agents and maintains flexibility:
- **Codex** - Primary daily driver (subscription-based)
- **Qwen** - Heavy system orchestration, shell/Docker expertise
- **Gemini** - Primarily used for audits and analysis
### Engagement Style
- **Professional but Relaxed**: Prefers genuine, straightforward interaction
- **No Flattery**: Values direct communication over compliments
---


@@ -1,47 +0,0 @@
# LLM-Optimized Databank
This directory contains all databank information formatted for optimal LLM consumption. Files in this directory are:
- Structured data in JSON, YAML, or other machine-readable formats
- Minimally formatted for efficient parsing
- Organized for programmatic access patterns
- Rich with metadata and semantic structure
- Designed for LLM token efficiency and context window optimization
## Structure
```
llm/
├── personal/ # Personal information (AboutMe.json, TSYS.yaml, etc.)
├── agents/ # AI agent guidelines and tools (structured)
├── context/ # General context information (structured)
├── operations/ # Operational environment information (structured)
├── templates/ # Template files (structured)
├── coo/ # Chief Operating Officer information (structured)
├── cto/ # Chief Technology Officer information (structured)
└── README.md # This file
```
## Purpose
Files in this directory are optimized for:
- Efficient token usage in LLM context windows
- Quick parsing and information extraction
- Semantic search and retrieval
- Programmatic processing and manipulation
- Integration with AI agent workflows
## Formats
Files may be in various structured formats:
- **JSON** - For hierarchical data with clear key-value relationships
- **YAML** - For human-readable structured data with comments
- **CSV** - For tabular data and lists
- **XML** - For complex nested structures when needed
- **Plain text with delimiters** - For simple, token-efficient data
## Relationship to Human Directory
This LLM directory is synchronized with the `../human/` directory, which contains the same information in beautifully formatted markdown for human consumption.
---


@@ -1,47 +0,0 @@
{
  "metadata": {
    "title": "About Me",
    "author": "Charles N Wyble",
    "created": "2025-10-16T00:00:00Z",
    "updated": "2025-10-24T11:45:00Z",
    "tags": ["personal", "biography", "professional"],
    "version": "1.0.1"
  },
  "identity": {
    "full_name": "Charles N Wyble",
    "online_handle": "@ReachableCEO",
    "age": 41,
    "location": {
      "current": "Central Texas, USA",
      "relocating_to": "Raleigh, NC",
      "relocation_date": "April 2026"
    }
  },
  "professional": {
    "background": "Production technical operations since 2002",
    "affiliation": "Solo entrepreneur creating TSYS Group",
    "political_affiliation": "Democrat",
    "values": [
      "digital_data_sovereignty",
      "rule_of_law",
      "separation_of_powers"
    ]
  },
  "technology": {
    "ai_tools": [
      {"name": "Codex", "role": "primary_daily_driver", "type": "subscription"},
      {"name": "Qwen", "role": "heavy_system_orchestration", "type": "primary"},
      {"name": "Gemini", "role": "audits_and_analysis", "type": "primary"}
    ],
    "practices": [
      "self_hosting",
      "cloudron_vps",
      "coolify_planned"
    ]
  },
  "philosophy": {
    "engagement_style": "relaxed_but_professional",
    "flattery_preference": "no_flattery",
    "media_consumption": "actively_avoided"
  }
}