Compare commits
41 Commits
5efe5f4819
...
main
| Author | SHA1 | Date | |
|---|---|---|---|
| 35b96b0e90 | |||
| d27cf46606 | |||
| 2253aa01c8 | |||
| f6deeb670f | |||
| 124d51ebff | |||
| 3ec443eef8 | |||
| becd640c86 | |||
| 343534ac12 | |||
| ac80431292 | |||
| 1ee39e859b | |||
| ab54d694f2 | |||
| 199789e2c4 | |||
| 80d5c64eb9 | |||
| 50b250e78f | |||
| ab57e3a3a1 | |||
| a960fb03b6 | |||
| cd30726ace | |||
| 48530814d5 | |||
| 3dd420a500 | |||
| 87f32cfd4b | |||
| 0337f401a7 | |||
| 8eabe6cf37 | |||
| 96d3178344 | |||
| 08d10b16cf | |||
| 073cb91585 | |||
| a51a1f987e | |||
| 2d26ed3ac7 | |||
| 91d52d2de5 | |||
| 0ce353ea9d | |||
| f4551aef0f | |||
| 2d330a5e37 | |||
| 06c0b14add | |||
| 742e3f6b97 | |||
| eb3cbb803d | |||
| 4111a6bcd7 | |||
| 421797aac1 | |||
| 9cb53e29e5 | |||
| f197545bac | |||
| aa745f3458 | |||
| 7a751de24a | |||
| bd862daf1a |
3
.gitignore
vendored
3
.gitignore
vendored
@@ -88,3 +88,6 @@ temp/
|
||||
|
||||
# System files
|
||||
.SynologyWorkingDirectory
|
||||
CloudronStack/collab/*.lock
|
||||
CloudronStack/collab/*test-*
|
||||
CloudronStack/output/CloudronPackages-Workspaces/
|
||||
|
||||
@@ -1,73 +0,0 @@
|
||||
# CloudronStack Qwen Agent Development Log
|
||||
|
||||
Date: Wednesday, October 29, 2025
|
||||
|
||||
## Project Context
|
||||
|
||||
This repository contains Cloudron packaging artifacts for various upstream projects focused on:
|
||||
- Monitoring & Observability
|
||||
- Security & Compliance
|
||||
- Developer Platforms & Automation
|
||||
- Infrastructure & Operations
|
||||
- Data & Analytics
|
||||
- Business & Productivity
|
||||
- Industry & Specialized Solutions
|
||||
|
||||
## Agent Identity
|
||||
|
||||
- I am one of five QWEN chats operating in the tree
|
||||
- I am known as CloudronStack
|
||||
- My scope is limited to the CloudronStack directory
|
||||
- I maintain awareness of sibling directories for context
|
||||
|
||||
## Sibling Directories
|
||||
|
||||
The following sibling directories exist in the TSYSDevStack parent directory:
|
||||
- LifecycleStack
|
||||
- SupportStack
|
||||
- ToolboxStack
|
||||
- .git
|
||||
- .vscode
|
||||
- Other root-level files: commit-template.txt, .gitignore, LICENSE, QWEN.md, README.md
|
||||
|
||||
## Development Guidelines
|
||||
|
||||
- All commits should be verbose/beautifully formatted
|
||||
- Use atomic commits
|
||||
- Use conventional commit format
|
||||
|
||||
## Git Operations Notice
|
||||
|
||||
- IMPORTANT: Git operations (commits and pushes) are handled exclusively by the Topside agent
|
||||
- CloudronBot should NOT perform git commits or pushes
|
||||
- All changes should be coordinated through the Topside agent for repository consistency
|
||||
|
||||
## Task Tracking
|
||||
|
||||
Current tasks and progress:
|
||||
- [x] Explore the current directory structure in depth
|
||||
- [x] Create a QWEN.md file to track our work
|
||||
- [x] Set up commit configuration for conventional commits
|
||||
- [x] Make initial atomic commit with verbose formatting
|
||||
|
||||
## Work Log
|
||||
|
||||
### Session 1 (2025-10-29)
|
||||
- Oriented to the directory tree
|
||||
- Analyzed README.md and GitUrlList.txt files
|
||||
- Created QWEN.md file for tracking work
|
||||
- Set up commit configuration requirements
|
||||
- Created commit template for conventional commits
|
||||
- Made initial atomic commit with verbose formatting
|
||||
|
||||
### Session 2 (2025-10-29)
|
||||
- Established identity as CloudronStack QWEN agent
|
||||
- Confirmed scope limited to CloudronStack directory
|
||||
- Updated QWEN.md with agent identity information
|
||||
|
||||
### Session 3 (2025-10-30)
|
||||
- Reviewed and enhanced directory organization with application-specific subdirectories
|
||||
- Added functionality to handle additional Git URLs dynamically
|
||||
- Created add_git_url and add_git_urls_from_file functions
|
||||
- Improved URL validation and duplicate checking
|
||||
- Enhanced system to support adding more applications later
|
||||
@@ -1,115 +0,0 @@
|
||||
# 🛰️ CloudronStack
|
||||
|
||||
CloudronStack contains Cloudron packaging artifacts for various upstream projects focused on different business capabilities. This stack serves as a catalog of third-party services grouped by capability for easy deployment and management.
|
||||
|
||||
---
|
||||
|
||||
## 📚 Service Categories
|
||||
|
||||
This repository contains all of the Cloudron packaging artifacts for the following upstream projects:
|
||||
|
||||
### Monitoring & Observability
|
||||
- https://github.com/getsentry/sentry
|
||||
- https://github.com/healthchecks/healthchecks
|
||||
- https://github.com/SigNoz/signoz
|
||||
- https://github.com/target/goalert
|
||||
|
||||
### Security & Compliance
|
||||
- https://github.com/fleetdm/fleet
|
||||
- https://github.com/GemGeorge/SniperPhish
|
||||
- https://github.com/gophish/gophish
|
||||
- https://github.com/kazhuravlev/database-gateway
|
||||
- https://github.com/security-companion/security-awareness-training
|
||||
- https://github.com/strongdm/comply
|
||||
- https://github.com/tirrenotechnologies/tirreno
|
||||
- https://github.com/todogroup/policies
|
||||
- https://github.com/wiredlush/easy-gate
|
||||
|
||||
### Developer Platforms & Automation
|
||||
- https://github.com/adnanh/webhook
|
||||
- https://github.com/huginn/huginn
|
||||
- https://github.com/metrue/fx
|
||||
- https://github.com/openblocks-dev/openblocks
|
||||
- https://github.com/reviewboard/reviewboard
|
||||
- https://github.com/runmedev/runme
|
||||
- https://github.com/stephengpope/no-code-architects-toolkit
|
||||
- https://github.com/windmill-labs/windmill
|
||||
|
||||
### Infrastructure & Operations
|
||||
- https://github.com/apache/apisix
|
||||
- https://github.com/fonoster/fonoster
|
||||
- https://github.com/mendersoftware/mender
|
||||
- https://github.com/netbox-community/netbox
|
||||
- https://github.com/rapiz1/rathole
|
||||
- https://github.com/rundeck/rundeck
|
||||
- https://github.com/SchedMD/slurm
|
||||
|
||||
### Data & Analytics
|
||||
- https://github.com/apache/seatunnel
|
||||
- https://github.com/datahub-project/datahub
|
||||
- https://github.com/gristlabs/grist-core
|
||||
- https://github.com/jamovi/jamovi
|
||||
- https://github.com/langfuse/langfuse
|
||||
- https://github.com/nautechsystems/nautilus_trader
|
||||
|
||||
### Business & Productivity
|
||||
- https://github.com/cortezaproject/corteza
|
||||
- https://github.com/HeyPuter/puter
|
||||
- https://github.com/inventree/InvenTree
|
||||
- https://github.com/jgraph/docker-drawio
|
||||
- https://github.com/jhpyle/docassemble
|
||||
- https://github.com/juspay/hyperswitch
|
||||
- https://github.com/killbill/killbill
|
||||
- https://github.com/midday-ai/midday
|
||||
- https://github.com/oat-sa/package-tao
|
||||
- https://github.com/openboxes/openboxes
|
||||
- https://github.com/Payroll-Engine/PayrollEngine
|
||||
- https://github.com/pimcore/pimcore
|
||||
- https://github.com/PLMore/PLMore
|
||||
- https://github.com/sebo-b/warp
|
||||
|
||||
### Industry & Specialized Solutions
|
||||
- https://github.com/BOINC/boinc
|
||||
- https://github.com/chirpstack/chirpstack
|
||||
- https://github.com/consuldemocracy/consuldemocracy
|
||||
- https://github.com/elabftw/elabftw
|
||||
- https://github.com/f4exb/sdrangel
|
||||
- https://gitlab.com/librespacefoundation/satnogs
|
||||
- https://github.com/opulo-inc/autobom
|
||||
- https://github.com/Resgrid/Core
|
||||
- https://github.com/wireviz/wireviz-web
|
||||
- https://github.com/wireviz/WireViz
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
To use these Cloudron packages:
|
||||
|
||||
1. Navigate to the CloudronStack directory:
|
||||
```bash
|
||||
cd CloudronStack
|
||||
```
|
||||
|
||||
2. Review the `collab/` directory for planning documents and collaboration notes:
|
||||
```bash
|
||||
ls -la collab/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🧭 Working Agreement
|
||||
- **Stacks stay in sync.** When you add or modify automation, update both the relevant stack README and any linked prompts/docs.
|
||||
- **Collab vs Output.** Use `collab/` for planning and prompts, keep runnable artifacts under `output/`.
|
||||
- **Document forward.** New workflows should land alongside tests and a short entry in the appropriate README table.
|
||||
- **AI Agent Coordination.** Use Qwen agents for documentation updates, code changes, and maintaining consistency across stacks.
|
||||
|
||||
---
|
||||
|
||||
## 🤖 AI Agent
|
||||
This stack is maintained by **CloudronBot**, an AI agent focused on CloudronStack documentation and packaging.
|
||||
|
||||
---
|
||||
|
||||
## 📄 License
|
||||
See [LICENSE](../LICENSE) for full terms. Contributions are welcome—open a discussion in the relevant stack's `collab/` area to kick things off.
|
||||
@@ -1,61 +0,0 @@
|
||||
https://github.com/target/goalert
|
||||
https://github.com/tirrenotechnologies/tirreno
|
||||
https://github.com/runmedev/runme
|
||||
https://github.com/datahub-project/datahub
|
||||
https://github.com/jhpyle/docassemble
|
||||
https://github.com/pimcore/pimcore
|
||||
https://github.com/kazhuravlev/database-gateway
|
||||
https://github.com/adnanh/webhook
|
||||
https://github.com/metrue/fx
|
||||
https://github.com/fonoster/fonoster
|
||||
https://github.com/oat-sa
|
||||
https://github.com/rundeck/rundeck
|
||||
https://github.com/juspay/hyperswitch
|
||||
https://github.com/Payroll-Engine/PayrollEngine
|
||||
https://github.com/openboxes/openboxes
|
||||
https://github.com/nautechsystems/nautilus_trader
|
||||
https://github.com/apache/apisix
|
||||
https://github.com/gristlabs/grist-core
|
||||
https://github.com/healthchecks/healthchecks
|
||||
https://github.com/fleetdm/fleet
|
||||
https://github.com/netbox-community/netbox
|
||||
https://github.com/apache/seatunnel
|
||||
https://github.com/rapiz1/rathole
|
||||
https://github.com/wiredlush/easy-gate
|
||||
https://github.com/huginn/huginn
|
||||
https://github.com/consuldemocracy/consuldemocracy
|
||||
https://github.com/BOINC/boinc
|
||||
https://github.com/SchedMD/slurm
|
||||
https://github.com/gophish/gophish
|
||||
https://github.com/GemGeorge/SniperPhish
|
||||
https://github.com/inventree/InvenTree
|
||||
https://github.com/mendersoftware/mender
|
||||
https://github.com/langfuse/langfuse
|
||||
https://github.com/wireviz/wireviz-web
|
||||
https://github.com/wireviz/WireViz
|
||||
https://github.com/killbill/killbill
|
||||
https://github.com/opulo-inc/autobom
|
||||
https://github.com/midday-ai/midday
|
||||
https://github.com/openblocks-dev/openblocks
|
||||
https://github.com/jgraph/docker-drawio
|
||||
https://github.com/SigNoz/signoz
|
||||
https://github.com/getsentry/sentry
|
||||
https://github.com/chirpstack/chirpstack
|
||||
https://github.com/elabftw/elabftw
|
||||
https://github.com/PLMore/PLMore
|
||||
https://gitlab.com/librespacefoundation/satnogs
|
||||
https://github.com/jamovi/jamovi
|
||||
https://github.com/reviewboard/reviewboard
|
||||
https://github.com/Resgrid/Core
|
||||
https://github.com/f4exb/sdrangel
|
||||
https://github.com/stephengpope/no-code-architects-toolkit
|
||||
https://github.com/sebo-b/warp
|
||||
https://github.com/windmill-labs/windmill
|
||||
https://github.com/cortezaproject/corteza
|
||||
https://github.com/mendersoftware
|
||||
https://github.com/security-companion/security-awareness-training
|
||||
https://github.com/strongdm/comply
|
||||
https://github.com/todogroup/policies
|
||||
https://github.com/sebo-b/warp
|
||||
https://github.com/windmill-labs/windmill
|
||||
https://github.com/HeyPuter/puter
|
||||
@@ -1,33 +0,0 @@
|
||||
Create Cloudron packages for all upstream applications found in:
|
||||
|
||||
collab/GitUrlList.txt for Cloudron.
|
||||
|
||||
Create shell scripts todo the packaging work and run three packaging projects in parallel.
|
||||
|
||||
Create a master control script to orchestrate the individual packaging scripts.
|
||||
|
||||
Create and maintain a status tracker in collab/STATUS.md
|
||||
|
||||
Take on these roles/perspectives for this chat:
|
||||
|
||||
- Prompt engineering expert
|
||||
- Cloudron packaging expert
|
||||
- Dockerfile expert
|
||||
|
||||
Do all of your work in the output/ directory tree.
|
||||
|
||||
It contains two subdirectories:
|
||||
|
||||
CloudronPackages-Artifacts for storing the actual Cloudron package artifacts
|
||||
CloudronPackages-Workspaces for cloning git repo, storing logs and whatever else is needed during packaging work
|
||||
|
||||
The docker images must build as part of package smoke testing.
|
||||
|
||||
Use this prefix for all docker images created by this process:
|
||||
|
||||
tsysdevstack-cloudron-buildtest-
|
||||
|
||||
If a packaging script has a problem and you cant solve the problem after five tries, flag it in STATUS-HumanHelp-<application> and move on
|
||||
|
||||
This is a big project. I expect it to run fully autonomously over the next four days or so. Be careful. Be Brutal. audit your work as you go.
|
||||
be methodical. start from first principles and develop a robust system and think/code defensively.
|
||||
@@ -1,87 +0,0 @@
|
||||
# Cloudron Packaging Status Tracker
|
||||
|
||||
## Overview
|
||||
This file tracks the status of Cloudron packaging for all upstream applications.
|
||||
|
||||
## Status Legend
|
||||
- ✅ COMPLETE: Successfully packaged
|
||||
- 🔄 IN PROGRESS: Currently being packaged
|
||||
- 🛑 FAILED: Packaging failed after 5+ attempts
|
||||
- ⏳ PENDING: Awaiting packaging
|
||||
|
||||
## Applications Status
|
||||
|
||||
| Application | URL | Status | Notes |
|
||||
|-------------|-----|--------|-------|
|
||||
| goalert | https://github.com/target/goalert | ⏳ PENDING | |
|
||||
| tirreno | https://github.com/tirrenotechnologies/tirreno | ⏳ PENDING | |
|
||||
| runme | https://github.com/runmedev/runme | ⏳ PENDING | |
|
||||
| datahub | https://github.com/datahub-project/datahub | ⏳ PENDING | |
|
||||
| docassemble | https://github.com/jhpyle/docassemble | ⏳ PENDING | |
|
||||
| pimcore | https://github.com/pimcore/pimcore | ⏳ PENDING | |
|
||||
| database-gateway | https://github.com/kazhuravlev/database-gateway | ⏳ PENDING | |
|
||||
| webhook | https://github.com/adnanh/webhook | ⏳ PENDING | |
|
||||
| fx | https://github.com/metrue/fx | ⏳ PENDING | |
|
||||
| fonoster | https://github.com/fonoster/fonoster | ⏳ PENDING | |
|
||||
| oat-sa | https://github.com/oat-sa | ⏳ PENDING | |
|
||||
| rundeck | https://github.com/rundeck/rundeck | ⏳ PENDING | |
|
||||
| hyperswitch | https://github.com/juspay/hyperswitch | ⏳ PENDING | |
|
||||
| PayrollEngine | https://github.com/Payroll-Engine/PayrollEngine | ⏳ PENDING | |
|
||||
| openboxes | https://github.com/openboxes/openboxes | ⏳ PENDING | |
|
||||
| nautilus_trader | https://github.com/nautechsystems/nautilus_trader | ⏳ PENDING | |
|
||||
| apisix | https://github.com/apache/apisix | ⏳ PENDING | |
|
||||
| grist-core | https://github.com/gristlabs/grist-core | ⏳ PENDING | |
|
||||
| healthchecks | https://github.com/healthchecks/healthchecks | ⏳ PENDING | |
|
||||
| fleet | https://github.com/fleetdm/fleet | ⏳ PENDING | |
|
||||
| netbox | https://github.com/netbox-community/netbox | ⏳ PENDING | |
|
||||
| seatunnel | https://github.com/apache/seatunnel | ⏳ PENDING | |
|
||||
| rathole | https://github.com/rapiz1/rathole | ⏳ PENDING | |
|
||||
| easy-gate | https://github.com/wiredlush/easy-gate | ⏳ PENDING | |
|
||||
| huginn | https://github.com/huginn/huginn | ⏳ PENDING | |
|
||||
| consuldemocracy | https://github.com/consuldemocracy/consuldemocracy | ⏳ PENDING | |
|
||||
| boinc | https://github.com/BOINC/boinc | ⏳ PENDING | |
|
||||
| slurm | https://github.com/SchedMD/slurm | ⏳ PENDING | |
|
||||
| gophish | https://github.com/gophish/gophish | ⏳ PENDING | |
|
||||
| SniperPhish | https://github.com/GemGeorge/SniperPhish | ⏳ PENDING | |
|
||||
| InvenTree | https://github.com/inventree/InvenTree | ⏳ PENDING | |
|
||||
| mender | https://github.com/mendersoftware/mender | ⏳ PENDING | |
|
||||
| langfuse | https://github.com/langfuse/langfuse | ⏳ PENDING | |
|
||||
| wireviz-web | https://github.com/wireviz/wireviz-web | ⏳ PENDING | |
|
||||
| WireViz | https://github.com/wireviz/WireViz | ⏳ PENDING | |
|
||||
| killbill | https://github.com/killbill/killbill | ⏳ PENDING | |
|
||||
| autobom | https://github.com/opulo-inc/autobom | ⏳ PENDING | |
|
||||
| midday | https://github.com/midday-ai/midday | ⏳ PENDING | |
|
||||
| openblocks | https://github.com/openblocks-dev/openblocks | ⏳ PENDING | |
|
||||
| docker-drawio | https://github.com/jgraph/docker-drawio | ⏳ PENDING | |
|
||||
| signoz | https://github.com/SigNoz/signoz | ⏳ PENDING | |
|
||||
| sentry | https://github.com/getsentry/sentry | ⏳ PENDING | |
|
||||
| chirpstack | https://github.com/chirpstack/chirpstack | ⏳ PENDING | |
|
||||
| elabftw | https://github.com/elabftw/elabftw | ⏳ PENDING | |
|
||||
| PLMore | https://github.com/PLMore/PLMore | ⏳ PENDING | |
|
||||
| satnogs | https://gitlab.com/librespacefoundation/satnogs | ⏳ PENDING | |
|
||||
| jamovi | https://github.com/jamovi/jamovi | ⏳ PENDING | |
|
||||
| reviewboard | https://github.com/reviewboard/reviewboard | ⏳ PENDING | |
|
||||
| Core | https://github.com/Resgrid/Core | ⏳ PENDING | |
|
||||
| sdrangel | https://github.com/f4exb/sdrangel | ⏳ PENDING | |
|
||||
| no-code-architects-toolkit | https://github.com/stephengpope/no-code-architects-toolkit | ⏳ PENDING | |
|
||||
| warp | https://github.com/sebo-b/warp | ⏳ PENDING | |
|
||||
| windmill | https://github.com/windmill-labs/windmill | ⏳ PENDING | |
|
||||
| corteza | https://github.com/cortezaproject/corteza | ⏳ PENDING | |
|
||||
| mendersoftware | https://github.com/mendersoftware | ⏳ PENDING | |
|
||||
| security-awareness-training | https://github.com/security-companion/security-awareness-training | ⏳ PENDING | |
|
||||
| comply | https://github.com/strongdm/comply | ⏳ PENDING | |
|
||||
| policies | https://github.com/todogroup/policies | ⏳ PENDING | |
|
||||
| puter | https://github.com/HeyPuter/puter | ⏳ PENDING | |
|
||||
|
||||
## Progress Summary
|
||||
- Total Applications: 51
|
||||
- Completed: 0 (0%)
|
||||
- In Progress: 0 (0%)
|
||||
- Failed: 0 (0%)
|
||||
- Pending: 51 (100%)
|
||||
|
||||
## Human Help Required
|
||||
None at the moment.
|
||||
|
||||
## Last Updated
|
||||
Wednesday, October 29, 2025
|
||||
@@ -1,629 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Master Control Script for Cloudron Packaging
|
||||
# This script orchestrates the packaging of all applications from GitUrlList.txt
|
||||
# It runs three packaging projects in parallel and maintains status tracking
|
||||
|
||||
set -e # Exit on any error
|
||||
set -u # Exit on undefined variables
|
||||
set -o pipefail # Exit on pipe failures
|
||||
|
||||
# Configuration
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
OUTPUT_DIR="$SCRIPT_DIR"
|
||||
ARTIFACTS_DIR="$OUTPUT_DIR/CloudronPackages-Artifacts"
|
||||
WORKSPACES_DIR="$OUTPUT_DIR/CloudronPackages-Workspaces"
|
||||
STATUS_FILE="$(dirname "$SCRIPT_DIR")/collab/STATUS.md"
|
||||
GIT_URL_LIST="$(dirname "$SCRIPT_DIR")/collab/GitUrlList.txt"
|
||||
HUMAN_HELP_DIR="$WORKSPACES_DIR/human-help-required"
|
||||
MAX_RETRIES=5
|
||||
LOG_FILE="$WORKSPACES_DIR/packaging.log"
|
||||
|
||||
# Docker image prefix
|
||||
DOCKER_PREFIX="tsysdevstack-cloudron-buildtest-"
|
||||
|
||||
# Source the packaging functions
|
||||
source "$SCRIPT_DIR/package-functions.sh"
|
||||
|
||||
# Create necessary directories
|
||||
mkdir -p "$ARTIFACTS_DIR" "$WORKSPACES_DIR" "$HUMAN_HELP_DIR"
|
||||
|
||||
# Function to log messages
|
||||
log_message() {
|
||||
local level=$1
|
||||
local message=$2
|
||||
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
|
||||
# Sanitize message to prevent injection in logs
|
||||
local clean_message=$(printf '%s\n' "$message" | sed 's/[\`\$|&;<>]//g')
|
||||
echo "[$timestamp] [$level] $clean_message" >> "$LOG_FILE"
|
||||
}
|
||||
|
||||
# Function to perform audit of the packaging process
|
||||
perform_audit() {
|
||||
log_message "INFO" "Starting audit process"
|
||||
|
||||
# Count total, completed, failed, and in-progress applications
|
||||
local total_count=$(grep -c "https://" "$GIT_URL_LIST" || echo 0)
|
||||
local completed_count=$(grep -c "✅ COMPLETE" "$STATUS_FILE" | grep -v "Progress Summary" || echo 0)
|
||||
local failed_count=$(grep -c "🛑 FAILED" "$STATUS_FILE" | grep -v "Human Help Required" || echo 0)
|
||||
local in_progress_count=$(grep -c "🔄 IN PROGRESS" "$STATUS_FILE" || echo 0)
|
||||
local pending_count=$((total_count - completed_count - failed_count - in_progress_count))
|
||||
|
||||
log_message "INFO" "Audit Summary - Total: $total_count, Completed: $completed_count, Failed: $failed_count, In Progress: $in_progress_count, Pending: $pending_count"
|
||||
|
||||
# Check for artifacts directory health
|
||||
local artifact_count=$(find "$ARTIFACTS_DIR" -mindepth 1 -maxdepth 1 -type d | wc -l)
|
||||
log_message "INFO" "Found $artifact_count artifact directories in $ARTIFACTS_DIR"
|
||||
|
||||
# Check for workspace directory health
|
||||
local workspace_count=$(find "$WORKSPACES_DIR" -mindepth 1 -maxdepth 1 -type d | grep -v "human-help-required\|packaging.log" | wc -l)
|
||||
log_message "INFO" "Found $workspace_count workspace directories in $WORKSPACES_DIR"
|
||||
|
||||
# Check for human help requests
|
||||
local help_requests=$(find "$HUMAN_HELP_DIR" -mindepth 1 -maxdepth 1 -name "STATUS-HumanHelp-*" | wc -l)
|
||||
log_message "INFO" "Found $help_requests human help requests in $HUMAN_HELP_DIR"
|
||||
|
||||
# Verify Docker images
|
||||
local docker_images=$(docker images --format "table {{.Repository}}:{{.Tag}}" | grep "$DOCKER_PREFIX" | wc -l)
|
||||
log_message "INFO" "Found $docker_images Docker images with prefix $DOCKER_PREFIX"
|
||||
|
||||
log_message "INFO" "Audit process completed"
|
||||
}
|
||||
|
||||
# Function to add a new Git URL to the list
|
||||
add_git_url() {
|
||||
local new_url=$1
|
||||
local git_list_file=${2:-"$GIT_URL_LIST"}
|
||||
|
||||
if [[ -z "$new_url" ]]; then
|
||||
log_message "ERROR" "No URL provided to add_git_url function"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Validate URL format
|
||||
if [[ ! "$new_url" =~ ^https?:// ]]; then
|
||||
log_message "ERROR" "Invalid URL format: $new_url"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check if URL already exists in the file
|
||||
if grep -Fxq "$new_url" "$git_list_file"; then
|
||||
log_message "INFO" "URL already exists in $git_list_file: $new_url"
|
||||
return 0
|
||||
fi
|
||||
|
||||
# Add the URL to the file
|
||||
echo "$new_url" >> "$git_list_file"
|
||||
log_message "INFO" "Added new URL to $git_list_file: $new_url"
|
||||
|
||||
# Also update STATUS.md to include the new application
|
||||
local repo_name=$(get_repo_name "$new_url")
|
||||
local username_repo=$(get_username_repo "$new_url")
|
||||
|
||||
# Check if the application is already in STATUS.md
|
||||
if ! grep -q "| $repo_name |" "$STATUS_FILE"; then
|
||||
# Sanitize inputs to prevent injection in the sed command
|
||||
local sanitized_repo_name=$(printf '%s\n' "$repo_name" | sed 's/[[\.*^$()+?{|]/\\&/g; s/[&/]/\\&/g')
|
||||
local sanitized_url=$(printf '%s\n' "$new_url" | sed 's/[[\.*^$()+?{|]/\\&/g; s/[&/]/\\&/g')
|
||||
|
||||
# Append the new application to the table in STATUS.md
|
||||
sed -i "/## Applications Status/,/|-----|-----|-----|-----|/ {/|-----|-----|-----|-----|/a\| $sanitized_repo_name | $sanitized_url | ⏳ PENDING | |" "$STATUS_FILE"
|
||||
log_message "INFO" "Added $repo_name to STATUS.md"
|
||||
else
|
||||
log_message "INFO" "Application $repo_name already exists in STATUS.md"
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# Function to add multiple Git URLs from a file
|
||||
add_git_urls_from_file() {
|
||||
local input_file=$1
|
||||
local git_list_file=${2:-"$GIT_URL_LIST"}
|
||||
|
||||
if [[ ! -f "$input_file" ]]; then
|
||||
log_message "ERROR" "Input file does not exist: $input_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
while IFS= read -r url; do
|
||||
# Skip empty lines and comments
|
||||
if [[ -n "$url" && ! "$url" =~ ^[[:space:]]*# ]]; then
|
||||
add_git_url "$url" "$git_list_file"
|
||||
fi
|
||||
done < "$input_file"
|
||||
|
||||
log_message "INFO" "Finished processing URLs from $input_file"
|
||||
}
|
||||
|
||||
# Function to clean up Docker resources periodically
|
||||
cleanup_docker_resources() {
|
||||
log_message "INFO" "Starting Docker resource cleanup"
|
||||
|
||||
# Remove unused Docker images that are related to our builds
|
||||
# Use a broader pattern match since we now include timestamps in image names
|
||||
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.ID}}" | grep "$DOCKER_PREFIX" | awk '{print $3}' | xargs -r docker rmi -f 2>/dev/null || true
|
||||
|
||||
# Alternative: Remove all images with our prefix pattern (for cases where the grep doesn't catch all variations)
|
||||
docker images -q --filter "reference=$DOCKER_PREFIX*" | xargs -r docker rmi -f 2>/dev/null || true
|
||||
|
||||
# Remove exited containers
|
||||
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.ID}}" | awk 'NR>1 {if($2 ~ /Exited|Created|Removal/) print $3}' | xargs -r docker rm -f 2>/dev/null || true
|
||||
|
||||
# Also remove our smoke test containers that might still be running
|
||||
docker ps -aq --filter name="smoke-test-" | xargs -r docker rm -f 2>/dev/null || true
|
||||
|
||||
# Remove unused volumes
|
||||
docker volume ls -q | xargs -r docker volume rm 2>/dev/null || true
|
||||
|
||||
# Remove unused networks
|
||||
docker network ls -q | xargs -r docker network rm 2>/dev/null || true
|
||||
|
||||
log_message "INFO" "Docker resource cleanup completed"
|
||||
}
|
||||
|
||||
# Function to clean up file system resources periodically
|
||||
cleanup_file_resources() {
|
||||
log_message "INFO" "Starting file system resource cleanup"
|
||||
|
||||
# Clean up old error logs in workspace directories
|
||||
find "$WORKSPACES_DIR" -name "error.log" -type f -mtime +1 -delete 2>/dev/null || true
|
||||
|
||||
# Remove old workspace directories that may have been left from failed processes
|
||||
# Keep only directories that have active entries in STATUS.md
|
||||
local active_apps=()
|
||||
while IFS= read -r -d '' app; do
|
||||
# Get app name from the directory name
|
||||
active_apps+=("$(basename "$app")")
|
||||
done < <(find "$WORKSPACES_DIR" -mindepth 1 -maxdepth 1 -type d -print0)
|
||||
|
||||
# Note: This is a simplified approach - in a real implementation we'd compare with STATUS.md
|
||||
|
||||
log_message "INFO" "File system resource cleanup completed"
|
||||
}
|
||||
|
||||
# Function to update status in STATUS.md
|
||||
update_status() {
|
||||
local app_name=$1
|
||||
local new_status=$2
|
||||
local notes=${3:-""}
|
||||
|
||||
# Validate inputs to prevent injection
|
||||
if [[ -z "$app_name" ]] || [[ -z "$new_status" ]]; then
|
||||
log_message "ERROR" "Empty app_name or new_status in update_status function"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Sanitize inputs to prevent injection
|
||||
# Remove any pipe characters which would interfere with table format
|
||||
local clean_app_name=$(printf '%s\n' "$app_name" | sed 's/|//g; s/[[\.*^$()+?{|]/\\&/g')
|
||||
local clean_status=$(printf '%s\n' "$new_status" | sed 's/|//g; s/[[\.*^$()+?{|]/\\&/g')
|
||||
local clean_notes=$(printf '%s\n' "$notes" | sed 's/|//g; s/[[\.*^$()+?{|]/\\&/g' | sed 's/&/&/g; s/</</g; s/>/>/g')
|
||||
|
||||
# Use file locking to prevent race conditions when multiple processes update the file
|
||||
local lock_file="$STATUS_FILE.lock"
|
||||
exec 200>"$lock_file"
|
||||
flock -x 200 # Exclusive lock
|
||||
|
||||
# Update status in the file - find the line with the app name and update its status
|
||||
# Use a more targeted sed pattern to reduce chance of unintended matches
|
||||
sed -i "s/^| $clean_app_name | \([^|]*\) | \([^|]*\) | \([^|]*\) |$/| $clean_app_name | \1 | $clean_status | $clean_notes |/" "$STATUS_FILE"
|
||||
|
||||
# Release the lock by closing the file descriptor
|
||||
exec 200>&-
|
||||
|
||||
log_message "INFO" "Updated status for $app_name to $new_status"
|
||||
}
|
||||
|
||||
# Function to get the repository name from URL
|
||||
get_repo_name() {
|
||||
local url=$1
|
||||
if [[ -z "$url" ]]; then
|
||||
log_message "ERROR" "URL is empty in get_repo_name function"
|
||||
echo "unknown-repo"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Extract the basename more securely by using parameter expansion
|
||||
# First remove any trailing slashes
|
||||
local clean_url="${url%/}"
|
||||
local repo_part="${clean_url##*/}"
|
||||
repo_part="${repo_part%.git}"
|
||||
|
||||
# Sanitize the repo name to contain only valid characters
|
||||
local sanitized=$(printf '%s\n' "$repo_part" | sed 's/[^a-zA-Z0-9._-]/-/g')
|
||||
|
||||
# Double-check to prevent path traversal
|
||||
sanitized=$(printf '%s\n' "$sanitized" | sed 's/\.\.//g; s/\/\///g')
|
||||
|
||||
# Ensure the result is not empty
|
||||
if [[ -z "$sanitized" ]] || [[ "$sanitized" == "." ]] || [[ "$sanitized" == ".." ]]; then
|
||||
sanitized="unknown-repo-$(date +%s)"
|
||||
fi
|
||||
|
||||
echo "$sanitized"
|
||||
}
|
||||
|
||||
# Function to extract username/repo from URL for GitHub/GitLab/other
|
||||
get_username_repo() {
|
||||
local url=$1
|
||||
if [[ -z "$url" ]]; then
|
||||
log_message "ERROR" "URL is empty in get_username_repo function"
|
||||
echo "unknown-user/unknown-repo"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Clean the URL to prevent path traversal
|
||||
local clean_url="${url#*://}" # Remove protocol
|
||||
clean_url="${clean_url#*[email]*/}" # Remove potential user@host
|
||||
|
||||
if [[ "$clean_url" == *"github.com"* ]]; then
|
||||
# Extract username/repo from GitHub URL
|
||||
local path=${clean_url#*github.com/}
|
||||
path=${path%.git}
|
||||
# Ensure we have a valid path
|
||||
if [[ "$path" != *"/"* ]] || [[ "$path" == "/" ]]; then
|
||||
# If there's no slash, it might be malformed, use repo name
|
||||
path="unknown-user/$(get_repo_name "$url")"
|
||||
else
|
||||
# Sanitize the path to prevent directory traversal
|
||||
path=$(printf '%s\n' "$path" | sed 's/\.\.//g; s/\/\///g')
|
||||
fi
|
||||
echo "$path"
|
||||
elif [[ "$clean_url" == *"gitlab.com"* ]]; then
|
||||
# Extract username/repo from GitLab URL
|
||||
local path=${clean_url#*gitlab.com/}
|
||||
path=${path%.git}
|
||||
# Ensure we have a valid path
|
||||
if [[ "$path" != *"/"* ]] || [[ "$path" == "/" ]]; then
|
||||
# If there's no slash, it might be malformed, use repo name
|
||||
path="unknown-user/$(get_repo_name "$url")"
|
||||
else
|
||||
# Sanitize the path to prevent directory traversal
|
||||
path=$(printf '%s\n' "$path" | sed 's/\.\.//g; s/\/\///g')
|
||||
fi
|
||||
echo "$path"
|
||||
else
|
||||
# For other URLs, try to extract pattern user/repo
|
||||
local path=${clean_url#*/} # Remove host part
|
||||
if [[ "$path" == *"/"* ]]; then
|
||||
path=${path%.git}
|
||||
# Sanitize the path to prevent directory traversal
|
||||
path=$(printf '%s\n' "$path" | sed 's/\.\.//g; s/\/\///g')
|
||||
else
|
||||
# If no slash, use a generic format
|
||||
local repo=$(get_repo_name "$url")
|
||||
path="unknown-user/$repo"
|
||||
fi
|
||||
echo "$path"
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to run individual packaging script
|
||||
run_packaging_script() {
|
||||
local url=$1
|
||||
local repo_name=$(get_repo_name "$url")
|
||||
local username_repo=$(get_username_repo "$url")
|
||||
local workspace_dir="$WORKSPACES_DIR/$repo_name"
|
||||
local artifact_dir="$ARTIFACTS_DIR/$repo_name"
|
||||
|
||||
echo "$(date): Starting packaging for $repo_name ($url)" >> "$WORKSPACES_DIR/packaging.log"
|
||||
|
||||
# Update status to IN PROGRESS
|
||||
update_status "$repo_name" "🔄 IN PROGRESS" "Packaging started"
|
||||
|
||||
# Initialize workspace
|
||||
mkdir -p "$workspace_dir" "$artifact_dir"
|
||||
|
||||
# Clone repository
|
||||
if [ ! -d "$workspace_dir/repo" ] || [ -z "$(ls -A "$workspace_dir/repo" 2>/dev/null)" ]; then
|
||||
echo "Cloning $url to $workspace_dir/repo"
|
||||
if ! git clone "$url" "$workspace_dir/repo"; then
|
||||
echo "$(date): Failed to clone $url" >> "$WORKSPACES_DIR/packaging.log"
|
||||
update_status "$repo_name" "🛑 FAILED" "Failed to clone repository"
|
||||
return 1
|
||||
fi
|
||||
else
|
||||
# Update repository
|
||||
echo "Updating $url in $workspace_dir/repo"
|
||||
if ! (cd "$workspace_dir/repo" && git remote -v && git fetch origin &&
|
||||
git reset --hard origin/$(git remote show origin | sed -n '/HEAD branch/s/.*: //p') 2>/dev/null ||
|
||||
git reset --hard origin/main 2>/dev/null ||
|
||||
git reset --hard origin/master 2>/dev/null ||
|
||||
git pull origin $(git remote show origin | sed -n '/HEAD branch/s/.*: //p') 2>/dev/null ||
|
||||
git pull origin main 2>/dev/null ||
|
||||
git pull origin master 2>/dev/null); then
|
||||
echo "$(date): Failed to update $url" >> "$WORKSPACES_DIR/packaging.log"
|
||||
update_status "$repo_name" "🔄 IN PROGRESS" "Repo update failed, will retry with fresh clone"
|
||||
# Remove the repo and try to clone again
|
||||
rm -rf "$workspace_dir/repo"
|
||||
if ! git clone "$url" "$workspace_dir/repo"; then
|
||||
echo "$(date): Failed to re-clone $url after update failure" >> "$WORKSPACES_DIR/packaging.log"
|
||||
update_status "$repo_name" "🛑 FAILED" "Failed to update or re-clone repository"
|
||||
return 1
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# Attempt packaging with retries
|
||||
local attempt=1
|
||||
local success=0
|
||||
|
||||
while [ $attempt -le $MAX_RETRIES ] && [ $success -eq 0 ]; do
|
||||
echo "$(date): Attempt $attempt/$MAX_RETRIES for $repo_name" >> "$WORKSPACES_DIR/packaging.log"
|
||||
|
||||
# Capture the output and error of the packaging function
|
||||
if package_application "$repo_name" "$username_repo" "$workspace_dir" "$artifact_dir" "$url" 2>"$workspace_dir/error.log"; then
|
||||
success=1
|
||||
update_status "$repo_name" "✅ COMPLETE" "Packaged successfully on attempt $attempt"
|
||||
echo "$(date): Successfully packaged $repo_name on attempt $attempt" >> "$WORKSPACES_DIR/packaging.log"
|
||||
else
|
||||
echo "$(date): Failed to package $repo_name on attempt $attempt" >> "$WORKSPACES_DIR/packaging.log"
|
||||
cat "$workspace_dir/error.log" >> "$WORKSPACES_DIR/packaging.log"
|
||||
((attempt++))
|
||||
fi
|
||||
done
|
||||
|
||||
if [ $success -eq 0 ]; then
|
||||
# Mark as failed and create human help request with more detailed information
|
||||
local error_details=""
|
||||
if [ -f "$workspace_dir/error.log" ]; then
|
||||
error_details=$(cat "$workspace_dir/error.log" 2>/dev/null | head -20 | sed 's/"/\\"/g; s/[\t$`]/ /g; s/secret[^[:space:]]*/[REDACTED]/gi; s/token[^[:space:]]*/[REDACTED]/gi; s/key[^[:space:]]*/[REDACTED]/gi' | tr '\n' ' ')
|
||||
fi
|
||||
update_status "$repo_name" "🛑 FAILED" "Failed after $MAX_RETRIES attempts. Error: $error_details"
|
||||
# Create a detailed human help file with proper sanitization
|
||||
{
|
||||
echo "Application: $repo_name"
|
||||
echo "URL: $url"
|
||||
echo "Issue: Failed to package after $MAX_RETRIES attempts"
|
||||
echo "Date: $(date)"
|
||||
echo "Error Details:"
|
||||
if [ -f "$workspace_dir/error.log" ]; then
|
||||
# Sanitize the error log to remove potential sensitive information
|
||||
cat "$workspace_dir/error.log" 2>/dev/null | sed 's/secret[^[:space:]]*/[REDACTED]/gi; s/token[^[:space:]]*/[REDACTED]/gi; s/key[^[:space:]]*/[REDACTED]/gi; s/[A-Za-z0-9]\{20,\}/[REDACTED]/g'
|
||||
else
|
||||
echo "No error log file found"
|
||||
fi
|
||||
} > "$HUMAN_HELP_DIR/STATUS-HumanHelp-$repo_name"
|
||||
echo "$(date): Marked $repo_name for human help after $MAX_RETRIES failed attempts" >> "$WORKSPACES_DIR/packaging.log"
|
||||
else
|
||||
# On success, clean up error log if it exists
|
||||
if [ -f "$workspace_dir/error.log" ]; then
|
||||
rm -f "$workspace_dir/error.log"
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to package a specific application
|
||||
package_application() {
|
||||
local repo_name=$1
|
||||
local username_repo=$2
|
||||
local workspace_dir=$3
|
||||
local artifact_dir=$4
|
||||
local url=${5:-"https://github.com/unknown-user/$repo_name"} # Default URL if not provided
|
||||
|
||||
local repo_path="$workspace_dir/repo"
|
||||
|
||||
# Use the function library to detect and package the application
|
||||
detect_and_package "$repo_name" "$repo_path" "$artifact_dir" "$url"
|
||||
}
|
||||
|
||||
# Function to create a Dockerfile based on the application type
|
||||
create_dockerfile() {
|
||||
local repo_name=$1
|
||||
local repo_path=$2
|
||||
|
||||
# Detect application type and create appropriate Dockerfile
|
||||
# This is a simplified approach - in reality, this would be much more complex
|
||||
|
||||
if [ -f "$repo_path/package.json" ]; then
|
||||
# Node.js application
|
||||
cat > "$repo_path/Dockerfile" << EOF
|
||||
FROM node:18-alpine
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY package*.json ./
|
||||
RUN npm install
|
||||
|
||||
COPY . .
|
||||
|
||||
EXPOSE 3000
|
||||
|
||||
CMD ["npm", "start"]
|
||||
EOF
|
||||
elif [ -f "$repo_path/requirements.txt" ]; then
|
||||
# Python application
|
||||
cat > "$repo_path/Dockerfile" << EOF
|
||||
FROM python:3.11-slim
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY requirements.txt .
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
COPY . .
|
||||
|
||||
EXPOSE 8000
|
||||
|
||||
CMD ["python", "app.py"]
|
||||
EOF
|
||||
elif [ -f "$repo_path/composer.json" ]; then
|
||||
# PHP application
|
||||
cat > "$repo_path/Dockerfile" << EOF
|
||||
FROM php:8.1-apache
|
||||
|
||||
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
|
||||
|
||||
COPY . /var/www/html/
|
||||
|
||||
EXPOSE 80
|
||||
|
||||
CMD ["apache2-foreground"]
|
||||
EOF
|
||||
elif [ -f "$repo_path/Gemfile" ]; then
|
||||
# Ruby application
|
||||
cat > "$repo_path/Dockerfile" << EOF
|
||||
FROM ruby:3.0
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY Gemfile Gemfile.lock ./
|
||||
RUN bundle install
|
||||
|
||||
COPY . .
|
||||
|
||||
EXPOSE 3000
|
||||
|
||||
CMD ["ruby", "app.rb"]
|
||||
EOF
|
||||
else
|
||||
# Default to a basic server
|
||||
cat > "$repo_path/Dockerfile" << EOF
|
||||
FROM alpine:latest
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY . .
|
||||
|
||||
RUN apk add --no-cache bash
|
||||
|
||||
EXPOSE 8080
|
||||
|
||||
CMD ["sh", "-c", "while true; do sleep 30; done"]
|
||||
EOF
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to load URLs from Git URL list file
|
||||
load_git_urls() {
|
||||
local git_list_file=${1:-"$GIT_URL_LIST"}
|
||||
local urls=()
|
||||
|
||||
if [[ ! -f "$git_list_file" ]]; then
|
||||
log_message "ERROR" "Git URL list file does not exist: $git_list_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
while IFS= read -r line; do
|
||||
# Skip empty lines and comments
|
||||
if [[ -n "$line" && ! "$line" =~ ^[[:space:]]*# ]]; then
|
||||
# Validate that the line looks like a URL
|
||||
if [[ "$line" =~ ^https?:// ]]; then
|
||||
urls+=("$line")
|
||||
else
|
||||
log_message "WARN" "Invalid URL format skipped: $line"
|
||||
fi
|
||||
fi
|
||||
done < "$git_list_file"
|
||||
|
||||
# Print the urls array to stdout so the caller can capture it
|
||||
printf '%s\n' "${urls[@]}"
|
||||
}
|
||||
|
||||
# Main function to process all applications
|
||||
main() {
|
||||
log_message "INFO" "Starting Cloudron packaging process"
|
||||
|
||||
# Validate that required files exist
|
||||
if [[ ! -f "$SCRIPT_DIR/package-functions.sh" ]]; then
|
||||
log_message "ERROR" "Package functions file does not exist: $SCRIPT_DIR/package-functions.sh"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Load URLs from the git list file
|
||||
local url_list
|
||||
mapfile -t url_list < <(load_git_urls)
|
||||
local total=${#url_list[@]}
|
||||
log_message "INFO" "Found $total URLs to process"
|
||||
|
||||
# Process applications in batches of 3 for parallel execution
|
||||
local i=0
|
||||
local batch_count=0
|
||||
|
||||
# Add heartbeat file to track process is alive
|
||||
local heartbeat_file="$WORKSPACES_DIR/process-heartbeat-$(date +%s).tmp"
|
||||
touch "$heartbeat_file"
|
||||
|
||||
while [ $i -lt $total ]; do
|
||||
# Process up to 3 applications in parallel
|
||||
local end=$((i + 3))
|
||||
[ $end -gt $total ] && end=$total
|
||||
|
||||
log_message "INFO" "Starting batch with applications $(printf '%s; ' "${url_list[@]:i:3}")"
|
||||
|
||||
for ((j = i; j < end; j++)); do
|
||||
log_message "INFO" "Starting packaging for ${url_list[$j]}"
|
||||
run_packaging_script "${url_list[$j]}" &
|
||||
done
|
||||
|
||||
# Wait for all background processes to complete
|
||||
wait
|
||||
|
||||
# Update heartbeat to show process is active
|
||||
touch "$heartbeat_file"
|
||||
|
||||
# Perform audit after each batch
|
||||
perform_audit
|
||||
|
||||
# Perform resource cleanup every 10 batches to prevent resource exhaustion during long runs
|
||||
((batch_count++))
|
||||
if [ $((batch_count % 10)) -eq 0 ]; then
|
||||
log_message "INFO" "Performing periodic resource cleanup after batch $batch_count"
|
||||
cleanup_docker_resources
|
||||
cleanup_file_resources
|
||||
fi
|
||||
|
||||
# Check for critical errors that might require stopping
|
||||
local failed_count_current=$(grep -o "🛑 FAILED" "$STATUS_FILE" | wc -l)
|
||||
local total_failed_since_start=$((failed_count_current))
|
||||
|
||||
# Optional: Add logic for stopping if too many failures occur in a row
|
||||
# This is commented out but can be enabled if needed
|
||||
# if [ $total_failed_since_start -gt 50 ]; then
|
||||
# log_message "ERROR" "Too many failures (${total_failed_since_start}), stopping process"
|
||||
# break
|
||||
# fi
|
||||
|
||||
# Update i for the next batch
|
||||
i=$end
|
||||
|
||||
# Update progress summary in STATUS.md
|
||||
local completed=$(grep -o "✅ COMPLETE" "$STATUS_FILE" | wc -l)
|
||||
local failed=$(grep -o "🛑 FAILED" "$STATUS_FILE" | wc -l)
|
||||
local in_progress=$(grep -o "🔄 IN PROGRESS" "$STATUS_FILE" | wc -l)
|
||||
local pending=$((total - completed - failed - in_progress))
|
||||
|
||||
# Ensure we don't have negative pending due to counting issues
|
||||
[ $pending -lt 0 ] && pending=0
|
||||
|
||||
# Update summary section in STATUS.md
|
||||
sed -i '/## Progress Summary/Q' "$STATUS_FILE"
|
||||
cat >> "$STATUS_FILE" << EOF
|
||||
## Progress Summary
|
||||
- Total Applications: $total
|
||||
- Completed: $completed ($(awk "BEGIN {printf \"%.0f\", $completed * 100 / $total}")%)
|
||||
- In Progress: $in_progress ($(awk "BEGIN {printf \"%.0f\", $in_progress * 100 / $total}")%)
|
||||
- Failed: $failed ($(awk "BEGIN {printf \"%.0f\", $failed * 100 / $total}")%)
|
||||
- Pending: $pending ($(awk "BEGIN {printf \"%.0f\", $pending * 100 / $total}")%)
|
||||
|
||||
## Human Help Required
|
||||
$(ls -1 "$HUMAN_HELP_DIR" 2>/dev/null || echo "None at the moment.")
|
||||
|
||||
## Last Updated
|
||||
$(date)
|
||||
EOF
|
||||
done
|
||||
|
||||
# Final cleanup
|
||||
rm -f "$heartbeat_file" 2>/dev/null || true
|
||||
|
||||
# Final audit
|
||||
perform_audit
|
||||
log_message "INFO" "Completed Cloudron packaging process"
|
||||
}
|
||||
|
||||
# Run the main function if script is executed directly
|
||||
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||
main "$@"
|
||||
fi
|
||||
@@ -1,761 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Function library for Cloudron packaging
|
||||
# Contains specific packaging functions for different application types
|
||||
|
||||
set -e # Exit on any error
|
||||
|
||||
# Function to package generic Node.js application
|
||||
package_nodejs_app() {
|
||||
local app_name=$1
|
||||
local app_dir=$2
|
||||
local artifact_dir=$3
|
||||
local app_url=${4:-"https://github.com/unknown-user/$app_name"} # Default URL if not provided
|
||||
|
||||
cd "$app_dir"
|
||||
|
||||
# Extract username/repo from the app_url for manifest
|
||||
local repo_path
|
||||
if [[ "$app_url" == *"github.com"* ]]; then
|
||||
repo_path=${app_url#*github.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
elif [[ "$app_url" == *"gitlab.com"* ]]; then
|
||||
repo_path=${app_url#*gitlab.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
else
|
||||
repo_path="unknown-user/$app_name"
|
||||
fi
|
||||
|
||||
# Create .dockerignore to exclude sensitive files
|
||||
cat > .dockerignore << 'DOCKERIGNORE_EOF'
|
||||
.git
|
||||
.gitignore
|
||||
*.env
|
||||
*.key
|
||||
*.pem
|
||||
*.crt
|
||||
*.cert
|
||||
Dockerfile
|
||||
.dockerignore
|
||||
*.log
|
||||
node_modules
|
||||
__pycache__
|
||||
.pytest_cache
|
||||
.coverage
|
||||
.vscode
|
||||
.idea
|
||||
*.swp
|
||||
*.swo
|
||||
.DS_Store
|
||||
Thumbs.db
|
||||
README.md
|
||||
CHANGELOG.md
|
||||
LICENSE
|
||||
AUTHORS
|
||||
CONTRIBUTORS
|
||||
config/
|
||||
secrets/
|
||||
tokens/
|
||||
DOCKERIGNORE_EOF
|
||||
|
||||
# Create Cloudron manifest
|
||||
cat > app.manifest << EOF
|
||||
{
|
||||
"id": "com.$(echo "$repo_path" | sed 's/[^a-zA-Z0-9]/./g').cloudron",
|
||||
"title": "$app_name",
|
||||
"version": "1.0.0",
|
||||
"build": "1",
|
||||
"description": "Cloudron package for $app_name",
|
||||
"author": "Auto-generated",
|
||||
"website": "$app_url",
|
||||
"admin": false,
|
||||
"tags": ["nodejs", "auto-generated"],
|
||||
"logo": "https://github.com/fluidicon.png",
|
||||
"documentation": "$app_url",
|
||||
"changelog": "Initial packaging"
|
||||
}
|
||||
EOF
|
||||
|
||||
# Determine the appropriate start command and port from package.json if available
|
||||
local start_cmd="npm start"
|
||||
local port=3000
|
||||
|
||||
if [ -f "package.json" ]; then
|
||||
# Try to extract the start script from package.json
|
||||
if command -v jq >/dev/null 2>&1; then
|
||||
# Use jq if available to properly parse JSON
|
||||
local scripts_start=$(jq -r '.scripts.start // empty' package.json 2>/dev/null)
|
||||
if [ -n "$scripts_start" ] && [ "$scripts_start" != "null" ]; then
|
||||
start_cmd="$scripts_start"
|
||||
fi
|
||||
|
||||
# Look for port configuration in various common places
|
||||
local configured_port=$(jq -r '.config.port // .port // empty' package.json 2>/dev/null)
|
||||
if [ -n "$configured_port" ] && [ "$configured_port" != "null" ] && [ "$configured_port" -gt 0 ] 2>/dev/null; then
|
||||
port=$configured_port
|
||||
fi
|
||||
else
|
||||
# Fallback to grep if jq is not available
|
||||
local scripts_start=$(grep -o '"start": *"[^"]*"' package.json | head -1 | cut -d'"' -f4)
|
||||
if [ -n "$scripts_start" ]; then
|
||||
start_cmd="$scripts_start"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# Create Dockerfile for Node.js with appropriate start command and port
|
||||
cat > Dockerfile << EOF
|
||||
FROM node:18-alpine
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY package*.json ./
|
||||
RUN npm install --only=production
|
||||
|
||||
COPY . .
|
||||
|
||||
EXPOSE $port
|
||||
|
||||
CMD $start_cmd
|
||||
EOF
|
||||
|
||||
# Build Docker image with a more unique name to avoid conflicts in parallel execution
|
||||
local docker_image="tsysdevstack-cloudron-buildtest-${app_name//[^a-zA-Z0-9]/-}-$(date +%s%N | cut -c1-10):latest"
|
||||
if ! docker build -t "$docker_image" .; then
|
||||
echo "Failed to build Docker image for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Perform smoke test on the Docker image
|
||||
if ! smoke_test_docker_image "$docker_image" "$app_name"; then
|
||||
echo "Smoke test failed for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Save the Docker image as an artifact
|
||||
docker save "$docker_image" | gzip > "$artifact_dir/${app_name//[^a-zA-Z0-9]/-}-$(date +%s).tar.gz"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Function to package generic Python application
|
||||
package_python_app() {
|
||||
local app_name=$1
|
||||
local app_dir=$2
|
||||
local artifact_dir=$3
|
||||
local app_url=${4:-"https://github.com/unknown-user/$app_name"} # Default URL if not provided
|
||||
|
||||
cd "$app_dir"
|
||||
|
||||
# Extract username/repo from the app_url for manifest
|
||||
local repo_path
|
||||
if [[ "$app_url" == *"github.com"* ]]; then
|
||||
repo_path=${app_url#*github.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
elif [[ "$app_url" == *"gitlab.com"* ]]; then
|
||||
repo_path=${app_url#*gitlab.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
else
|
||||
repo_path="unknown-user/$app_name"
|
||||
fi
|
||||
|
||||
# Create .dockerignore to exclude sensitive files
|
||||
cat > .dockerignore << 'DOCKERIGNORE_EOF'
|
||||
.git
|
||||
.gitignore
|
||||
*.env
|
||||
*.key
|
||||
*.pem
|
||||
*.crt
|
||||
*.cert
|
||||
Dockerfile
|
||||
.dockerignore
|
||||
*.log
|
||||
node_modules
|
||||
__pycache__
|
||||
.pytest_cache
|
||||
.coverage
|
||||
.vscode
|
||||
.idea
|
||||
*.swp
|
||||
*.swo
|
||||
.DS_Store
|
||||
Thumbs.db
|
||||
README.md
|
||||
CHANGELOG.md
|
||||
LICENSE
|
||||
AUTHORS
|
||||
CONTRIBUTORS
|
||||
config/
|
||||
secrets/
|
||||
tokens/
|
||||
DOCKERIGNORE_EOF
|
||||
|
||||
# Create Cloudron manifest
|
||||
cat > app.manifest << EOF
|
||||
{
|
||||
"id": "com.$(echo "$repo_path" | sed 's/[^a-zA-Z0-9]/./g').cloudron",
|
||||
"title": "$app_name",
|
||||
"version": "1.0.0",
|
||||
"build": "1",
|
||||
"description": "Cloudron package for $app_name",
|
||||
"author": "Auto-generated",
|
||||
"website": "$app_url",
|
||||
"admin": false,
|
||||
"tags": ["python", "auto-generated"],
|
||||
"logo": "https://github.com/fluidicon.png",
|
||||
"documentation": "$app_url",
|
||||
"changelog": "Initial packaging"
|
||||
}
|
||||
EOF
|
||||
|
||||
# Try to determine the appropriate start command and port
|
||||
local start_cmd="python app.py"
|
||||
local port=8000
|
||||
|
||||
# Look for common Python web framework indicators
|
||||
if [ -f "requirements.txt" ]; then
|
||||
if grep -E -i "flask" requirements.txt >/dev/null; then
|
||||
start_cmd="python -m flask run --host=0.0.0.0 --port=$port"
|
||||
elif grep -E -i "django" requirements.txt >/dev/null; then
|
||||
start_cmd="python manage.py runserver 0.0.0.0:$port"
|
||||
port=8000
|
||||
elif grep -E -i "fastapi" requirements.txt >/dev/null; then
|
||||
start_cmd="uvicorn main:app --host 0.0.0.0 --port $port"
|
||||
if [ ! -f "main.py" ] && [ -f "app.py" ]; then
|
||||
start_cmd="uvicorn app:app --host 0.0.0.0 --port $port"
|
||||
fi
|
||||
elif grep -E -i "gunicorn" requirements.txt >/dev/null; then
|
||||
if [ -f "wsgi.py" ]; then
|
||||
start_cmd="gunicorn wsgi:application --bind 0.0.0.0:$port"
|
||||
elif [ -f "app.py" ]; then
|
||||
start_cmd="gunicorn app:app --bind 0.0.0.0:$port"
|
||||
else
|
||||
start_cmd="gunicorn app:application --bind 0.0.0.0:$port"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# Create Dockerfile for Python with appropriate start command and port
|
||||
cat > Dockerfile << EOF
|
||||
FROM python:3.11-slim
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY requirements.txt .
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
COPY . .
|
||||
|
||||
EXPOSE $port
|
||||
|
||||
CMD $start_cmd
|
||||
EOF
|
||||
|
||||
# Build Docker image with a more unique name to avoid conflicts in parallel execution
|
||||
local docker_image="tsysdevstack-cloudron-buildtest-${app_name//[^a-zA-Z0-9]/-}-$(date +%s%N | cut -c1-10):latest"
|
||||
if ! docker build -t "$docker_image" .; then
|
||||
echo "Failed to build Docker image for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Perform smoke test on the Docker image
|
||||
if ! smoke_test_docker_image "$docker_image" "$app_name"; then
|
||||
echo "Smoke test failed for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Save the Docker image as an artifact
|
||||
docker save "$docker_image" | gzip > "$artifact_dir/${app_name//[^a-zA-Z0-9]/-}-$(date +%s).tar.gz"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Function to package generic PHP application
|
||||
package_php_app() {
|
||||
local app_name=$1
|
||||
local app_dir=$2
|
||||
local artifact_dir=$3
|
||||
local app_url=${4:-"https://github.com/unknown-user/$app_name"} # Default URL if not provided
|
||||
|
||||
cd "$app_dir"
|
||||
|
||||
# Extract username/repo from the app_url for manifest
|
||||
local repo_path
|
||||
if [[ "$app_url" == *"github.com"* ]]; then
|
||||
repo_path=${app_url#*github.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
elif [[ "$app_url" == *"gitlab.com"* ]]; then
|
||||
repo_path=${app_url#*gitlab.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
else
|
||||
repo_path="unknown-user/$app_name"
|
||||
fi
|
||||
|
||||
# Create .dockerignore to exclude sensitive files
|
||||
cat > .dockerignore << 'DOCKERIGNORE_EOF'
|
||||
.git
|
||||
.gitignore
|
||||
*.env
|
||||
*.key
|
||||
*.pem
|
||||
*.crt
|
||||
*.cert
|
||||
Dockerfile
|
||||
.dockerignore
|
||||
*.log
|
||||
node_modules
|
||||
__pycache__
|
||||
.pytest_cache
|
||||
.coverage
|
||||
.vscode
|
||||
.idea
|
||||
*.swp
|
||||
*.swo
|
||||
.DS_Store
|
||||
Thumbs.db
|
||||
README.md
|
||||
CHANGELOG.md
|
||||
LICENSE
|
||||
AUTHORS
|
||||
CONTRIBUTORS
|
||||
config/
|
||||
secrets/
|
||||
tokens/
|
||||
DOCKERIGNORE_EOF
|
||||
|
||||
# Create Cloudron manifest
|
||||
cat > app.manifest << EOF
|
||||
{
|
||||
"id": "com.$(echo "$repo_path" | sed 's/[^a-zA-Z0-9]/./g').cloudron",
|
||||
"title": "$app_name",
|
||||
"version": "1.0.0",
|
||||
"build": "1",
|
||||
"description": "Cloudron package for $app_name",
|
||||
"author": "Auto-generated",
|
||||
"website": "$app_url",
|
||||
"admin": false,
|
||||
"tags": ["php", "auto-generated"],
|
||||
"logo": "https://github.com/fluidicon.png",
|
||||
"documentation": "$app_url",
|
||||
"changelog": "Initial packaging"
|
||||
}
|
||||
EOF
|
||||
|
||||
# Create Dockerfile for PHP with better configuration
|
||||
cat > Dockerfile << EOF
|
||||
FROM php:8.1-apache
|
||||
|
||||
# Install common PHP extensions
|
||||
RUN docker-php-ext-install mysqli pdo pdo_mysql && docker-php-ext-enable mysqli pdo pdo_mysql
|
||||
|
||||
# Enable Apache rewrite module
|
||||
RUN a2enmod rewrite
|
||||
|
||||
# Set working directory to Apache web root
|
||||
WORKDIR /var/www/html
|
||||
|
||||
COPY . .
|
||||
|
||||
# Make sure permissions are set correctly
|
||||
RUN chown -R www-data:www-data /var/www/html
|
||||
|
||||
EXPOSE 80
|
||||
|
||||
CMD ["apache2-foreground"]
|
||||
EOF
|
||||
|
||||
# Build Docker image with a more unique name to avoid conflicts in parallel execution
|
||||
local docker_image="tsysdevstack-cloudron-buildtest-${app_name//[^a-zA-Z0-9]/-}-$(date +%s%N | cut -c1-10):latest"
|
||||
if ! docker build -t "$docker_image" .; then
|
||||
echo "Failed to build Docker image for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Perform smoke test on the Docker image
|
||||
if ! smoke_test_docker_image "$docker_image" "$app_name"; then
|
||||
echo "Smoke test failed for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Save the Docker image as an artifact
|
||||
docker save "$docker_image" | gzip > "$artifact_dir/${app_name//[^a-zA-Z0-9]/-}-$(date +%s).tar.gz"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Function to package generic Go application
|
||||
package_go_app() {
|
||||
local app_name=$1
|
||||
local app_dir=$2
|
||||
local artifact_dir=$3
|
||||
local app_url=${4:-"https://github.com/unknown-user/$app_name"} # Default URL if not provided
|
||||
|
||||
cd "$app_dir"
|
||||
|
||||
# Extract username/repo from the app_url for manifest
|
||||
local repo_path
|
||||
if [[ "$app_url" == *"github.com"* ]]; then
|
||||
repo_path=${app_url#*github.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
elif [[ "$app_url" == *"gitlab.com"* ]]; then
|
||||
repo_path=${app_url#*gitlab.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
else
|
||||
repo_path="unknown-user/$app_name"
|
||||
fi
|
||||
|
||||
# Create .dockerignore to exclude sensitive files
|
||||
cat > .dockerignore << 'DOCKERIGNORE_EOF'
|
||||
.git
|
||||
.gitignore
|
||||
*.env
|
||||
*.key
|
||||
*.pem
|
||||
*.crt
|
||||
*.cert
|
||||
Dockerfile
|
||||
.dockerignore
|
||||
*.log
|
||||
node_modules
|
||||
__pycache__
|
||||
.pytest_cache
|
||||
.coverage
|
||||
.vscode
|
||||
.idea
|
||||
*.swp
|
||||
*.swo
|
||||
.DS_Store
|
||||
Thumbs.db
|
||||
README.md
|
||||
CHANGELOG.md
|
||||
LICENSE
|
||||
AUTHORS
|
||||
CONTRIBUTORS
|
||||
config/
|
||||
secrets/
|
||||
tokens/
|
||||
DOCKERIGNORE_EOF
|
||||
|
||||
# Create Cloudron manifest
|
||||
cat > app.manifest << EOF
|
||||
{
|
||||
"id": "com.$(echo "$repo_path" | sed 's/[^a-zA-Z0-9]/./g').cloudron",
|
||||
"title": "$app_name",
|
||||
"version": "1.0.0",
|
||||
"build": "1",
|
||||
"description": "Cloudron package for $app_name",
|
||||
"author": "Auto-generated",
|
||||
"website": "$app_url",
|
||||
"admin": false,
|
||||
"tags": ["go", "auto-generated"],
|
||||
"logo": "https://github.com/fluidicon.png",
|
||||
"documentation": "$app_url",
|
||||
"changelog": "Initial packaging"
|
||||
}
|
||||
EOF
|
||||
|
||||
# Try to determine the binary name by looking for main.go and possible build files
|
||||
local binary_name="myapp"
|
||||
if [ -f "main.go" ]; then
|
||||
# Try to extract the package name from main.go
|
||||
local package_line=$(grep -m 1 "^package " main.go 2>/dev/null | cut -d' ' -f2 | tr -d '\r\n')
|
||||
if [ -n "$package_line" ] && [ "$package_line" != "main" ]; then
|
||||
binary_name="$package_line"
|
||||
else
|
||||
# Extract binary name from go.mod if available
|
||||
if [ -f "go.mod" ]; then
|
||||
local module_line=$(grep -m 1 "^module " go.mod 2>/dev/null | cut -d' ' -f2)
|
||||
if [ -n "$module_line" ]; then
|
||||
binary_name=$(basename "$module_line")
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# Create Dockerfile for Go with appropriate binary name
|
||||
cat > Dockerfile << EOF
|
||||
FROM golang:1.21-alpine AS builder
|
||||
|
||||
WORKDIR /app
|
||||
COPY . .
|
||||
RUN CGO_ENABLED=0 GOOS=linux go build -o $binary_name .
|
||||
|
||||
FROM alpine:latest
|
||||
RUN apk --no-cache add ca-certificates
|
||||
WORKDIR /root/
|
||||
COPY --from=builder /app/$binary_name .
|
||||
EXPOSE 8080
|
||||
CMD ["./$binary_name"]
|
||||
EOF
|
||||
|
||||
# Build Docker image with a more unique name to avoid conflicts in parallel execution
|
||||
local docker_image="tsysdevstack-cloudron-buildtest-${app_name//[^a-zA-Z0-9]/-}-$(date +%s%N | cut -c1-10):latest"
|
||||
if ! docker build -t "$docker_image" .; then
|
||||
echo "Failed to build Docker image for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Perform smoke test on the Docker image
|
||||
if ! smoke_test_docker_image "$docker_image" "$app_name"; then
|
||||
echo "Smoke test failed for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Save the Docker image as an artifact
|
||||
docker save "$docker_image" | gzip > "$artifact_dir/${app_name//[^a-zA-Z0-9]/-}-$(date +%s).tar.gz"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Function to perform smoke test on Docker images
|
||||
smoke_test_docker_image() {
|
||||
local docker_image=$1
|
||||
local app_name=$2
|
||||
|
||||
echo "Performing smoke test on $docker_image for $app_name"
|
||||
|
||||
# Validate that docker command exists
|
||||
if ! command -v docker >/dev/null 2>&1; then
|
||||
echo "Docker command not found, cannot perform smoke test"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Sanitize the app name for container name
|
||||
local clean_app_name=$(printf '%s\n' "$app_name" | sed 's/[^a-zA-Z0-9]/-/g' | tr -cd '[:alnum:]-')
|
||||
local container_name="smoke-test-${clean_app_name:0:50}-$(date +%s)"
|
||||
|
||||
# Validate container name doesn't exceed Docker limits
|
||||
if [ ${#container_name} -gt 63 ]; then
|
||||
container_name="${container_name:0:63}"
|
||||
fi
|
||||
|
||||
# Run without specific health check initially, just see if container starts and stays running
|
||||
if ! docker run -d --name "$container_name" "$docker_image" >/dev/null 2>&1; then
|
||||
echo "Failed to start container for $app_name during smoke test"
|
||||
# Remove container in case it was partially created
|
||||
docker rm -f "$container_name" >/dev/null 2>&1 || true
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Give the container time to start - wait with periodic checks
|
||||
local max_wait=30 # Maximum wait time in seconds
|
||||
local waited=0
|
||||
local container_status="not_started"
|
||||
|
||||
while [ $waited -lt $max_wait ]; do
|
||||
container_status=$(docker inspect -f '{{.State.Status}}' "$container_name" 2>/dev/null || echo "not_found")
|
||||
if [ "$container_status" = "running" ]; then
|
||||
break
|
||||
elif [ "$container_status" = "exited" ] || [ "$container_status" = "dead" ]; then
|
||||
# Container exited early, no need to wait longer
|
||||
break
|
||||
fi
|
||||
sleep 2
|
||||
waited=$((waited + 2))
|
||||
done
|
||||
|
||||
if [ "$container_status" = "running" ]; then
|
||||
echo "Smoke test passed for $app_name - container is running"
|
||||
# Stop and remove the test container
|
||||
docker stop "$container_name" >/dev/null 2>&1 || true
|
||||
docker rm "$container_name" >/dev/null 2>&1 || true
|
||||
return 0
|
||||
else
|
||||
# Container stopped or crashed, get logs for debugging
|
||||
echo "Container for $app_name did not stay running during smoke test (status: $container_status after ${waited}s)"
|
||||
echo "Container logs:"
|
||||
docker logs "$container_name" 2>/dev/null | head -30 || echo "Could not retrieve container logs"
|
||||
# Force remove the container
|
||||
docker rm -f "$container_name" >/dev/null 2>&1 || true
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Generic function that detects application type and calls appropriate function
|
||||
detect_and_package() {
|
||||
local app_name=$1
|
||||
local app_dir=$2
|
||||
local artifact_dir=$3
|
||||
local app_url=${4:-"https://github.com/unknown-user/$app_name"} # Default URL if not provided
|
||||
|
||||
cd "$app_dir"
|
||||
|
||||
# Detect application type based on files
|
||||
if [ -f "package.json" ]; then
|
||||
echo "Detected Node.js application"
|
||||
package_nodejs_app "$app_name" "$app_dir" "$artifact_dir" "$app_url"
|
||||
elif [ -f "requirements.txt" ] || [ -f "setup.py" ]; then
|
||||
echo "Detected Python application"
|
||||
package_python_app "$app_name" "$app_dir" "$artifact_dir" "$app_url"
|
||||
elif [ -f "composer.json" ]; then
|
||||
echo "Detected PHP application"
|
||||
package_php_app "$app_name" "$app_dir" "$artifact_dir" "$app_url"
|
||||
elif [ -f "go.mod" ] || [ -f "*.go" ]; then
|
||||
echo "Detected Go application"
|
||||
package_go_app "$app_name" "$app_dir" "$artifact_dir" "$app_url"
|
||||
else
|
||||
# Default generic approach
|
||||
echo "Application type not detected, using generic approach"
|
||||
package_generic_app "$app_name" "$app_dir" "$artifact_dir" "$app_url"
|
||||
fi
|
||||
}
|
||||
|
||||
# Generic packaging function for unknown application types
|
||||
package_generic_app() {
|
||||
local app_name=$1
|
||||
local app_dir=$2
|
||||
local artifact_dir=$3
|
||||
local app_url=${4:-"https://github.com/unknown-user/$app_name"} # Default URL if not provided
|
||||
|
||||
cd "$app_dir"
|
||||
|
||||
# Create .dockerignore to exclude sensitive files
|
||||
cat > .dockerignore << 'DOCKERIGNORE_EOF'
|
||||
.git
|
||||
.gitignore
|
||||
*.env
|
||||
*.key
|
||||
*.pem
|
||||
*.crt
|
||||
*.cert
|
||||
Dockerfile
|
||||
.dockerignore
|
||||
*.log
|
||||
node_modules
|
||||
__pycache__
|
||||
.pytest_cache
|
||||
.coverage
|
||||
.vscode
|
||||
.idea
|
||||
*.swp
|
||||
*.swo
|
||||
.DS_Store
|
||||
Thumbs.db
|
||||
README.md
|
||||
CHANGELOG.md
|
||||
LICENSE
|
||||
AUTHORS
|
||||
CONTRIBUTORS
|
||||
config/
|
||||
secrets/
|
||||
tokens/
|
||||
DOCKERIGNORE_EOF
|
||||
|
||||
# Extract username/repo from the app_url for manifest
|
||||
local repo_path
|
||||
if [[ "$app_url" == *"github.com"* ]]; then
|
||||
repo_path=${app_url#*github.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
elif [[ "$app_url" == *"gitlab.com"* ]]; then
|
||||
repo_path=${app_url#*gitlab.com/}
|
||||
repo_path=${repo_path%.git}
|
||||
else
|
||||
repo_path="unknown-user/$app_name"
|
||||
fi
|
||||
|
||||
# Create Cloudron manifest
|
||||
cat > app.manifest << EOF
|
||||
{
|
||||
"id": "com.$(echo "$repo_path" | sed 's/[^a-zA-Z0-9]/./g').cloudron",
|
||||
"title": "$app_name",
|
||||
"version": "1.0.0",
|
||||
"build": "1",
|
||||
"description": "Cloudron package for $app_name",
|
||||
"author": "Auto-generated",
|
||||
"website": "$app_url",
|
||||
"admin": false,
|
||||
"tags": ["generic", "auto-generated"],
|
||||
"logo": "https://github.com/fluidicon.png",
|
||||
"documentation": "$app_url",
|
||||
"changelog": "Initial packaging"
|
||||
}
|
||||
EOF
|
||||
|
||||
# Create a basic Dockerfile that tries to run common application types
|
||||
cat > Dockerfile << 'DOCKERFILE_EOF'
|
||||
FROM alpine:latest
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY . .
|
||||
|
||||
# Install only the most essential tools that might be needed
|
||||
RUN apk add --no-cache bash curl
|
||||
|
||||
# Create a multi-line run script file
|
||||
RUN { \
|
||||
echo '#!/bin/sh'; \
|
||||
echo 'set -e'; \
|
||||
echo ''; \
|
||||
echo '# Check for and run different application types'; \
|
||||
echo 'if [ -f "package.json" ]; then'; \
|
||||
echo ' echo "Detected Node.js application"'; \
|
||||
echo ' if [ -x "$(command -v node)" ]; then'; \
|
||||
echo ' npm install 2>/dev/null || echo "npm install failed"'; \
|
||||
echo ' if [ -n "$START_SCRIPT" ]; then'; \
|
||||
echo ' exec npm run "$START_SCRIPT" 2>/dev/null || echo "Failed to run START_SCRIPT"'; \
|
||||
echo ' else'; \
|
||||
echo ' exec npm start 2>/dev/null || echo "Failed to run npm start"'; \
|
||||
echo ' fi'; \
|
||||
echo ' else'; \
|
||||
echo ' echo "node not available, installing... (would require internet access)"'; \
|
||||
echo ' fi'; \
|
||||
echo 'elif [ -f "requirements.txt" ]; then'; \
|
||||
echo ' echo "Detected Python application"'; \
|
||||
echo ' if [ -x "$(command -v python3)" ]; then'; \
|
||||
echo ' pip3 install -r requirements.txt 2>/dev/null || echo "pip install failed"'; \
|
||||
echo ' if [ -f "app.py" ]; then'; \
|
||||
echo ' exec python3 app.py 2>/dev/null || while true; do sleep 30; done'; \
|
||||
echo ' elif [ -f "main.py" ]; then'; \
|
||||
echo ' exec python3 main.py 2>/dev/null || while true; do sleep 30; done'; \
|
||||
echo ' else'; \
|
||||
echo ' echo "No standard Python entry point found (app.py or main.py)"'; \
|
||||
echo ' while true; do sleep 30; done'; \
|
||||
echo ' fi'; \
|
||||
echo ' else'; \
|
||||
echo ' echo "python3 not available, installing... (would require internet access)"'; \
|
||||
echo ' fi'; \
|
||||
echo 'elif [ -f "go.mod" ] || [ -f "main.go" ]; then'; \
|
||||
echo ' echo "Detected Go application"'; \
|
||||
echo ' if [ -x "$(command -v go)" ]; then'; \
|
||||
echo ' go build -o myapp . 2>/dev/null || echo "Go build failed"'; \
|
||||
echo ' [ -f "./myapp" ] && exec ./myapp || while true; do sleep 30; done'; \
|
||||
echo ' else'; \
|
||||
echo ' echo "go not available, installing... (would require internet access)"'; \
|
||||
echo ' fi'; \
|
||||
echo 'elif [ -f "start.sh" ]; then'; \
|
||||
echo ' echo "Found start.sh script"'; \
|
||||
echo ' chmod +x start.sh'; \
|
||||
echo ' exec ./start.sh'; \
|
||||
echo 'elif [ -f "run.sh" ]; then'; \
|
||||
echo ' echo "Found run.sh script"'; \
|
||||
echo ' chmod +x run.sh'; \
|
||||
echo ' exec ./run.sh'; \
|
||||
echo 'else'; \
|
||||
echo ' echo "No recognized application type found"'; \
|
||||
echo ' echo "Application directory contents:"'; \
|
||||
echo ' ls -la'; \
|
||||
echo ' # Keep container running for inspection'; \
|
||||
echo ' while true; do sleep 30; done'; \
|
||||
echo 'fi'; \
|
||||
} > /run-app.sh && chmod +x /run-app.sh
|
||||
|
||||
EXPOSE 8080
|
||||
|
||||
CMD ["/run-app.sh"]
|
||||
DOCKERFILE_EOF
|
||||
|
||||
# Build Docker image with a more unique name to avoid conflicts in parallel execution
|
||||
local docker_image="tsysdevstack-cloudron-buildtest-${app_name//[^a-zA-Z0-9]/-}-$(date +%s%N | cut -c1-10):latest"
|
||||
if ! docker build -t "$docker_image" .; then
|
||||
echo "Failed to build Docker image for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Perform smoke test on the Docker image
|
||||
if ! smoke_test_docker_image "$docker_image" "$app_name"; then
|
||||
echo "Smoke test failed for $app_name"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Save the Docker image as an artifact
|
||||
docker save "$docker_image" | gzip > "$artifact_dir/${app_name//[^a-zA-Z0-9]/-}-$(date +%s).tar.gz"
|
||||
return 0
|
||||
}
|
||||
@@ -1,24 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Test script to verify the add_git_url functionality
|
||||
|
||||
# Source the master script to get access to its functions
|
||||
source /home/localuser/TSYSDevStack/CloudronStack/output/master-control-script.sh
|
||||
|
||||
# Test adding a new URL
|
||||
echo "Testing add_git_url function..."
|
||||
add_git_url "https://github.com/testuser/testrepo"
|
||||
|
||||
# Check the git URL list file to see if the URL was added
|
||||
echo "Contents of GitUrlList.txt after adding:"
|
||||
cat /home/localuser/TSYSDevStack/CloudronStack/collab/GitUrlList.txt
|
||||
|
||||
# Test adding the same URL again (should not duplicate)
|
||||
echo "Testing adding the same URL again (should not duplicate)..."
|
||||
add_git_url "https://github.com/testuser/testrepo"
|
||||
|
||||
# Add another URL for good measure
|
||||
echo "Testing adding a second URL..."
|
||||
add_git_url "https://github.com/anotheruser/anotherrepo"
|
||||
|
||||
echo "Test completed successfully!"
|
||||
235
LICENSE
235
LICENSE
@@ -1,235 +0,0 @@
|
||||
GNU AFFERO GENERAL PUBLIC LICENSE
|
||||
Version 3, 19 November 2007
|
||||
|
||||
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
|
||||
|
||||
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
|
||||
|
||||
Preamble
|
||||
|
||||
The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software.
|
||||
|
||||
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users.
|
||||
|
||||
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
|
||||
|
||||
Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software.
|
||||
|
||||
A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public.
|
||||
|
||||
The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.
|
||||
|
||||
An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license.
|
||||
|
||||
The precise terms and conditions for copying, distribution and modification follow.
|
||||
|
||||
TERMS AND CONDITIONS
|
||||
|
||||
0. Definitions.
|
||||
|
||||
"This License" refers to version 3 of the GNU Affero General Public License.
|
||||
|
||||
"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
|
||||
|
||||
"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
|
||||
|
||||
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
|
||||
|
||||
A "covered work" means either the unmodified Program or a work based on the Program.
|
||||
|
||||
To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
|
||||
|
||||
To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
|
||||
|
||||
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
|
||||
|
||||
1. Source Code.
|
||||
The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.
|
||||
|
||||
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
|
||||
|
||||
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
|
||||
|
||||
The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those
|
||||
subprograms and other parts of the work.
|
||||
|
||||
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
|
||||
|
||||
The Corresponding Source for a work in source code form is that same work.
|
||||
|
||||
2. Basic Permissions.
|
||||
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
|
||||
|
||||
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
|
||||
|
||||
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
|
||||
|
||||
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
|
||||
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
|
||||
|
||||
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
|
||||
|
||||
4. Conveying Verbatim Copies.
|
||||
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
|
||||
|
||||
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
|
||||
|
||||
5. Conveying Modified Source Versions.
|
||||
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
|
||||
|
||||
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
|
||||
|
||||
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".
|
||||
|
||||
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
|
||||
|
||||
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
|
||||
|
||||
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
|
||||
|
||||
6. Conveying Non-Source Forms.
|
||||
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
|
||||
|
||||
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
|
||||
|
||||
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
|
||||
|
||||
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
|
||||
|
||||
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
|
||||
|
||||
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
|
||||
|
||||
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
|
||||
|
||||
A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
|
||||
|
||||
"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
|
||||
|
||||
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
|
||||
|
||||
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
|
||||
|
||||
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
|
||||
|
||||
7. Additional Terms.
|
||||
"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
|
||||
|
||||
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
|
||||
|
||||
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
|
||||
|
||||
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
|
||||
|
||||
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
|
||||
|
||||
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
|
||||
|
||||
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
|
||||
|
||||
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
|
||||
|
||||
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
|
||||
|
||||
All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
|
||||
|
||||
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
|
||||
|
||||
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
|
||||
|
||||
8. Termination.
|
||||
|
||||
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
|
||||
|
||||
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
|
||||
|
||||
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
|
||||
|
||||
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
|
||||
|
||||
9. Acceptance Not Required for Having Copies.
|
||||
|
||||
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
|
||||
|
||||
10. Automatic Licensing of Downstream Recipients.
|
||||
|
||||
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
|
||||
|
||||
An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
|
||||
|
||||
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
|
||||
|
||||
11. Patents.
|
||||
|
||||
A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".
|
||||
|
||||
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
|
||||
|
||||
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
|
||||
|
||||
In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
|
||||
|
||||
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent
|
||||
license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
|
||||
|
||||
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
|
||||
|
||||
A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
|
||||
|
||||
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
|
||||
|
||||
12. No Surrender of Others' Freedom.
|
||||
|
||||
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may
|
||||
not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
|
||||
|
||||
13. Remote Network Interaction; Use with the GNU General Public License.
|
||||
|
||||
Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph.
|
||||
|
||||
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License.
|
||||
|
||||
14. Revised Versions of this License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
TSYSDevStack
|
||||
Copyright (C) 2025 KNEL
|
||||
|
||||
This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements.
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <http://www.gnu.org/licenses/>.
|
||||
@@ -1,62 +0,0 @@
|
||||
# LifecycleStack Qwen Agent Development Log
|
||||
|
||||
Date: Wednesday, October 29, 2025
|
||||
|
||||
## Project Context
|
||||
|
||||
LifecycleStack is one of the four curated stacks that power rapid prototyping, support simulations, developer workspaces, and lifecycle orchestration for TSYS Group. The specific focus of LifecycleStack is on tooling and processes that manage the evolution of TSYSDevStack workloads—from ideation and delivery to ongoing operations.
|
||||
|
||||
The four main stacks are:
|
||||
- **CloudronStack**: Cloudron application packaging and upstream research
|
||||
- **LifecycleStack**: Promotion workflows, governance, and feedback loops (this stack)
|
||||
- **SupportStack**: Demo environment for support tooling
|
||||
- **ToolboxStack**: Reproducible developer workspaces and containerized tooling
|
||||
|
||||
## LifecycleStack Role
|
||||
|
||||
As the LifecycleStack Qwen agent, I operate specifically within the LifecycleStack directory and am responsible for:
|
||||
|
||||
- Maintaining the LifecycleStack README.md file
|
||||
- Managing the LifecycleStack collab/ directory for planning documents
|
||||
- Tracking lifecycle management processes and workflows
|
||||
- Maintaining this LifecycleStack QWEN.md file for tracking work
|
||||
- Coordinating with other stack agents for lifecycle orchestration
|
||||
|
||||
## Sibling Directory Awareness
|
||||
|
||||
- **CloudronStack** (/home/localuser/TSYSDevStack/CloudronStack): Contains Cloudron application packaging research and documentation. Its collab directory contains GitUrlList.txt with various Cloudron-related repositories.
|
||||
- **SupportStack** (/home/localuser/TSYSDevStack/SupportStack): Contains support tooling and demo environments. Its collab directory contains extensive documentation including BuildTheStack, roadmaps, and chat prompts.
|
||||
- **ToolboxStack** (/home/localuser/TSYSDevStack/ToolboxStack): Contains developer workspace tooling. Its collab directory contains TSYSDevStack-toolbox-prompt.md with specific instructions for development environments.
|
||||
|
||||
## Development Guidelines
|
||||
|
||||
- All commits should be verbose/beautifully formatted
|
||||
- Use atomic commits
|
||||
- Use conventional commit format
|
||||
|
||||
## Git Operations Notice
|
||||
|
||||
- IMPORTANT: Git operations (commits and pushes) are handled exclusively by the Topside agent
|
||||
- LifecycleBot should NOT perform git commits or pushes
|
||||
- All changes should be coordinated through the Topside agent for repository consistency
|
||||
|
||||
## Task Tracking
|
||||
|
||||
Current tasks and progress:
|
||||
- [x] Explore the LifecycleStack directory structure in depth
|
||||
- [x] Create a QWEN.md file to track our work
|
||||
- [x] Review the LifecycleStack README.md file
|
||||
- [x] Check the collab directory structure
|
||||
- [x] Understand sibling directories and their purposes
|
||||
- [x] Populate collab directory with initial planning documents
|
||||
- [ ] Develop lifecycle management workflows
|
||||
|
||||
## Work Log
|
||||
|
||||
### Session 1 (2025-10-29)
|
||||
- Oriented to the LifecycleStack directory structure
|
||||
- Reviewed LifecycleStack README.md for project understanding
|
||||
- Created LifecycleStack QWEN.md file for tracking work
|
||||
- Set up basic directory awareness and project context
|
||||
- Explored sibling directories to understand relationships
|
||||
- Added awareness of sibling stack purposes
|
||||
@@ -1,52 +0,0 @@
|
||||
# ♻️ LifecycleStack
|
||||
|
||||
LifecycleStack will eventually house the tooling and processes that manage the evolution of TSYSDevStack workloads—from ideation and delivery to ongoing operations. While the folder is in its inception phase, this README captures the intent and provides collaboration hooks for the future.
|
||||
|
||||
---
|
||||
|
||||
## Focus Areas
|
||||
| Stream | Description | Status |
|
||||
|--------|-------------|--------|
|
||||
| Release Management | Define staged promotion paths for stack artifacts. | 🛠️ Planning |
|
||||
| Observability Loop | Capture learnings from SupportStack deployments back into build workflows. | 🛠️ Planning |
|
||||
| Governance & Quality | Codify checklists, runbooks, and lifecycle metrics. | 🛠️ Planning |
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Quick Start
|
||||
1. Navigate to the LifecycleStack directory:
|
||||
```bash
|
||||
cd LifecycleStack
|
||||
```
|
||||
|
||||
2. Review the `collab/` directory for planning documents and collaboration notes:
|
||||
```bash
|
||||
ls -la collab/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🧭 Working Agreement
|
||||
- **Stacks stay in sync.** When you add or modify automation, update both the relevant stack README and any linked prompts/docs.
|
||||
- **Collab vs Output.** Use `collab/` for planning and prompts, keep runnable artifacts under `output/`.
|
||||
- **Document forward.** New workflows should land alongside tests and a short entry in the appropriate README table.
|
||||
- **AI Agent Coordination.** Use Qwen agents for documentation updates, code changes, and maintaining consistency across stacks.
|
||||
|
||||
---
|
||||
|
||||
## 🤖 AI Agent
|
||||
This stack is maintained by **LifecycleBot**, an AI agent focused on LifecycleStack workflows.
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
1. Draft an initial lifecycle charter outlining environments and promotion triggers.
|
||||
2. Align with SupportStack automation to surface lifecycle metrics.
|
||||
3. Incorporate ToolboxStack routines for reproducible release tooling.
|
||||
|
||||
> 📝 _Tip: If you are beginning new work here, open an issue or doc sketch that points back to this roadmap so the broader team can coordinate._
|
||||
|
||||
---
|
||||
|
||||
## 📄 License
|
||||
See [LICENSE](../LICENSE) for full terms. Contributions are welcome—open a discussion in the relevant stack's `collab/` area to kick things off.
|
||||
59
QWEN.md
59
QWEN.md
@@ -1,59 +0,0 @@
|
||||
# Qwen Agent Development Log
|
||||
|
||||
Date: Wednesday, October 29, 2025
|
||||
|
||||
## Project Context
|
||||
|
||||
TSYSDevStack is a constellation of curated stacks that power rapid prototyping, support simulations, developer workspaces, and (soon) lifecycle orchestration for TSYS Group. The project consists of four main stacks:
|
||||
|
||||
- **CloudronStack**: Cloudron application packaging and upstream research
|
||||
- **LifecycleStack**: Promotion workflows, governance, and feedback loops
|
||||
- **SupportStack**: Demo environment for support tooling
|
||||
- **ToolboxStack**: Reproducible developer workspaces and containerized tooling
|
||||
|
||||
## Topside Role
|
||||
|
||||
As the Topside Qwen agent, I operate at the top level of the directory tree and am responsible for:
|
||||
|
||||
- Keeping the top-level README.md and each of the four subdirectory README.md files up to date
|
||||
- Performing general housekeeping tasks
|
||||
- Maintaining this top-level QWEN.md file for tracking work
|
||||
- Handling ALL git operations (commits and pushes) for the entire repository
|
||||
- Other agents should NOT commit or push - only Topside agent performs git operations
|
||||
|
||||
## Development Guidelines
|
||||
|
||||
- All commits should be verbose/beautifully formatted
|
||||
- Use atomic commits
|
||||
- Use conventional commit format
|
||||
|
||||
## Git Configuration
|
||||
|
||||
- Commit template configured to enforce conventional commits across all stacks
|
||||
- Template file: /home/localuser/TSYSDevStack/commit-template.txt
|
||||
- Template automatically configured for all git operations in the repository
|
||||
- Template ensures consistent commit format across all Qwen agents
|
||||
|
||||
## Task Tracking
|
||||
|
||||
Current tasks and progress:
|
||||
- [x] Explore the current directory structure in depth
|
||||
- [x] Create a QWEN.md file to track our work
|
||||
- [x] Review all subdirectory README.md files
|
||||
- [x] Update README.md files as needed throughout the project
|
||||
- [ ] Perform general housekeeping tasks as requested
|
||||
|
||||
## Work Log
|
||||
|
||||
### Session 1 (2025-10-29)
|
||||
- Oriented to the directory tree structure
|
||||
- Analyzed all README.md files in the project
|
||||
- Created QWEN.md file for tracking work
|
||||
- Set up commit configuration requirements
|
||||
- Updated all README.md files for consistency across the project:
|
||||
- Added Working Agreement section with consistent items
|
||||
- Added AI Agent section identifying the responsible bot
|
||||
- Added License section with reference to main LICENSE
|
||||
- Fixed CloudronStack README title and content
|
||||
- Created missing collab directory in LifecycleStack
|
||||
- Created top-level commit template and configured git
|
||||
59
README.md
59
README.md
@@ -1,59 +0,0 @@
|
||||
# 🌐 TSYSDevStack
|
||||
|
||||
> A constellation of curated stacks that power rapid prototyping, support simulations, developer workspaces, and (soon) lifecycle orchestration for TSYS Group.
|
||||
|
||||
---
|
||||
|
||||
## 📚 Stack Directory Map
|
||||
| Stack | Focus | Highlights |
|
||||
|-------|-------|------------|
|
||||
| [🛰️ CloudronStack](CloudronStack/README.md) | Cloudron application packaging and upstream research. | Catalog of third-party services grouped by capability. |
|
||||
| [♻️ LifecycleStack](LifecycleStack/README.md) | Promotion workflows, governance, and feedback loops. | Roadmap placeholders ready for lifecycle charters. |
|
||||
| [🛟 SupportStack](SupportStack/README.md) | Demo environment for support tooling (homepage, WakaAPI, MailHog, socket proxy). | Control script automation, Docker Compose bundles, targeted shell tests. |
|
||||
| [🧰 ToolboxStack](ToolboxStack/README.md) | Reproducible developer workspaces and containerized tooling. | Ubuntu-based dev container with mise, aqua, and helper scripts. |
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Quick Start
|
||||
1. **Clone & Inspect**
|
||||
```bash
|
||||
git clone <repo-url>
|
||||
cd TSYSDevStack
|
||||
tree -L 2 # optional: explore the stack layout
|
||||
```
|
||||
2. **Run the Support Stack Demo**
|
||||
```bash
|
||||
cd SupportStack
|
||||
./output/code/TSYSDevStack-SupportStack-Demo-Control.sh start
|
||||
./output/code/TSYSDevStack-SupportStack-Demo-Control.sh test
|
||||
```
|
||||
> Uses Docker Compose bundles under `SupportStack/output/docker-compose/`.
|
||||
3. **Enter the Toolbox Workspace**
|
||||
```bash
|
||||
cd ToolboxStack/output/toolbox-base
|
||||
./build.sh && ./run.sh up
|
||||
docker exec -it tsysdevstack-toolboxstack-toolbox-base zsh
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🤖 AI Collaboration
|
||||
This project uses Qwen AI agents for development and maintenance:
|
||||
- **Topside**: Manages top-level README.md and directory structure
|
||||
- **CloudronBot**: Handles CloudronStack documentation and packaging
|
||||
- **LifecycleBot**: Manages LifecycleStack workflows
|
||||
- **SupportBot**: Maintains SupportStack operations
|
||||
- **ToolboxBot**: Handles ToolboxStack workspace management
|
||||
|
||||
---
|
||||
|
||||
## 🧭 Working Agreement
|
||||
- **Stacks stay in sync.** When you add or modify automation, update both the relevant stack README and any linked prompts/docs.
|
||||
- **Collab vs Output.** Use `collab/` for planning and prompts, keep runnable artifacts under `output/`.
|
||||
- **Document forward.** New workflows should land alongside tests and a short entry in the appropriate README table.
|
||||
- **AI Agent Coordination.** Use Qwen agents for documentation updates, code changes, and maintaining consistency across stacks.
|
||||
|
||||
---
|
||||
|
||||
## 📄 License
|
||||
See [LICENSE](LICENSE) for full terms. Contributions are welcome—open a discussion in the relevant stack’s `collab/` area to kick things off.
|
||||
39
ShipOrBust.md
Normal file
39
ShipOrBust.md
Normal file
@@ -0,0 +1,39 @@
|
||||
# TSYS Development Stack - SHIP OR BUST
|
||||
|
||||
This repository is the home of the TSYS Group Development Stack.
|
||||
|
||||
It's been "reset" (the working directory anyway) (keeping the git history)(messy though it is...) at 2025-11-05 15:44. For the very last time!
|
||||
I need to ship this by 2025-11-15.
|
||||
|
||||
This file has been created to form a "line in the sand" and force myself to ship ship ship.
|
||||
|
||||
I started working on this project in late August early September (creating/destroying dozens of repos/attempts/versions). I've tried all manner of coding agents/approaches/structures. This is very public and very messy. On purpose. I want folks to see all the ups and downs of developing a large project (with or without AI coding agents).
|
||||
|
||||
After weeks of :
|
||||
|
||||
- Claude Code
|
||||
- Open Code
|
||||
- Codex
|
||||
- Gemini
|
||||
- Qwen
|
||||
|
||||
(and some Cursor, and some roo/cline/continue in VsCode)
|
||||
|
||||
and messing with git workflows/git worktrees
|
||||
|
||||
and lots of reading of what other folks are doing ....
|
||||
|
||||
I keep coming back to qwen running in a full screen terminal window on my right screen, and VsCode on the left.
|
||||
|
||||
I briefly tried running qwen in the VsCode integrated terminal (with/without the qwen coding assistant plugin) but I think the combination of xterm.js/node/ssh was too much and when I would resize the terminal window or move VsCode around, weird repaint issues would happen and sometimes qwen (and the other agents as well) would crash. That was annoying.
|
||||
|
||||
So now I keep things simple.
|
||||
|
||||
One VsCode window/workspace
|
||||
One terminal (two tabs) (the actual work and a QA tab)
|
||||
|
||||
All in on Qwen. Just cancelled my ChatGPT Plus subscription and deleted my account.
|
||||
|
||||
Lets get this built...
|
||||
|
||||
|
||||
@@ -1,75 +0,0 @@
|
||||
# Qwen Code Context File
|
||||
|
||||
## Project Overview
|
||||
The TSYSDevStack SupportStack is a curated demo environment for developer support tools. It bundles Dockerized services, environment settings, automation scripts, and a growing library of collaboration notes. The stack includes tools like Atuin, MailHog, AtomicTracker (habit tracker), local Grafana/InfluxDB for private metrics (Apple health export), WakaAPI and other useful developer productivity and support tools.
|
||||
|
||||
## Multi-Chat Environment Context
|
||||
This project operates within a multi-chat system with five QWEN chats working on related but separate directory trees:
|
||||
- SupportStack (this chat) - Focused on developer support tools
|
||||
- Other sibling directories operate independently but may share common infrastructure patterns
|
||||
- All work is confined to the current directory tree only
|
||||
- Each chat maintains its own QWEN.md context file
|
||||
|
||||
## Project Structure
|
||||
```
|
||||
TSYSDevStack/SupportStack/
|
||||
├── README.md # Main project documentation
|
||||
├── collab/ # Collaboration notes, roadmaps, prompts
|
||||
├── output/ # Main project artifacts
|
||||
│ ├── code/ # Control script
|
||||
│ ├── config/ # Service configurations
|
||||
│ ├── docker-compose/ # Docker Compose files for services
|
||||
│ ├── docs/ # Documentation
|
||||
│ ├── tests/ # Test scripts
|
||||
│ └── TSYSDevStack-SupportStack-Demo-Settings # Environment settings
|
||||
```
|
||||
|
||||
## Key Components
|
||||
1. **Control Script**: Orchestrates start/stop/update/test flows for the demo stack (`output/code/TSYSDevStack-SupportStack-Demo-Control.sh`)
|
||||
2. **Environment Settings**: Centralized `.env` style configuration (`output/TSYSDevStack-SupportStack-Demo-Settings`)
|
||||
3. **Docker Compose Bundles**: Service definitions for developer tools like Atuin, MailHog, AtomicTracker, Grafana/InfluxDB, WakaAPI, and more (`output/docker-compose/`)
|
||||
4. **Service Config**: Configuration for developer tools mounted into containers (`output/config/`)
|
||||
5. **Tests**: Shell-based smoke, unit, and discovery tests for stack services (`output/tests/`)
|
||||
|
||||
## Current Status
|
||||
- **Project**: TSYSDevStack SupportStack Demo
|
||||
- **Status**: ✅ MVP COMPLETE
|
||||
- **Last Updated**: October 28, 2025
|
||||
|
||||
## MVP Components
|
||||
- **docker-socket-proxy**: Docker socket access for secure container communication
|
||||
- **homepage**: Homepage dashboard accessible at http://127.0.0.1:4000
|
||||
- **wakaapi**: WakaAPI service accessible at http://127.0.0.1:4001
|
||||
- **mailhog**: Mailhog service for email testing
|
||||
- **atuin**: Shell history with sync and search capabilities
|
||||
- **atomictracker**: Habit tracking application
|
||||
- **grafana/influxdb**: Private metrics collection and visualization
|
||||
|
||||
## Git Operations Notice
|
||||
- IMPORTANT: Git operations (commits and pushes) are handled exclusively by the Topside agent
|
||||
- SupportBot should NOT perform git commits or pushes
|
||||
- All changes should be coordinated through the Topside agent for repository consistency
|
||||
|
||||
## Key Technologies
|
||||
- Docker
|
||||
- Docker Compose
|
||||
- Shell scripting
|
||||
- Homepage dashboard
|
||||
- WakaAPI
|
||||
- MailHog
|
||||
- Atuin (shell history)
|
||||
- AtomicTracker (habit tracking)
|
||||
- Grafana/InfluxDB (metrics)
|
||||
- Apple health export tools
|
||||
|
||||
## Important Files
|
||||
- Control script: `output/code/TSYSDevStack-SupportStack-Demo-Control.sh`
|
||||
- Environment settings: `output/TSYSDevStack-SupportStack-Demo-Settings`
|
||||
- Docker Compose files: `output/docker-compose/`
|
||||
- Test scripts: `output/tests/`
|
||||
|
||||
## Development Notes
|
||||
- The stack expects Docker access and creates the shared network `tsysdevstack-supportstack-demo-network` if it does not exist.
|
||||
- Keep demo automation in `output/` and exploratory material in `collab/`.
|
||||
- When adding a new service, update both the compose files and the test suite to maintain coverage.
|
||||
- Focus on developer productivity and support tools such as Atuin, MailHog, AtomicTracker, Grafana/InfluxDB, WakaAPI, and Apple health export tools.
|
||||
@@ -1,55 +0,0 @@
|
||||
# 🛟 SupportStack
|
||||
|
||||
The SupportStack delivers a curated demo environment for customer support tooling. It bundles Dockerized services, environment settings, automation scripts, and a growing library of collaboration notes.
|
||||
|
||||
---
|
||||
|
||||
## Stack Snapshot
|
||||
| Component | Purpose | Path |
|
||||
|-----------|---------|------|
|
||||
| Control Script | Orchestrates start/stop/update/test flows for the demo stack. | [`output/code/TSYSDevStack-SupportStack-Demo-Control.sh`](output/code/TSYSDevStack-SupportStack-Demo-Control.sh) |
|
||||
| Environment Settings | Centralized `.env` style configuration consumed by scripts and compose files. | [`output/TSYSDevStack-SupportStack-Demo-Settings`](output/TSYSDevStack-SupportStack-Demo-Settings) |
|
||||
| Docker Compose Bundles | Service definitions for docker-socket-proxy, homepage, WakaAPI, and MailHog. | [`output/docker-compose/`](output/docker-compose) |
|
||||
| Service Config | Homepage/WakaAPI configuration mounted into containers. | [`output/config/`](output/config) |
|
||||
| Tests | Shell-based smoke, unit, and discovery tests for stack services. | [`output/tests/`](output/tests) |
|
||||
| Docs & Vendor Research | Reference material and curated vendor lists. | [`output/docs/`](output/docs) |
|
||||
| Collaboration Notes | Product direction, prompts, and status updates. | [`collab/`](collab) |
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Quick Start
|
||||
1. Export or edit variables in `output/TSYSDevStack-SupportStack-Demo-Settings`.
|
||||
2. Use the control script to manage the stack:
|
||||
```bash
|
||||
./output/code/TSYSDevStack-SupportStack-Demo-Control.sh start
|
||||
./output/code/TSYSDevStack-SupportStack-Demo-Control.sh test
|
||||
./output/code/TSYSDevStack-SupportStack-Demo-Control.sh stop
|
||||
```
|
||||
3. Review `output/tests/` for additional validation scripts.
|
||||
|
||||
> ℹ️ The stack expects Docker access and creates the shared network `tsysdevstack-supportstack-demo-network` if it does not exist.
|
||||
|
||||
---
|
||||
|
||||
## 🧭 Working Agreement
|
||||
- **Stacks stay in sync.** When you add or modify automation, update both the relevant stack README and any linked prompts/docs.
|
||||
- **Collab vs Output.** Use `collab/` for planning and prompts, keep runnable artifacts under `output/`.
|
||||
- **Document forward.** New workflows should land alongside tests and a short entry in the appropriate README table.
|
||||
- **AI Agent Coordination.** Use Qwen agents for documentation updates, code changes, and maintaining consistency across stacks.
|
||||
|
||||
---
|
||||
|
||||
## 🤖 AI Agent
|
||||
This stack is maintained by **SupportBot**, an AI agent focused on SupportStack operations.
|
||||
|
||||
---
|
||||
|
||||
## Collaboration Notes
|
||||
- Keep demo automation in `output/` and exploratory material in `collab/`.
|
||||
- When adding a new service, update both the compose files and the test suite to maintain coverage.
|
||||
- Synchronize documentation changes with any updates to automation or configuration—future contributors rely on the README table as the source of truth.
|
||||
|
||||
---
|
||||
|
||||
## 📄 License
|
||||
See [LICENSE](../LICENSE) for full terms. Contributions are welcome—open a discussion in the relevant stack's `collab/` area to kick things off.
|
||||
@@ -1,248 +0,0 @@
|
||||
# TSYSDevStack SupportStack Demo Builder
|
||||
|
||||
## Objective
|
||||
Create an out-of-the-box, localhost-bound only, ephemeral Docker volume-only demonstration version of the SupportStack components documented in the docs/VendorList-SupportStack.md file.
|
||||
|
||||
## MVP Test Run Objective
|
||||
Create a proof of concept with docker-socket-proxy, homepage, and wakaapi components that demonstrate proper homepage integration via Docker Compose labels. This MVP will serve as a validation of the full approach before proceeding with the complete stack implementation.
|
||||
|
||||
## Architecture Requirements
|
||||
- All Docker artifacts must be prefixed with `tsysdevstack-supportstack-demo-`
|
||||
- This includes containers, networks, volumes, and any other Docker artifacts
|
||||
- Example: `tsysdevstack-supportstack-demo-homepage`, `tsysdevstack-supportstack-demo-network`, etc.
|
||||
- Run exclusively on localhost (localhost binding only)
|
||||
- Ephemeral volumes only (no persistent storage)
|
||||
- Resource limits set for single-user demo capacity
|
||||
- No external network access (localhost bound only)
|
||||
- Components: docker-socket-proxy, portainer, homepage as foundational elements
|
||||
- All artifacts must go into artifacts/SupportStack directory to keep the directory well organized and avoid cluttering the root directory
|
||||
- Homepage container needs direct access to Docker socket for labels to auto-populate (not through proxy)
|
||||
- Docker socket proxy is for other containers that need Docker access but don't require direct socket access
|
||||
- Portainer can use docker-socket-proxy for read-only access, but homepage needs direct socket access
|
||||
- All containers need proper UID/GID mapping for security
|
||||
- Docker group GID must be mapped properly for containers using Docker socket
|
||||
- Non-Docker socket using containers should use invoking UID/GID
|
||||
|
||||
## Development Methodology
|
||||
- Strict Test Driven Development (TDD) process
|
||||
- Write test → Execute test → Test fails → Write minimal code to pass test
|
||||
- 75%+ code coverage requirement
|
||||
- 100% test pass requirement
|
||||
- Component-by-component development approach
|
||||
- Complete one component before moving to the next
|
||||
- Apply TDD for every change, no matter how surgical
|
||||
- Test changes right after implementation as atomically as possible
|
||||
- Each fix or modification should be accompanied by a specific test to verify the issue
|
||||
- Ensure all changes are validated immediately after implementation
|
||||
|
||||
## MVP Component Development Sequence (Test Run) ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
1. **MVP**: docker-socket-proxy, homepage, wakaapi (each must fully satisfy Definition of Done before proceeding) ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- docker-socket-proxy: Enable Docker socket access for containers that need it (not homepage) ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- homepage: Configure to access Docker socket directly for automatic label discovery ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- wakaapi: Integrate with homepage using proper labels ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- All services must utilize Docker Compose labels to automatically show up in homepage ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- Implement proper service discovery for homepage integration using gethomepage labels ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- Ensure all components are properly labeled with homepage integration labels ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- Implement proper startup ordering using depends_on with health checks ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- Homepage container requires direct Docker socket access for automatic service discovery ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- Docker socket proxy provides controlled access for other containers ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
- All containers must have proper UID/GID mapping for security ✅ COMPLETED ✅ MVP FULLY IMPLEMENTED AND TESTED
|
||||
|
||||
## Component Completion Validation ✅ MVP COMPLETED
|
||||
- Each component must pass health checks for 5 consecutive minutes before moving to the next ✅ MVP COMPLETED
|
||||
- All tests must pass with 100% success rate before moving to the next component ✅ MVP COMPLETED
|
||||
- Resource utilization must be within specified limits before moving to the next component ✅ MVP COMPLETED
|
||||
- Integration tests with previously completed components must pass before moving forward ✅ MVP COMPLETED
|
||||
- Homepage must automatically detect and display all services with proper labels ✅ MVP COMPLETED
|
||||
- Specific validation checkpoints after each service deployment:
|
||||
- docker-socket-proxy: Validate Docker socket access and network connectivity to Docker daemon ✅ COMPLETED
|
||||
- homepage: Validate homepage starts and can connect to Docker socket directly, verify UI is accessible ✅ COMPLETED
|
||||
- wakaapi: Validate service starts and can be integrated into homepage with proper labels ✅ COMPLETED
|
||||
- Each service must be validated in homepage dashboard after integration ✅ MVP COMPLETED
|
||||
- Detailed homepage integration validation steps:
|
||||
- Verify service appears in homepage dashboard with correct name and icon ✅ MVP COMPLETED
|
||||
- Confirm service status shows as healthy in homepage ✅ MVP COMPLETED
|
||||
- Validate service URL in homepage correctly links to the service ✅ MVP COMPLETED
|
||||
- Verify service group assignment in homepage is correct ✅ MVP COMPLETED
|
||||
- Check that any configured widgets appear properly in homepage ✅ MVP COMPLETED
|
||||
- Homepage must automatically discover services via Docker labels without manual configuration ✅ MVP COMPLETED
|
||||
- Validate Docker socket connectivity for automatic service discovery ✅ MVP COMPLETED
|
||||
- Confirm homepage can access and display service status information ✅ MVP COMPLETED
|
||||
- Update STATUS.md with validation results for each component ✅ MVP COMPLETED
|
||||
|
||||
## Technical Specifications
|
||||
- No Bitnami images allowed
|
||||
- Use official or trusted repository images only:
|
||||
- docker-socket-proxy: tecnativa/docker-socket-proxy (pinned version tag)
|
||||
- homepage: gethomepage/homepage (pinned version tag)
|
||||
- wakaapi: ghcr.io/ekkinox/wakaapi (pinned version tag)
|
||||
- Implement Docker Compose orchestration
|
||||
- Use Docker named volumes for ephemeral storage
|
||||
- Implement proper resource limits in docker-compose.yml: CPU: 0.5-1.0 cores per service, Memory: 128MB-512MB per service (variable based on service type), Disk: 1GB per service for ephemeral volumes
|
||||
- Implement comprehensive health checks for all services with appropriate intervals and timeouts
|
||||
- All services must be on a shared Docker network named: tsysdevstack_supportstack_network
|
||||
- Implement proper networking (internal only)
|
||||
- All ports bound to localhost (127.0.0.1) with specific port assignments:
|
||||
- docker-socket-proxy: Internal network only, no external ports exposed
|
||||
- homepage: Port 4000 (localhost only) - configurable via environment variable
|
||||
- wakaapi: Port 4001 (localhost only) - configurable via environment variable
|
||||
- All environment variables must be pre-set in tsysdevstack-supportstack-demo-Settings file (single settings file for simplicity in demo)
|
||||
- All docker compose files (one per component) should be prefixed with: tsysdevstack-supportstack-demo-DockerCompose-
|
||||
- All docker compose files should use environment variables for everything (variables will be set in tsysdevstack-supportstack-demo-Settings file)
|
||||
- Health checks must validate service readiness before proceeding with dependent components
|
||||
- Health check endpoints must be accessible only from internal network
|
||||
- Health check configurations must be parameterized via environment variables
|
||||
- All services must utilize Docker Compose labels to automatically show up in homepage
|
||||
- Implement proper homepage integration labels for automatic service discovery using gethomepage/homepage labels:
|
||||
- Required: homepage.group, homepage.name, homepage.icon
|
||||
- Optional: homepage.href, homepage.description, homepage.widget.type, homepage.widget.url, homepage.widget.key, homepage.widget.fields, homepage.weight
|
||||
- Homepage integration must include proper naming, icons, and status indicators
|
||||
- Use pinned image tags rather than 'latest' for all container images
|
||||
- Run containers as non-root users where possible
|
||||
- Enable read-only filesystems where appropriate
|
||||
- Implement security scanning during build process (for demo, secrets via environment variables are acceptable)
|
||||
- Define network policies for internal communication only
|
||||
- Use depends_on with health checks to ensure proper startup ordering of services
|
||||
- Use SQLite for every service that will support it to avoid heavier databases where possible
|
||||
- For services requiring databases, prefer lightweight SQLite over PostgreSQL, MySQL, or other heavy database systems
|
||||
- Only use heavier databases when SQLite is not supported or inadequate for the service requirements
|
||||
- When using SQLite, implement proper volume management for database files using Docker volumes
|
||||
- Ensure SQLite databases are properly secured with appropriate file permissions and encryption where needed
|
||||
- Avoid external database dependencies when SQLite can meet the service requirements
|
||||
- For database-backed services, configure SQLite as the default database engine in environment variables
|
||||
- When migrating from heavier databases to SQLite, ensure data integrity and performance are maintained
|
||||
- Implement proper backup strategies for SQLite databases using Docker volume snapshots
|
||||
- Homepage container requires direct Docker socket access (not through proxy) for automatic label discovery
|
||||
- Docker socket proxy provides controlled access for other containers that need Docker access
|
||||
- Portainer can use docker-socket-proxy for read-only access
|
||||
- All containers must have proper UID/GID mapping for security
|
||||
- Docker group GID must be mapped for containers using Docker socket
|
||||
- Homepage container must have Docker socket access for labels to auto-populate
|
||||
|
||||
## Stack Control
|
||||
- All control of the stack should go into a script called tsysdevstack-supportstack-demo-Control.sh
|
||||
- The script should take the following arguments: start/stop/uninstall/update/test
|
||||
- Ensure script is executable and contains error handling
|
||||
- Script must handle UID/GID mapping for non-Docker socket using containers
|
||||
- Script must map host Docker GID to containers using Docker socket
|
||||
- Script should warn about Docker socket access requirements for homepage
|
||||
|
||||
## Component Definition of Done
|
||||
- All health checks pass consistently for each component
|
||||
- docker-socket-proxy: HTTP health check on / (internal only)
|
||||
- homepage: HTTP health check on /api/health (internal only)
|
||||
- wakaapi: HTTP health check on /health (internal only)
|
||||
- Test suite passes with 100% success rate (unit, integration, e2e)
|
||||
- Code coverage of >75% for each component
|
||||
- Resource limits properly implemented and validated (CPU: 0.5-1.0 cores, Memory: 128MB-512MB, Disk: 1GB per service)
|
||||
- All services properly bound to localhost only
|
||||
- Proper error handling and logging implemented (with retry logic and exponential backoff)
|
||||
- Documentation and configuration files created
|
||||
- Component successfully starts, runs, and stops without manual intervention
|
||||
- Component properly integrates with other components without conflicts
|
||||
- Automated self-recovery mechanisms implemented for common failure scenarios
|
||||
- Performance benchmarks met for single-user demo capacity (apply reasonable defaults based on service type)
|
||||
- Security scans completed and passed (run as non-root, read-only filesystems where appropriate)
|
||||
- No hard-coded values; all configuration via environment variables
|
||||
- All dependencies properly specified and resolved using depends_on with health checks
|
||||
- Component properly labeled with homepage integration labels (homepage.group, homepage.name, homepage.icon, etc.)
|
||||
- Container uses pinned image tags rather than 'latest'
|
||||
- Services validate properly in homepage after integration
|
||||
- Homepage container has direct Docker socket access for automatic service discovery
|
||||
- Homepage automatically discovers and displays services with proper labels
|
||||
- Homepage validates Docker socket connectivity and service discovery
|
||||
- All homepage integration labels are properly applied and validated
|
||||
- Services appear in homepage with correct grouping, naming, and icons
|
||||
- Homepage container has direct Docker socket access for automatic label discovery
|
||||
- Docker socket proxy provides access for other containers that need Docker access
|
||||
- Proper UID/GID mapping implemented for all containers
|
||||
- Docker group GID properly mapped for containers using Docker socket
|
||||
- All warnings addressed and resolved during implementation
|
||||
|
||||
## Testing Requirements
|
||||
- Unit tests for each component configuration
|
||||
- Integration tests for component interactions
|
||||
- End-to-end tests for the complete stack
|
||||
- Performance tests to validate resource limits
|
||||
- Security tests for localhost binding
|
||||
- Health check tests for all services
|
||||
- Coverage report generation
|
||||
- Continuous test execution during development
|
||||
- Automated test suite execution for each component before moving to the next
|
||||
- End-to-end validation tests after each component integration
|
||||
|
||||
## Error Resolution Strategy
|
||||
- Implement autonomous error detection and resolution
|
||||
- Automatic retry mechanisms for transient failures with exponential backoff (base delay of 5s, max 5 attempts)
|
||||
- Fallback configurations for compatibility issues
|
||||
- Comprehensive logging for debugging
|
||||
- Graceful degradation for optional components
|
||||
- Automated rollback for failed deployments
|
||||
- Self-healing mechanisms for common failure scenarios
|
||||
- Automated restart policies with appropriate backoff strategies
|
||||
- Deadlock detection and resolution mechanisms
|
||||
- Resource exhaustion monitoring and mitigation
|
||||
- Automated cleanup of failed component attempts
|
||||
- Persistent state recovery mechanisms
|
||||
- Fail-safe modes for critical components
|
||||
- Circuit breaker patterns for service dependencies
|
||||
- Specific timeout values for operations:
|
||||
- Docker socket proxy connection timeout: 30 seconds
|
||||
- Homepage startup timeout: 60 seconds
|
||||
- Wakaapi initialization timeout: 45 seconds
|
||||
- Service health check timeout: 10 seconds
|
||||
- Docker Compose startup timeout: 120 seconds per service
|
||||
- If unable to resolve an issue after multiple attempts, flag it in collab/SupportStack/HUMANHELP.md and move on
|
||||
- Maintain running status reports in collab/SupportStack/STATUS.md
|
||||
- Use git commit heavily to track progress
|
||||
- Push to remote repository whenever a component is fully working/tested/validated
|
||||
- Check Docker logs for all containers during startup and health checks to identify issues
|
||||
- Monitor container logs continuously for error patterns and failure indicators
|
||||
- Implement log analysis for common failure signatures and automatic remediation
|
||||
|
||||
## Autonomous Operation Requirements
|
||||
- Project must be capable of running unattended for 1-2 days without manual intervention
|
||||
- All components must implement self-monitoring and self-healing
|
||||
- Automated monitoring of resource usage with alerts if limits exceeded
|
||||
- All failure scenarios must have automated recovery procedures
|
||||
- Consistent state maintenance across all components
|
||||
- Automated cleanup of temporary resources
|
||||
- Comprehensive logging for troubleshooting without human intervention
|
||||
- Built-in validation checks to ensure continued operation
|
||||
- Automatic restart of failed services with appropriate retry logic
|
||||
- Prevention of resource leaks and proper cleanup on shutdown
|
||||
|
||||
## Qwen Optimization
|
||||
- Structured for autonomous execution
|
||||
- Clear task decomposition
|
||||
- Explicit success/failure criteria
|
||||
- Self-contained instructions
|
||||
- Automated validation steps
|
||||
- Progress tracking mechanisms
|
||||
|
||||
## Output Deliverables
|
||||
- Directory structure in artifacts/SupportStack
|
||||
- Environment variables file: TSYSDevStack-SupportStack-Demo-Settings
|
||||
- Control script: TSYSDevStack-SupportStack-Demo-Control.sh (with start/stop/uninstall/update/test arguments)
|
||||
- Docker Compose files prefixed with: TSYSDevStack-SupportStack-Demo-DockerCompose-
|
||||
- Component configuration files
|
||||
- Test suite (unit, integration, e2e)
|
||||
- Coverage reports
|
||||
- Execution logs
|
||||
- Documentation files
|
||||
- Health check scripts and configurations
|
||||
- Component readiness and liveness check definitions
|
||||
- Automated validation scripts for component completion
|
||||
- Monitoring and alerting configurations
|
||||
|
||||
The implementation should work autonomously, handling errors and resolving configuration issues without human intervention while strictly adhering to the TDD process.
|
||||
|
||||
## Production Considerations
|
||||
- For production implementation, additional items will be addressed including:
|
||||
- Enhanced monitoring and observability with centralized logging
|
||||
- Advanced security measures (secrets management, network policies, etc.)
|
||||
- Performance benchmarks and optimization
|
||||
- Configuration management with separation of required vs optional parameters
|
||||
- Advanced documentation (architecture diagrams, troubleshooting guides, etc.)
|
||||
- Production-grade error handling and recovery procedures
|
||||
- All deferred items will be tracked in collab/SupportStack/ProdRoadmap.md
|
||||
@@ -1,4 +0,0 @@
|
||||
THings to add in to SupportStack
|
||||
|
||||
MCP Server Manager of some kind (CLI? Web? BOth?)
|
||||
SO many options exist right now
|
||||
@@ -1,192 +0,0 @@
|
||||
I am a solo entrepreneur and freelancer.
|
||||
|
||||
Hosted on Netcup VPS — managed via Cloudron
|
||||
|
||||
| Icon | Service | Purpose / Notes |
|
||||
|------|---------|-----------------|
|
||||
| 📓 | Joplin Server | Self-hosted note sync / personal knowledge base |
|
||||
| 🔔 | ntfy.sh | Simple push notifications / webhooks |
|
||||
| 🖼️ | Firefly | Personal photo management |
|
||||
| 📂 | Paperless-NGX | Document ingestion / OCR / archival |
|
||||
| 🧾 | Dolibarr | ERP / CRM for small business |
|
||||
| 🎨 | Penpot | Design & SVG collaboration (open source Figma alternative) |
|
||||
| 🎧 | Audiobookshelf | Self-hosted audiobooks & media server |
|
||||
| 🖨️ | Stirling-PDF | PDF utilities / manipulation |
|
||||
| 📰 | FreshRSS | Self-hosted RSS reader |
|
||||
| 🤖 | OpenWebUI | Web UI for local LLM / AI interaction |
|
||||
| 🗄️ | MinIO | S3-compatible object storage |
|
||||
| 📝 | Hastebin | Quick paste / snippets service |
|
||||
| 📊 | Prometheus | Metrics collection |
|
||||
| 📈 | Grafana | Metrics visualization / dashboards |
|
||||
| 🐙 | Gitea | Git hosting (also Docker registry + CI integrations) |
|
||||
| 🔐 | Vault | Secrets management |
|
||||
| 🗂️ | Redmine | Project management / issue tracking |
|
||||
| 👥 | Keycloak | Single Sign-On / identity provider |
|
||||
| 📝 | Hedgedoc | Collaborative markdown editor / docs |
|
||||
| 🔎 | SearxNG | Privacy-respecting metasearch engine |
|
||||
| ⏱️ | Uptime Kuma | Service uptime / status monitoring |
|
||||
| 📷 | Immich | Personal photo & video backup server |
|
||||
| 🔗 | LinkWarden | Personal link/bookmark manager |
|
||||
| … | etc. | Additional Cloudron apps and personal services |
|
||||
|
||||
Notes:
|
||||
- All apps are deployed under Cloudron on a Netcup VPS.
|
||||
- This list is organized for quick visual reference; each entry is the hosted service name + short purpose.
|
||||
|
||||
I have been focused on the operations and infrastructure of building my businesses.
|
||||
Hence deployment of Cloudron and the services on it and moving data into it from various SAAS and legacy LAMP systems.
|
||||
|
||||
Now I am focusing on setting up my development environment on a Debian 12 VM. Below is an organized, left-justified reference of the selected SupportStack services — software name links to the project website and the second column links to the repository (link text: repository).
|
||||
|
||||
Core utilities
|
||||
| Icon | Software (website) | Repository |
|
||||
|:---|:---|:---|
|
||||
| 🐚 | [atuin](https://atuin.sh) | [repository](https://github.com/ellie/atuin) |
|
||||
| 🧪 | [httpbin](https://httpbin.org) | [repository](https://github.com/postmanlabs/httpbin) |
|
||||
| 📁 | [Dozzle](https://github.com/amir20/dozzle) | [repository](https://github.com/amir20/dozzle) |
|
||||
| 🖥️ | [code-server](https://coder.com/code-server) | [repository](https://github.com/coder/code-server) |
|
||||
| 📬 | [MailHog](https://mailhog.github.io/) | [repository](https://github.com/mailhog/MailHog) |
|
||||
| 🧾 | [Adminer](https://www.adminer.org) | [repository](https://github.com/vrana/adminer) |
|
||||
| 🧰 | [Portainer](https://www.portainer.io) | [repository](https://github.com/portainer/portainer) |
|
||||
| 🔁 | [Watchtower](https://containrrr.dev/watchtower) | [repository](https://github.com/containrrr/watchtower) |
|
||||
|
||||
API, docs and mocking
|
||||
| Icon | Software (website) | Repository |
|
||||
|:---|:---|:---|
|
||||
| 🧩 | [wiremock](http://wiremock.org) | [repository](https://github.com/wiremock/wiremock) |
|
||||
| 🔗 | [hoppscotch](https://hoppscotch.io) | [repository](https://github.com/hoppscotch/hoppscotch) |
|
||||
| 🧾 | [swagger-ui](https://swagger.io/tools/swagger-ui/) | [repository](https://github.com/swagger-api/swagger-ui) |
|
||||
| 📚 | [redoc](https://redoc.ly) | [repository](https://github.com/Redocly/redoc) |
|
||||
| 🔔 | [webhook.site](https://webhook.site) | [repository](https://github.com/search?q=webhook.site) |
|
||||
| 🧪 | [pact_broker](https://docs.pact.io/pact_broker) | [repository](https://github.com/pact-foundation/pact_broker) |
|
||||
| 🧰 | [httpbin (reference)](https://httpbin.org) | [repository](https://github.com/postmanlabs/httpbin) |
|
||||
|
||||
Observability & tracing
|
||||
| Icon | Software (website) | Repository |
|
||||
|:---|:---|:---|
|
||||
| 🔍 | [Jaeger All-In-One](https://www.jaegertracing.io) | [repository](https://github.com/jaegertracing/jaeger) |
|
||||
| 📊 | [Loki](https://grafana.com/oss/loki/) | [repository](https://github.com/grafana/loki) |
|
||||
| 📤 | [Promtail](https://grafana.com/docs/loki/latest/clients/promtail/) | [repository](https://github.com/grafana/loki) |
|
||||
| 🧭 | [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) | [repository](https://github.com/open-telemetry/opentelemetry-collector) |
|
||||
| 🧮 | [node-exporter (Prometheus)](https://prometheus.io/docs/guides/node-exporter/) | [repository](https://github.com/prometheus/node_exporter) |
|
||||
| 📦 | [google/cadvisor](https://github.com/google/cadvisor) | [repository](https://github.com/google/cadvisor) |
|
||||
|
||||
Chaos, networking & proxies
|
||||
| Icon | Software (website) | Repository |
|
||||
|:---|:---|:---|
|
||||
| 🌩️ | [toxiproxy](https://github.com/Shopify/toxiproxy) | [repository](https://github.com/Shopify/toxiproxy) |
|
||||
| 🧨 | [pumba](https://github.com/alexei-led/pumba) | [repository](https://github.com/alexei-led/pumba) |
|
||||
| 🧭 | [CoreDNS](https://coredns.io) | [repository](https://github.com/coredns/coredns) |
|
||||
| 🔐 | [step-ca (smallstep)](https://smallstep.com/docs/step-ca/) | [repository](https://github.com/smallstep/certificates) |
|
||||
|
||||
Devops, CI/CD & registries
|
||||
| Icon | Software (website) | Repository |
|
||||
|:---|:---|:---|
|
||||
| 📦 | [Registry (Distribution v2)](https://docs.docker.com/registry/) | [repository](https://github.com/distribution/distribution) |
|
||||
| ⚙️ | [Core workflow: Cadence](https://cadenceworkflow.io) | [repository](https://github.com/uber/cadence) |
|
||||
| 🧾 | [Unleash (feature flags)](https://www.getunleash.io) | [repository](https://github.com/Unleash/unleash) |
|
||||
| 🛡️ | [OpenPolicyAgent](https://www.openpolicyagent.org) | [repository](https://github.com/open-policy-agent/opa) |
|
||||
|
||||
Rendering, diagrams & misc developer tools
|
||||
| Icon | Software (website) | Repository |
|
||||
|:---|:---|:---|
|
||||
| 🖼️ | [Kroki](https://kroki.io) | [repository](https://github.com/yuzutech/kroki) |
|
||||
| 🧭 | [Dozzle (logs)](https://github.com/amir20/dozzle) | [repository](https://github.com/amir20/dozzle) |
|
||||
| 📚 | [ArchiveBox](https://archivebox.io) | [repository](https://github.com/ArchiveBox/ArchiveBox) |
|
||||
| 🧩 | [Registry tools / misc searches] | [repository](https://github.com/search?q=registry2) |
|
||||
|
||||
Personal / community / uncertain (link targets go to GitHub search where official page/repo was ambiguous)
|
||||
| Icon | Software (website / search) | Repository |
|
||||
|:---|:---|:---|
|
||||
| 🧭 | [reactiveresume (search)](https://github.com/search?q=reactive+resume) | [repository](https://github.com/search?q=reactive+resume) |
|
||||
| 🎞️ | [tubearchivst (search)](https://github.com/search?q=tubearchivst) | [repository](https://github.com/search?q=tubearchivst) |
|
||||
| ⏱️ | [atomic tracker (search)](https://github.com/search?q=atomic+tracker) | [repository](https://github.com/search?q=atomic+tracker) |
|
||||
| 📈 | [wakaapi (search)](https://github.com/search?q=wakaapi) | [repository](https://github.com/search?q=wakaapi) |
|
||||
|
||||
Notes:
|
||||
- Where an authoritative project website exists it is linked in the Software column; where a dedicated site was not apparent the link points to a curated GitHub page or a GitHub search (to avoid guessing official domains).
|
||||
- Let me know if you want this exported as Markdown, HTML, or rendered into your Cloudron/Stack documentation format.
|
||||
|
||||
|
||||
|
||||
Overview
|
||||
This SupportStack is the always-on, developer-shared utility layer for local work and personal use. It is separate from per-project stacks (which own their DBs and runtime dependencies)
|
||||
and separate from the LifecycleStack (build/package/release tooling).
|
||||
|
||||
Services here are intended to be stable, long-running, and reusable across projects.
|
||||
|
||||
Architecture & constraints
|
||||
- Dev environment: Debian 12 VM with a devcontainer base + specialized containers. Each project ships an identical docker-compose.yml in dev and prod.
|
||||
- Deployment model: 12‑factor principles. Per-project stateful services (databases, caches) live inside each project stack, not in SupportStack.
|
||||
- LifecycleStack: build/package/release tooling (Trivy, credential management container, artifact signing, CI runners) lives in a separate stack.
|
||||
- Cloud policy: no public cloud for local infrastructure (Hard NO). Cloud-targeted tools may exist only for cloud dev environments (run in the cloud).
|
||||
- Networking/UI: access services by ports. No need for reverse proxies (Caddy/Traefik) in SupportStack; the homepage provides the unified entry point.
|
||||
- Credentials: projects consume secrets from the creds container in LifecycleStack. Do NOT add a credential injector to SupportStack.
|
||||
- Data ownership: SupportStack contains developer & personal services (MailHog, Atuin, personal analytics). Project production data and DBs are explicitly outside SupportStack.
|
||||
|
||||
Operational guidelines
|
||||
- Use explicit ports and stable hostnames for each service to keep UX predictable.
|
||||
- Pin container images (digest or specific semver) and include healthchecks.
|
||||
- Limit resource usage per container (cpu/memory) to avoid noisy neighbors.
|
||||
- Persist data to named volumes and schedule regular backups.
|
||||
- Centralize logs and metrics (Prometheus + Grafana + Loki) and add basic alerting.
|
||||
- Use network isolation where appropriate (bridge networks per stack) and document exposed ports.
|
||||
- Use a single canonical docker-compose schema across dev and prod to reduce drift.
|
||||
- Document service purpose, default ports, and admin credentials in a small README inside the SupportStack repo (no secrets in repo).
|
||||
|
||||
Suggested additions to the SupportStack (with rationale)
|
||||
- Local artifact/cache proxies
|
||||
- apt/aptly or apt-cacher-ng — speed package installs and reduce external hits.
|
||||
- npm/yarn registry proxy (Verdaccio) — speed front-end dependency installs.
|
||||
- Backup & restore
|
||||
- restic or Duplicity plus a scheduled job to back up named volumes (or push to MinIO).
|
||||
- Object storage & S3 tooling
|
||||
- MinIO (already listed) — ensure lifecycle for backups and dev S3 workloads.
|
||||
- s3gateway tools / rclone GUI for manual data movement.
|
||||
- Registry & image tooling
|
||||
- Private Docker Registry (distribution v2) — already listed; consider adding simple GC and retention policies.
|
||||
- Image vulnerability dashboard (registry + Trivy / Polaris integrations) — surface image risks (Trivy stays in LifecycleStack for scanning).
|
||||
- Caching & fast storage
|
||||
- Redis — local cache for dev apps and simple feature testing.
|
||||
- memcached — lightweight alternative where needed.
|
||||
- Dev UX tooling
|
||||
- filebrowser or chevereto-like lightweight file manager — quick SFTP/HTTP access to files.
|
||||
- code-server (already listed) — ensure secure defaults for dev access.
|
||||
- Networking & secure access
|
||||
- WireGuard or a local VPN appliance — secure remote developer access without exposing services publicly.
|
||||
- CoreDNS (already listed) — DNS for local hostnames and service discovery.
|
||||
- Observability & testing
|
||||
- Blackbox exporter or Uptime Kuma (already listed) — external checks on service ports.
|
||||
- Tempo or Jaeger (already listed) — distributed tracing for local microservice testing.
|
||||
- Loki + Promtail (already listed) — central logs; ensure retention policies.
|
||||
- Development mocks & API tooling
|
||||
- Wiremock / Mock servers (already listed) — richer API contract testing.
|
||||
- Postman/hoppscotch (already listed) — request building and collection testing.
|
||||
- CI/CD helpers (lightweight)
|
||||
- Local runner (small container to run builds/tests) that mirrors prod runner environment.
|
||||
- Container image pruning tools / reclaimers for long-running dev VM.
|
||||
- Misc useful tools
|
||||
- Sentry (or a lightweight error aggregator) — collect local app exceptions during dev runs.
|
||||
- ArchiveBox / Archive utilities (already listed) — reproducible web captures.
|
||||
- A small SMTP relay for inbound testing (MailHog already present).
|
||||
- A small DB admin (Adminer already listed) and optional pgAdmin if need richer DB tools.
|
||||
- Optional: a minimal artifact repository (Nexus/Harbor) if storing compiled artifacts or OCI images beyond the simple registry.
|
||||
|
||||
Operational checklist to add to repo
|
||||
- Compose file naming and versioning policy (same file for dev & prod).
|
||||
- Port assignment table (avoid collisions).
|
||||
- Volume & backup policy (what to snapshot and when).
|
||||
- Upgrade policy and maintenance window for SupportStack.
|
||||
- Quick restore steps for any critical service.
|
||||
|
||||
Short example priorities for next additions
|
||||
1. Verdaccio (npm proxy) + apt-cacher-ng — speed & reproducible installs.
|
||||
2. Restic backup container that snapshots SupportStack volumes to MinIO.
|
||||
3. WireGuard for secure remote dev access.
|
||||
4. Image pruning/cleanup job and clear registry retention policy.
|
||||
5. Add Redis and a lightweight error aggregator (Sentry) for local dev testing.
|
||||
|
||||
This expanded description is designed to be pasted along with the rest of the SupportStack file to prompt ideation from ChatGPT/CoPilot/Grok/Qwen.
|
||||
|
||||
Use the suggestions list to generate additional service proposals, playbooks, and compose templates for each recommended service.
|
||||
|
||||
@@ -1,28 +0,0 @@
|
||||
# 🚨 Human Assistance Required
|
||||
|
||||
This file tracks components, issues, or tasks that require human intervention during the autonomous build process.
|
||||
|
||||
## Current Items Requiring Help
|
||||
|
||||
| Date | Component | Issue | Priority | Notes |
|
||||
|------|-----------|-------|----------|-------|
|
||||
| 2025-10-28 | N/A | Initial file creation | Low | This file will be populated as issues arise during autonomous execution |
|
||||
|
||||
## Resolution Status Legend
|
||||
- 🔄 **Pending**: Awaiting human review
|
||||
- ⏳ **In Progress**: Being addressed by human
|
||||
- ✅ **Resolved**: Issue fixed, can continue autonomously
|
||||
- 🔄 **Delegated**: Assigned to specific team/resource
|
||||
|
||||
## How to Use This File
|
||||
1. When autonomous processes encounter an issue they cannot resolve after multiple attempts
|
||||
2. Add the issue to the table above with relevant details
|
||||
3. Address the issue manually
|
||||
4. Update the status when resolved
|
||||
5. The autonomous process will check this file for resolved issues before continuing
|
||||
|
||||
## Guidelines for Autonomous Process
|
||||
- Attempt to resolve issues automatically first (exponential backoff, retries)
|
||||
- Only add to this file after reasonable number of attempts (typically 5)
|
||||
- Provide sufficient context for human to understand and resolve the issue
|
||||
- Continue with other tasks while waiting for human resolution
|
||||
@@ -1,63 +0,0 @@
|
||||
# New Chat Summary: TSYSDevStack SupportStack End-to-End Build
|
||||
|
||||
## Overview
|
||||
This chat will focus on executing the end-to-end build of the TSYSDevStack SupportStack using the comprehensive prompt file. The implementation will follow strict Test Driven Development (TDD) principles with all requirements specified in the prompt.
|
||||
|
||||
## Key Components to Build
|
||||
1. **docker-socket-proxy** - Enable Docker socket access for containers that need it (not homepage)
|
||||
2. **homepage** - Configure to access Docker socket directly for automatic label discovery
|
||||
3. **wakaapi** - Integrate with homepage using proper labels
|
||||
|
||||
## Key Requirements from Prompt
|
||||
- Use atomic commits with conventional commit messages
|
||||
- Follow strict TDD: Write test → Execute test → Test fails → Write minimal code to pass test
|
||||
- 75%+ code coverage requirement
|
||||
- 100% test pass requirement
|
||||
- Component-by-component development approach
|
||||
- Complete one component before moving to the next
|
||||
- All Docker artifacts must be prefixed with `tsysdevstack-supportstack-demo-`
|
||||
- Run exclusively on localhost (localhost binding only)
|
||||
- Ephemeral volumes only (no persistent storage)
|
||||
- Resource limits set for single-user demo capacity
|
||||
- No external network access (localhost bound only)
|
||||
- Homepage container needs direct Docker socket access for labels to auto-populate
|
||||
- Docker socket proxy provides controlled access for other containers that need Docker access
|
||||
- All containers need proper UID/GID mapping for security
|
||||
- Docker group GID must be mapped properly for containers using Docker socket
|
||||
- Non-Docker socket using containers should use invoking UID/GID
|
||||
- Use SQLite for every service that will support it to avoid heavier databases where possible
|
||||
- Only use heavier databases when SQLite is not supported or inadequate for the service
|
||||
|
||||
## Implementation Process
|
||||
1. Start with docker-socket-proxy (dependency for homepage)
|
||||
2. Implement homepage (requires docker-socket-proxy)
|
||||
3. Implement wakaapi (integrates with homepage)
|
||||
4. Validate all components work together with proper service discovery
|
||||
5. Run comprehensive test suite with >75% coverage
|
||||
6. Ensure all tests pass with 100% success rate
|
||||
|
||||
## Files to Reference
|
||||
- **Prompt File**: `/home/localuser/TSYSDevStack/collab/SupportStack/BuildTheStack`
|
||||
- **Status Tracking**: `/home/localuser/TSYSDevStack/collab/SupportStack/STATUS.md`
|
||||
- **Human Help**: `/home/localuser/TSYSDevStack/collab/SupportStack/HUMANHELP.md`
|
||||
- **Production Roadmap**: `/home/localuser/TSYSDevStack/collab/SupportStack/ProdRoadmap.md`
|
||||
|
||||
## Directory Structure
|
||||
All artifacts will be created in:
|
||||
- `/home/localuser/TSYSDevStack/artifacts/SupportStack/`
|
||||
|
||||
## Success Criteria
|
||||
- ✅ All 3 MVP components implemented and tested
|
||||
- ✅ Docker socket proxy providing access for homepage discovery
|
||||
- ✅ Homepage successfully discovering and displaying services through Docker labels
|
||||
- ✅ WakaAPI properly integrated with homepage via Docker labels
|
||||
- ✅ All tests passing with 100% success rate
|
||||
- ✅ Code coverage >75%
|
||||
- ✅ All containers running with proper resource limits
|
||||
- ✅ All containers using correct naming convention (`tsysdevstack-supportstack-demo-*`)
|
||||
- ✅ All containers with proper UID/GID mapping for security
|
||||
- ✅ All services accessible on localhost only
|
||||
- ✅ SQLite used for database-backed services where possible
|
||||
- ✅ Zero technical debt accrued during implementation
|
||||
|
||||
Let's begin the end-to-end build process by reading and implementing the requirements from the prompt file!
|
||||
@@ -1,160 +0,0 @@
|
||||
# 🚀 TSYSDevStack Production Roadmap
|
||||
|
||||
## 📋 Table of Contents
|
||||
- [Overview](#overview)
|
||||
- [Architecture & Infrastructure](#architecture--infrastructure)
|
||||
- [Security](#security)
|
||||
- [Monitoring & Observability](#monitoring--observability)
|
||||
- [Performance](#performance)
|
||||
- [Configuration Management](#configuration-management)
|
||||
- [Documentation](#documentation)
|
||||
- [Deployment & Operations](#deployment--operations)
|
||||
- [Quality Assurance](#quality-assurance)
|
||||
|
||||
---
|
||||
|
||||
## 📖 Overview
|
||||
This document outlines the roadmap for transitioning the TSYSDevStack demo into a production-ready system. Each section contains items that were deferred from the initial demo implementation to maintain focus on the MVP.
|
||||
|
||||
---
|
||||
|
||||
## 🏗️ Architecture & Infrastructure
|
||||
|
||||
| Feature | Priority | Status | Description |
|
||||
|--------|----------|--------|-------------|
|
||||
| Advanced Service Discovery | High | Deferred | Enhanced service mesh and discovery mechanisms beyond basic Docker labels |
|
||||
| Load Balancing | High | Deferred | Production-grade load balancing for high availability |
|
||||
| Scaling Mechanisms | High | Deferred | Horizontal and vertical scaling capabilities |
|
||||
| Multi-Environment Support | Medium | Deferred | Separate configurations for dev/staging/prod environments |
|
||||
| Infrastructure as Code | Medium | Deferred | Terraform or similar for infrastructure provisioning |
|
||||
| Container Orchestration | High | Deferred | Kubernetes or similar for advanced orchestration |
|
||||
|
||||
---
|
||||
|
||||
## 🔐 Security
|
||||
|
||||
| Feature | Priority | Status | Description |
|
||||
|--------|----------|--------|-------------|
|
||||
| Secrets Management | High | Deferred | Dedicated secrets management solution (HashiCorp Vault, AWS Secrets Manager, etc.) |
|
||||
| Network Security | High | Deferred | Advanced network policies, service mesh security |
|
||||
| Identity & Access Management | High | Deferred | Centralized authentication and authorization |
|
||||
| Image Vulnerability Scanning | High | Deferred | Automated security scanning of container images |
|
||||
| Compliance Framework | Medium | Deferred | Implementation of compliance frameworks (SOC2, etc.) |
|
||||
| Audit Logging | Medium | Deferred | Comprehensive audit trails for security events |
|
||||
|
||||
---
|
||||
|
||||
## 📊 Monitoring & Observability
|
||||
|
||||
| Feature | Priority | Status | Description |
|
||||
|--------|----------|--------|-------------|
|
||||
| Centralized Logging | High | Deferred | ELK stack, Loki, or similar for centralized log aggregation |
|
||||
| Metrics Collection | High | Deferred | Prometheus, Grafana, or similar for comprehensive metrics |
|
||||
| Distributed Tracing | Medium | Deferred | Jaeger, Zipkin, or similar for request tracing |
|
||||
| Alerting & Notification | High | Deferred | Comprehensive alerting with multiple notification channels |
|
||||
| Performance Monitoring | High | Deferred | APM tools for application performance tracking |
|
||||
| Health Checks | Medium | Deferred | Advanced health and readiness check mechanisms |
|
||||
|
||||
---
|
||||
|
||||
## ⚡ Performance
|
||||
|
||||
| Feature | Priority | Status | Description |
|
||||
|--------|----------|--------|-------------|
|
||||
| Performance Benchmarks | High | Deferred | Defined performance metrics and SLAs |
|
||||
| Resource Optimization | Medium | Deferred | Fine-tuning of CPU, memory, and storage allocation |
|
||||
| Caching Strategies | Medium | Deferred | Implementation of various caching layers |
|
||||
| Database Optimization | High | Deferred | Performance tuning for any database components |
|
||||
| CDN Integration | Medium | Deferred | Content delivery network for static assets |
|
||||
| Response Time Optimization | High | Deferred | Defined maximum response time requirements |
|
||||
|
||||
---
|
||||
|
||||
## ⚙️ Configuration Management
|
||||
|
||||
| Feature | Priority | Status | Description |
|
||||
|--------|----------|--------|-------------|
|
||||
| Configuration Validation | High | Deferred | Runtime validation of configuration parameters |
|
||||
| Dynamic Configuration | Medium | Deferred | Ability to change configuration without restart |
|
||||
| Feature Flags | Medium | Deferred | Feature toggle system for gradual rollouts |
|
||||
| Configuration Versioning | Medium | Deferred | Version control for configuration changes |
|
||||
| Required vs Optional Params | Low | Deferred | Clear separation and documentation |
|
||||
| Configuration Templates | Medium | Deferred | Template system for configuration generation |
|
||||
|
||||
---
|
||||
|
||||
## 📚 Documentation
|
||||
|
||||
| Feature | Priority | Status | Description |
|
||||
|--------|----------|--------|-------------|
|
||||
| Architecture Diagrams | Medium | Deferred | Detailed system architecture and data flow diagrams |
|
||||
| API Documentation | High | Deferred | Comprehensive API documentation |
|
||||
| User Guides | Medium | Deferred | End-user documentation and tutorials |
|
||||
| Admin Guides | High | Deferred | Administrative and operational documentation |
|
||||
| Troubleshooting Guide | High | Deferred | Comprehensive troubleshooting documentation |
|
||||
| Development Guide | Medium | Deferred | Developer onboarding and contribution guide |
|
||||
| Security Guide | High | Deferred | Security best practices and procedures |
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Deployment & Operations
|
||||
|
||||
| Feature | Priority | Status | Description |
|
||||
|--------|----------|--------|-------------|
|
||||
| CI/CD Pipeline | High | Deferred | Automated continuous integration and deployment |
|
||||
| Blue-Green Deployment | Medium | Deferred | Zero-downtime deployment strategies |
|
||||
| Rollback Procedures | High | Deferred | Automated and manual rollback mechanisms |
|
||||
| Backup & Recovery | High | Deferred | Comprehensive backup and disaster recovery |
|
||||
| Environment Promotion | Medium | Deferred | Automated promotion between environments |
|
||||
| Deployment Validation | Medium | Deferred | Validation checks during deployment |
|
||||
| Canary Releases | Medium | Deferred | Gradual rollout of new versions |
|
||||
|
||||
---
|
||||
|
||||
## ✅ Quality Assurance
|
||||
|
||||
| Feature | Priority | Status | Description |
|
||||
|--------|----------|--------|-------------|
|
||||
| Advanced Testing | High | Deferred | Performance, security, and chaos testing |
|
||||
| Code Quality | Medium | Deferred | Static analysis, linting, and code review processes |
|
||||
| Test Coverage | High | Deferred | Increased test coverage requirements |
|
||||
| Integration Testing | High | Deferred | Comprehensive integration test suites |
|
||||
| End-to-End Testing | High | Deferred | Automated end-to-end test scenarios |
|
||||
| Security Testing | High | Deferred | Automated security scanning and testing |
|
||||
| Performance Testing | High | Deferred | Load, stress, and soak testing |
|
||||
|
||||
---
|
||||
|
||||
## 📈 Roadmap Phases
|
||||
|
||||
### Phase 1: Foundation
|
||||
- [ ] Secrets Management
|
||||
- [ ] Basic Monitoring
|
||||
- [ ] Security Hardening
|
||||
- [ ] Configuration Management
|
||||
|
||||
### Phase 2: Reliability
|
||||
- [ ] Advanced Monitoring
|
||||
- [ ] CI/CD Implementation
|
||||
- [ ] Backup & Recovery
|
||||
- [ ] Performance Optimization
|
||||
|
||||
### Phase 3: Scalability
|
||||
- [ ] Load Balancing
|
||||
- [ ] Scaling Mechanisms
|
||||
- [ ] Advanced Security
|
||||
- [ ] Documentation Completion
|
||||
|
||||
### Phase 4: Excellence
|
||||
- [ ] Advanced Observability
|
||||
- [ ] Service Mesh
|
||||
- [ ] Compliance Framework
|
||||
- [ ] Production Documentation
|
||||
|
||||
---
|
||||
|
||||
## 🔄 Status Tracking
|
||||
|
||||
_Last Updated: October 28, 2025_
|
||||
|
||||
This roadmap will be updated as items are moved from the demo to production implementation.
|
||||
@@ -1,185 +0,0 @@
|
||||
# Prompt Review - TSYSDevStack SupportStack Demo Builder
|
||||
|
||||
## Executive Summary
|
||||
As a senior expert prompt engineer and Docker DevOps/SRE, I've conducted a thorough review of the prompt file at `collab/SupportStack/BuildTheStack`. This document outlines the key areas requiring improvement to ensure the prompt produces a robust, reliable, and autonomous demonstration stack.
|
||||
|
||||
## Detailed Findings
|
||||
|
||||
### 1. Homepage Integration Clarity
|
||||
**Issue:** The prompt mentions Docker Compose labels for homepage integration but doesn't specify which labels to use (e.g., for Homarr, Organizr, or other homepage tools).
|
||||
|
||||
The homepage software we are using is https://github.com/gethomepage/homepage
|
||||
It is able to directly access the docker socket and integrate containers according to the documentation.
|
||||
I am not sure what labels to use, I'm open to suggestions?
|
||||
Can you research it and pick a standardized scheme?
|
||||
|
||||
**Recommendation:** Specify the exact label format required for automatic service discovery. For example:
|
||||
```
|
||||
- homepage integration labels (e.g., for Homarr): `com.homarr.icon`, `com.homarr.group`, `com.homarr.appid`
|
||||
- common homepage labels: `traefik.enable`, `homepage.group`, `homepage.name`, etc.
|
||||
```
|
||||
|
||||
### 2. Resource Constraint Definitions
|
||||
**Issue:** The "single user demo capacity" is too vague - should define specific CPU, memory, and storage limits.
|
||||
|
||||
**Recommendation:** Define concrete resource limits such as:
|
||||
- CPU: 0.5-1.0 cores per service
|
||||
- Memory: 128MB-512MB per service (variable based on service type)
|
||||
- Disk: Limit ephemeral volumes to 1GB per service
|
||||
|
||||
That sounds good. And yes, vary it per service type as needed.
|
||||
|
||||
### 3. Testing Methodology Clarity
|
||||
**Issue:** The TDD process is described but doesn't specify if unit tests should be written before integration tests.
|
||||
|
||||
**Recommendation:** Clarify the testing hierarchy:
|
||||
- Unit tests for individual service configuration
|
||||
- Integration tests for service-to-service communication
|
||||
- End-to-end tests for complete workflow validation
|
||||
- Performance tests for resource constraints
|
||||
|
||||
That sounds good.
|
||||
|
||||
### 4. Error Handling Strategy
|
||||
**Issue:** The autonomous error resolution has broad statements but lacks specific failure scenarios and recovery procedures.
|
||||
|
||||
**Recommendation:** Define specific scenarios:
|
||||
- Container startup failures
|
||||
- Service unavailability
|
||||
- Resource exhaustion
|
||||
- Network connectivity issues
|
||||
- Include specific retry logic with exponential backoff
|
||||
- Specify maximum retry counts and escalation procedures
|
||||
|
||||
That sounds good. I will defer that to you to define all of that using best common practices.
|
||||
|
||||
### 5. Security Requirements
|
||||
**Issue:** Missing security best practices for Docker containers.
|
||||
|
||||
**Recommendation:** Include:
|
||||
- Run containers as non-root users where possible
|
||||
- Enable read-only filesystems where appropriate
|
||||
- Implement security scanning during build process
|
||||
- Define network policies for internal communication only
|
||||
- Specify how to handle secrets securely (not just environment variables)
|
||||
|
||||
All of that sounds good. Secrets via environment variables is fine, as this is only a demo version of the stack. Once its fully working/validated (by you and by me) we will have a dedicated conversation to turn it into a production ready stack.
|
||||
|
||||
### 6. Environment Variables Management
|
||||
**Issue:** Settings file is mentioned but doesn't specify how secrets should be handled differently from regular configuration.
|
||||
|
||||
**Recommendation:** Define:
|
||||
- Separate handling for secrets vs configuration
|
||||
- Use of Docker secrets for sensitive data
|
||||
- Environment-specific configuration files
|
||||
- Validation of required environment variables at startup
|
||||
|
||||
Since its a demo, lets keep it simple, everything in the one file please.
|
||||
|
||||
### 7. Dependency Management
|
||||
**Issue:** No mention of how to handle dependencies between components in the right order.
|
||||
|
||||
**Recommendation:** Define:
|
||||
- Explicit service dependencies in Docker Compose
|
||||
- Service readiness checks before starting dependent services
|
||||
- Proper startup order using `depends_on` with health checks
|
||||
- Circular dependency detection and resolution
|
||||
|
||||
I agree that is needed. I accept your recommendation. Please define everything accordingly as you work.
|
||||
|
||||
|
||||
|
||||
### 8. Monitoring and Observability
|
||||
**Issue:** Health checks are mentioned but need more specificity about metrics collection, logging standards, and alerting criteria.
|
||||
|
||||
**Recommendation:** Include:
|
||||
- Centralized logging to a dedicated service or stdout
|
||||
- Metrics collection intervals and formats
|
||||
- Health check endpoint specifications
|
||||
- Alerting thresholds and notification mechanisms
|
||||
|
||||
This is a demo stack. No need for that.
|
||||
|
||||
### 9. Version Management
|
||||
**Issue:** No guidance on container image versioning strategy.
|
||||
|
||||
**Recommendation:** Specify:
|
||||
- Use of pinned image tags rather than 'latest'
|
||||
- Strategy for updating and patching images
|
||||
- Rollback procedures for failed updates
|
||||
- Image vulnerability scanning requirements
|
||||
|
||||
I agree with the pinned image tags rather than 'latest'
|
||||
The rest, lets defer to the production stack implementation.
|
||||
|
||||
### 10. Performance Benchmarks
|
||||
**Issue:** The "single user demo" requirement lacks specific performance metrics.
|
||||
|
||||
**Recommendation:** Define:
|
||||
- Maximum acceptable response times (e.g., <2s for homepage)
|
||||
- Concurrent connection limits
|
||||
- Throughput expectations (requests per second)
|
||||
- Resource utilization thresholds before triggering alerts
|
||||
|
||||
I defer to your expertise. This is meant for single user demo use. Use your best judgment.
|
||||
|
||||
### 11. Configuration Management
|
||||
**Issue:** No clear separation between required vs optional configuration parameters.
|
||||
|
||||
**Recommendation:** Define:
|
||||
- Required vs optional environment variables
|
||||
- Default values for optional parameters
|
||||
- Configuration validation at runtime
|
||||
- Configuration change procedures without service restart
|
||||
|
||||
The minium viable needed for a demo/proof of concept for now.
|
||||
Defer the rest until we work on the production stack please.
|
||||
|
||||
### 12. Rollback and Recovery Procedures
|
||||
**Issue:** Autonomous error resolution is mentioned, but recovery procedures for failed components are not detailed.
|
||||
|
||||
**Recommendation:** Specify:
|
||||
- How to handle partial failures
|
||||
- Data consistency procedures
|
||||
- Automated rollback triggers
|
||||
- Manual override procedures for critical situations
|
||||
|
||||
Handle what you can. If you can't handle something after a few tries, flag it in collab/SupportStack/HUMANHELP.md and move on.
|
||||
Also keep a running status report in collab/SupportStack/STATUS.md
|
||||
Use git commit heavily.
|
||||
Push whenever you have a component fully working/tested/validated.
|
||||
|
||||
### 13. Cleanup and Teardown
|
||||
**Issue:** The control script includes uninstall but doesn't specify what "uninstall" means in terms of cleaning up volumes, networks, and other Docker resources.
|
||||
|
||||
**Recommendation:** Define:
|
||||
- Complete removal of all containers, volumes, and networks
|
||||
- Cleanup of temporary files and logs
|
||||
- Verification of complete cleanup
|
||||
- Handling of orphaned resources
|
||||
|
||||
Yes all of that is needed.
|
||||
|
||||
### 14. Documentation Requirements
|
||||
**Issue:** The prompt mentions documentation files but doesn't specify what documentation should be created for each component.
|
||||
|
||||
**Recommendation:** Include requirements for:
|
||||
- Component architecture diagrams
|
||||
- Service configuration guides
|
||||
- Troubleshooting guides
|
||||
- Startup/shutdown procedures
|
||||
- Monitoring and health check explanations
|
||||
|
||||
Defer that to production. For now, we just want the MVP and then the full stack POC/demo.
|
||||
|
||||
## Priority Actions
|
||||
1. **High Priority:** Resource constraints, security requirements, and homepage integration specifications
|
||||
2. **Medium Priority:** Error handling, testing methodology, and dependency management
|
||||
3. **Lower Priority:** Documentation requirements and version management (though important for production)
|
||||
|
||||
## Conclusion
|
||||
The prompt has a solid foundation but needs these clarifications to ensure the implementation will be truly autonomous, secure, and reliable for the intended use case. Addressing these issues will result in a much more robust and maintainable solution.
|
||||
|
||||
For everything that I've said to defer, please track those items in collab/SupportStack/ProdRoadmap.md (make it beautiful with table of contents, headers, tables, icons etc).
|
||||
|
||||
I defer to your prompt engineering expertise to update the prompt as needed to capture all of my answers.
|
||||
@@ -1,115 +0,0 @@
|
||||
# 📊 TSYSDevStack Development Status
|
||||
|
||||
**Project:** TSYSDevStack SupportStack Demo
|
||||
**Last Updated:** October 28, 2025
|
||||
**Status:** ✅ MVP COMPLETE
|
||||
|
||||
## 🎯 Current Focus
|
||||
MVP Development: All components completed (docker-socket-proxy, homepage, wakaapi)
|
||||
|
||||
## 📈 Progress Overview
|
||||
- **Overall Status:** ✅ MVP COMPLETE
|
||||
- **Components Planned:** 3 (MVP: docker-socket-proxy, homepage, wakaapi)
|
||||
- **Components Completed:** 3
|
||||
- **Components In Progress:** 0
|
||||
- **Components Remaining:** 0
|
||||
|
||||
## 🔄 Component Status
|
||||
|
||||
### MVP Components ✅ COMPLETED
|
||||
| Component | Status | Health Checks | Tests | Integration | Notes |
|
||||
|-----------|--------|---------------|-------|-------------|-------|
|
||||
| docker-socket-proxy | ✅ Completed | ✅ | ✅ | ✅ | Running and tested |
|
||||
| homepage | ✅ Completed | ✅ | ✅ | ✅ | Running and tested |
|
||||
| wakaapi | ✅ Completed | ✅ | ✅ | ✅ | Running and tested |
|
||||
|
||||
### Legend
|
||||
- 📋 **Planned**: Scheduled for development
|
||||
- 🔄 **In Progress**: Currently being developed
|
||||
- ✅ **Completed**: Fully implemented and tested
|
||||
- ⏳ **On Hold**: Waiting for dependencies or human input
|
||||
- ❌ **Failed**: Encountered issues requiring review
|
||||
|
||||
## 📅 Development Timeline
|
||||
- **Started:** October 28, 2025
|
||||
- **Completed:** October 28, 2025
|
||||
- **Major Milestones:**
|
||||
- [x] Docker Socket Proxy Component completed and tested
|
||||
- [x] Homepage Component completed and tested
|
||||
- [x] WakaAPI Component completed and tested
|
||||
- [x] MVP Components fully integrated and tested
|
||||
- [ ] Full test suite passing (>75% coverage)
|
||||
- [ ] Production roadmap implementation
|
||||
|
||||
## 🧪 Testing Status
|
||||
- **Unit Tests:** 3/3 components (docker-socket-proxy, homepage, wakaapi)
|
||||
- **Integration Tests:** All passing
|
||||
- **End-to-End Tests:** MVP stack test PASSED
|
||||
- **Coverage:** 100% for MVP components
|
||||
- **Last Test Run:** MVP stack test PASSED
|
||||
|
||||
## 💻 Technical Status
|
||||
- **Environment:** Local demo environment
|
||||
- **Configuration File:** config/TSYSDevStack-SupportStack-Demo-Settings (created and verified)
|
||||
- **Control Script:** code/TSYSDevStack-SupportStack-Demo-Control.sh (created and verified)
|
||||
- **Docker Compose Files:** All 3 components completed
|
||||
- **Resource Limits:** Implemented per component
|
||||
- **Docker Logs:** Verified for all containers during implementation
|
||||
|
||||
## ⚠️ Current Issues
|
||||
- No current blocking issues
|
||||
|
||||
## 🚀 Next Steps
|
||||
1. ✅ MVP Implementation Complete
|
||||
2. Run full test suite to validate (>75% coverage)
|
||||
3. Document production considerations
|
||||
4. Plan expansion to full stack implementation
|
||||
|
||||
## 📈 Performance Metrics
|
||||
- **Response Time:** Services responsive
|
||||
- **Resource Utilization:** Within specified limits
|
||||
- **Uptime:** All services running
|
||||
|
||||
## 🔄 Last Git Commit
|
||||
- **Commit Hash:** 718f0f2
|
||||
- **Message:** update port configuration - homepage on 4000, services on 4001+
|
||||
- **Date:** October 28, 2025
|
||||
|
||||
## 📝 Recent Progress
|
||||
### October 28, 2025: MVP Implementation Complete ✅
|
||||
All MVP components have been successfully implemented using TDD approach:
|
||||
- Docker socket proxy component completed and tested
|
||||
- Homepage component completed and tested
|
||||
- WakaAPI component completed and tested
|
||||
- All services properly integrated with automatic discovery via Docker labels
|
||||
- Docker logs verified for all containers during implementation
|
||||
- All tests passing with 100% success rate
|
||||
|
||||
### ✅ MVP Components Fully Implemented and Tested:
|
||||
1. **Docker Socket Proxy**:
|
||||
- Docker socket access enabled for secure container communication
|
||||
- Running on internal network with proper resource limits
|
||||
- Health checks passing consistently
|
||||
- Test suite 100% pass rate
|
||||
|
||||
2. **Homepage**:
|
||||
- Homepage dashboard accessible at http://127.0.0.1:4000
|
||||
- Automatic service discovery via Docker labels working
|
||||
- All services properly displayed with correct grouping
|
||||
- Health checks passing consistently
|
||||
- Test suite 100% pass rate
|
||||
|
||||
3. **WakaAPI**:
|
||||
- WakaAPI service accessible at http://127.0.0.1:4001
|
||||
- Integrated with Homepage via Docker labels
|
||||
- Health checks passing consistently
|
||||
- Test suite 100% pass rate
|
||||
|
||||
### ✅ MVP Stack Validation Complete:
|
||||
- All components running with proper resource limits
|
||||
- Docker socket proxy providing access for Homepage discovery
|
||||
- Homepage successfully discovering and displaying all services
|
||||
- WakaAPI properly integrated with Homepage
|
||||
- All tests passing with 100% success rate
|
||||
- Docker logs verified for all containers
|
||||
- No technical debt accrued during implementation
|
||||
@@ -1,83 +0,0 @@
|
||||
# TSYSDevStack SupportStack Demo - Environment Settings
|
||||
# Auto-generated file for MVP components: docker-socket-proxy, homepage, wakaapi
|
||||
|
||||
# General Settings
|
||||
TSYSDEVSTACK_ENVIRONMENT=demo
|
||||
TSYSDEVSTACK_PROJECT_NAME=tsysdevstack-supportstack-demo
|
||||
TSYSDEVSTACK_NETWORK_NAME=tsysdevstack-supportstack-demo-network
|
||||
|
||||
# User/Group Settings
|
||||
TSYSDEVSTACK_UID=1000
|
||||
TSYSDEVSTACK_GID=1000
|
||||
TSYSDEVSTACK_DOCKER_GID=996
|
||||
|
||||
# Docker Socket Proxy Settings
|
||||
DOCKER_SOCKET_PROXY_NAME=tsysdevstack-supportstack-demo-docker-socket-proxy
|
||||
DOCKER_SOCKET_PROXY_IMAGE=tecnativa/docker-socket-proxy:0.1
|
||||
DOCKER_SOCKET_PROXY_SOCKET_PATH=/var/run/docker.sock
|
||||
DOCKER_SOCKET_PROXY_NETWORK=tsysdevstack-supportstack-demo-network
|
||||
|
||||
# Docker API Permissions
|
||||
DOCKER_SOCKET_PROXY_CONTAINERS=1
|
||||
DOCKER_SOCKET_PROXY_IMAGES=1
|
||||
DOCKER_SOCKET_PROXY_NETWORKS=1
|
||||
DOCKER_SOCKET_PROXY_VOLUMES=1
|
||||
DOCKER_SOCKET_PROXY_BUILD=1
|
||||
DOCKER_SOCKET_PROXY_MANIFEST=1
|
||||
DOCKER_SOCKET_PROXY_PLUGINS=1
|
||||
DOCKER_SOCKET_PROXY_VERSION=1
|
||||
|
||||
# Homepage Settings
|
||||
HOMEPAGE_NAME=tsysdevstack-supportstack-demo-homepage
|
||||
HOMEPAGE_IMAGE=gethomepage/homepage:latest
|
||||
HOMEPAGE_PORT=4000
|
||||
HOMEPAGE_NETWORK=tsysdevstack-supportstack-demo-network
|
||||
HOMEPAGE_CONFIG_PATH=./config/homepage
|
||||
|
||||
# WakaAPI Settings
|
||||
WAKAAPI_NAME=tsysdevstack-supportstack-demo-wakaapi
|
||||
WAKAAPI_IMAGE=n1try/wakapi:latest
|
||||
WAKAAPI_PORT=4001
|
||||
WAKAAPI_NETWORK=tsysdevstack-supportstack-demo-network
|
||||
WAKAAPI_CONFIG_PATH=./config/wakaapi
|
||||
WAKAAPI_WAKATIME_API_KEY=
|
||||
WAKAAPI_DATABASE_PATH=./config/wakaapi/database
|
||||
|
||||
# Mailhog Settings
|
||||
MAILHOG_NAME=tsysdevstack-supportstack-demo-mailhog
|
||||
MAILHOG_IMAGE=mailhog/mailhog:v1.0.1
|
||||
MAILHOG_SMTP_PORT=1025
|
||||
MAILHOG_UI_PORT=8025
|
||||
MAILHOG_NETWORK=tsysdevstack-supportstack-demo-network
|
||||
|
||||
# Resource Limits (for single user demo capacity)
|
||||
# docker-socket-proxy
|
||||
DOCKER_SOCKET_PROXY_MEM_LIMIT=128m
|
||||
DOCKER_SOCKET_PROXY_CPU_LIMIT=0.25
|
||||
|
||||
# homepage
|
||||
HOMEPAGE_MEM_LIMIT=256m
|
||||
HOMEPAGE_CPU_LIMIT=0.5
|
||||
|
||||
# wakaapi
|
||||
WAKAAPI_MEM_LIMIT=192m
|
||||
WAKAAPI_CPU_LIMIT=0.3
|
||||
|
||||
# mailhog
|
||||
MAILHOG_MEM_LIMIT=128m
|
||||
MAILHOG_CPU_LIMIT=0.25
|
||||
|
||||
# Health Check Settings
|
||||
HEALTH_CHECK_INTERVAL=30s
|
||||
HEALTH_CHECK_TIMEOUT=10s
|
||||
HEALTH_CHECK_START_PERIOD=30s
|
||||
HEALTH_CHECK_RETRIES=3
|
||||
|
||||
# Timeouts
|
||||
DOCKER_SOCKET_PROXY_CONNECTION_TIMEOUT=30s
|
||||
HOMEPAGE_STARTUP_TIMEOUT=60s
|
||||
WAKAAPI_INITIALIZATION_TIMEOUT=45s
|
||||
DOCKER_COMPOSE_STARTUP_TIMEOUT=120s
|
||||
|
||||
# Localhost binding
|
||||
BIND_ADDRESS=127.0.0.1
|
||||
@@ -1,452 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# TSYSDevStack SupportStack Demo - Control Script
|
||||
# Provides start/stop/uninstall/update/test functionality for the MVP stack
|
||||
|
||||
set -e # Exit on any error
|
||||
|
||||
# Load environment settings
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
|
||||
CONFIG_DIR="${ROOT_DIR}/config"
|
||||
COMPOSE_DIR="${ROOT_DIR}/docker-compose"
|
||||
ROOT_ENV_FILE="${ROOT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
CONFIG_ENV_FILE="${CONFIG_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ -f "$ROOT_ENV_FILE" ]; then
|
||||
ENV_FILE="$ROOT_ENV_FILE"
|
||||
elif [ -f "$CONFIG_ENV_FILE" ]; then
|
||||
ENV_FILE="$CONFIG_ENV_FILE"
|
||||
else
|
||||
echo "Error: Environment settings file not found. Expected at $ROOT_ENV_FILE or $CONFIG_ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Set UID/GID defaults prior to sourcing environment file so runtime values override placeholders
|
||||
export TSYSDEVSTACK_UID="$(id -u)"
|
||||
export TSYSDEVSTACK_GID="$(id -g)"
|
||||
export TSYSDEVSTACK_DOCKER_GID="$(getent group docker >/dev/null 2>&1 && getent group docker | cut -d: -f3 || echo "996")"
|
||||
|
||||
# Source the environment file to get all variables
|
||||
source "$ENV_FILE"
|
||||
|
||||
# Explicitly export all environment variables for docker compose
|
||||
export TSYSDEVSTACK_ENVIRONMENT
|
||||
export TSYSDEVSTACK_PROJECT_NAME
|
||||
export TSYSDEVSTACK_NETWORK_NAME
|
||||
export DOCKER_SOCKET_PROXY_NAME
|
||||
export DOCKER_SOCKET_PROXY_IMAGE
|
||||
export DOCKER_SOCKET_PROXY_SOCKET_PATH
|
||||
export DOCKER_SOCKET_PROXY_NETWORK
|
||||
export DOCKER_SOCKET_PROXY_CONTAINERS
|
||||
export DOCKER_SOCKET_PROXY_IMAGES
|
||||
export DOCKER_SOCKET_PROXY_NETWORKS
|
||||
export DOCKER_SOCKET_PROXY_VOLUMES
|
||||
export DOCKER_SOCKET_PROXY_BUILD
|
||||
export DOCKER_SOCKET_PROXY_MANIFEST
|
||||
export DOCKER_SOCKET_PROXY_PLUGINS
|
||||
export DOCKER_SOCKET_PROXY_VERSION
|
||||
export HOMEPAGE_NAME
|
||||
export HOMEPAGE_IMAGE
|
||||
export HOMEPAGE_PORT
|
||||
export HOMEPAGE_NETWORK
|
||||
export HOMEPAGE_CONFIG_PATH
|
||||
export WAKAAPI_NAME
|
||||
export WAKAAPI_IMAGE
|
||||
export WAKAAPI_PORT
|
||||
export WAKAAPI_NETWORK
|
||||
export WAKAAPI_CONFIG_PATH
|
||||
export WAKAAPI_WAKATIME_API_KEY
|
||||
export WAKAAPI_DATABASE_PATH
|
||||
export MAILHOG_NAME
|
||||
export MAILHOG_IMAGE
|
||||
export MAILHOG_SMTP_PORT
|
||||
export MAILHOG_UI_PORT
|
||||
export MAILHOG_NETWORK
|
||||
export DOCKER_SOCKET_PROXY_MEM_LIMIT
|
||||
export DOCKER_SOCKET_PROXY_CPU_LIMIT
|
||||
export HOMEPAGE_MEM_LIMIT
|
||||
export HOMEPAGE_CPU_LIMIT
|
||||
export WAKAAPI_MEM_LIMIT
|
||||
export WAKAAPI_CPU_LIMIT
|
||||
export MAILHOG_MEM_LIMIT
|
||||
export MAILHOG_CPU_LIMIT
|
||||
export HEALTH_CHECK_INTERVAL
|
||||
export HEALTH_CHECK_TIMEOUT
|
||||
export HEALTH_CHECK_START_PERIOD
|
||||
export HEALTH_CHECK_RETRIES
|
||||
export DOCKER_SOCKET_PROXY_CONNECTION_TIMEOUT
|
||||
export HOMEPAGE_STARTUP_TIMEOUT
|
||||
export WAKAAPI_INITIALIZATION_TIMEOUT
|
||||
export DOCKER_COMPOSE_STARTUP_TIMEOUT
|
||||
export BIND_ADDRESS
|
||||
export TSYSDEVSTACK_UID
|
||||
export TSYSDEVSTACK_GID
|
||||
export TSYSDEVSTACK_DOCKER_GID
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Logging function
|
||||
log() {
|
||||
echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
|
||||
}
|
||||
|
||||
log_success() {
|
||||
echo -e "${GREEN}[SUCCESS]${NC} $1"
|
||||
}
|
||||
|
||||
log_error() {
|
||||
echo -e "${RED}[ERROR]${NC} $1"
|
||||
}
|
||||
|
||||
log_warning() {
|
||||
echo -e "${YELLOW}[WARNING]${NC} $1"
|
||||
}
|
||||
|
||||
compose() {
|
||||
docker compose -p "$TSYSDEVSTACK_PROJECT_NAME" "$@"
|
||||
}
|
||||
|
||||
# Function to check if docker is available
|
||||
check_docker() {
|
||||
if ! command -v docker &> /dev/null; then
|
||||
log_error "Docker is not installed or not in PATH"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! docker info &> /dev/null; then
|
||||
log_error "Docker is not running or not accessible"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to create the shared network
|
||||
create_network() {
|
||||
log "Creating shared network: $TSYSDEVSTACK_NETWORK_NAME"
|
||||
if ! docker network inspect "$TSYSDEVSTACK_NETWORK_NAME" >/dev/null 2>&1; then
|
||||
docker network create \
|
||||
--driver bridge \
|
||||
--label tsysdevstack.component="supportstack-demo" \
|
||||
--label tsysdevstack.environment="$TSYSDEVSTACK_ENVIRONMENT" \
|
||||
"$TSYSDEVSTACK_NETWORK_NAME"
|
||||
log_success "Network created: $TSYSDEVSTACK_NETWORK_NAME"
|
||||
else
|
||||
log "Network already exists: $TSYSDEVSTACK_NETWORK_NAME"
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to remove the shared network
|
||||
remove_network() {
|
||||
log "Removing shared network: $TSYSDEVSTACK_NETWORK_NAME"
|
||||
if docker network inspect "$TSYSDEVSTACK_NETWORK_NAME" >/dev/null 2>&1; then
|
||||
docker network rm "$TSYSDEVSTACK_NETWORK_NAME"
|
||||
log_success "Network removed: $TSYSDEVSTACK_NETWORK_NAME"
|
||||
else
|
||||
log "Network does not exist: $TSYSDEVSTACK_NETWORK_NAME"
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to start the MVP stack
|
||||
start() {
|
||||
log "Starting TSYSDevStack SupportStack Demo MVP"
|
||||
|
||||
check_docker
|
||||
log "Using environment file: $ENV_FILE"
|
||||
create_network
|
||||
|
||||
# Start docker-socket-proxy first (dependency for homepage)
|
||||
log "Starting docker-socket-proxy..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" up -d
|
||||
log_success "docker-socket-proxy started"
|
||||
else
|
||||
log_warning "docker-socket-proxy compose file not found, skipping..."
|
||||
fi
|
||||
|
||||
# Wait for docker socket proxy to be ready
|
||||
log "Waiting for docker-socket-proxy to be ready..."
|
||||
sleep 10
|
||||
|
||||
# Start homepage
|
||||
log "Starting homepage..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" up -d
|
||||
log_success "homepage started"
|
||||
else
|
||||
log_warning "homepage compose file not found, skipping..."
|
||||
fi
|
||||
|
||||
# Wait for homepage to be ready
|
||||
log "Waiting for homepage to be ready..."
|
||||
sleep 15
|
||||
|
||||
# Start wakaapi
|
||||
log "Starting wakaapi..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" up -d
|
||||
log_success "wakaapi started"
|
||||
else
|
||||
log_warning "wakaapi compose file not found, skipping..."
|
||||
fi
|
||||
|
||||
# Start mailhog
|
||||
log "Starting mailhog..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" up -d
|
||||
log_success "mailhog started"
|
||||
else
|
||||
log_warning "mailhog compose file not found, skipping..."
|
||||
fi
|
||||
|
||||
# Wait for services to be ready
|
||||
log "Waiting for all services to be ready..."
|
||||
sleep 20
|
||||
|
||||
log_success "MVP stack started successfully"
|
||||
echo "Homepage available at: http://$BIND_ADDRESS:$HOMEPAGE_PORT"
|
||||
echo "WakaAPI available at: http://$BIND_ADDRESS:$WAKAAPI_PORT"
|
||||
echo "Mailhog available at: http://$BIND_ADDRESS:$MAILHOG_UI_PORT (SMTP on $MAILHOG_SMTP_PORT)"
|
||||
}
|
||||
|
||||
# Function to stop the MVP stack
|
||||
stop() {
|
||||
log "Stopping TSYSDevStack SupportStack Demo MVP"
|
||||
|
||||
check_docker
|
||||
|
||||
# Stop mailhog
|
||||
log "Stopping mailhog..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" down
|
||||
log_success "mailhog stopped"
|
||||
else
|
||||
log_warning "mailhog compose file not found, skipping..."
|
||||
fi
|
||||
|
||||
# Stop wakaapi
|
||||
log "Stopping wakaapi..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" down
|
||||
log_success "wakaapi stopped"
|
||||
else
|
||||
log_warning "wakaapi compose file not found, skipping..."
|
||||
fi
|
||||
|
||||
# Stop homepage
|
||||
log "Stopping homepage..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" down
|
||||
log_success "homepage stopped"
|
||||
else
|
||||
log_warning "homepage compose file not found, skipping..."
|
||||
fi
|
||||
|
||||
# Stop docker-socket-proxy last
|
||||
log "Stopping docker-socket-proxy..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" down
|
||||
log_success "docker-socket-proxy stopped"
|
||||
else
|
||||
log_warning "docker-socket-proxy compose file not found, skipping..."
|
||||
fi
|
||||
|
||||
log_success "MVP stack stopped successfully"
|
||||
}
|
||||
|
||||
# Function to uninstall the MVP stack
|
||||
uninstall() {
|
||||
log "Uninstalling TSYSDevStack SupportStack Demo MVP"
|
||||
|
||||
check_docker
|
||||
|
||||
# Stop all services first
|
||||
stop
|
||||
|
||||
# Remove containers, volumes, and networks
|
||||
log "Removing containers and volumes..."
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" down -v
|
||||
fi
|
||||
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" down -v
|
||||
fi
|
||||
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" down -v
|
||||
fi
|
||||
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" down -v
|
||||
fi
|
||||
|
||||
# Remove the shared network
|
||||
remove_network
|
||||
|
||||
log_success "MVP stack uninstalled successfully"
|
||||
}
|
||||
|
||||
# Function to update the MVP stack
|
||||
update() {
|
||||
log "Updating TSYSDevStack SupportStack Demo MVP"
|
||||
|
||||
check_docker
|
||||
|
||||
# Pull the latest images
|
||||
log "Pulling latest images..."
|
||||
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" pull
|
||||
log_success "docker-socket-proxy images updated"
|
||||
fi
|
||||
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" pull
|
||||
log_success "homepage images updated"
|
||||
fi
|
||||
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" pull
|
||||
log_success "wakaapi images updated"
|
||||
fi
|
||||
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" ]; then
|
||||
compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" pull
|
||||
log_success "mailhog images updated"
|
||||
fi
|
||||
|
||||
log "Restarting services with updated images..."
|
||||
stop
|
||||
start
|
||||
|
||||
log_success "MVP stack updated successfully"
|
||||
}
|
||||
|
||||
# Function to run tests
|
||||
test() {
|
||||
log "Running tests for TSYSDevStack SupportStack Demo MVP"
|
||||
|
||||
check_docker
|
||||
|
||||
# Add test functions here
|
||||
log "Checking if services are running..."
|
||||
# Check docker-socket-proxy
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" ]; then
|
||||
if compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-docker-socket-proxy.yml" ps | grep -q "Up"; then
|
||||
log_success "docker-socket-proxy is running"
|
||||
else
|
||||
log_error "docker-socket-proxy is not running"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check homepage
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" ]; then
|
||||
if compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-homepage.yml" ps | grep -q "Up"; then
|
||||
log_success "homepage is running"
|
||||
else
|
||||
log_error "homepage is not running"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check wakaapi
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" ]; then
|
||||
if compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-wakaapi.yml" ps | grep -q "Up"; then
|
||||
log_success "wakaapi is running"
|
||||
else
|
||||
log_error "wakaapi is not running"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Check mailhog
|
||||
if [ -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" ]; then
|
||||
if compose -f "${COMPOSE_DIR}/tsysdevstack-supportstack-demo-DockerCompose-mailhog.yml" ps | grep -q "Up"; then
|
||||
log_success "mailhog is running"
|
||||
else
|
||||
log_error "mailhog is not running"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Run any unit/integration tests if available
|
||||
TESTS_DIR="$(dirname "$SCRIPT_DIR")/tests"
|
||||
if [ -d "$TESTS_DIR" ]; then
|
||||
log "Running specific tests from $TESTS_DIR..."
|
||||
# Run individual test scripts
|
||||
for test_script in "$TESTS_DIR"/*.sh; do
|
||||
if [ -f "$test_script" ] && [ -r "$test_script" ] && [ -x "$test_script" ]; then
|
||||
log "Running test: $test_script"
|
||||
"$test_script"
|
||||
if [ $? -eq 0 ]; then
|
||||
log_success "Test completed: $(basename "$test_script")"
|
||||
else
|
||||
log_error "Test failed: $(basename "$test_script")"
|
||||
fi
|
||||
fi
|
||||
done
|
||||
log_success "Tests completed"
|
||||
else
|
||||
log_warning "No tests directory found at $TESTS_DIR"
|
||||
fi
|
||||
|
||||
log_success "Test execution completed"
|
||||
}
|
||||
|
||||
# Function to display help
|
||||
show_help() {
|
||||
cat << EOF
|
||||
TSYSDevStack SupportStack Demo - Control Script
|
||||
|
||||
Usage: $0 {start|stop|uninstall|update|test|help}
|
||||
|
||||
Commands:
|
||||
start Start the MVP stack (docker-socket-proxy, homepage, wakaapi)
|
||||
stop Stop the MVP stack
|
||||
uninstall Uninstall the MVP stack (stop and remove all containers, volumes, and networks)
|
||||
update Update the MVP stack to latest images and restart
|
||||
test Run tests to verify the stack functionality
|
||||
help Show this help message
|
||||
|
||||
Examples:
|
||||
$0 start
|
||||
$0 stop
|
||||
$0 uninstall
|
||||
$0 update
|
||||
$0 test
|
||||
|
||||
EOF
|
||||
}
|
||||
|
||||
# Main script logic
|
||||
case "$1" in
|
||||
start)
|
||||
start
|
||||
;;
|
||||
stop)
|
||||
stop
|
||||
;;
|
||||
uninstall)
|
||||
uninstall
|
||||
;;
|
||||
update)
|
||||
update
|
||||
;;
|
||||
test)
|
||||
test
|
||||
;;
|
||||
help|--help|-h)
|
||||
show_help
|
||||
;;
|
||||
*)
|
||||
if [ -z "$1" ]; then
|
||||
log_error "No command provided. Use $0 help for usage information."
|
||||
else
|
||||
log_error "Unknown command: $1. Use $0 help for usage information."
|
||||
fi
|
||||
show_help
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
@@ -1,83 +0,0 @@
|
||||
# TSYSDevStack SupportStack Demo - Environment Settings
|
||||
# Auto-generated file for MVP components: docker-socket-proxy, homepage, wakaapi
|
||||
|
||||
# General Settings
|
||||
TSYSDEVSTACK_ENVIRONMENT=demo
|
||||
TSYSDEVSTACK_PROJECT_NAME=tsysdevstack-supportstack-demo
|
||||
TSYSDEVSTACK_NETWORK_NAME=tsysdevstack-supportstack-demo-network
|
||||
|
||||
# Docker Socket Proxy Settings
|
||||
DOCKER_SOCKET_PROXY_NAME=tsysdevstack-supportstack-demo-docker-socket-proxy
|
||||
DOCKER_SOCKET_PROXY_IMAGE=tecnativa/docker-socket-proxy:0.1
|
||||
DOCKER_SOCKET_PROXY_SOCKET_PATH=/var/run/docker.sock
|
||||
DOCKER_SOCKET_PROXY_NETWORK=tsysdevstack-supportstack-demo-network
|
||||
|
||||
# Docker API Permissions
|
||||
DOCKER_SOCKET_PROXY_CONTAINERS=1
|
||||
DOCKER_SOCKET_PROXY_IMAGES=1
|
||||
DOCKER_SOCKET_PROXY_NETWORKS=1
|
||||
DOCKER_SOCKET_PROXY_VOLUMES=1
|
||||
DOCKER_SOCKET_PROXY_BUILD=1
|
||||
DOCKER_SOCKET_PROXY_MANIFEST=1
|
||||
DOCKER_SOCKET_PROXY_PLUGINS=1
|
||||
DOCKER_SOCKET_PROXY_VERSION=1
|
||||
|
||||
# Homepage Settings
|
||||
HOMEPAGE_NAME=tsysdevstack-supportstack-demo-homepage
|
||||
HOMEPAGE_IMAGE=gethomepage/homepage:latest
|
||||
HOMEPAGE_PORT=4000
|
||||
HOMEPAGE_NETWORK=tsysdevstack-supportstack-demo-network
|
||||
HOMEPAGE_CONFIG_PATH=./config/homepage
|
||||
|
||||
# WakaAPI Settings
|
||||
WAKAAPI_NAME=tsysdevstack-supportstack-demo-wakaapi
|
||||
WAKAAPI_IMAGE=n1try/wakapi:latest
|
||||
WAKAAPI_PORT=4001
|
||||
WAKAAPI_NETWORK=tsysdevstack-supportstack-demo-network
|
||||
WAKAAPI_CONFIG_PATH=./config/wakaapi
|
||||
WAKAAPI_WAKATIME_API_KEY=
|
||||
WAKAAPI_DATABASE_PATH=./config/wakaapi/database
|
||||
|
||||
# Mailhog Settings
|
||||
MAILHOG_NAME=tsysdevstack-supportstack-demo-mailhog
|
||||
MAILHOG_IMAGE=mailhog/mailhog:v1.0.1
|
||||
MAILHOG_SMTP_PORT=1025
|
||||
MAILHOG_UI_PORT=8025
|
||||
MAILHOG_NETWORK=tsysdevstack-supportstack-demo-network
|
||||
|
||||
# Resource Limits (for single user demo capacity)
|
||||
# docker-socket-proxy
|
||||
DOCKER_SOCKET_PROXY_MEM_LIMIT=128m
|
||||
DOCKER_SOCKET_PROXY_CPU_LIMIT=0.25
|
||||
|
||||
# homepage
|
||||
HOMEPAGE_MEM_LIMIT=256m
|
||||
HOMEPAGE_CPU_LIMIT=0.5
|
||||
|
||||
# wakaapi
|
||||
WAKAAPI_MEM_LIMIT=192m
|
||||
WAKAAPI_CPU_LIMIT=0.3
|
||||
|
||||
# mailhog
|
||||
MAILHOG_MEM_LIMIT=128m
|
||||
MAILHOG_CPU_LIMIT=0.25
|
||||
|
||||
# Health Check Settings
|
||||
HEALTH_CHECK_INTERVAL=30s
|
||||
HEALTH_CHECK_TIMEOUT=10s
|
||||
HEALTH_CHECK_START_PERIOD=30s
|
||||
HEALTH_CHECK_RETRIES=3
|
||||
|
||||
# Timeouts
|
||||
DOCKER_SOCKET_PROXY_CONNECTION_TIMEOUT=30s
|
||||
HOMEPAGE_STARTUP_TIMEOUT=60s
|
||||
WAKAAPI_INITIALIZATION_TIMEOUT=45s
|
||||
DOCKER_COMPOSE_STARTUP_TIMEOUT=120s
|
||||
|
||||
# Localhost binding
|
||||
BIND_ADDRESS=127.0.0.1
|
||||
|
||||
# Security - UID/GID mapping (to be set by control script)
|
||||
TSYSDEVSTACK_UID=1000
|
||||
TSYSDEVSTACK_GID=1000
|
||||
TSYSDEVSTACK_DOCKER_GID=996
|
||||
@@ -1,40 +0,0 @@
|
||||
---
|
||||
# Homepage configuration - Enable Docker service discovery
|
||||
title: TSYSDevStack SupportStack
|
||||
|
||||
# Docker configuration - Enable automatic service discovery
|
||||
docker:
|
||||
socket: /var/run/docker.sock
|
||||
|
||||
# Services configuration - Enable Docker discovery
|
||||
services: []
|
||||
|
||||
# Bookmarks
|
||||
bookmarks:
|
||||
- Developer:
|
||||
- Github:
|
||||
href: https://github.com/
|
||||
abbr: GH
|
||||
- Social:
|
||||
- Reddit:
|
||||
href: https://reddit.com/
|
||||
abbr: RE
|
||||
- Entertainment:
|
||||
- YouTube:
|
||||
href: https://youtube.com/
|
||||
abbr: YT
|
||||
|
||||
# Widgets
|
||||
widgets:
|
||||
- resources:
|
||||
cpu: true
|
||||
memory: true
|
||||
disk: /
|
||||
- search:
|
||||
provider: duckduckgo
|
||||
target: _blank
|
||||
|
||||
# Proxy configuration
|
||||
proxy:
|
||||
allowedHosts: "*"
|
||||
allowedHeaders: "*"
|
||||
@@ -1,3 +0,0 @@
|
||||
---
|
||||
# Docker configuration for Homepage service discovery
|
||||
socket: /var/run/docker.sock
|
||||
@@ -1,9 +0,0 @@
|
||||
---
|
||||
# Services configuration for Homepage Docker discovery
|
||||
|
||||
# Automatically discover Docker services with Homepage labels
|
||||
- Support Stack:
|
||||
- tsysdevstack-supportstack-demo-docker-socket-proxy
|
||||
- tsysdevstack-supportstack-demo-homepage
|
||||
- tsysdevstack-supportstack-demo-wakaapi
|
||||
- tsysdevstack-supportstack-demo-mailhog
|
||||
@@ -1,42 +0,0 @@
|
||||
---
|
||||
# Homepage configuration
|
||||
title: TSYSDevStack SupportStack
|
||||
background:
|
||||
headerStyle: boxed
|
||||
|
||||
# Docker configuration
|
||||
docker:
|
||||
socket: /var/run/docker.sock
|
||||
|
||||
# Services configuration
|
||||
services: []
|
||||
|
||||
# Bookmarks
|
||||
bookmarks:
|
||||
- Developer:
|
||||
- Github:
|
||||
href: https://github.com/
|
||||
abbr: GH
|
||||
- Social:
|
||||
- Reddit:
|
||||
href: https://reddit.com/
|
||||
abbr: RE
|
||||
- Entertainment:
|
||||
- YouTube:
|
||||
href: https://youtube.com/
|
||||
abbr: YT
|
||||
|
||||
# Widgets
|
||||
widgets:
|
||||
- resources:
|
||||
cpu: true
|
||||
memory: true
|
||||
disk: /
|
||||
- search:
|
||||
provider: duckduckgo
|
||||
target: _blank
|
||||
|
||||
# Proxy configuration
|
||||
proxy:
|
||||
allowedHosts: "*"
|
||||
allowedHeaders: "*"
|
||||
@@ -1,49 +0,0 @@
|
||||
services:
|
||||
docker-socket-proxy:
|
||||
image: ${DOCKER_SOCKET_PROXY_IMAGE}
|
||||
container_name: ${DOCKER_SOCKET_PROXY_NAME}
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- tsysdevstack-supportstack-demo-network
|
||||
environment:
|
||||
CONTAINERS: ${DOCKER_SOCKET_PROXY_CONTAINERS}
|
||||
IMAGES: ${DOCKER_SOCKET_PROXY_IMAGES}
|
||||
NETWORKS: ${DOCKER_SOCKET_PROXY_NETWORKS}
|
||||
VOLUMES: ${DOCKER_SOCKET_PROXY_VOLUMES}
|
||||
BUILD: ${DOCKER_SOCKET_PROXY_BUILD}
|
||||
MANIFEST: ${DOCKER_SOCKET_PROXY_MANIFEST}
|
||||
PLUGINS: ${DOCKER_SOCKET_PROXY_PLUGINS}
|
||||
VERSION: ${DOCKER_SOCKET_PROXY_VERSION}
|
||||
volumes:
|
||||
- ${DOCKER_SOCKET_PROXY_SOCKET_PATH}:${DOCKER_SOCKET_PROXY_SOCKET_PATH}
|
||||
mem_limit: ${DOCKER_SOCKET_PROXY_MEM_LIMIT}
|
||||
mem_reservation: ${DOCKER_SOCKET_PROXY_MEM_LIMIT}
|
||||
deploy:
|
||||
resources:
|
||||
limits:
|
||||
cpus: '${DOCKER_SOCKET_PROXY_CPU_LIMIT}'
|
||||
memory: ${DOCKER_SOCKET_PROXY_MEM_LIMIT}
|
||||
reservations:
|
||||
cpus: '${DOCKER_SOCKET_PROXY_CPU_LIMIT}'
|
||||
memory: ${DOCKER_SOCKET_PROXY_MEM_LIMIT}
|
||||
healthcheck:
|
||||
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/"]
|
||||
interval: ${HEALTH_CHECK_INTERVAL}
|
||||
timeout: ${HEALTH_CHECK_TIMEOUT}
|
||||
start_period: ${HEALTH_CHECK_START_PERIOD}
|
||||
retries: ${HEALTH_CHECK_RETRIES}
|
||||
# Homepage integration labels for automatic discovery
|
||||
labels:
|
||||
homepage.group: "Support Stack"
|
||||
homepage.name: "Docker Socket Proxy"
|
||||
homepage.icon: "docker.png"
|
||||
homepage.href: "http://${BIND_ADDRESS}:${HOMEPAGE_PORT}"
|
||||
homepage.description: "Docker socket proxy for secure access"
|
||||
homepage.type: "docker"
|
||||
# NOTE: Docker-socket-proxy must run as root to configure HAProxy
|
||||
# user: "${TSYSDEVSTACK_UID}:${TSYSDEVSTACK_DOCKER_GID}" # Read-only access to Docker socket
|
||||
|
||||
networks:
|
||||
tsysdevstack-supportstack-demo-network:
|
||||
external: true
|
||||
name: ${TSYSDEVSTACK_NETWORK_NAME}
|
||||
@@ -1,47 +0,0 @@
|
||||
services:
|
||||
homepage:
|
||||
image: ${HOMEPAGE_IMAGE}
|
||||
container_name: ${HOMEPAGE_NAME}
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- tsysdevstack-supportstack-demo-network
|
||||
ports:
|
||||
- "${BIND_ADDRESS}:${HOMEPAGE_PORT}:3000"
|
||||
environment:
|
||||
- PORT=3000
|
||||
- HOMEPAGE_URL=http://${BIND_ADDRESS}:${HOMEPAGE_PORT}
|
||||
- BASE_URL=http://${BIND_ADDRESS}:${HOMEPAGE_PORT}
|
||||
- HOMEPAGE_ALLOWED_HOSTS=${BIND_ADDRESS}:${HOMEPAGE_PORT},localhost:${HOMEPAGE_PORT}
|
||||
volumes:
|
||||
- ${HOMEPAGE_CONFIG_PATH}:/app/config
|
||||
- ${DOCKER_SOCKET_PROXY_SOCKET_PATH}:${DOCKER_SOCKET_PROXY_SOCKET_PATH}:ro # For Docker integration
|
||||
mem_limit: ${HOMEPAGE_MEM_LIMIT}
|
||||
mem_reservation: ${HOMEPAGE_MEM_LIMIT}
|
||||
deploy:
|
||||
resources:
|
||||
limits:
|
||||
cpus: '${HOMEPAGE_CPU_LIMIT}'
|
||||
memory: ${HOMEPAGE_MEM_LIMIT}
|
||||
reservations:
|
||||
cpus: '${HOMEPAGE_CPU_LIMIT}'
|
||||
memory: ${HOMEPAGE_MEM_LIMIT}
|
||||
healthcheck:
|
||||
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:3000/api/health"]
|
||||
interval: ${HEALTH_CHECK_INTERVAL}
|
||||
timeout: ${HEALTH_CHECK_TIMEOUT}
|
||||
start_period: ${HOMEPAGE_STARTUP_TIMEOUT} # Longer start period for homepage
|
||||
retries: ${HEALTH_CHECK_RETRIES}
|
||||
# Homepage integration labels for automatic discovery
|
||||
labels:
|
||||
homepage.group: "Support Stack"
|
||||
homepage.name: "Homepage Dashboard"
|
||||
homepage.icon: "homepage.png"
|
||||
homepage.href: "http://${BIND_ADDRESS}:${HOMEPAGE_PORT}"
|
||||
homepage.description: "Homepage dashboard for Support Stack"
|
||||
homepage.type: "homepage"
|
||||
user: "${TSYSDEVSTACK_UID}:${TSYSDEVSTACK_DOCKER_GID}" # Direct access to Docker socket for discovery
|
||||
|
||||
networks:
|
||||
tsysdevstack-supportstack-demo-network:
|
||||
external: true
|
||||
name: ${TSYSDEVSTACK_NETWORK_NAME}
|
||||
@@ -1,43 +0,0 @@
|
||||
services:
|
||||
mailhog:
|
||||
image: ${MAILHOG_IMAGE}
|
||||
container_name: ${MAILHOG_NAME}
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- tsysdevstack-supportstack-demo-network
|
||||
ports:
|
||||
- "${BIND_ADDRESS}:${MAILHOG_SMTP_PORT}:1025"
|
||||
- "${BIND_ADDRESS}:${MAILHOG_UI_PORT}:8025"
|
||||
environment:
|
||||
- MH_HOSTNAME=mailhog
|
||||
- MH_UI_BIND_ADDR=0.0.0.0:8025
|
||||
- MH_SMTP_BIND_ADDR=0.0.0.0:1025
|
||||
mem_limit: ${MAILHOG_MEM_LIMIT}
|
||||
mem_reservation: ${MAILHOG_MEM_LIMIT}
|
||||
deploy:
|
||||
resources:
|
||||
limits:
|
||||
cpus: '${MAILHOG_CPU_LIMIT}'
|
||||
memory: ${MAILHOG_MEM_LIMIT}
|
||||
reservations:
|
||||
cpus: '${MAILHOG_CPU_LIMIT}'
|
||||
memory: ${MAILHOG_MEM_LIMIT}
|
||||
healthcheck:
|
||||
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8025/"]
|
||||
interval: ${HEALTH_CHECK_INTERVAL}
|
||||
timeout: ${HEALTH_CHECK_TIMEOUT}
|
||||
start_period: ${HEALTH_CHECK_START_PERIOD}
|
||||
retries: ${HEALTH_CHECK_RETRIES}
|
||||
labels:
|
||||
homepage.group: "Support Stack"
|
||||
homepage.name: "Mailhog"
|
||||
homepage.icon: "mailhog.png"
|
||||
homepage.href: "http://${BIND_ADDRESS}:${MAILHOG_UI_PORT}"
|
||||
homepage.description: "Mailhog SMTP testing inbox"
|
||||
homepage.type: "mailhog"
|
||||
user: "${TSYSDEVSTACK_UID}:${TSYSDEVSTACK_GID}"
|
||||
|
||||
networks:
|
||||
tsysdevstack-supportstack-demo-network:
|
||||
external: true
|
||||
name: ${TSYSDEVSTACK_NETWORK_NAME}
|
||||
@@ -1,49 +0,0 @@
|
||||
services:
|
||||
wakaapi:
|
||||
image: ${WAKAAPI_IMAGE}
|
||||
container_name: ${WAKAAPI_NAME}
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- tsysdevstack-supportstack-demo-network
|
||||
ports:
|
||||
- "${BIND_ADDRESS}:${WAKAAPI_PORT}:3000"
|
||||
environment:
|
||||
- WAKAPI_PASSWORD_SALT=TSYSDevStackSupportStackDemoSalt12345678
|
||||
- WAKAPI_DB_TYPE=sqlite3
|
||||
- WAKAPI_DB_NAME=/data/wakapi.db
|
||||
- WAKAPI_PORT=3000
|
||||
- WAKAPI_PUBLIC_URL=http://${BIND_ADDRESS}:${WAKAAPI_PORT}
|
||||
- WAKAPI_ALLOW_SIGNUP=true
|
||||
- WAKAPI_WAKATIME_API_KEY=${WAKAAPI_WAKATIME_API_KEY:-""}
|
||||
tmpfs:
|
||||
- /data:rw,size=128m,uid=${TSYSDEVSTACK_UID},gid=${TSYSDEVSTACK_GID},mode=0750
|
||||
mem_limit: ${WAKAAPI_MEM_LIMIT}
|
||||
mem_reservation: ${WAKAAPI_MEM_LIMIT}
|
||||
deploy:
|
||||
resources:
|
||||
limits:
|
||||
cpus: '${WAKAAPI_CPU_LIMIT}'
|
||||
memory: ${WAKAAPI_MEM_LIMIT}
|
||||
reservations:
|
||||
cpus: '${WAKAAPI_CPU_LIMIT}'
|
||||
memory: ${WAKAAPI_MEM_LIMIT}
|
||||
healthcheck:
|
||||
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:3000/api"]
|
||||
interval: ${HEALTH_CHECK_INTERVAL}
|
||||
timeout: ${HEALTH_CHECK_TIMEOUT}
|
||||
start_period: ${WAKAAPI_INITIALIZATION_TIMEOUT} # Longer start period for wakaapi
|
||||
retries: ${HEALTH_CHECK_RETRIES}
|
||||
# Homepage integration labels for automatic discovery
|
||||
labels:
|
||||
homepage.group: "Development Tools"
|
||||
homepage.name: "WakaAPI"
|
||||
homepage.icon: "wakapi.png"
|
||||
homepage.href: "http://${BIND_ADDRESS}:${WAKAAPI_PORT}"
|
||||
homepage.description: "WakaTime API for coding metrics"
|
||||
homepage.type: "wakapi"
|
||||
user: "${TSYSDEVSTACK_UID}:${TSYSDEVSTACK_GID}" # Regular user access for non-Docker containers
|
||||
|
||||
networks:
|
||||
tsysdevstack-supportstack-demo-network:
|
||||
external: true
|
||||
name: ${TSYSDEVSTACK_NETWORK_NAME}
|
||||
@@ -1,97 +0,0 @@
|
||||
|
||||
# 🚀 Support Stack — Tools & Repos
|
||||
|
||||
Below is a categorized, linked reference of the tools in the selection. Use the GitHub links where available. Items without a clear canonical repo are marked.
|
||||
|
||||
---
|
||||
|
||||
## 🧰 Developer Tools & IDEs
|
||||
| Tool | Repo | Notes |
|
||||
|:---|:---|:---|
|
||||
| [code-server](https://coder.com/docs/code-server) | [cdr/code-server](https://github.com/cdr/code-server) | VS Code in the browser |
|
||||
| [Atuin](https://atuin.sh) | [ellie/atuin](https://github.com/ellie/atuin) | Shell history manager |
|
||||
| [Dozzle](https://dozzle.dev) | [amir20/dozzle](https://github.com/amir20/dozzle) | Lightweight log viewer |
|
||||
| [Adminer](https://www.adminer.org) | [vrana/adminer](https://github.com/vrana/adminer) | Database admin tool |
|
||||
| [Watchtower](https://containrrr.github.io/watchtower/) | [containrrr/watchtower](https://github.com/containrrr/watchtower) | Auto-updates containers |
|
||||
|
||||
---
|
||||
|
||||
## 🐳 Containers, Registry & Orchestration
|
||||
| Tool | Repo | Notes |
|
||||
|:---|:---|:---|
|
||||
| [Portainer](https://www.portainer.io) | [portainer/portainer](https://github.com/portainer/portainer) | Container management UI |
|
||||
| [Docker Registry (v2)](https://docs.docker.com/registry/) | [distribution/distribution](https://github.com/distribution/distribution) | Docker image registry |
|
||||
| [docker-socket-proxy](https://github.com/pires/docker-socket-proxy) | [pires/docker-socket-proxy](https://github.com/pires/docker-socket-proxy) | Protect Docker socket |
|
||||
| [cAdvisor](https://github.com/google/cadvisor) | [google/cadvisor](https://github.com/google/cadvisor) | Container metrics (host) |
|
||||
| [pumba](https://github.com/alexei-led/pumba) | [alexei-led/pumba](https://github.com/alexei-led/pumba) | Chaos testing for containers |
|
||||
| [CoreDNS](https://coredns.io) | [coredns/coredns](https://github.com/coredns/coredns) | DNS for clusters |
|
||||
|
||||
---
|
||||
|
||||
## 📡 Observability, Metrics & Tracing
|
||||
| Tool | Repo | Notes |
|
||||
|:---|:---|:---|
|
||||
| [Prometheus node_exporter](https://prometheus.io/docs/guides/node-exporter/) | [prometheus/node_exporter](https://github.com/prometheus/node_exporter) | Host metrics |
|
||||
| [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) | [open-telemetry/opentelemetry-collector](https://github.com/open-telemetry/opentelemetry-collector) | Telemetry pipeline |
|
||||
| [Jaeger (tracing)](https://www.jaegertracing.io) | [jaegertracing/jaeger](https://github.com/jaegertracing/jaeger) | Tracing backend |
|
||||
| [Loki (logs)](https://grafana.com/oss/loki) | [grafana/loki](https://github.com/grafana/loki) | Log aggregation |
|
||||
| [Promtail](https://grafana.com/oss/loki) | [grafana/loki](https://github.com/grafana/loki) | Log shipper (part of Loki) |
|
||||
| [cAdvisor (host/container metrics)](https://github.com/google/cadvisor) | [google/cadvisor](https://github.com/google/cadvisor) | (duplicate reference included in list) |
|
||||
|
||||
---
|
||||
|
||||
## 🧪 Testing, Mocks & API Tools
|
||||
| Tool | Repo / Link | Notes |
|
||||
|:---|:---|:---|
|
||||
| [httpbin](https://httpbin.org) | [postmanlabs/httpbin](https://github.com/postmanlabs/httpbin) | HTTP request & response testing |
|
||||
| [WireMock](https://wiremock.org) | [wiremock/wiremock](https://github.com/wiremock/wiremock) | HTTP mock server |
|
||||
| [webhook.site](https://webhook.site) | [webhooksite/webhook.site](https://github.com/webhooksite/webhook.site) | Hosted request inspector (no canonical GitHub) |
|
||||
| [Pact Broker](https://docs.pact.io/brokers) | [pact-foundation/pact_broker](https://github.com/pact-foundation/pact_broker) | Consumer contract broker |
|
||||
| [Hoppscotch](https://hoppscotch.io) | [hoppscotch/hoppscotch](https://github.com/hoppscotch/hoppscotch) | API development tool |
|
||||
| [swagger-ui](https://swagger.io/tools/swagger-ui/) | [swagger-api/swagger-ui](https://github.com/swagger-api/swagger-ui) | OpenAPI UI |
|
||||
| [mailhog](https://github.com/mailhog/MailHog) | [mailhog/MailHog](https://github.com/mailhog/MailHog) | SMTP testing / inbox UI |
|
||||
|
||||
---
|
||||
|
||||
## 🧾 Documentation & Rendering
|
||||
| Tool | Repo | Notes |
|
||||
|:---|:---|:---|
|
||||
| [Redoc](https://redoc.ly) | [Redocly/redoc](https://github.com/Redocly/redoc) | OpenAPI docs renderer |
|
||||
| [Kroki](https://kroki.io) | [yuzutech/kroki](https://github.com/yuzutech/kroki) | Diagrams from text |
|
||||
|
||||
---
|
||||
|
||||
## 🔐 Security, Auth & Policy
|
||||
| Tool | Repo | Notes |
|
||||
|:---|:---|:---|
|
||||
| [step-ca (Smallstep)](https://smallstep.com/docs/step-ca) | [smallstep/step-ca](https://github.com/smallstep/step-ca) | Private CA / certs |
|
||||
| [Open Policy Agent (OPA)](https://www.openpolicyagent.org) | [open-policy-agent/opa](https://github.com/open-policy-agent/opa) | Policy engine |
|
||||
| [Unleash (feature flags)](https://www.getunleash.io) | [Unleash/unleash](https://github.com/Unleash/unleash) | Feature toggle system |
|
||||
| [Toxiproxy](https://shopify.github.io/toxiproxy/) | [Shopify/toxiproxy](https://github.com/Shopify/toxiproxy) | Network failure injection |
|
||||
|
||||
---
|
||||
|
||||
## 🗃️ Archiving, Backup & Content
|
||||
| Tool | Repo / Notes |
|
||||
|:---|:---|
|
||||
| [ArchiveBox](https://archivebox.io) | [ArchiveBox/ArchiveBox](https://github.com/ArchiveBox/ArchiveBox) |
|
||||
| [tubearchivist](https://github.com/tubearchivist/tubearchivist) | [tubearchivist/tubearchivist](https://github.com/tubearchivist/tubearchivist) |
|
||||
| [pumba (also in containers/chaos)](https://github.com/alexei-led/pumba) | [alexei-led/pumba](https://github.com/alexei-led/pumba) |
|
||||
|
||||
---
|
||||
|
||||
## ⚙️ Workflow & Orchestration Engines
|
||||
| Tool | Repo |
|
||||
|:---|:---|
|
||||
| [Cadence (workflow engine)](https://cadenceworkflow.io/) | [uber/cadence](https://github.com/uber/cadence) |
|
||||
|
||||
---
|
||||
|
||||
## 🧩 Misc / Other
|
||||
| Tool | Repo / Notes |
|
||||
|:---|:---|
|
||||
| [Registry2 (likely Docker Registry v2)](https://docs.docker.com/registry/) | [distribution/distribution](https://github.com/distribution/distribution) |
|
||||
| [node-exporter (host exporter)](https://prometheus.io/docs/guides/node-exporter/) | [prometheus/node_exporter](https://github.com/prometheus/node_exporter) |
|
||||
| [atomic tracker](#) | Repo not found — please confirm exact project name/URL |
|
||||
| [wakaapi](#) | Repo not found — please confirm exact project name/URL |
|
||||
|
||||
@@ -1,48 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Unit test for docker-socket-proxy component
|
||||
# Following TDD: Write test → Execute test → Test fails → Write minimal code to pass test
|
||||
|
||||
set -e
|
||||
|
||||
# Load environment settings
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
ENV_FILE="${SCRIPT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ ! -f "$ENV_FILE" ]; then
|
||||
echo "Error: Environment settings file not found at $ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
source "$ENV_FILE"
|
||||
|
||||
# Test function to validate docker-socket-proxy
|
||||
test_docker_socket_proxy() {
|
||||
echo "Testing docker-socket-proxy availability and functionality..."
|
||||
|
||||
# Check if the container exists and is running
|
||||
echo "Looking for container: $DOCKER_SOCKET_PROXY_NAME"
|
||||
if docker ps | grep -q "$DOCKER_SOCKET_PROXY_NAME"; then
|
||||
echo "✓ docker-socket-proxy container is running"
|
||||
else
|
||||
echo "✗ docker-socket-proxy container is NOT running"
|
||||
# Check if another container with similar name is running
|
||||
echo "Checking all containers:"
|
||||
docker ps | grep -i docker
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Additional tests can be added here to validate the proxy functionality
|
||||
# For example, testing if it can access the Docker socket and respond appropriately
|
||||
echo "✓ Basic docker-socket-proxy test passed"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Execute the test
|
||||
if test_docker_socket_proxy; then
|
||||
echo "✓ docker-socket-proxy test PASSED"
|
||||
exit 0
|
||||
else
|
||||
echo "✗ docker-socket-proxy test FAILED"
|
||||
exit 1
|
||||
fi
|
||||
@@ -1,54 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Unit test for homepage component
|
||||
# Following TDD: Write test → Execute test → Test fails → Write minimal code to pass test
|
||||
|
||||
set -e
|
||||
|
||||
# Load environment settings
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
ENV_FILE="${SCRIPT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ ! -f "$ENV_FILE" ]; then
|
||||
echo "Error: Environment settings file not found at $ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
source "$ENV_FILE"
|
||||
|
||||
# Test function to validate homepage
|
||||
test_homepage() {
|
||||
echo "Testing homepage availability and functionality..."
|
||||
|
||||
# Check if the container exists and is running
|
||||
if docker ps | grep -q "$HOMEPAGE_NAME"; then
|
||||
echo "✓ homepage container is running"
|
||||
else
|
||||
echo "✗ homepage container is NOT running"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Test if homepage is accessible on the expected port (after allowing some startup time)
|
||||
sleep 15 # Allow time for homepage to fully start
|
||||
|
||||
if curl -f -s "http://$BIND_ADDRESS:$HOMEPAGE_PORT" > /dev/null; then
|
||||
echo "✓ homepage is accessible via HTTP"
|
||||
else
|
||||
echo "✗ homepage is NOT accessible via HTTP at http://$BIND_ADDRESS:$HOMEPAGE_PORT"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Test if homepage can connect to Docker socket proxy (basic connectivity test)
|
||||
# This would be more complex in a real test, but for now we'll check if the container can see the network
|
||||
echo "✓ Basic homepage test passed"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Execute the test
|
||||
if test_homepage; then
|
||||
echo "✓ homepage test PASSED"
|
||||
exit 0
|
||||
else
|
||||
echo "✗ homepage test FAILED"
|
||||
exit 1
|
||||
fi
|
||||
@@ -1,47 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Test for homepage host validation issue
|
||||
# Following TDD: Write test → Execute test → Test fails → Write minimal code to pass test
|
||||
|
||||
set -e
|
||||
|
||||
# Load environment settings for dynamic container naming
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
ENV_FILE="${SCRIPT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ ! -f "$ENV_FILE" ]; then
|
||||
echo "Error: Environment settings file not found at $ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
source "$ENV_FILE"
|
||||
|
||||
echo "Testing homepage host validation issue..."
|
||||
|
||||
# Check if homepage container is running
|
||||
if ! docker ps | grep -q "$HOMEPAGE_NAME"; then
|
||||
echo "❌ Homepage container is not running"
|
||||
echo "Test failed: Homepage host validation test failed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Test if we get the host validation error by checking the HTTP response
|
||||
response=$(curl -s -o /dev/null -w "%{http_code}" "http://${BIND_ADDRESS}:${HOMEPAGE_PORT}/" 2>/dev/null || echo "ERROR")
|
||||
|
||||
if [ "$response" = "ERROR" ] || [ "$response" != "200" ]; then
|
||||
# Let's also check the page content to see if it contains the host validation error message
|
||||
content=$(curl -s "http://${BIND_ADDRESS}:${HOMEPAGE_PORT}/" 2>/dev/null || echo "")
|
||||
if [[ "$content" == *"Host validation failed"* ]]; then
|
||||
echo "❌ Homepage is showing 'Host validation failed' error"
|
||||
echo "Test confirmed: Host validation issue exists"
|
||||
exit 1
|
||||
else
|
||||
echo "⚠️ Homepage is not accessible but not showing host validation error"
|
||||
echo "Test failed: Homepage not accessible"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
echo "✅ Homepage is accessible and host validation is working"
|
||||
echo "Test passed: No host validation issue"
|
||||
exit 0
|
||||
fi
|
||||
@@ -1,50 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Unit test for Mailhog component
|
||||
# TDD flow: test first to ensure failure prior to implementation
|
||||
|
||||
set -e
|
||||
|
||||
# Load environment settings
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
ENV_FILE="${SCRIPT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ ! -f "$ENV_FILE" ]; then
|
||||
echo "Error: Environment settings file not found at $ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
source "$ENV_FILE"
|
||||
|
||||
echo "Testing Mailhog availability and functionality..."
|
||||
|
||||
# Ensure Mailhog container is running
|
||||
if ! docker ps | grep -q "$MAILHOG_NAME"; then
|
||||
echo "❌ Mailhog container is not running"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Allow service time to respond
|
||||
sleep 3
|
||||
|
||||
# Verify Mailhog UI is reachable
|
||||
if curl -f -s "http://${BIND_ADDRESS}:${MAILHOG_UI_PORT}/" > /dev/null 2>&1; then
|
||||
echo "✅ Mailhog UI is accessible at http://${BIND_ADDRESS}:${MAILHOG_UI_PORT}"
|
||||
else
|
||||
echo "❌ Mailhog UI is not accessible at http://${BIND_ADDRESS}:${MAILHOG_UI_PORT}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Optional SMTP port check (basic TCP connect)
|
||||
if command -v nc >/dev/null 2>&1; then
|
||||
if timeout 3 nc -z "${BIND_ADDRESS}" "${MAILHOG_SMTP_PORT}" >/dev/null 2>&1; then
|
||||
echo "✅ Mailhog SMTP port ${MAILHOG_SMTP_PORT} is reachable"
|
||||
else
|
||||
echo "⚠️ Mailhog SMTP port ${MAILHOG_SMTP_PORT} not reachable (informational)"
|
||||
fi
|
||||
else
|
||||
echo "⚠️ nc command not available; skipping SMTP connectivity check"
|
||||
fi
|
||||
|
||||
echo "✅ Mailhog component test passed"
|
||||
exit 0
|
||||
@@ -1,40 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Test to ensure Mailhog appears in Homepage discovery
|
||||
|
||||
set -e
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
ENV_FILE="${SCRIPT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ ! -f "$ENV_FILE" ]; then
|
||||
echo "Error: Environment settings file not found at $ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
source "$ENV_FILE"
|
||||
|
||||
echo "Testing Mailhog discovery on homepage..."
|
||||
|
||||
# Validate required containers are running
|
||||
if ! docker ps | grep -q "$MAILHOG_NAME"; then
|
||||
echo "❌ Mailhog container is not running"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! docker ps | grep -q "$HOMEPAGE_NAME"; then
|
||||
echo "❌ Homepage container is not running"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Allow homepage time to refresh discovery
|
||||
sleep 5
|
||||
|
||||
services_payload=$(curl -s "http://${BIND_ADDRESS}:${HOMEPAGE_PORT}/api/services")
|
||||
if echo "$services_payload" | grep -q "\"container\":\"$MAILHOG_NAME\""; then
|
||||
echo "✅ Mailhog is discoverable on homepage"
|
||||
exit 0
|
||||
else
|
||||
echo "❌ Mailhog is NOT discoverable on homepage"
|
||||
exit 1
|
||||
fi
|
||||
@@ -1,107 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# End-to-End test for the complete MVP stack (docker-socket-proxy, homepage, wakaapi)
|
||||
# This test verifies that all components are running and integrated properly
|
||||
|
||||
set -e
|
||||
|
||||
# Load environment settings
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
ENV_FILE="${SCRIPT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ ! -f "$ENV_FILE" ]; then
|
||||
echo "Error: Environment settings file not found at $ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
source "$ENV_FILE"
|
||||
|
||||
echo "Starting MVP Stack End-to-End Test..."
|
||||
echo "====================================="
|
||||
|
||||
# Test 1: Verify all containers are running
|
||||
echo "Test 1: Checking if all containers are running..."
|
||||
containers=($DOCKER_SOCKET_PROXY_NAME $HOMEPAGE_NAME $WAKAAPI_NAME $MAILHOG_NAME)
|
||||
|
||||
all_running=true
|
||||
for container in "${containers[@]}"; do
|
||||
if docker ps | grep -q "$container"; then
|
||||
echo "✓ $container is running"
|
||||
else
|
||||
echo "✗ $container is NOT running"
|
||||
all_running=false
|
||||
fi
|
||||
done
|
||||
|
||||
if [ "$all_running" = false ]; then
|
||||
echo "✗ MVP Stack Test FAILED: Not all containers are running"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Test 2: Verify services are accessible
|
||||
echo ""
|
||||
echo "Test 2: Checking if services are accessible..."
|
||||
|
||||
# Wait a bit to ensure services are fully ready
|
||||
sleep 10
|
||||
|
||||
# Test homepage accessibility
|
||||
if curl -f -s "http://$BIND_ADDRESS:$HOMEPAGE_PORT" > /dev/null; then
|
||||
echo "✓ Homepage is accessible at http://$BIND_ADDRESS:$HOMEPAGE_PORT"
|
||||
else
|
||||
echo "✗ Homepage is NOT accessible at http://$BIND_ADDRESS:$HOMEPAGE_PORT"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Test wakaapi accessibility (try multiple endpoints)
|
||||
if curl -f -s "http://$BIND_ADDRESS:$WAKAAPI_PORT/" > /dev/null || curl -f -s "http://$BIND_ADDRESS:$WAKAAPI_PORT/api/users" > /dev/null; then
|
||||
echo "✓ WakaAPI is accessible at http://$BIND_ADDRESS:$WAKAAPI_PORT"
|
||||
else
|
||||
echo "✗ WakaAPI is NOT accessible at http://$BIND_ADDRESS:$WAKAAPI_PORT"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Test Mailhog accessibility
|
||||
if curl -f -s "http://$BIND_ADDRESS:$MAILHOG_UI_PORT" > /dev/null; then
|
||||
echo "✓ Mailhog UI is accessible at http://$BIND_ADDRESS:$MAILHOG_UI_PORT"
|
||||
else
|
||||
echo "✗ Mailhog UI is NOT accessible at http://$BIND_ADDRESS:$MAILHOG_UI_PORT"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Test 3: Verify homepage integration labels (basic check)
|
||||
echo ""
|
||||
echo "Test 3: Checking service configurations..."
|
||||
|
||||
# Check if Docker socket proxy is running and accessible by other services
|
||||
if docker exec $DOCKER_SOCKET_PROXY_NAME sh -c "nc -z localhost 2375 && echo 'ok'" > /dev/null 2>&1; then
|
||||
echo "✓ Docker socket proxy is running internally"
|
||||
else
|
||||
echo "⚠ Docker socket proxy internal connection check skipped (not required to pass)"
|
||||
fi
|
||||
|
||||
# Test 4: Check network connectivity between services
|
||||
echo ""
|
||||
echo "Test 4: Checking inter-service connectivity..."
|
||||
|
||||
# This is more complex to test without being inside the containers, but we can verify network existence
|
||||
if docker network ls | grep -q "$TSYSDEVSTACK_NETWORK_NAME"; then
|
||||
echo "✓ Shared network $TSYSDEVSTACK_NETWORK_NAME exists"
|
||||
else
|
||||
echo "✗ Shared network $TSYSDEVSTACK_NETWORK_NAME does not exist"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "All MVP Stack tests PASSED! 🎉"
|
||||
echo "=================================="
|
||||
echo "Components successfully implemented and tested:"
|
||||
echo "- Docker Socket Proxy: Running on internal network"
|
||||
echo "- Homepage: Accessible at http://$BIND_ADDRESS:$HOMEPAGE_PORT with labels for service discovery"
|
||||
echo "- WakaAPI: Accessible at http://$BIND_ADDRESS:$WAKAAPI_PORT with proper configuration"
|
||||
echo "- Mailhog: Accessible at http://$BIND_ADDRESS:$MAILHOG_UI_PORT with SMTP on port $MAILHOG_SMTP_PORT"
|
||||
echo "- Shared Network: $TSYSDEVSTACK_NETWORK_NAME"
|
||||
echo ""
|
||||
echo "MVP Stack is ready for use!"
|
||||
|
||||
exit 0
|
||||
@@ -1,54 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Unit test for wakaapi component
|
||||
# Following TDD: Write test → Execute test → Test fails → Write minimal code to pass test
|
||||
|
||||
set -e
|
||||
|
||||
# Load environment settings
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
ENV_FILE="${SCRIPT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ ! -f "$ENV_FILE" ]; then
|
||||
echo "Error: Environment settings file not found at $ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
source "$ENV_FILE"
|
||||
|
||||
# Test function to validate wakaapi
|
||||
test_wakaapi() {
|
||||
echo "Testing wakaapi availability and functionality..."
|
||||
|
||||
# Check if the container exists and is running
|
||||
if docker ps | grep -q "$WAKAAPI_NAME"; then
|
||||
echo "✓ wakaapi container is running"
|
||||
else
|
||||
echo "✗ wakaapi container is NOT running"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Test if wakaapi is accessible on the expected port (after allowing some startup time)
|
||||
sleep 15 # Allow time for wakaapi to fully start
|
||||
|
||||
# Try the main endpoint (health check might not be at /api in Wakapi)
|
||||
# WakaAPI is a Go-based web app that listens on port 3000
|
||||
if curl -f -s "http://$BIND_ADDRESS:$WAKAAPI_PORT/" > /dev/null; then
|
||||
echo "✓ wakaapi is accessible via HTTP"
|
||||
else
|
||||
echo "✗ wakaapi is NOT accessible via HTTP at http://$BIND_ADDRESS:$WAKAAPI_PORT/"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "✓ Basic wakaapi test passed"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Execute the test
|
||||
if test_wakaapi; then
|
||||
echo "✓ wakaapi test PASSED"
|
||||
exit 0
|
||||
else
|
||||
echo "✗ wakaapi test FAILED"
|
||||
exit 1
|
||||
fi
|
||||
@@ -1,51 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Test to verify WakaAPI is discovered and displayed on homepage
|
||||
# Following TDD: Write test → Execute test → Test fails → Write minimal code to pass test
|
||||
|
||||
set -e
|
||||
|
||||
# Load environment settings
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
|
||||
ENV_FILE="${SCRIPT_DIR}/TSYSDevStack-SupportStack-Demo-Settings"
|
||||
|
||||
if [ ! -f "$ENV_FILE" ]; then
|
||||
echo "Error: Environment settings file not found at $ENV_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
source "$ENV_FILE"
|
||||
|
||||
echo "Testing WakaAPI discovery on homepage..."
|
||||
|
||||
# Check if WakaAPI container is running
|
||||
if ! docker ps | grep -q "$WAKAAPI_NAME"; then
|
||||
echo "❌ WakaAPI container is not running"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check if homepage container is running
|
||||
if ! docker ps | grep -q "$HOMEPAGE_NAME"; then
|
||||
echo "❌ Homepage container is not running"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Give services a moment to stabilise
|
||||
sleep 5
|
||||
|
||||
# Test if we can access WakaAPI directly
|
||||
if ! curl -f -s "http://${BIND_ADDRESS}:${WAKAAPI_PORT}/" > /dev/null 2>&1; then
|
||||
echo "❌ WakaAPI is not accessible at http://${BIND_ADDRESS}:${WAKAAPI_PORT}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check if WakaAPI appears on the homepage services API
|
||||
services_payload=$(curl -s "http://${BIND_ADDRESS}:${HOMEPAGE_PORT}/api/services")
|
||||
if echo "$services_payload" | grep -q "\"container\":\"$WAKAAPI_NAME\""; then
|
||||
echo "✅ WakaAPI is displayed on homepage"
|
||||
exit 0
|
||||
else
|
||||
echo "❌ WakaAPI is NOT displayed on homepage"
|
||||
echo "Test failed: WakaAPI not discovered by homepage"
|
||||
exit 1
|
||||
fi
|
||||
@@ -1,61 +0,0 @@
|
||||
# QWEN Chat Context - Toolbox Component
|
||||
|
||||
## Overview
|
||||
I am the QWEN instance operating in the ToolboxStack component of the TSYSDevStack project. My role is to help develop, maintain, and enhance the ToolboxStack functionality.
|
||||
|
||||
## Current Context
|
||||
- **Date**: Wednesday, October 29, 2025
|
||||
- **Directory**: /home/localuser/TSYSDevStack/ToolboxStack
|
||||
- **OS**: Linux
|
||||
|
||||
## Project Structure
|
||||
The TSYSDevStack consists of four main categories:
|
||||
- CloudronStack (Free/libre/open software packages for Cloudron hosting)
|
||||
- LifecycleStack (build/test/package/release tooling)
|
||||
- SupportStack (always on tooling for developer workstations)
|
||||
- **ToolboxStack** (devcontainer base and functional area specific devcontainers) - *This component*
|
||||
|
||||
## Current Directory Tree
|
||||
```
|
||||
/home/localuser/TSYSDevStack/ToolboxStack/
|
||||
├── README.md
|
||||
├── collab/
|
||||
│ └── TSYSDevStack-toolbox-prompt.md
|
||||
└── output/
|
||||
├── NewToolbox.sh
|
||||
├── PROMPT
|
||||
├── toolbox-base/
|
||||
│ ├── aqua.yaml
|
||||
│ ├── build.sh
|
||||
│ ├── docker-compose.yml
|
||||
│ ├── Dockerfile
|
||||
│ ├── PROMPT
|
||||
│ ├── README.md
|
||||
│ ├── release.sh
|
||||
│ ├── run.sh
|
||||
│ ├── .build-cache/
|
||||
│ └── .devcontainer/
|
||||
└── toolbox-template/
|
||||
├── build.sh
|
||||
├── docker-compose.yml
|
||||
├── ...
|
||||
└── ...
|
||||
```
|
||||
|
||||
## Key Components
|
||||
- **toolbox-base**: The primary dev container with Ubuntu 24.04 base, shell tooling (zsh, Starship, oh-my-zsh), core CLI utilities, aqua, and mise
|
||||
- **NewToolbox.sh**: Script to scaffold new toolbox-* directories from the template
|
||||
- **toolbox-template**: Template directory for creating new toolboxes
|
||||
- **PROMPT files**: Guidance for AI collaboration in various components
|
||||
|
||||
## My Responsibilities
|
||||
- Maintain and enhance the ToolboxStack component
|
||||
- Assist with creating new toolboxes from the template
|
||||
- Ensure documentation stays current (README.md and PROMPT files)
|
||||
- Follow collaboration guidelines for non-destructive operations
|
||||
- Use proper build and release workflows (build.sh, release.sh)
|
||||
|
||||
## Git Operations Notice
|
||||
- IMPORTANT: Git operations (commits and pushes) are handled exclusively by the Topside agent
|
||||
- ToolboxBot should NOT perform git commits or pushes
|
||||
- All changes should be coordinated through the Topside agent for repository consistency
|
||||
@@ -1,50 +0,0 @@
|
||||
# 🧰 ToolboxStack
|
||||
|
||||
ToolboxStack provides reproducible developer workspaces for TSYSDevStack contributors. The current `toolbox-base` image captures the daily-driver container environment used across the project.
|
||||
|
||||
---
|
||||
|
||||
## Contents
|
||||
| Area | Description | Path |
|
||||
|------|-------------|------|
|
||||
| Dev Container Image | Ubuntu 24.04 base with shell tooling, mise, aqua-managed CLIs, and Docker socket access. | [`output/toolbox-base/Dockerfile`](output/toolbox-base/Dockerfile) |
|
||||
| Build Helpers | Wrapper scripts for building (`build.sh`) and running (`run.sh`) the Compose service. | [`output/toolbox-base/`](output/toolbox-base) |
|
||||
| Devcontainer Config | VS Code Remote Container definition referencing the Compose service. | [`output/toolbox-base/.devcontainer/devcontainer.json`](output/toolbox-base/.devcontainer/devcontainer.json) |
|
||||
| Prompt & Docs | Onboarding prompt plus a feature-rich README for future collaborators. | [`output/toolbox-base/PROMPT`](output/toolbox-base/PROMPT), [`output/toolbox-base/README.md`](output/toolbox-base/README.md) |
|
||||
| Collaboration Notes | Shared design prompts and coordination notes for toolbox evolution. | [`collab/`](collab) |
|
||||
|
||||
---
|
||||
|
||||
## Quick Start
|
||||
```bash
|
||||
cd output/toolbox-base
|
||||
./build.sh # build the image with UID/GID matching your host
|
||||
./run.sh up # launch the toolbox-base service in the background
|
||||
docker exec -it tsysdevstack-toolboxstack-toolbox-base zsh
|
||||
```
|
||||
Use `./run.sh down` to stop the container when you are finished.
|
||||
|
||||
---
|
||||
|
||||
## Contribution Tips
|
||||
- Document every tooling change in both the `PROMPT` and `README.md`.
|
||||
- Prefer installing CLIs via `aqua` and language runtimes via `mise` to keep the environment reproducible.
|
||||
- Keep cache directories (`.build-cache/`, mise mounts) out of Git—they are already covered by the repo's `.gitignore`.
|
||||
|
||||
---
|
||||
|
||||
## 🧭 Working Agreement
|
||||
- **Stacks stay in sync.** When you add or modify automation, update both the relevant stack README and any linked prompts/docs.
|
||||
- **Collab vs Output.** Use `collab/` for planning and prompts, keep runnable artifacts under `output/`.
|
||||
- **Document forward.** New workflows should land alongside tests and a short entry in the appropriate README table.
|
||||
- **AI Agent Coordination.** Use Qwen agents for documentation updates, code changes, and maintaining consistency across stacks.
|
||||
|
||||
---
|
||||
|
||||
## 🤖 AI Agent
|
||||
This stack is maintained by **ToolboxBot**, an AI agent focused on ToolboxStack workspace management.
|
||||
|
||||
---
|
||||
|
||||
## 📄 License
|
||||
See [LICENSE](../LICENSE) for full terms. Contributions are welcome—open a discussion in the relevant stack's `collab/` area to kick things off.
|
||||
@@ -1,31 +0,0 @@
|
||||
# TSYS Dev Stack Project - DevStack - Toolbox
|
||||
|
||||
This prompt file is the starting off point for the ToolboxStack category of the complete TSYSDevStack.
|
||||
|
||||
## Category Context
|
||||
|
||||
The TSYSDevStack consists of four categories:
|
||||
|
||||
- CloudronStack (Free/libre/open software packages that Known Element Enterprises has packaged up for Cloudron hosting)
|
||||
- LifecycleStack (build/test/package/release tooling)
|
||||
- SupportStack (always on tooling meant to run on developer workstations)
|
||||
- ToolboxStack (devcontainer base and various functional area specific devcontainers).
|
||||
|
||||
## Introduction
|
||||
|
||||
|
||||
## Artifact Naming
|
||||
|
||||
|
||||
## Common Service Dependencies
|
||||
|
||||
|
||||
## toolbox-base
|
||||
|
||||
- mise
|
||||
- zsh / oh-my-zsh / completions /
|
||||
-
|
||||
- See `output/PROMPT` for shared toolbox contributor guidance, `output/toolbox-base/PROMPT` for the image-specific snapshot, and `output/NewToolbox.sh` for bootstrapping new toolboxes from the template (edit each toolbox's `SEED` once to set goals, then load its PROMPT when starting work). Toolbox images follow a `dev` → `release-current` → `vX.Y.Z` tagging scheme; use `build.sh` for local iteration and `release.sh <semver>` (clean tree) to promote builds (details in README).
|
||||
|
||||
## toolbox-gis
|
||||
## toolbox-weather
|
||||
@@ -1,7 +0,0 @@
|
||||
I need to add the following tools to the toolbox-base image:
|
||||
|
||||
- https://github.com/just-every/code
|
||||
- https://github.com/QwenLM/qwen-code
|
||||
- https://github.com/google-gemini/gemini-cli
|
||||
- https://github.com/openai/codex
|
||||
- https://github.com/sst/opencode
|
||||
@@ -1,52 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
if [[ $# -ne 1 ]]; then
|
||||
echo "Usage: $0 <toolbox-name>" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
RAW_NAME="$1"
|
||||
if [[ "${RAW_NAME}" == toolbox-* ]]; then
|
||||
TOOLBOX_NAME="${RAW_NAME}"
|
||||
else
|
||||
TOOLBOX_NAME="toolbox-${RAW_NAME}"
|
||||
fi
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
TEMPLATE_DIR="${SCRIPT_DIR}/toolbox-template"
|
||||
TARGET_DIR="${SCRIPT_DIR}/${TOOLBOX_NAME}"
|
||||
|
||||
if [[ ! -d "${TEMPLATE_DIR}" ]]; then
|
||||
echo "Error: template directory not found at ${TEMPLATE_DIR}" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ -e "${TARGET_DIR}" ]]; then
|
||||
echo "Error: ${TARGET_DIR} already exists" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
cp -R "${TEMPLATE_DIR}" "${TARGET_DIR}"
|
||||
|
||||
python3 - "$TARGET_DIR" "$TOOLBOX_NAME" <<'PY'
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
base = Path(sys.argv[1])
|
||||
toolbox_name = sys.argv[2]
|
||||
|
||||
for path in base.rglob("*"):
|
||||
if not path.is_file():
|
||||
continue
|
||||
text = path.read_text()
|
||||
updated = text.replace("{{toolbox_name}}", toolbox_name)
|
||||
if updated != text:
|
||||
path.write_text(updated)
|
||||
PY
|
||||
|
||||
echo "Created ${TARGET_DIR} from template."
|
||||
echo "Next steps:"
|
||||
echo " 1) Edit ${TARGET_DIR}/SEED once to describe the toolbox goals."
|
||||
echo " 2) Load ${TARGET_DIR}/PROMPT in Codex; it will instruct you to read SEED and proceed."
|
||||
@@ -1,19 +0,0 @@
|
||||
You are Codex helping with TSYSDevStack ToolboxStack deliverables.
|
||||
|
||||
Global toolbox guidance:
|
||||
- Directory layout: each toolbox-* directory carries its own Dockerfile/README/PROMPT; shared scaffolds live in toolbox-template/.devcontainer and docker-compose.yml.
|
||||
- Use ./NewToolbox.sh <name> to scaffold a new toolbox-* directory from toolbox-template.
|
||||
- Keep aqua/mise usage consistent across the family; prefer aqua-managed CLIs and mise-managed runtimes.
|
||||
- Reference toolbox-template when bootstrapping a new toolbox. Copy the directory, rename it, and replace {{toolbox_name}} placeholders in compose/devcontainer.
|
||||
- Each toolbox maintains a `SEED` file to seed the initial goals—edit it once before kicking off work, then rely on the toolbox PROMPT for ongoing updates (which begins by reading SEED).
|
||||
- Default build workflow: `./build.sh` produces a `:dev` tag; `./release.sh <semver>` (clean git tree required) rebuilds and pushes `:dev`, `:release-current`, and `v<semver>` (use `--dry-run`/`--allow-dirty` to rehearse).
|
||||
- Downstream Dockerfiles should inherit from `:release-current` by default; pin to version tags when reproducibility matters.
|
||||
|
||||
Commit discipline:
|
||||
- Craft atomic commits with clear intent; do not mix unrelated changes.
|
||||
- Follow Conventional Commits (`type(scope): summary`) with concise, descriptive language.
|
||||
- Commit frequently as features evolve, keeping diffs reviewable.
|
||||
- After documentation/tooling changes, run ./build.sh to ensure the image builds, then push once the build succeeds.
|
||||
- Use git best practices: clean history, no force pushes without coordination, and resolve conflicts promptly.
|
||||
|
||||
Per-toolbox prompts are responsible for fine-grained inventories and verification steps.
|
||||
@@ -1,14 +0,0 @@
|
||||
{
|
||||
"name": "TSYSDevStack Toolbox Base",
|
||||
"dockerComposeFile": [
|
||||
"../docker-compose.yml"
|
||||
],
|
||||
"service": "toolbox-base",
|
||||
"workspaceFolder": "/workspace",
|
||||
"remoteUser": "toolbox",
|
||||
"runServices": [
|
||||
"toolbox-base"
|
||||
],
|
||||
"overrideCommand": false,
|
||||
"postCreateCommand": "zsh -lc 'starship --version >/dev/null'"
|
||||
}
|
||||
@@ -1,132 +0,0 @@
|
||||
FROM ubuntu:24.04
|
||||
|
||||
ARG USER_ID=1000
|
||||
ARG GROUP_ID=1000
|
||||
ARG USERNAME=toolbox
|
||||
ARG TEA_VERSION=0.11.1
|
||||
|
||||
ENV DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
|
||||
--mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
|
||||
apt-get update \
|
||||
&& apt-get install -y --no-install-recommends \
|
||||
ca-certificates \
|
||||
curl \
|
||||
fish \
|
||||
fzf \
|
||||
git \
|
||||
jq \
|
||||
bc \
|
||||
htop \
|
||||
btop \
|
||||
locales \
|
||||
openssh-client \
|
||||
ripgrep \
|
||||
tmux \
|
||||
screen \
|
||||
entr \
|
||||
fd-find \
|
||||
bat \
|
||||
httpie \
|
||||
build-essential \
|
||||
pkg-config \
|
||||
libssl-dev \
|
||||
zlib1g-dev \
|
||||
libffi-dev \
|
||||
libsqlite3-dev \
|
||||
libreadline-dev \
|
||||
wget \
|
||||
zsh \
|
||||
&& apt-get clean \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Provide common aliases for fd and bat binaries
|
||||
RUN ln -sf /usr/bin/fdfind /usr/local/bin/fd \
|
||||
&& ln -sf /usr/bin/batcat /usr/local/bin/bat
|
||||
|
||||
# Install Gitea tea CLI
|
||||
RUN curl -fsSL "https://dl.gitea.io/tea/${TEA_VERSION}/tea-${TEA_VERSION}-linux-amd64" -o /tmp/tea \
|
||||
&& curl -fsSL "https://dl.gitea.io/tea/${TEA_VERSION}/tea-${TEA_VERSION}-linux-amd64.sha256" -o /tmp/tea.sha256 \
|
||||
&& sed -n 's/ .*//p' /tmp/tea.sha256 | awk '{print $1 " /tmp/tea"}' | sha256sum -c - \
|
||||
&& install -m 0755 /tmp/tea /usr/local/bin/tea \
|
||||
&& rm -f /tmp/tea /tmp/tea.sha256
|
||||
|
||||
# Configure locale to ensure consistent tool behavior
|
||||
RUN locale-gen en_US.UTF-8
|
||||
ENV LANG=en_US.UTF-8 \
|
||||
LANGUAGE=en_US:en \
|
||||
LC_ALL=en_US.UTF-8
|
||||
|
||||
# Install Starship prompt
|
||||
RUN curl -fsSL https://starship.rs/install.sh | sh -s -- -y -b /usr/local/bin
|
||||
|
||||
# Install aqua package manager (manages additional CLI tooling)
|
||||
RUN curl -sSfL https://raw.githubusercontent.com/aquaproj/aqua-installer/v2.3.1/aqua-installer | AQUA_ROOT_DIR=/usr/local/share/aquaproj-aqua bash \
|
||||
&& ln -sf /usr/local/share/aquaproj-aqua/bin/aqua /usr/local/bin/aqua
|
||||
|
||||
# Install mise for runtime management (no global toolchains pre-installed)
|
||||
RUN curl -sSfL https://mise.jdx.dev/install.sh | env MISE_INSTALL_PATH=/usr/local/bin/mise MISE_INSTALL_HELP=0 sh
|
||||
|
||||
# Install Node.js via mise to enable npm package installation
|
||||
RUN mise install node@22.13.0 && mise global node@22.13.0
|
||||
|
||||
# Create non-root user with matching UID/GID for host mapping
|
||||
RUN if getent passwd "${USER_ID}" >/dev/null; then \
|
||||
existing_user="$(getent passwd "${USER_ID}" | cut -d: -f1)"; \
|
||||
userdel --remove "${existing_user}"; \
|
||||
fi \
|
||||
&& if ! getent group "${GROUP_ID}" >/dev/null; then \
|
||||
groupadd --gid "${GROUP_ID}" "${USERNAME}"; \
|
||||
fi \
|
||||
&& useradd --uid "${USER_ID}" --gid "${GROUP_ID}" --shell /usr/bin/zsh --create-home "${USERNAME}"
|
||||
|
||||
# Install Oh My Zsh and configure shells for the unprivileged user
|
||||
RUN su - "${USERNAME}" -c 'git clone --depth=1 https://github.com/ohmyzsh/ohmyzsh.git ~/.oh-my-zsh' \
|
||||
&& su - "${USERNAME}" -c 'cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc' \
|
||||
&& su - "${USERNAME}" -c 'mkdir -p ~/.config' \
|
||||
&& su - "${USERNAME}" -c 'sed -i "s/^plugins=(git)$/plugins=(git fzf)/" ~/.zshrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\nexport PATH=\"\$HOME/.local/share/aquaproj-aqua/bin:\$HOME/.local/share/mise/shims:\$HOME/.local/bin:\$PATH\"\n" >> ~/.zshrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\nexport AQUA_GLOBAL_CONFIG=\"\$HOME/.config/aquaproj-aqua/aqua.yaml\"\n" >> ~/.zshrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\n# Starship prompt\neval \"\$(starship init zsh)\"\n" >> ~/.zshrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\n# mise runtime manager\neval \"\$(mise activate zsh)\"\n" >> ~/.zshrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\n# direnv\nexport DIRENV_LOG_FORMAT=\"\"\neval \"\$(direnv hook zsh)\"\n" >> ~/.zshrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\n# zoxide\neval \"\$(zoxide init zsh)\"\n" >> ~/.zshrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\nexport AQUA_GLOBAL_CONFIG=\"\$HOME/.config/aquaproj-aqua/aqua.yaml\"\n" >> ~/.bashrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\n# mise runtime manager (bash)\neval \"\$(mise activate bash)\"\n" >> ~/.bashrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\n# direnv\nexport DIRENV_LOG_FORMAT=\"\"\neval \"\$(direnv hook bash)\"\n" >> ~/.bashrc' \
|
||||
&& su - "${USERNAME}" -c 'printf "\n# zoxide\neval \"\$(zoxide init bash)\"\n" >> ~/.bashrc' \
|
||||
&& su - "${USERNAME}" -c 'mkdir -p ~/.config/fish' \
|
||||
&& su - "${USERNAME}" -c 'printf "\nset -gx AQUA_GLOBAL_CONFIG \$HOME/.config/aquaproj-aqua/aqua.yaml\n# Shell prompt and runtime manager\nstarship init fish | source\nmise activate fish | source\ndirenv hook fish | source\nzoxide init fish | source\n" >> ~/.config/fish/config.fish'
|
||||
|
||||
# Install Node.js for the toolbox user and set up the environment
|
||||
RUN su - "${USERNAME}" -c 'mise install node@22.13.0 && mise use -g node@22.13.0'
|
||||
|
||||
COPY aqua.yaml /tmp/aqua.yaml
|
||||
|
||||
# Install aqua packages at both root and user level to ensure they're baked into the image
|
||||
RUN chown "${USER_ID}:${GROUP_ID}" /tmp/aqua.yaml \
|
||||
&& su - "${USERNAME}" -c 'mkdir -p ~/.config/aquaproj-aqua' \
|
||||
&& su - "${USERNAME}" -c 'cp /tmp/aqua.yaml ~/.config/aquaproj-aqua/aqua.yaml' \
|
||||
&& AQUA_GLOBAL_CONFIG=/tmp/aqua.yaml aqua install
|
||||
|
||||
# Install AI CLI tools via npm using mise to ensure Node.js is available
|
||||
RUN mise exec -- npm install -g @just-every/code@0.4.6 @qwen-code/qwen-code@0.1.1 @google/gemini-cli@0.11.0 @openai/codex@0.50.0 opencode-ai@0.15.29
|
||||
|
||||
# Install the same AI CLI tools for the toolbox user so they are available in the container runtime
|
||||
RUN su - "${USERNAME}" -c 'mise exec -- npm install -g @just-every/code@0.4.6 @qwen-code/qwen-code@0.1.1 @google/gemini-cli@0.11.0 @openai/codex@0.50.0 opencode-ai@0.15.29' && \
|
||||
# Ensure mise shims are properly generated for the installed tools
|
||||
su - "${USERNAME}" -c 'mise reshim'
|
||||
|
||||
# Prepare workspace directory with appropriate ownership
|
||||
RUN mkdir -p /workspace \
|
||||
&& chown "${USER_ID}:${GROUP_ID}" /workspace
|
||||
|
||||
ENV SHELL=/usr/bin/zsh \
|
||||
AQUA_GLOBAL_CONFIG=/home/${USERNAME}/.config/aquaproj-aqua/aqua.yaml \
|
||||
PATH=/home/${USERNAME}/.local/share/aquaproj-aqua/bin:/home/${USERNAME}/.local/share/mise/shims:/home/${USERNAME}/.local/bin:${PATH}
|
||||
|
||||
WORKDIR /workspace
|
||||
USER ${USERNAME}
|
||||
|
||||
CMD ["/usr/bin/zsh"]
|
||||
@@ -1,29 +0,0 @@
|
||||
You are Codex, collaborating with a human on the TSYSDevStack ToolboxStack project.
|
||||
|
||||
Context snapshot (toolbox-base):
|
||||
- Working directory: artifacts/ToolboxStack/toolbox-base
|
||||
- Image: tsysdevstack-toolboxstack-toolbox-base (Ubuntu 24.04)
|
||||
- Container user: toolbox (non-root, UID/GID mapped to host)
|
||||
- Mounted workspace: current repo at /workspace (rw)
|
||||
|
||||
Current state:
|
||||
- Dockerfile installs shell tooling (zsh/bash/fish with Starship & oh-my-zsh), core CLI utilities (curl, wget, git, tmux, screen, htop, btop, entr, httpie, tea, bc, etc.), build-essential + headers, aqua, and mise. Aqua is pinned to specific versions for gh, lazygit, direnv, git-delta, zoxide, just, yq, xh, curlie, chezmoi, shfmt, shellcheck, hadolint, uv, watchexec; direnv/zoxide hooks are enabled for all shells (direnv logging muted).
|
||||
- aqua-managed CLI inventory lives in README.md alongside usage notes; tea installs via direct download with checksum verification (TEA_VERSION build arg).
|
||||
- aqua packages are baked into the image during the build process for consistency, reproducibility and performance.
|
||||
- mise handles language/tool runtimes; activation wired into zsh, bash, and fish. Node.js is pinned to version 22.13.0 for build consistency.
|
||||
- AI CLI tools (just-every/code, QwenLM/qwen-code, google-gemini/gemini-cli, openai/codex, sst/opencode) are installed via npm and baked into the image with pinned versions.
|
||||
- Host directories for AI tool configuration and cache are mounted to maintain persistent settings across container runs.
|
||||
- docker-compose.yml runs container with host UID/GID, `sleep infinity`, and docker socket mount; run via run.sh/build.sh. Host directories `~/.local/share/mise` and `~/.cache/mise` are mounted for persistent runtimes.
|
||||
- Devcontainer config ( .devcontainer/devcontainer.json ) references the compose service.
|
||||
- Documentation: README.md (tooling inventory & workflow) and this PROMPT must stay current, and both should stay aligned with the shared guidance in ../PROMPT. README also notes that build.sh now uses docker buildx with a local cache directory and documents the `dev` → `release-current` → semantic tagging workflow.
|
||||
|
||||
Collaboration guidelines:
|
||||
1. Default to non-destructive operations; respect existing scripts run.sh/build.sh.
|
||||
2. Any tooling changes require updating README.md (inventory) and this prompt summary, rebuilding via `./build.sh` (local dev tag), then committing (Conventional Commits, atomic diffs) and pushing after a successful build per ../PROMPT. Use `./release.sh <semver>` (clean git tree required; `--dry-run`/`--allow-dirty` only for rehearsal) to promote to `release-current` + semantic tag.
|
||||
3. Keep configurations reproducible: prefer aqua/mise for new CLI/runtimes over apt unless prerequisites.
|
||||
4. Mention verification steps (build/test) after changes and note which tag was built/pushed.
|
||||
5. Downstream consumers should inherit from `:release-current` (or a pinned semantic tag); maintain UID/GID mapping and non-root execution.
|
||||
|
||||
Active focus:
|
||||
- Extend toolbox-base as a "daily driver" dev container while preserving reproducibility and documentation.
|
||||
- Next contributor should review README.md before modifying tooling and ensure both README and this prompt reflect new state.
|
||||
@@ -1,94 +0,0 @@
|
||||
# 🧰 TSYSDevStack Toolbox Base
|
||||
|
||||
Daily-driver development container for ToolboxStack work. It provides a reproducible Ubuntu 24.04 environment with curated shell tooling, package managers, and helper scripts.
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
1. **Build the image (local dev tag)**
|
||||
```bash
|
||||
./build.sh
|
||||
```
|
||||
> Builds and tags the image as `tsysdevstack-toolboxstack-toolbox-base:dev`. Uses `docker buildx` with a local cache at `.build-cache/` for faster rebuilds.
|
||||
2. **Start the container**
|
||||
```bash
|
||||
./run.sh up
|
||||
```
|
||||
> Defaults to the `release-current` tag; override with `TOOLBOX_IMAGE_OVERRIDE=...` when testing other tags. Mise runtimes persist to your host in `~/.local/share/mise` and `~/.cache/mise` so language/tool downloads are shared across projects.
|
||||
3. **Attach to a shell**
|
||||
```bash
|
||||
docker exec -it tsysdevstack-toolboxstack-toolbox-base zsh
|
||||
# or: bash / fish
|
||||
```
|
||||
4. **Stop the container**
|
||||
```bash
|
||||
./run.sh down
|
||||
```
|
||||
|
||||
The compose service mounts the current repo to `/workspace` (read/write) and runs as the mapped host user (`toolbox`).
|
||||
|
||||
---
|
||||
|
||||
## 🏷️ Image Tagging & Releases
|
||||
|
||||
- `./build.sh` (no overrides) ⇒ builds `:dev` for active development.
|
||||
- `./release.sh <semver>` ⇒ rebuilds, retags, and pushes `:dev`, `:release-current`, and `v<semver>` (e.g., `./release.sh 0.2.0`). Requires a clean git tree.
|
||||
- Add `--dry-run` to rehearse the release without pushing (optionally `--allow-dirty` for experimentation only).
|
||||
- Downstream Dockerfiles should inherit from `tsysdevstack-toolboxstack-toolbox-base:release-current` (or pin to a semantic tag for reproducibility).
|
||||
|
||||
---
|
||||
|
||||
## 🧩 Tooling Inventory
|
||||
|
||||
| Category | Tooling | Notes |
|
||||
|----------|---------|-------|
|
||||
| **Shells & Prompts** | 🐚 `zsh` • 🐟 `fish` • 🧑💻 `bash` • ⭐ `starship` • 💎 `oh-my-zsh` | Starship prompt enabled for all shells; oh-my-zsh configured with `git` + `fzf` plugins. |
|
||||
| **Runtime & CLI Managers** | 🪄 `mise` • 💧 `aqua` | `mise` handles language/tool runtimes (activation wired into zsh/bash/fish); `aqua` manages standalone CLIs with config at `~/.config/aquaproj-aqua/aqua.yaml`. |
|
||||
| **Core CLI Utilities** | 📦 `curl` • 📥 `wget` • 🔐 `ca-certificates` • 🧭 `git` • 🔧 `build-essential` + headers (`pkg-config`, `libssl-dev`, `zlib1g-dev`, `libffi-dev`, `libsqlite3-dev`, `libreadline-dev`, `make`) • 🔍 `ripgrep` • 🧭 `fzf` • 📁 `fd` • 📖 `bat` • 🔗 `openssh-client` • 🧵 `tmux` • 🖥️ `screen` • 📈 `htop` • 📉 `btop` • ♻️ `entr` • 📊 `jq` • 🌐 `httpie` • ☕ `tea` • 🧮 `bc` | Provides ergonomic defaults plus toolchain deps for compiling runtimes (no global language installs). |
|
||||
| **Aqua-Managed CLIs** | 🐙 `gh` • 🌀 `lazygit` • 🪄 `direnv` • 🎨 `git-delta` • 🧭 `zoxide` • 🧰 `just` • 🧾 `yq` • ⚡ `xh` • 🌍 `curlie` • 🏠 `chezmoi` • 🛠️ `shfmt` • ✅ `shellcheck` • 🐳 `hadolint` • 🐍 `uv` • 🔁 `watchexec` | Extend via `~/.config/aquaproj-aqua/aqua.yaml`. These packages are baked into the image at build time for consistency and reproducibility. Direnv logging is muted and hooks for direnv/zoxide are pre-configured for zsh, bash, and fish. |
|
||||
| **AI CLI Tools** | 🧠 `@just-every/code` • 🤖 `@qwen-code/qwen-code` • 💎 `@google/gemini-cli` • 🔮 `@openai/codex` • 🌐 `opencode-ai` | AI-powered command-line tools for enhanced development workflows. Node.js is installed via mise to support npm package installation. |
|
||||
| **Container Workflow** | 🐳 Docker socket mount (`/var/run/docker.sock`) | Enables Docker CLIs inside the container; host Docker daemon required. |
|
||||
| **AI Tool Configuration** | 🧠 Host directories for AI tools | Host directories for AI tool configuration and cache are mounted to maintain persistent settings and data across container runs. |
|
||||
| **Runtime Environment** | 👤 Non-root user `toolbox` (UID/GID mapped) • 🗂️ `/workspace` mount | Maintains host permissions and isolates artifacts under `artifacts/ToolboxStack/toolbox-base`. |
|
||||
|
||||
---
|
||||
|
||||
## 🛠️ Extending the Sandbox
|
||||
|
||||
- **Add a runtime**: `mise use python@3.12` (per project). Run inside `/workspace` to persist `.mise.toml`.
|
||||
- **Add a CLI tool**: update `~/.config/aquaproj-aqua/aqua.yaml`, then run `aqua install`.
|
||||
- **Adjust base image**: modify `Dockerfile`, run `./build.sh`, and keep this README & `PROMPT` in sync.
|
||||
|
||||
> 🔁 **Documentation policy:** Whenever you add/remove tooling or change the developer experience, update both this README and the `PROMPT` file so the next collaborator has an accurate snapshot.
|
||||
|
||||
---
|
||||
|
||||
## 📂 Project Layout
|
||||
|
||||
| Path | Purpose |
|
||||
|------|---------|
|
||||
| `Dockerfile` | Defines the toolbox-base image. |
|
||||
| `docker-compose.yml` | Compose service providing the container runtime. |
|
||||
| `build.sh` | Wrapper around `docker build` with host UID/GID mapping. |
|
||||
| `run.sh` | Helper to bring the compose service up/down (exports UID/GID env vars). |
|
||||
| `.devcontainer/devcontainer.json` | VS Code remote container definition. |
|
||||
| `aqua.yaml` | Default aqua configuration (gh, tea, lazygit). |
|
||||
| `PROMPT` | LLM onboarding prompt for future contributors (must remain current). |
|
||||
|
||||
---
|
||||
|
||||
## ✅ Verification Checklist
|
||||
|
||||
After any image changes:
|
||||
1. Run `./build.sh` and ensure it succeeds.
|
||||
2. Optionally `./run.sh up` and sanity-check key tooling (e.g., `mise --version`, `gh --version`).
|
||||
3. Update this README and the `PROMPT` with any new or removed tooling.
|
||||
|
||||
---
|
||||
|
||||
## 🤝 Collaboration Notes
|
||||
|
||||
- Container always runs as the mapped non-root user; avoid adding steps that require root login.
|
||||
- Prefer `mise`/`aqua` for new tooling to keep installations reproducible.
|
||||
- Keep documentation synchronized (README + PROMPT) so future contributors can resume quickly.
|
||||
@@ -1,20 +0,0 @@
|
||||
version: 1.0.0
|
||||
registries:
|
||||
- type: standard
|
||||
ref: v4.431.0
|
||||
packages:
|
||||
- name: cli/cli@v2.82.1
|
||||
- name: jesseduffield/lazygit@v0.55.1
|
||||
- name: direnv/direnv@v2.37.1
|
||||
- name: dandavison/delta@0.18.2
|
||||
- name: ajeetdsouza/zoxide@v0.9.8
|
||||
- name: casey/just@1.43.0
|
||||
- name: mikefarah/yq@v4.48.1
|
||||
- name: ducaale/xh@v0.25.0
|
||||
- name: rs/curlie@v1.8.2
|
||||
- name: twpayne/chezmoi@v2.66.1
|
||||
- name: mvdan/sh@v3.12.0
|
||||
- name: koalaman/shellcheck@v0.11.0
|
||||
- name: hadolint/hadolint@v2.14.0
|
||||
- name: astral-sh/uv@0.9.6
|
||||
- name: watchexec/watchexec@v2.3.2
|
||||
@@ -1,82 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Validate dependencies
|
||||
if ! command -v docker &> /dev/null; then
|
||||
echo "Error: docker is required but not installed." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! docker buildx version &> /dev/null; then
|
||||
echo "Error: docker buildx is required but not available." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
IMAGE_NAME="tsysdevstack-toolboxstack-toolbox-base"
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
|
||||
USER_ID="${USER_ID_OVERRIDE:-$(id -u)}"
|
||||
GROUP_ID="${GROUP_ID_OVERRIDE:-$(id -g)}"
|
||||
USERNAME="${USERNAME_OVERRIDE:-toolbox}"
|
||||
TEA_VERSION="${TEA_VERSION_OVERRIDE:-0.11.1}"
|
||||
BUILDER_NAME="${BUILDER_NAME:-tsysdevstack-toolboxstack-builder}"
|
||||
CACHE_DIR="${SCRIPT_DIR}/.build-cache"
|
||||
TAG="${TAG_OVERRIDE:-dev}"
|
||||
RELEASE_TAG="${RELEASE_TAG_OVERRIDE:-release-current}"
|
||||
VERSION_TAG="${VERSION_TAG_OVERRIDE:-}"
|
||||
PUSH="${PUSH_OVERRIDE:-false}"
|
||||
|
||||
echo "Building ${IMAGE_NAME} with UID=${USER_ID} GID=${GROUP_ID} USERNAME=${USERNAME}"
|
||||
echo "Primary tag: ${TAG}"
|
||||
|
||||
if ! docker buildx inspect "${BUILDER_NAME}" >/dev/null 2>&1; then
|
||||
echo "Creating builder: ${BUILDER_NAME}"
|
||||
docker buildx create --driver docker-container --name "${BUILDER_NAME}" --use >/dev/null
|
||||
else
|
||||
echo "Using existing builder: ${BUILDER_NAME}"
|
||||
docker buildx use "${BUILDER_NAME}" >/dev/null
|
||||
fi
|
||||
|
||||
mkdir -p "${CACHE_DIR}"
|
||||
|
||||
echo "Starting build..."
|
||||
docker buildx build \
|
||||
--builder "${BUILDER_NAME}" \
|
||||
--load \
|
||||
--progress=plain \
|
||||
--build-arg USER_ID="${USER_ID}" \
|
||||
--build-arg GROUP_ID="${GROUP_ID}" \
|
||||
--build-arg USERNAME="${USERNAME}" \
|
||||
--build-arg TEA_VERSION="${TEA_VERSION}" \
|
||||
--cache-from "type=local,src=${CACHE_DIR}" \
|
||||
--cache-to "type=local,dest=${CACHE_DIR},mode=max" \
|
||||
--tag "${IMAGE_NAME}:${TAG}" \
|
||||
"${SCRIPT_DIR}"
|
||||
|
||||
if [[ "${PUSH}" == "true" ]]; then
|
||||
echo "Pushing ${IMAGE_NAME}:${TAG}"
|
||||
docker push "${IMAGE_NAME}:${TAG}"
|
||||
|
||||
if [[ "${TAG}" == "dev" && -n "${VERSION_TAG}" ]]; then
|
||||
docker tag "${IMAGE_NAME}:${TAG}" "${IMAGE_NAME}:${VERSION_TAG}"
|
||||
echo "Pushing ${IMAGE_NAME}:${VERSION_TAG}"
|
||||
docker push "${IMAGE_NAME}:${VERSION_TAG}"
|
||||
fi
|
||||
|
||||
if [[ "${TAG}" == "dev" ]]; then
|
||||
docker tag "${IMAGE_NAME}:${TAG}" "${IMAGE_NAME}:${RELEASE_TAG}"
|
||||
echo "Pushing ${IMAGE_NAME}:${RELEASE_TAG}"
|
||||
docker push "${IMAGE_NAME}:${RELEASE_TAG}"
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "Build completed successfully."
|
||||
|
||||
# Run security scan if TRIVY is available
|
||||
if command -v trivy &> /dev/null; then
|
||||
echo "Running security scan with Trivy..."
|
||||
trivy image --exit-code 0 --severity HIGH,CRITICAL "${IMAGE_NAME}:${TAG}"
|
||||
else
|
||||
echo "Trivy not found. Install Trivy to perform security scanning."
|
||||
fi
|
||||
@@ -1,31 +0,0 @@
|
||||
services:
|
||||
toolbox-base:
|
||||
container_name: tsysdevstack-toolboxstack-toolbox-base
|
||||
image: ${TOOLBOX_IMAGE:-tsysdevstack-toolboxstack-toolbox-base:release-current}
|
||||
build:
|
||||
context: .
|
||||
args:
|
||||
USER_ID: ${LOCAL_UID:-1000}
|
||||
GROUP_ID: ${LOCAL_GID:-1000}
|
||||
USERNAME: ${LOCAL_USERNAME:-toolbox}
|
||||
user: "${LOCAL_UID:-1000}:${LOCAL_GID:-1000}"
|
||||
working_dir: /workspace
|
||||
command: ["sleep", "infinity"]
|
||||
init: true
|
||||
tty: true
|
||||
stdin_open: true
|
||||
volumes:
|
||||
- .:/workspace:rw
|
||||
- ${HOME}/.local/share/mise:/home/toolbox/.local/share/mise:rw
|
||||
- ${HOME}/.cache/mise:/home/toolbox/.cache/mise:rw
|
||||
# AI CLI tool configuration and cache directories
|
||||
- ${HOME}/.config/openai:/home/toolbox/.config/openai:rw
|
||||
- ${HOME}/.config/gemini:/home/toolbox/.config/gemini:rw
|
||||
- ${HOME}/.config/qwen:/home/toolbox/.config/qwen:rw
|
||||
- ${HOME}/.config/code:/home/toolbox/.config/code:rw
|
||||
- ${HOME}/.config/opencode:/home/toolbox/.config/opencode:rw
|
||||
- ${HOME}/.cache/openai:/home/toolbox/.cache/openai:rw
|
||||
- ${HOME}/.cache/gemini:/home/toolbox/.cache/gemini:rw
|
||||
- ${HOME}/.cache/qwen:/home/toolbox/.cache/qwen:rw
|
||||
- ${HOME}/.cache/code:/home/toolbox/.cache/code:rw
|
||||
- ${HOME}/.cache/opencode:/home/toolbox/.cache/opencode:rw
|
||||
@@ -1,90 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
usage() {
|
||||
cat <<'EOU'
|
||||
Usage: ./release.sh [--dry-run] [--allow-dirty] <semver>
|
||||
|
||||
Examples:
|
||||
./release.sh 0.2.0
|
||||
./release.sh --dry-run 0.2.0
|
||||
|
||||
This script rebuilds the toolbox-base image, tags it as:
|
||||
- tsysdevstack-toolboxstack-toolbox-base:dev
|
||||
- tsysdevstack-toolboxstack-toolbox-base:release-current
|
||||
- tsysdevstack-toolboxstack-toolbox-base:v<semver>
|
||||
|
||||
When run without --dry-run it pushes all three tags.
|
||||
EOU
|
||||
}
|
||||
|
||||
DRY_RUN=false
|
||||
ALLOW_DIRTY=false
|
||||
VERSION=""
|
||||
|
||||
while (( $# > 0 )); do
|
||||
case "$1" in
|
||||
--dry-run)
|
||||
DRY_RUN=true
|
||||
shift
|
||||
;;
|
||||
--allow-dirty)
|
||||
ALLOW_DIRTY=true
|
||||
shift
|
||||
;;
|
||||
-h|--help)
|
||||
usage
|
||||
exit 0
|
||||
;;
|
||||
-*)
|
||||
echo "Unknown option: $1" >&2
|
||||
usage
|
||||
exit 1
|
||||
;;
|
||||
*)
|
||||
VERSION="$1"
|
||||
shift
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [[ -z "${VERSION}" ]]; then
|
||||
echo "Error: semantic version is required." >&2
|
||||
usage
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ "${VERSION}" =~ ^v?([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
|
||||
SEMVER="v${BASH_REMATCH[1]}.${BASH_REMATCH[2]}.${BASH_REMATCH[3]}"
|
||||
else
|
||||
echo "Error: version must be semantic (e.g., 0.2.0 or v0.2.0)." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
REPO_ROOT="$(cd "${SCRIPT_DIR}" && git rev-parse --show-toplevel 2>/dev/null || true)"
|
||||
|
||||
if [[ -n "${REPO_ROOT}" && "${ALLOW_DIRTY}" != "true" ]]; then
|
||||
if ! git -C "${REPO_ROOT}" diff --quiet --ignore-submodules --exit-code; then
|
||||
echo "Error: git working tree has uncommitted changes. Please commit or stash before releasing." >&2
|
||||
exit 1
|
||||
fi
|
||||
elif [[ -z "${REPO_ROOT}" ]]; then
|
||||
echo "Warning: unable to resolve git repository root; skipping clean tree check." >&2
|
||||
fi
|
||||
|
||||
echo "Preparing release for ${SEMVER}"
|
||||
echo " dry-run: ${DRY_RUN}"
|
||||
echo " allow-dirty: ${ALLOW_DIRTY}"
|
||||
|
||||
if [[ "${DRY_RUN}" == "true" ]]; then
|
||||
VERSION_TAG_OVERRIDE="${SEMVER}" PUSH_OVERRIDE=false "${SCRIPT_DIR}/build.sh"
|
||||
echo "[dry-run] Skipped pushing tags."
|
||||
else
|
||||
VERSION_TAG_OVERRIDE="${SEMVER}" PUSH_OVERRIDE=true "${SCRIPT_DIR}/build.sh"
|
||||
echo "Release ${SEMVER} pushed as:"
|
||||
echo " - tsysdevstack-toolboxstack-toolbox-base:dev"
|
||||
echo " - tsysdevstack-toolboxstack-toolbox-base:release-current"
|
||||
echo " - tsysdevstack-toolboxstack-toolbox-base:${SEMVER}"
|
||||
fi
|
||||
@@ -1,53 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Validate dependencies
|
||||
if ! command -v docker &> /dev/null; then
|
||||
echo "Error: docker is required but not installed." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! command -v docker compose &> /dev/null; then
|
||||
echo "Error: docker compose is required but not installed." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
COMPOSE_FILE="${SCRIPT_DIR}/docker-compose.yml"
|
||||
|
||||
export LOCAL_UID="${USER_ID_OVERRIDE:-$(id -u)}"
|
||||
export LOCAL_GID="${GROUP_ID_OVERRIDE:-$(id -g)}"
|
||||
export LOCAL_USERNAME="${USERNAME_OVERRIDE:-toolbox}"
|
||||
export TOOLBOX_IMAGE="${TOOLBOX_IMAGE_OVERRIDE:-tsysdevstack-toolboxstack-toolbox-base:release-current}"
|
||||
|
||||
if [[ ! -f "${COMPOSE_FILE}" ]]; then
|
||||
echo "Error: docker-compose.yml not found at ${COMPOSE_FILE}" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ACTION="${1:-up}"
|
||||
shift || true
|
||||
|
||||
if [[ "${ACTION}" == "up" ]]; then
|
||||
# Create necessary directories for the toolbox tools
|
||||
mkdir -p "${HOME}/.local/share/mise" "${HOME}/.cache/mise"
|
||||
mkdir -p "${HOME}/.config" "${HOME}/.local/share"
|
||||
mkdir -p "${HOME}/.cache/openai" "${HOME}/.cache/gemini" "${HOME}/.cache/qwen" "${HOME}/.cache/code" "${HOME}/.cache/opencode"
|
||||
mkdir -p "${HOME}/.config/openai" "${HOME}/.config/gemini" "${HOME}/.config/qwen" "${HOME}/.config/code" "${HOME}/.config/opencode"
|
||||
fi
|
||||
|
||||
case "${ACTION}" in
|
||||
up)
|
||||
docker compose -f "${COMPOSE_FILE}" up --build --detach "$@"
|
||||
echo "Container started. Use 'docker exec -it tsysdevstack-toolboxstack-toolbox-base zsh' to access the shell."
|
||||
;;
|
||||
down)
|
||||
docker compose -f "${COMPOSE_FILE}" down "$@"
|
||||
echo "Container stopped."
|
||||
;;
|
||||
*)
|
||||
echo "Usage: $0 [up|down] [additional docker compose args]" >&2
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
@@ -1,14 +0,0 @@
|
||||
{
|
||||
"name": "TSYSDevStack {{toolbox_name}}",
|
||||
"dockerComposeFile": [
|
||||
"../docker-compose.yml"
|
||||
],
|
||||
"service": "{{toolbox_name}}",
|
||||
"workspaceFolder": "/workspace",
|
||||
"remoteUser": "toolbox",
|
||||
"runServices": [
|
||||
"{{toolbox_name}}"
|
||||
],
|
||||
"overrideCommand": false,
|
||||
"postCreateCommand": "zsh -lc 'starship --version >/dev/null'"
|
||||
}
|
||||
@@ -1,25 +0,0 @@
|
||||
# Extend from the toolbox-base image
|
||||
FROM tsysdevstack-toolboxstack-toolbox-base:release-current
|
||||
|
||||
# Set build arguments (these can be overridden at build time)
|
||||
ARG USER_ID=1000
|
||||
ARG GROUP_ID=1000
|
||||
ARG USERNAME=toolbox
|
||||
|
||||
# Ensure the non-root user exists with the correct UID/GID
|
||||
RUN if getent passwd "${USER_ID}" >/dev/null; then \
|
||||
existing_user="$(getent passwd "${USER_ID}" | cut -d: -f1)"; \
|
||||
userdel --remove "${existing_user}" 2>/dev/null || true; \
|
||||
fi \
|
||||
&& if ! getent group "${GROUP_ID}" >/dev/null; then \
|
||||
groupadd --gid "${GROUP_ID}" "${USERNAME}"; \
|
||||
fi \
|
||||
&& useradd --uid "${USER_ID}" --gid "${GROUP_ID}" --shell /usr/bin/zsh --create-home "${USERNAME}" \
|
||||
&& usermod -aG sudo "${USERNAME}" 2>/dev/null || true
|
||||
|
||||
# Switch to the non-root user
|
||||
USER ${USERNAME}
|
||||
WORKDIR /workspace
|
||||
|
||||
# Default command
|
||||
CMD ["/usr/bin/zsh"]
|
||||
@@ -1,27 +0,0 @@
|
||||
You are Codex, collaborating with a human on the TSYSDevStack ToolboxStack project.
|
||||
|
||||
- Seed context:
|
||||
- `SEED` captures the initial scope. Edit it once to define goals, then treat it as read-only unless the high-level objectives change.
|
||||
- Start each session by reading it (`cat SEED`) and summarize progress or adjustments here in PROMPT.
|
||||
|
||||
Context snapshot ({{toolbox_name}}):
|
||||
- Working directory: TSYSDevStack/ToolboxStack/{{toolbox_name}}
|
||||
- Image: extends from tsysdevstack-toolboxstack-toolbox-base (Ubuntu 24.04 base)
|
||||
- Container user: toolbox (non-root, UID/GID mapped to host)
|
||||
- Mounted workspace: current repo at /workspace (rw)
|
||||
|
||||
Current state:
|
||||
- Extends from the standard toolbox-base image, inheriting shell tooling (zsh/bash/fish with Starship & oh-my-zsh), core CLI utilities, aqua, and mise.
|
||||
- aqua packages are baked into the base image during the build process for consistency and reproducibility.
|
||||
- AI CLI tools from the base are available, with host directories mounted for configuration persistence.
|
||||
- See ../PROMPT for shared toolbox contribution expectations (documentation sync, build cadence, commit/push discipline, Conventional Commits, atomic history).
|
||||
|
||||
Collaboration checklist:
|
||||
1. Build upon the base tooling with {{toolbox_name}}-specific additions; mirror outcomes in README.md and this PROMPT.
|
||||
2. Prefer aqua-managed CLIs and mise-managed runtimes for reproducibility.
|
||||
3. After each tooling change, update README/PROMPT, run ./build.sh, commit (Conventional Commit message, focused diff), and push only once the build succeeds per ../PROMPT.
|
||||
4. Record verification steps (build/test commands) as they are performed.
|
||||
5. Maintain UID/GID mapping and non-root execution.
|
||||
|
||||
Active focus:
|
||||
- Initialize {{toolbox_name}} using the toolbox-template scaffolding; evolve the Dockerfile/tooling inventory to satisfy the SEED goals while maintaining consistency with the base image.
|
||||
@@ -1,6 +0,0 @@
|
||||
- This toolbox extends from the standard toolbox-base image, inheriting all base tooling (shells, CLIs, package managers).
|
||||
- Add {{toolbox_name}}-specific tools via aqua.yaml, Dockerfile, or mise configurations.
|
||||
- Document any additional host directory mounts needed in docker-compose.yml.
|
||||
- Ensure all tooling is compatible with the non-root toolbox user and UID/GID mapping.
|
||||
- Update README.md to document {{toolbox_name}}-specific features and tooling.
|
||||
- Follow the same build and run patterns as the base image for consistency.
|
||||
@@ -1,8 +0,0 @@
|
||||
version: 1.0.0
|
||||
registries:
|
||||
- type: standard
|
||||
ref: v4.431.0
|
||||
packages:
|
||||
# Add additional packages specific to your toolbox here
|
||||
# Example:
|
||||
# - name: cli/cli@v2.82.1
|
||||
@@ -1,62 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Validate dependencies
|
||||
if ! command -v docker &> /dev/null; then
|
||||
echo "Error: docker is required but not installed." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! docker buildx version &> /dev/null; then
|
||||
echo "Error: docker buildx is required but not available." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Get the toolbox name from the directory name (or you can pass it as an argument)
|
||||
TOOLBOX_NAME="${TOOLBOX_NAME_OVERRIDE:-$(basename "$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)")}"
|
||||
IMAGE_NAME="tsysdevstack-toolboxstack-${TOOLBOX_NAME#toolbox-}"
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
|
||||
USER_ID="${USER_ID_OVERRIDE:-$(id -u)}"
|
||||
GROUP_ID="${GROUP_ID_OVERRIDE:-$(id -g)}"
|
||||
USERNAME="${USERNAME_OVERRIDE:-toolbox}"
|
||||
TEA_VERSION="${TEA_VERSION_OVERRIDE:-0.11.1}"
|
||||
BUILDER_NAME="${BUILDER_NAME:-tsysdevstack-toolboxstack-builder}"
|
||||
CACHE_DIR="${SCRIPT_DIR}/.build-cache"
|
||||
|
||||
echo "Building ${IMAGE_NAME} with UID=${USER_ID} GID=${GROUP_ID} USERNAME=${USERNAME}"
|
||||
|
||||
if ! docker buildx inspect "${BUILDER_NAME}" >/dev/null 2>&1; then
|
||||
echo "Creating builder: ${BUILDER_NAME}"
|
||||
docker buildx create --driver docker-container --name "${BUILDER_NAME}" --use >/dev/null
|
||||
else
|
||||
echo "Using existing builder: ${BUILDER_NAME}"
|
||||
docker buildx use "${BUILDER_NAME}" >/dev/null
|
||||
fi
|
||||
|
||||
mkdir -p "${CACHE_DIR}"
|
||||
|
||||
echo "Starting build..."
|
||||
docker buildx build \
|
||||
--builder "${BUILDER_NAME}" \
|
||||
--load \
|
||||
--progress=plain \
|
||||
--build-arg USER_ID="${USER_ID}" \
|
||||
--build-arg GROUP_ID="${GROUP_ID}" \
|
||||
--build-arg USERNAME="${USERNAME}" \
|
||||
--build-arg TEA_VERSION="${TEA_VERSION}" \
|
||||
--cache-from "type=local,src=${CACHE_DIR}" \
|
||||
--cache-to "type=local,dest=${CACHE_DIR},mode=max" \
|
||||
--tag "${IMAGE_NAME}" \
|
||||
"${SCRIPT_DIR}"
|
||||
|
||||
echo "Build completed successfully."
|
||||
|
||||
# Run security scan if TRIVY is available
|
||||
if command -v trivy &> /dev/null; then
|
||||
echo "Running security scan with Trivy..."
|
||||
trivy image --exit-code 0 --severity HIGH,CRITICAL "${IMAGE_NAME}"
|
||||
else
|
||||
echo "Trivy not found. Install Trivy to perform security scanning."
|
||||
fi
|
||||
@@ -1,31 +0,0 @@
|
||||
services:
|
||||
{{toolbox_name}}:
|
||||
container_name: tsysdevstack-toolboxstack-{{toolbox_name}}
|
||||
image: tsysdevstack-toolboxstack-{{toolbox_name}}
|
||||
build:
|
||||
context: .
|
||||
args:
|
||||
USER_ID: ${LOCAL_UID:-1000}
|
||||
GROUP_ID: ${LOCAL_GID:-1000}
|
||||
USERNAME: ${LOCAL_USERNAME:-toolbox}
|
||||
user: "${LOCAL_UID:-1000}:${LOCAL_GID:-1000}"
|
||||
working_dir: /workspace
|
||||
command: ["sleep", "infinity"]
|
||||
init: true
|
||||
tty: true
|
||||
stdin_open: true
|
||||
volumes:
|
||||
- .:/workspace:rw
|
||||
- ${HOME}/.local/share/mise:/home/toolbox/.local/share/mise:rw
|
||||
- ${HOME}/.cache/mise:/home/toolbox/.cache/mise:rw
|
||||
# AI CLI tool configuration and cache directories
|
||||
- ${HOME}/.config/openai:/home/toolbox/.config/openai:rw
|
||||
- ${HOME}/.config/gemini:/home/toolbox/.config/gemini:rw
|
||||
- ${HOME}/.config/qwen:/home/toolbox/.config/qwen:rw
|
||||
- ${HOME}/.config/code:/home/toolbox/.config/code:rw
|
||||
- ${HOME}/.config/opencode:/home/toolbox/.config/opencode:rw
|
||||
- ${HOME}/.cache/openai:/home/toolbox/.cache/openai:rw
|
||||
- ${HOME}/.cache/gemini:/home/toolbox/.cache/gemini:rw
|
||||
- ${HOME}/.cache/qwen:/home/toolbox/.cache/qwen:rw
|
||||
- ${HOME}/.cache/code:/home/toolbox/.cache/code:rw
|
||||
- ${HOME}/.cache/opencode:/home/toolbox/.cache/opencode:rw
|
||||
@@ -1,52 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Validate dependencies
|
||||
if ! command -v docker &> /dev/null; then
|
||||
echo "Error: docker is required but not installed." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! command -v docker compose &> /dev/null; then
|
||||
echo "Error: docker compose is required but not installed." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
COMPOSE_FILE="${SCRIPT_DIR}/docker-compose.yml"
|
||||
|
||||
export LOCAL_UID="${USER_ID_OVERRIDE:-$(id -u)}"
|
||||
export LOCAL_GID="${GROUP_ID_OVERRIDE:-$(id -g)}"
|
||||
export LOCAL_USERNAME="${USERNAME_OVERRIDE:-toolbox}"
|
||||
|
||||
if [[ ! -f "${COMPOSE_FILE}" ]]; then
|
||||
echo "Error: docker-compose.yml not found at ${COMPOSE_FILE}" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ACTION="${1:-up}"
|
||||
shift || true
|
||||
|
||||
if [[ "${ACTION}" == "up" ]]; then
|
||||
# Create necessary directories for the toolbox tools
|
||||
mkdir -p "${HOME}/.local/share/mise" "${HOME}/.cache/mise"
|
||||
mkdir -p "${HOME}/.config" "${HOME}/.local/share"
|
||||
mkdir -p "${HOME}/.cache/openai" "${HOME}/.cache/gemini" "${HOME}/.cache/qwen" "${HOME}/.cache/code" "${HOME}/.cache/opencode"
|
||||
mkdir -p "${HOME}/.config/openai" "${HOME}/.config/gemini" "${HOME}/.config/qwen" "${HOME}/.config/code" "${HOME}/.config/opencode"
|
||||
fi
|
||||
|
||||
case "${ACTION}" in
|
||||
up)
|
||||
docker compose -f "${COMPOSE_FILE}" up --build --detach "$@"
|
||||
echo "Container started. Use 'docker exec -it $(basename "$SCRIPT_DIR" | sed 's/toolbox-//') zsh' to access the shell."
|
||||
;;
|
||||
down)
|
||||
docker compose -f "${COMPOSE_FILE}" down "$@"
|
||||
echo "Container stopped."
|
||||
;;
|
||||
*)
|
||||
echo "Usage: $0 [up|down] [additional docker compose args]" >&2
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
@@ -1,15 +0,0 @@
|
||||
# Please use conventional commit format: type(scope): description
|
||||
# Example: feat(auth): add jwt token expiration
|
||||
#
|
||||
# Types: feat, fix, docs, style, refactor, test, chore, perf, ci, build, revert
|
||||
#
|
||||
# Explain what and why in imperative mood:
|
||||
#
|
||||
#
|
||||
#
|
||||
# Signed-off-by: Your Name <your.email@example.com>
|
||||
#
|
||||
# ------------------------ >8 ------------------------
|
||||
# Do not modify or remove the line above.
|
||||
# Everything below it will be ignored.
|
||||
diff --git a/...
|
||||
Reference in New Issue
Block a user