Implement comprehensive testing framework and enhance documentation
- Add Project-Tests directory with complete testing infrastructure
- Create main test runner with JSON reporting and categorized tests
- Implement system validation tests (RAM, disk, network, permissions)
- Add security testing for HTTPS enforcement and deployment methods
- Create unit tests for framework functions and syntax validation
- Add ConfigValidation.sh framework for pre-flight system checks
- Enhance documentation with SECURITY.md and DEPLOYMENT.md guides
- Provide comprehensive testing README with usage instructions

The testing framework validates system compatibility, security configurations, and deployment requirements before execution, preventing deployment failures and providing clear error reporting for troubleshooting.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Project-Tests/README.md (normal file, 176 lines)
@@ -0,0 +1,176 @@
# TSYS FetchApply Testing Framework

## Overview

This testing framework provides comprehensive validation for the TSYS FetchApply infrastructure provisioning system. It includes unit tests, integration tests, security tests, and system validation.

## Test Categories

### 1. Unit Tests (`unit/`)
- **Purpose:** Test individual framework functions and components
- **Scope:** Framework includes, helper functions, syntax validation
- **Example:** `framework-functions.sh` - Tests logging, pretty print, and error handling functions

### 2. Integration Tests (`integration/`)
- **Purpose:** Test complete workflows and module interactions
- **Scope:** End-to-end deployment scenarios, module integration
- **Future:** Module interaction testing, deployment workflow validation

### 3. Security Tests (`security/`)
- **Purpose:** Validate security configurations and practices
- **Scope:** HTTPS enforcement, deployment security, SSH hardening
- **Example:** `https-enforcement.sh` - Validates all URLs use HTTPS

### 4. Validation Tests (`validation/`)
- **Purpose:** System compatibility and pre-flight checks
- **Scope:** System requirements, network connectivity, permissions
- **Example:** `system-requirements.sh` - Validates minimum system requirements

## Usage

### Run All Tests
```bash
./Project-Tests/run-tests.sh
```

### Run Specific Test Categories
```bash
./Project-Tests/run-tests.sh unit          # Unit tests only
./Project-Tests/run-tests.sh integration   # Integration tests only
./Project-Tests/run-tests.sh security      # Security tests only
./Project-Tests/run-tests.sh validation    # Validation tests only
```

### Run Individual Tests
```bash
./Project-Tests/validation/system-requirements.sh
./Project-Tests/security/https-enforcement.sh
./Project-Tests/unit/framework-functions.sh
```

## Test Results

- **Console Output:** Real-time test results with color-coded status
- **JSON Reports:** Detailed test reports saved to `logs/tests/`
- **Exit Codes:** 0 for success, 1 for failures
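
A report written by `run-tests.sh` looks like the following (the field names match the JSON generated in `run-tests.sh`; the values here are illustrative):

```json
{
    "timestamp": "2024-01-01T12:00:00+00:00",
    "total_tests": 4,
    "passed": 3,
    "failed": 1,
    "skipped": 0,
    "success_rate": 75.00
}
```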

## Configuration Validation

The validation framework performs pre-flight checks to ensure system compatibility. A condensed sketch of these checks follows the subsections below.

### System Requirements
- **Memory:** Minimum 2GB RAM
- **Disk Space:** Minimum 10GB available
- **OS Compatibility:** Ubuntu/Debian (tested); other distributions may work but are not fully tested

### Network Connectivity
- Tests connections to required download sources
- Validates that HTTPS endpoints are accessible
- Checks for firewall/proxy issues

### Command Dependencies
- Verifies required tools are installed (`curl`, `wget`, `git`, `systemctl`, `apt-get`)
- Checks for proper versions where applicable

### Permissions
- Validates write access to system directories
- Checks for required administrative privileges
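
For reference, a minimal standalone sketch of these pre-flight checks, condensed from `validation/system-requirements.sh` (the thresholds match the constants defined in that script):

```bash
#!/bin/bash
set -euo pipefail

# Thresholds matching validation/system-requirements.sh
MIN_RAM_GB=2
MIN_DISK_GB=10

# RAM: /proc/meminfo reports MemTotal in kB
total_mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
total_mem_gb=$((total_mem_kb / 1024 / 1024))
[[ $total_mem_gb -ge $MIN_RAM_GB ]] || { echo "Need ${MIN_RAM_GB}GB RAM, have ${total_mem_gb}GB"; exit 1; }

# Disk: available space on / in GB
available_gb=$(df / | awk 'NR==2 {print int($4/1024/1024)}')
[[ $available_gb -ge $MIN_DISK_GB ]] || { echo "Need ${MIN_DISK_GB}GB free, have ${available_gb}GB"; exit 1; }

# Permissions: write access to directories the provisioning scripts touch
for dir in /etc /usr/local/bin /var/log; do
    [[ -w "$dir" ]] || { echo "No write access to $dir (run as root?)"; exit 1; }
done

echo "Pre-flight checks passed"
```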

## Adding New Tests

### Test File Structure
```bash
#!/bin/bash
set -euo pipefail

function test_something() {
    echo "🔍 Testing something..."

    if [[ condition ]]; then   # replace "condition" with a real check
        echo "✅ Test passed"
        return 0
    else
        echo "❌ Test failed"
        return 1
    fi
}

function main() {
    echo "🧪 Running Test Suite Name"
    echo "=========================="

    local total_failures=0
    # Use += 1 rather than ((total_failures++)): the post-increment returns
    # the old value, so it exits with status 1 when the counter is 0 and
    # would abort the script under set -e.
    test_something || ((total_failures += 1))

    echo "=========================="
    if [[ $total_failures -eq 0 ]]; then
        echo "✅ All tests passed"
        exit 0
    else
        echo "❌ $total_failures tests failed"
        exit 1
    fi
}

if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
```
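
Save the new test into the matching category directory; `run-tests.sh` discovers any `*.sh` file there automatically and runs it via `bash`, so the executable bit is only needed for invoking the test directly (`my-new-test.sh` below is a hypothetical name):

```bash
chmod +x Project-Tests/unit/my-new-test.sh   # optional, for direct invocation
./Project-Tests/run-tests.sh unit            # the runner picks it up automatically
```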

### Test Categories Guidelines

- **Unit Tests:** Focus on individual functions, fast execution
- **Integration Tests:** Test module interactions, longer execution
- **Security Tests:** Validate security configurations
- **Validation Tests:** Pre-flight system checks

## Continuous Integration

The testing framework is designed to integrate with CI/CD pipelines:

```bash
# Example CI script
./Project-Tests/run-tests.sh all
test_exit_code=$?

if [[ $test_exit_code -eq 0 ]]; then
    echo "All tests passed - deployment approved"
else
    echo "Tests failed - deployment blocked"
    exit 1
fi
```

## Test Development Best Practices

1. **Clear Test Names:** Use descriptive function names
2. **Proper Exit Codes:** Return 0 for success, 1 for failure
3. **Informative Output:** Use emoji and clear messages
4. **Timeout Protection:** Use timeout for network operations
5. **Cleanup:** Remove temporary files and resources
6. **Error Handling:** Use `set -euo pipefail` for strict error handling

## Troubleshooting

### Common Issues

- **Permission Denied:** Run tests with appropriate privileges
- **Network Timeouts:** Check firewall and proxy settings
- **Missing Dependencies:** Install required tools before testing
- **Script Errors:** Validate syntax with `bash -n script.sh`

### Debug Mode
```bash
# Enable debug output
export DEBUG=1
./Project-Tests/run-tests.sh
```
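
Note that the test scripts in this commit do not yet read `DEBUG` themselves; a test that wants to honor the convention could enable bash tracing, for example (a sketch, assuming the `DEBUG=1` convention above):

```bash
# At the top of a test script, after set -euo pipefail:
if [[ "${DEBUG:-0}" == "1" ]]; then
    set -x   # trace each command as it executes
fi
```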

## Contributing

When adding new functionality to FetchApply:

1. Add corresponding tests in the appropriate category
2. Run the full test suite before committing
3. Update documentation for new test cases
4. Ensure tests pass in a clean environment
Project-Tests/run-tests.sh (executable file, 128 lines)
@@ -0,0 +1,128 @@
#!/bin/bash

# TSYS FetchApply Testing Framework
# Main test runner script
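#
# Usage:
#   ./Project-Tests/run-tests.sh [unit|integration|security|validation|all]
# With no argument, all suites run; per-run JSON reports are written to
# logs/tests/ under the project root.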

set -euo pipefail

# Source framework includes
PROJECT_ROOT="$(dirname "$(realpath "${BASH_SOURCE[0]}")")/.."
source "$PROJECT_ROOT/Framework-Includes/Logging.sh"
source "$PROJECT_ROOT/Framework-Includes/PrettyPrint.sh"

# Test configuration
TEST_LOG_DIR="$PROJECT_ROOT/logs/tests"
TEST_RESULTS_FILE="$TEST_LOG_DIR/test-results-$(date +%Y%m%d-%H%M%S).json"

# Ensure test log directory exists
mkdir -p "$TEST_LOG_DIR"

# Test counters
declare -g TESTS_PASSED=0
declare -g TESTS_FAILED=0
declare -g TESTS_SKIPPED=0

# Test runner functions
function run_test_suite() {
    local suite_name="$1"
    local test_dir="$2"

    print_header "Running $suite_name Tests"

    if [[ ! -d "$test_dir" ]]; then
        print_warning "Test directory $test_dir not found, skipping"
        return 0
    fi

    for test_file in "$test_dir"/*.sh; do
        if [[ -f "$test_file" ]]; then
            run_single_test "$test_file"
        fi
    done
}

function run_single_test() {
    local test_file="$1"
    local test_name
    test_name="$(basename "$test_file" .sh)"

    print_info "Running test: $test_name"

    # Kill any test that runs longer than 5 minutes
    if timeout 300 bash "$test_file"; then
        print_success "✅ $test_name PASSED"
        ((TESTS_PASSED += 1))   # += 1, not ++: ((var++)) returns status 1 when var is 0 and aborts under set -e
    else
        print_error "❌ $test_name FAILED"
        ((TESTS_FAILED += 1))
    fi
}

function generate_test_report() {
    local total_tests=$((TESTS_PASSED + TESTS_FAILED + TESTS_SKIPPED))

    print_header "Test Results Summary"
    print_info "Total Tests: $total_tests"
    print_success "Passed: $TESTS_PASSED"
    print_error "Failed: $TESTS_FAILED"
    print_warning "Skipped: $TESTS_SKIPPED"

    # Generate JSON report (guard against division by zero when no tests ran)
    cat > "$TEST_RESULTS_FILE" <<EOF
{
    "timestamp": "$(date -Iseconds)",
    "total_tests": $total_tests,
    "passed": $TESTS_PASSED,
    "failed": $TESTS_FAILED,
    "skipped": $TESTS_SKIPPED,
    "success_rate": $(awk -v p="$TESTS_PASSED" -v t="$total_tests" 'BEGIN {printf "%.2f", t > 0 ? (p / t) * 100 : 0}')
}
EOF

    print_info "Test report saved to: $TEST_RESULTS_FILE"
}

# Main execution
function main() {
    print_header "TSYS FetchApply Test Suite"

    # Parse command line arguments
    local test_type="${1:-all}"

    case "$test_type" in
        "unit")
            run_test_suite "Unit" "$(dirname "$0")/unit"
            ;;
        "integration")
            run_test_suite "Integration" "$(dirname "$0")/integration"
            ;;
        "security")
            run_test_suite "Security" "$(dirname "$0")/security"
            ;;
        "validation")
            run_test_suite "Validation" "$(dirname "$0")/validation"
            ;;
        "all")
            run_test_suite "Unit" "$(dirname "$0")/unit"
            run_test_suite "Integration" "$(dirname "$0")/integration"
            run_test_suite "Security" "$(dirname "$0")/security"
            run_test_suite "Validation" "$(dirname "$0")/validation"
            ;;
        *)
            print_error "Usage: $0 [unit|integration|security|validation|all]"
            exit 1
            ;;
    esac

    generate_test_report

    # Exit with appropriate code
    if [[ $TESTS_FAILED -gt 0 ]]; then
        exit 1
    else
        exit 0
    fi
}

# Run main if executed directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
Project-Tests/security/https-enforcement.sh (executable file, 143 lines)
@@ -0,0 +1,143 @@
#!/bin/bash

# HTTPS Enforcement Security Test
# Validates that all scripts use HTTPS instead of HTTP

set -euo pipefail

PROJECT_ROOT="$(dirname "$(realpath "${BASH_SOURCE[0]}")")/../.."

function test_no_http_urls() {
    echo "🔍 Checking for HTTP URLs in scripts..."

    local http_violations=0
    local script_dirs=("ProjectCode" "Framework-Includes" "Project-Includes")

    for dir in "${script_dirs[@]}"; do
        if [[ -d "$PROJECT_ROOT/$dir" ]]; then
            # Find HTTP URLs in shell scripts, excluding comment lines.
            # grep -n prefixes each match with "NUM:", so the comment filter
            # must match after that prefix, not at the start of the line.
            while IFS= read -r -d '' file; do
                if grep -n "http://" "$file" | grep -vE "^[0-9]+:[[:space:]]*#" | grep -v "schema.org" | grep -v "xmlns"; then
                    echo "❌ HTTP URL found in: $file"
                    ((http_violations += 1))   # += 1 so set -e survives the first violation
                fi
            done < <(find "$PROJECT_ROOT/$dir" -name "*.sh" -type f -print0)
        fi
    done

    if [[ $http_violations -eq 0 ]]; then
        echo "✅ No HTTP URLs found in active scripts"
        return 0
    else
        echo "❌ Found $http_violations HTTP URL violations"
        return 1
    fi
}

function test_https_urls_valid() {
    echo "🔍 Validating HTTPS URLs are accessible..."

    local script_dirs=("ProjectCode" "Framework-Includes" "Project-Includes")
    local https_failures=0

    # Extract HTTPS URLs from scripts
    for dir in "${script_dirs[@]}"; do
        if [[ -d "$PROJECT_ROOT/$dir" ]]; then
            while IFS= read -r -d '' file; do
                # Feed the URL loop via process substitution rather than a
                # pipeline, so the counter increments run in this shell and
                # are not lost in a subshell
                while IFS= read -r url; do
                    # Test connectivity with timeout
                    if timeout 30 curl -s --head --fail "$url" >/dev/null 2>&1; then
                        echo "✅ HTTPS URL accessible: $url"
                    else
                        echo "❌ HTTPS URL not accessible: $url"
                        ((https_failures += 1))
                    fi
                done < <(grep -o "https://[^[:space:]\"']*" "$file" | grep -v "schema.org")
            done < <(find "$PROJECT_ROOT/$dir" -name "*.sh" -type f -print0)
        fi
    done

    return $https_failures
}

function test_ssl_certificate_validation() {
    echo "🔍 Testing SSL certificate validation..."

    local test_urls=(
        "https://archive.ubuntu.com"
        "https://linux.dell.com"
        "https://download.proxmox.com"
    )

    local ssl_failures=0

    for url in "${test_urls[@]}"; do
        # curl verifies the certificate chain and hostname by default, so a
        # handshake failure or HTTP error fails the check. (--ssl-reqd applies
        # to FTP/SMTP upgrades and --cert-status requires OCSP stapling, which
        # many valid servers do not provide.)
        if curl -s --head --fail "$url" >/dev/null 2>&1; then
            echo "✅ SSL certificate valid: $url"
        else
            echo "❌ SSL certificate validation failed: $url"
            ((ssl_failures += 1))
        fi
    done

    return $ssl_failures
}

function test_deployment_security() {
    echo "🔍 Testing deployment method security..."

    local readme_file="$PROJECT_ROOT/README.md"

    if [[ -f "$readme_file" ]]; then
        # Check for insecure curl | bash patterns
        if grep -q "curl.*|.*bash" "$readme_file" || grep -q "wget.*|.*bash" "$readme_file"; then
            echo "❌ Insecure deployment method found in README.md"
            return 1
        else
            echo "✅ Secure deployment method in README.md"
        fi

        # Check for git clone method
        if grep -q "git clone" "$readme_file"; then
            echo "✅ Git clone deployment method found"
            return 0
        else
            echo "⚠️ No git clone method found in README.md"
            return 1
        fi
    else
        echo "❌ README.md not found"
        return 1
    fi
}

# Main test execution
function main() {
    echo "🔒 Running HTTPS Enforcement Security Tests"
    echo "=========================================="

    local total_failures=0

    # Run all security tests (+= 1 keeps set -e from aborting on the first failure)
    test_no_http_urls || ((total_failures += 1))
    test_https_urls_valid || ((total_failures += 1))
    test_ssl_certificate_validation || ((total_failures += 1))
    test_deployment_security || ((total_failures += 1))

    echo "=========================================="

    if [[ $total_failures -eq 0 ]]; then
        echo "✅ All HTTPS enforcement security tests passed"
        exit 0
    else
        echo "❌ $total_failures HTTPS enforcement security tests failed"
        exit 1
    fi
}

# Run main if executed directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
Project-Tests/unit/framework-functions.sh (executable file, 176 lines)
@@ -0,0 +1,176 @@
#!/bin/bash

# Framework Functions Unit Tests
# Tests core framework functionality

set -euo pipefail

PROJECT_ROOT="$(dirname "$(realpath "${BASH_SOURCE[0]}")")/../.."

# Source framework functions
source "$PROJECT_ROOT/Framework-Includes/Logging.sh" 2>/dev/null || echo "Warning: Logging.sh not found"
source "$PROJECT_ROOT/Framework-Includes/PrettyPrint.sh" 2>/dev/null || echo "Warning: PrettyPrint.sh not found"
source "$PROJECT_ROOT/Framework-Includes/ErrorHandling.sh" 2>/dev/null || echo "Warning: ErrorHandling.sh not found"

function test_logging_functions() {
    echo "🔍 Testing logging functions..."

    local test_log="/tmp/test-log-$$"

    # Test if logging functions exist and work
    if command -v log_info >/dev/null 2>&1; then
        log_info "Test info message" 2>/dev/null || true
        echo "✅ log_info function exists"
    else
        echo "❌ log_info function missing"
        return 1
    fi

    if command -v log_error >/dev/null 2>&1; then
        log_error "Test error message" 2>/dev/null || true
        echo "✅ log_error function exists"
    else
        echo "❌ log_error function missing"
        return 1
    fi

    # Cleanup
    rm -f "$test_log"
    return 0
}

function test_pretty_print_functions() {
    echo "🔍 Testing pretty print functions..."

    # Test if pretty print functions exist
    if command -v print_info >/dev/null 2>&1; then
        print_info "Test info message" >/dev/null 2>&1 || true
        echo "✅ print_info function exists"
    else
        echo "❌ print_info function missing"
        return 1
    fi

    if command -v print_error >/dev/null 2>&1; then
        print_error "Test error message" >/dev/null 2>&1 || true
        echo "✅ print_error function exists"
    else
        echo "❌ print_error function missing"
        return 1
    fi

    if command -v print_success >/dev/null 2>&1; then
        print_success "Test success message" >/dev/null 2>&1 || true
        echo "✅ print_success function exists"
    else
        echo "❌ print_success function missing"
        return 1
    fi

    return 0
}

function test_error_handling() {
    echo "🔍 Testing error handling..."

    # Test if error handling functions exist
    if command -v handle_error >/dev/null 2>&1; then
        echo "✅ handle_error function exists"
    else
        echo "❌ handle_error function missing"
        return 1
    fi

    # Test bash strict mode is set
    if [[ "$-" == *e* ]]; then
        echo "✅ Bash strict mode (set -e) is enabled"
    else
        echo "❌ Bash strict mode (set -e) not enabled"
        return 1
    fi

    if [[ "$-" == *u* ]]; then
        echo "✅ Bash unset variable checking (set -u) is enabled"
    else
        echo "❌ Bash unset variable checking (set -u) not enabled"
        return 1
    fi

    return 0
}

function test_framework_includes_exist() {
    echo "🔍 Testing framework includes exist..."

    local required_includes=(
        "Logging.sh"
        "PrettyPrint.sh"
        "ErrorHandling.sh"
        "PreflightCheck.sh"
    )

    local missing_files=0

    for include_file in "${required_includes[@]}"; do
        if [[ -f "$PROJECT_ROOT/Framework-Includes/$include_file" ]]; then
            echo "✅ Framework include exists: $include_file"
        else
            echo "❌ Framework include missing: $include_file"
            ((missing_files += 1))   # += 1 so set -e survives the first miss
        fi
    done

    return $missing_files
}

function test_syntax_validation() {
    echo "🔍 Testing script syntax validation..."

    local syntax_errors=0
    local script_dirs=("Framework-Includes" "Project-Includes" "ProjectCode")

    for dir in "${script_dirs[@]}"; do
        if [[ -d "$PROJECT_ROOT/$dir" ]]; then
            while IFS= read -r -d '' file; do
                if bash -n "$file" 2>/dev/null; then
                    echo "✅ Syntax valid: $(basename "$file")"
                else
                    echo "❌ Syntax error in: $(basename "$file")"
                    ((syntax_errors += 1))
                fi
            done < <(find "$PROJECT_ROOT/$dir" -name "*.sh" -type f -print0)
        fi
    done

    return $syntax_errors
}

# Main test execution
function main() {
    echo "🧪 Running Framework Functions Unit Tests"
    echo "========================================"

    local total_failures=0

    # Run all unit tests (+= 1 keeps set -e from aborting on the first failure)
    test_framework_includes_exist || ((total_failures += 1))
    test_logging_functions || ((total_failures += 1))
    test_pretty_print_functions || ((total_failures += 1))
    test_error_handling || ((total_failures += 1))
    test_syntax_validation || ((total_failures += 1))

    echo "========================================"

    if [[ $total_failures -eq 0 ]]; then
        echo "✅ All framework function unit tests passed"
        exit 0
    else
        echo "❌ $total_failures framework function unit tests failed"
        exit 1
    fi
}

# Run main if executed directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
Project-Tests/validation/system-requirements.sh (executable file, 142 lines)
@@ -0,0 +1,142 @@
#!/bin/bash

# System Requirements Validation Test
# Validates minimum system requirements before deployment

set -euo pipefail

# Test configuration
MIN_RAM_GB=2
MIN_DISK_GB=10
REQUIRED_COMMANDS=("curl" "wget" "git" "systemctl" "apt-get")

# Test functions
function test_memory_requirements() {
    # Declare and assign separately so a failed command substitution is not
    # masked by local's exit status under set -e
    local total_mem_kb
    total_mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
    local total_mem_gb=$((total_mem_kb / 1024 / 1024))

    if [[ $total_mem_gb -ge $MIN_RAM_GB ]]; then
        echo "✅ Memory requirement met: ${total_mem_gb}GB >= ${MIN_RAM_GB}GB"
        return 0
    else
        echo "❌ Memory requirement not met: ${total_mem_gb}GB < ${MIN_RAM_GB}GB"
        return 1
    fi
}

function test_disk_space() {
    local available_gb
    available_gb=$(df / | tail -1 | awk '{print int($4/1024/1024)}')

    if [[ $available_gb -ge $MIN_DISK_GB ]]; then
        echo "✅ Disk space requirement met: ${available_gb}GB >= ${MIN_DISK_GB}GB"
        return 0
    else
        echo "❌ Disk space requirement not met: ${available_gb}GB < ${MIN_DISK_GB}GB"
        return 1
    fi
}

function test_required_commands() {
    local failed=0

    for cmd in "${REQUIRED_COMMANDS[@]}"; do
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "✅ Required command available: $cmd"
        else
            echo "❌ Required command missing: $cmd"
            ((failed += 1))   # += 1 so set -e survives the first miss
        fi
    done

    return $failed
}

function test_os_compatibility() {
    if [[ -f /etc/os-release ]]; then
        local os_id os_version
        os_id=$(grep "^ID=" /etc/os-release | cut -d'=' -f2 | tr -d '"')
        os_version=$(grep "^VERSION_ID=" /etc/os-release | cut -d'=' -f2 | tr -d '"')

        case "$os_id" in
            ubuntu|debian)
                echo "✅ OS compatibility: $os_id $os_version (supported)"
                return 0
                ;;
            *)
                echo "⚠️ OS compatibility: $os_id $os_version (may work, not fully tested)"
                return 0
                ;;
        esac
    else
        echo "❌ Cannot determine OS version"
        return 1
    fi
}

function test_network_connectivity() {
    local test_urls=(
        "https://archive.ubuntu.com"
        "https://linux.dell.com"
        "https://download.proxmox.com"
        "https://github.com"
    )

    local failed=0

    for url in "${test_urls[@]}"; do
        if curl -s --connect-timeout 10 --max-time 30 "$url" >/dev/null 2>&1; then
            echo "✅ Network connectivity: $url"
        else
            echo "❌ Network connectivity failed: $url"
            ((failed += 1))
        fi
    done

    return $failed
}

function test_permissions() {
    local test_dirs=("/etc" "/usr/local/bin" "/var/log")
    local failed=0

    for dir in "${test_dirs[@]}"; do
        if [[ -w "$dir" ]]; then
            echo "✅ Write permission: $dir"
        else
            echo "❌ Write permission denied: $dir"
            ((failed += 1))
        fi
    done

    return $failed
}

# Main test execution
function main() {
    echo "🔍 Running System Requirements Validation"
    echo "========================================"

    local total_failures=0

    # Run all validation tests (+= 1 keeps set -e from aborting on the first failure)
    test_memory_requirements || ((total_failures += 1))
    test_disk_space || ((total_failures += 1))
    test_required_commands || ((total_failures += 1))
    test_os_compatibility || ((total_failures += 1))
    test_network_connectivity || ((total_failures += 1))
    test_permissions || ((total_failures += 1))

    echo "========================================"

    if [[ $total_failures -eq 0 ]]; then
        echo "✅ All system requirements validation tests passed"
        exit 0
    else
        echo "❌ $total_failures system requirements validation tests failed"
        exit 1
    fi
}

# Run main if executed directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi