Add core architecture patterns and GIS/weather components from AIOS-Public

2025-10-16 13:14:30 -05:00
parent 782eec63a5
commit 5887f4e729
32 changed files with 1970 additions and 1 deletions

AGENTS.md (new file)

@@ -0,0 +1,118 @@
# AIOS-Public Agents
This document tracks the various agents, tools, and systems used in the AIOS-Public project.
## Documentation Tools
### RCEO-AIOS-Public-Tools-DocMaker Container Family
**Purpose**: Documentation generation containers with multiple document conversion tools, organized in a layered architecture.
**Container/Stack Names**:
- RCEO-AIOS-Public-Tools-DocMaker-Base: Base documentation environment
- RCEO-AIOS-Public-Tools-DocMaker-Light: Lightweight documentation tools (fast-starting)
- RCEO-AIOS-Public-Tools-DocMaker-Full: Full documentation with LaTeX-full
- RCEO-AIOS-Public-Tools-DocMaker-Computational: All documentation tools plus computational tools (R, Python, Jupyter, Octave)
**Technology Stack**:
- Base: Debian Bookworm slim
- Bash
- Python 3
- Node.js
- Rust (with Cargo)
- Pandoc
- LaTeX (varies by container: lightweight packages in the Base and Light images, texlive-full in the Full image)
- mdBook (installed via Cargo)
- mdbook-pdf (installed via Cargo)
- Typst
- Marp CLI
- Wandmalfarbe pandoc-latex-template: the Eisvogel LaTeX template for professional PDF generation
- Spell/Grammar checking:
- Hunspell (with en-US dictionary)
- Aspell (with en dictionary)
- Vale (style and grammar linter)
- Reading time estimation: mdstat
- Additional text processing tools
- Computational tools in Computational container:
- R programming language
- Python scientific stack (pandas, numpy, matplotlib, scipy)
- Jupyter notebooks
- GNU Octave
- bc (command-line calculator)
**Usage**:
- Use Light container for quick documentation tasks (COO mode)
- Use Full container for complex document generation (COO mode)
- Use Computational container for data analysis and R&D work (CTO mode)
- Base container serves as foundation for other containers
**Docker Configuration**:
- Located in the `Docker/` directory
- Each container has its own subdirectory with Dockerfile and docker-compose.yml
- Maps the project root directory to `/workspace` inside the container
- Uses UID/GID mapping for proper file permissions across environments
- Can be run with `docker-compose up` from each container's directory
**Container Usage Map**:
- Light container: COO mode, quick documentation tasks (CV, proposals, governance docs)
- Full container: COO mode, complex document generation with LaTeX-full
- Computational container: CTO mode, data analysis and R&D work (R, Python, Jupyter)
**Commands to run**:
Using the wrapper script (recommended; it handles UID/GID automatically):
```bash
# Build and start the lightweight container (COO mode)
cd Docker/RCEO-AIOS-Public-Tools-DocMaker-Light
./docker-compose-wrapper.sh up --build
# Build and start the full documentation container (COO mode)
cd Docker/RCEO-AIOS-Public-Tools-DocMaker-Full
./docker-compose-wrapper.sh up --build
# Build and start the computational container (CTO mode)
cd Docker/RCEO-AIOS-Public-Tools-DocMaker-Computational
./docker-compose-wrapper.sh up --build
# Run commands in containers with automatic user mapping:
./docker-compose-wrapper.sh run docmaker-light [command] # Light container
./docker-compose-wrapper.sh run docmaker-full [command] # Full container
./docker-compose-wrapper.sh run docmaker-computational [command] # Computational container
```
Using docker-compose directly (requires setting environment variables manually):
```bash
# Set environment variables for proper file permissions
export LOCAL_USER_ID=$(id -u)
export LOCAL_GROUP_ID=$(id -g)
# Build and start containers
cd Docker/RCEO-AIOS-Public-Tools-DocMaker-Light
docker-compose up --build
# Example usage of documentation tools with wrapper script:
# Spell checking with hunspell
./docker-compose-wrapper.sh run docmaker-full hunspell -d en_US document.md
# Create timeline with Markwhen (not currently available)
# This will be enabled when Markwhen installation issue is resolved
# ./docker-compose-wrapper.sh run docmaker-full markwhen input.mw --output output.html
# Grammar/style checking with Vale
./docker-compose-wrapper.sh run docmaker-full vale document.md
# Reading time estimation
./docker-compose-wrapper.sh run docmaker-full python3 -m mdstat document.md
# Run R analysis (in computational container)
./docker-compose-wrapper.sh run docmaker-computational Rscript analysis.R
# Run Python analysis (in computational container)
./docker-compose-wrapper.sh run docmaker-computational python analysis.py
# Check spelling with aspell
./docker-compose-wrapper.sh run docmaker-full aspell -c document.md
```
**User Management**: All containers run as non-root user `ReachableCEO-Tools` with UID/GID mapping from the host environment to ensure proper file permissions.
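A quick way to verify the mapping (a minimal sketch; the test file name is illustrative):
```bash
# Compare IDs across the boundary
id -u && id -g                                     # on the host
./docker-compose-wrapper.sh run docmaker-light id  # inside the container

# Files created under /workspace should be owned by the host user, not root
./docker-compose-wrapper.sh run docmaker-light touch /workspace/perm-test.txt
ls -l perm-test.txt   # run on the host from the project root, then clean up
```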

Docker/README.md (new file)

@@ -0,0 +1,40 @@
# TSYS-AIOS-GIS Docker Documentation
This directory contains organized Docker configurations for GIS and weather data processing tools in the TSYS-AIOS-GIS project.
## Container Structure
Each container has its own subdirectory with specific configuration files:
- `TSYS-AIOS-GIS-Tools-GIS-Base/` - Base GIS environment with core geospatial libraries
- `TSYS-AIOS-GIS-Tools-GIS-Processing/` - Advanced GIS processing with Jupyter notebooks
- `TSYS-AIOS-GIS-Tools-Weather-Base/` - Base weather data processing environment
- `TSYS-AIOS-GIS-Tools-Weather-Analysis/` - Advanced weather analysis with forecasting tools
## Container Naming Convention
All containers follow the `TSYS-AIOS-GIS-Tools-` naming convention with descriptive suffixes.
## Usage
### Building and Running Individual Containers
Each container has its own subdirectory with its Dockerfile and docker-compose.yml file.
```bash
# Navigate to the specific container directory
cd /home/localuser/AIWorkspace/TSYS-AIOS-GIS/Docker/TSYS-AIOS-GIS-Tools-GIS-Base
# Build the container
./docker-compose-wrapper.sh build
# Run the container
./docker-compose-wrapper.sh up --build
# Run a specific command in the container
./docker-compose-wrapper.sh run tsys-gis-base [command]
```
### Individual Container Documentation
For specific usage information for each container, see the README files in their respective subdirectories.

Docker/TSYS-AIOS-GIS-Tools-GIS-Base/Dockerfile (new file)

@@ -0,0 +1,78 @@
FROM debian:bookworm-slim
# Avoid prompts from apt
ENV DEBIAN_FRONTEND=noninteractive
# Install base packages for GIS tools
RUN apt-get update && apt-get install -y \
bash \
curl \
wget \
git \
unzip \
python3 \
python3-pip \
build-essential \
sudo \
&& rm -rf /var/lib/apt/lists/*
# Create symbolic link for python
RUN ln -s /usr/bin/python3 /usr/bin/python
# Install GIS tools
RUN apt-get update && apt-get install -y \
gdal-bin \
libgdal-dev \
proj-bin \
proj-data \
libproj-dev \
postgis \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Install the DuckDB CLI (the spatial extension is installed/loaded at runtime via INSTALL/LOAD);
# the release zip extracts a single binary named "duckdb"
RUN curl -L https://github.com/duckdb/duckdb/releases/latest/download/duckdb_cli-linux-amd64.zip \
-o /tmp/duckdb.zip && \
cd /tmp && \
unzip duckdb.zip && \
cp duckdb /usr/local/bin/duckdb && \
chmod +x /usr/local/bin/duckdb && \
rm -rf /tmp/duckdb*
# Install Python GIS libraries
RUN pip3 install --break-system-packages \
geopandas \
shapely \
rasterio \
folium \
plotly \
xarray \
cfgrib \
netcdf4 \
matplotlib \
seaborn \
duckdb \
dask
# Install R (spatial packages are installed below)
RUN apt-get update && apt-get install -y \
r-base \
r-base-dev \
&& rm -rf /var/lib/apt/lists/*
# Install additional tools for weather data processing
RUN apt-get update && apt-get install -y \
ftp \
&& rm -rf /var/lib/apt/lists/*
# Install R spatial packages (rgdal was archived on CRAN in 2023; terra is its successor)
RUN R --slave -e "install.packages(c('raster', 'terra', 'ncdf4', 'rasterVis', 'leaflet'), repos='https://cran.rstudio.com/', dependencies=TRUE)"
# Add entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Create a working directory
WORKDIR /workspace
# Use the entrypoint script to handle user creation
ENTRYPOINT ["/entrypoint.sh"]

Docker/TSYS-AIOS-GIS-Tools-GIS-Base/README.md (new file)

@@ -0,0 +1,129 @@
# TSYS-AIOS-GIS-Tools-GIS-Base Container
This container is part of the TSYS-AIOS-GIS project and provides a base GIS and weather data processing environment.
## Overview
The TSYS-AIOS-GIS-Tools-GIS-Base container is designed for GIS data processing and weather data analysis tasks. It includes essential tools for handling geospatial data formats and weather datasets with a focus on self-hosted GIS stack capabilities.
## Tools Included
### Core Tools
- **Base OS**: Debian Bookworm slim
- **Shell**: Bash
- **Programming Languages**:
- Python 3 with geospatial libraries
- R with spatial packages
### GIS Libraries
- **GDAL/OGR**: Geospatial Data Abstraction Library for format translation and processing
- **PROJ**: Coordinate transformation software
- **PostGIS**: Client tools for spatial database operations
- **DuckDB**: With spatial extensions for efficient data processing
- **GeoPandas**: Python geospatial data handling
- **Shapely**: Python geometric operations
- **Rasterio**: Raster processing in Python
### Weather Data Processing
- **xarray**: Multi-dimensional data in Python
- **cfgrib**: GRIB format handling
- **netCDF4**: NetCDF file handling
- **MetPy**: meteorological calculations (installed in the Weather containers rather than this image)
### Visualization
- **Folium**: Interactive maps
- **Plotly**: Time series visualization
- **Matplotlib/Seaborn**: Statistical plots
- **R visualization packages**: For statistical analysis
### Additional Tools
- **Dask**: For large data processing
- **FTP client**: For bulk data downloads
## Usage
### Building the Base Container
```bash
# From this directory
cd /home/localuser/AIWorkspace/TSYS-AIOS-GIS/Docker/TSYS-AIOS-GIS-Tools-GIS-Base
# Use the wrapper script to automatically detect and set user IDs
./docker-compose-wrapper.sh build
# Or run commands in the base container with automatic user mapping
./docker-compose-wrapper.sh run tsys-gis-base [command]
# Example: Process a shapefile with GDAL
./docker-compose-wrapper.sh run tsys-gis-base ogrinfo /workspace/path/to/shapefile.shp
# Example: Start Python with geospatial libraries
./docker-compose-wrapper.sh run tsys-gis-base python3
# Example: Start R with spatial packages
./docker-compose-wrapper.sh run tsys-gis-base R
```
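Because the base image ships the DuckDB CLI, a quick spatial smoke test looks like this (a sketch; it assumes the container can reach DuckDB's extension repository, since `INSTALL spatial` downloads the extension on first use):
```bash
./docker-compose-wrapper.sh run tsys-gis-base \
  duckdb -c "INSTALL spatial; LOAD spatial; SELECT ST_AsText(ST_Point(-97.74, 30.27));"
```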
### Using with docker-compose directly
```bash
# Set environment variables and run docker-compose directly
LOCAL_USER_ID=$(id -u) LOCAL_GROUP_ID=$(id -g) docker-compose up --build
# Or export variables first
export LOCAL_USER_ID=$(id -u)
export LOCAL_GROUP_ID=$(id -g)
docker-compose up
```
### Using the wrapper script
```bash
# Build and start the base GIS container with automatic user mapping
./docker-compose-wrapper.sh up --build
# Start without rebuilding
./docker-compose-wrapper.sh up
# View container status
./docker-compose-wrapper.sh ps
# Stop containers
./docker-compose-wrapper.sh down
```
## User ID Mapping (For File Permissions)
The container automatically detects and uses the host user's UID and GID to ensure proper file permissions. This means:
- Files created inside the container will have the correct ownership on the host
- No more root-owned files after container operations
- Works across different environments (development, production servers)
The container detects the user ID from the mounted workspace volume. If needed, you can override the default values by setting environment variables:
```bash
# Set specific user ID and group ID before running docker-compose
export LOCAL_USER_ID=1000
export LOCAL_GROUP_ID=1000
docker-compose up
```
Or run with inline environment variables:
```bash
LOCAL_USER_ID=1000 LOCAL_GROUP_ID=1000 docker-compose up
```
The container runs as a non-root user named `TSYS-Tools` with the detected host user's UID/GID.
## Data Processing Workflows
This container is designed to handle the following (a short example follows the list):
- GIS data processing (shapefiles, GeoJSON, Parquet, etc.)
- Weather data processing (GRIB, NetCDF formats)
- ETL workflows for geospatial and meteorological datasets
- Integration with PostGIS for spatial database operations
- Output to MinIO buckets for business use
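For example, a minimal GDAL step from such a workflow, reprojecting a shapefile to WGS84 and writing GeoJSON (paths are illustrative):
```bash
./docker-compose-wrapper.sh run tsys-gis-base \
  ogr2ogr -t_srs EPSG:4326 -f GeoJSON /workspace/out/parcels.geojson /workspace/data/parcels.shp
```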
## Integration
- Can be used in CTO mode for R&D activities
- Compatible with existing documentation containers for report generation
- Designed for both workstation prototyping and server deployment

View File

@@ -0,0 +1,67 @@
#!/bin/bash
# docker-compose-wrapper.sh - Wrapper script to detect host UID/GID and run docker-compose
set -e # Exit on any error
# Detect the UID and GID of the user that owns the workspace directory (parent directory)
WORKSPACE_DIR="$(cd "$(dirname "$0")/../.." && pwd)"
echo "Detecting user ID from workspace directory: $WORKSPACE_DIR"
if [ -d "$WORKSPACE_DIR" ]; then
DETECTED_USER_ID=$(stat -c %u "$WORKSPACE_DIR" 2>/dev/null || echo 0)
DETECTED_GROUP_ID=$(stat -c %g "$WORKSPACE_DIR" 2>/dev/null || echo 0)
# If detection failed, try current user
if [ "$DETECTED_USER_ID" = "0" ]; then
DETECTED_USER_ID=$(id -u)
DETECTED_GROUP_ID=$(id -g)
fi
else
# Fallback to current user if workspace directory doesn't exist
DETECTED_USER_ID=$(id -u)
DETECTED_GROUP_ID=$(id -g)
fi
echo "Detected USER_ID=$DETECTED_USER_ID and GROUP_ID=$DETECTED_GROUP_ID"
# Set environment variables for docker-compose
export LOCAL_USER_ID=$DETECTED_USER_ID
export LOCAL_GROUP_ID=$DETECTED_GROUP_ID
# Show usage information
echo ""
echo "Usage: $0 [build|up|run <service> <command>|exec <service> <command>|down|ps]"
echo ""
echo "Examples:"
echo " $0 up # Start services"
echo " $0 build # Build containers"
echo " $0 run tsys-gis-base bash # Run command in container"
echo " $0 down # Stop and remove containers"
echo ""
# Check if docker compose (new format) or docker-compose (old format) is available
if command -v docker &> /dev/null && docker compose version &> /dev/null; then
# Use new docker compose format
if [ $# -eq 0 ]; then
echo "No command provided. Running 'docker compose up'..."
docker compose up
else
# Execute the provided docker compose command
echo "Running: docker compose $*"
docker compose "$@"
fi
elif command -v docker-compose &> /dev/null; then
# Fallback to old docker-compose format
if [ $# -eq 0 ]; then
echo "No command provided. Running 'docker-compose up'..."
docker-compose up
else
# Execute the provided docker-compose command
echo "Running: docker-compose $*"
docker-compose "$@"
fi
else
echo "Error: Neither 'docker compose' nor 'docker-compose' command found."
echo "Please install Docker Compose to use this script."
exit 1
fi

Docker/TSYS-AIOS-GIS-Tools-GIS-Base/docker-compose.yml (new file)

@@ -0,0 +1,18 @@
version: '3.8'
services:
tsys-gis-base:
build:
context: .
dockerfile: Dockerfile
container_name: TSYS-AIOS-GIS-Tools-GIS-Base
image: tsys-aios-gis-tools-gis-base:latest
volumes:
- ../../../:/workspace:rw
working_dir: /workspace
stdin_open: true
tty: true
environment:
- LOCAL_USER_ID=${LOCAL_USER_ID:-1000}
- LOCAL_GROUP_ID=${LOCAL_GROUP_ID:-1000}
user: "${LOCAL_USER_ID:-1000}:${LOCAL_GROUP_ID:-1000}"

Docker/TSYS-AIOS-GIS-Tools-GIS-Base/entrypoint.sh (new file)

@@ -0,0 +1,49 @@
#!/bin/bash
# entrypoint.sh - Entrypoint script to handle user creation and permission setup at runtime
# Set default values if not provided
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
# If the resolved UID is root's (or detection failed upstream), detect IDs from the workspace volume
if [ "$USER_ID" = "0" ]; then
# Detect the UID and GID of the user that owns the workspace directory
if [ -d "/workspace" ]; then
USER_ID=$(stat -c %u /workspace 2>/dev/null || echo 1000)
GROUP_ID=$(stat -c %g /workspace 2>/dev/null || echo 1000)
else
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
fi
fi
echo "Starting with USER_ID=$USER_ID and GROUP_ID=$GROUP_ID"
# Create the group with specified GID
groupadd -f -g $GROUP_ID -o TSYS-Tools 2>/dev/null || groupmod -g $GROUP_ID -o TSYS-Tools
# Create the user with specified UID and add to the group
useradd -u $USER_ID -g $GROUP_ID -m -s /bin/bash -o TSYS-Tools 2>/dev/null || usermod -u $USER_ID -g $GROUP_ID -o TSYS-Tools
# Add user to sudo group for any necessary operations
usermod -aG sudo TSYS-Tools 2>/dev/null || true
# Make sure workspace directory exists and has proper permissions
mkdir -p /workspace
chown -R $USER_ID:$GROUP_ID /workspace
# Set up proper permissions for .local (if they exist)
mkdir -p /home/TSYS-Tools/.local
chown $USER_ID:$GROUP_ID /home/TSYS-Tools/.local
# Set up proper permissions for R (if they exist)
mkdir -p /home/TSYS-Tools/R
chown $USER_ID:$GROUP_ID /home/TSYS-Tools/R
# If there are additional arguments, run them as the created user
if [ $# -gt 0 ]; then
exec su -p TSYS-Tools -c "$*"
else
# Otherwise start an interactive bash shell as the created user
exec su -p TSYS-Tools -c "/bin/bash"
fi

Docker/TSYS-AIOS-GIS-Tools-GIS-Processing/Dockerfile (new file)

@@ -0,0 +1,32 @@
FROM tsys-aios-gis-tools-gis-base:latest
# Avoid prompts from apt
ENV DEBIAN_FRONTEND=noninteractive
# Install additional processing tools
RUN apt-get update && apt-get install -y \
jupyter-notebook \
nodejs \
npm \
&& rm -rf /var/lib/apt/lists/*
# Install additional Python libraries for processing
RUN pip3 install --break-system-packages \
jupyter \
notebook \
ipykernel \
apache-airflow \
prefect
# Generate a default Jupyter config; runtime options (ip, port, token) are passed on the command line at launch
RUN jupyter notebook --generate-config 2>/dev/null || true
# Create a working directory
WORKDIR /workspace
# Add entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Use the entrypoint script to handle user creation
ENTRYPOINT ["/entrypoint.sh"]

Docker/TSYS-AIOS-GIS-Tools-GIS-Processing/README.md (new file)

@@ -0,0 +1,111 @@
# TSYS-AIOS-GIS-Tools-GIS-Processing Container
This container is part of the TSYS-AIOS-GIS project and provides advanced GIS data processing capabilities with Jupyter notebooks and workflow tools.
## Overview
The TSYS-AIOS-GIS-Tools-GIS-Processing container extends the base GIS container with advanced processing tools, Jupyter notebooks for interactive analysis, and workflow orchestration tools. This container is designed for in-depth geospatial data analysis and complex ETL workflows.
## Tools Included
### Extends from Base Container
- All tools from TSYS-AIOS-GIS-Tools-GIS-Base container
- GIS libraries, weather data processing libraries, visualization tools
### Advanced Processing Tools
- **Jupyter Notebook**: Interactive environment for data analysis
- **Node.js/npm**: JavaScript runtime and package manager
- **IPyKernel**: IPython kernel for Jupyter
### Workflow Tools
- **Apache Airflow**: Workflow orchestration platform
- **Prefect**: Modern workflow management
## Usage
### Building the Processing Container
```bash
# From this directory
cd /home/localuser/AIWorkspace/TSYS-AIOS-GIS/Docker/TSYS-AIOS-GIS-Tools-GIS-Processing
# Use the wrapper script to automatically detect and set user IDs
./docker-compose-wrapper.sh build
# Or run commands in the processing container with automatic user mapping
./docker-compose-wrapper.sh run tsys-gis-processing [command]
# Example: Start Jupyter notebook server (pass --service-ports so the 8888 mapping applies to `run`)
./docker-compose-wrapper.sh run --service-ports tsys-gis-processing jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root --notebook-dir=/workspace --no-browser
# Example: Start an interactive bash session
./docker-compose-wrapper.sh run tsys-gis-processing bash
```
### Using with docker-compose directly
```bash
# Set environment variables and run docker-compose directly
LOCAL_USER_ID=$(id -u) LOCAL_GROUP_ID=$(id -g) docker-compose up --build
# Or export variables first
export LOCAL_USER_ID=$(id -u)
export LOCAL_GROUP_ID=$(id -g)
docker-compose up
```
### Using the wrapper script
```bash
# Build and start the processing container with automatic user mapping
./docker-compose-wrapper.sh up --build
# Start without rebuilding (Jupyter will be available on port 8888)
./docker-compose-wrapper.sh up
# View container status
./docker-compose-wrapper.sh ps
# Stop containers
./docker-compose-wrapper.sh down
```
## Jupyter Notebook Access
When running the container with `docker-compose up`, Jupyter notebook will be available at:
- http://localhost:8888
The notebook server is preconfigured to:
- Use the workspace directory as the notebook directory
- Allow access without authentication (intended for local development only; the port is published to the host)
- Accept connections from any IP address
## User ID Mapping (For File Permissions)
The container automatically detects and uses the host user's UID and GID to ensure proper file permissions. This means:
- Files created inside the container will have the correct ownership on the host
- No more root-owned files after container operations
- Works across different environments (development, production servers)
The container detects the user ID from the mounted workspace volume. If needed, you can override the default values by setting environment variables:
```bash
# Set specific user ID and group ID before running docker-compose
export LOCAL_USER_ID=1000
export LOCAL_GROUP_ID=1000
docker-compose up
```
Or run with inline environment variables:
```bash
LOCAL_USER_ID=1000 LOCAL_GROUP_ID=1000 docker-compose up
```
The container runs as a non-root user named `TSYS-Tools` with the detected host user's UID/GID.
## Data Processing Workflows
This container is optimized for:
- Interactive geospatial analysis using Jupyter notebooks
- Complex ETL workflows using Apache Airflow or Prefect
- Advanced visualization and reporting
- Model development and testing
- Integration with PostGIS and other databases
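As a minimal sketch of the Prefect side (flow and file names are illustrative; save the script somewhere under the mounted workspace and adjust the container path to match):
```bash
cat > flow_demo.py <<'EOF'
from prefect import flow, task

@task
def extract():
    return [1, 2, 3]  # placeholder for a real extract step

@flow
def etl():
    print(f"processed {len(extract())} records")

etl()
EOF
./docker-compose-wrapper.sh run tsys-gis-processing python3 /workspace/flow_demo.py
```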

Docker/TSYS-AIOS-GIS-Tools-GIS-Processing/docker-compose-wrapper.sh (new file)

@@ -0,0 +1,67 @@
#!/bin/bash
# docker-compose-wrapper.sh - Wrapper script to detect host UID/GID and run docker-compose
set -e # Exit on any error
# Detect the UID and GID of the user that owns the workspace directory (parent directory)
WORKSPACE_DIR="$(cd "$(dirname "$0")/../.." && pwd)"
echo "Detecting user ID from workspace directory: $WORKSPACE_DIR"
if [ -d "$WORKSPACE_DIR" ]; then
DETECTED_USER_ID=$(stat -c %u "$WORKSPACE_DIR" 2>/dev/null || echo 0)
DETECTED_GROUP_ID=$(stat -c %g "$WORKSPACE_DIR" 2>/dev/null || echo 0)
# If detection failed, try current user
if [ "$DETECTED_USER_ID" = "0" ]; then
DETECTED_USER_ID=$(id -u)
DETECTED_GROUP_ID=$(id -g)
fi
else
# Fallback to current user if workspace directory doesn't exist
DETECTED_USER_ID=$(id -u)
DETECTED_GROUP_ID=$(id -g)
fi
echo "Detected USER_ID=$DETECTED_USER_ID and GROUP_ID=$DETECTED_GROUP_ID"
# Set environment variables for docker-compose
export LOCAL_USER_ID=$DETECTED_USER_ID
export LOCAL_GROUP_ID=$DETECTED_GROUP_ID
# Show usage information
echo ""
echo "Usage: $0 [build|up|run <service> <command>|exec <service> <command>|down|ps]"
echo ""
echo "Examples:"
echo " $0 up # Start services"
echo " $0 build # Build containers"
echo " $0 run tsys-gis-processing bash # Run command in container"
echo " $0 down # Stop and remove containers"
echo ""
# Check if docker compose (new format) or docker-compose (old format) is available
if command -v docker &> /dev/null && docker compose version &> /dev/null; then
# Use new docker compose format
if [ $# -eq 0 ]; then
echo "No command provided. Running 'docker compose up'..."
docker compose up
else
# Execute the provided docker compose command
echo "Running: docker compose $*"
docker compose "$@"
fi
elif command -v docker-compose &> /dev/null; then
# Fallback to old docker-compose format
if [ $# -eq 0 ]; then
echo "No command provided. Running 'docker-compose up'..."
docker-compose up
else
# Execute the provided docker-compose command
echo "Running: docker-compose $*"
docker-compose "$@"
fi
else
echo "Error: Neither 'docker compose' nor 'docker-compose' command found."
echo "Please install Docker Compose to use this script."
exit 1
fi

Docker/TSYS-AIOS-GIS-Tools-GIS-Processing/docker-compose.yml (new file)

@@ -0,0 +1,20 @@
version: '3.8'
services:
tsys-gis-processing:
build:
context: .
dockerfile: Dockerfile
container_name: TSYS-AIOS-GIS-Tools-GIS-Processing
image: tsys-aios-gis-tools-gis-processing:latest
volumes:
- ../../../:/workspace:rw
working_dir: /workspace
stdin_open: true
tty: true
ports:
- "8888:8888"
environment:
- LOCAL_USER_ID=${LOCAL_USER_ID:-1000}
- LOCAL_GROUP_ID=${LOCAL_GROUP_ID:-1000}
user: "${LOCAL_USER_ID:-1000}:${LOCAL_GROUP_ID:-1000}"

Docker/TSYS-AIOS-GIS-Tools-GIS-Processing/entrypoint.sh (new file)

@@ -0,0 +1,49 @@
#!/bin/bash
# entrypoint.sh - Entrypoint script to handle user creation and permission setup at runtime
# Set default values if not provided
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
# If the resolved UID is root's (or detection failed upstream), detect IDs from the workspace volume
if [ "$USER_ID" = "0" ]; then
# Detect the UID and GID of the user that owns the workspace directory
if [ -d "/workspace" ]; then
USER_ID=$(stat -c %u /workspace 2>/dev/null || echo 1000)
GROUP_ID=$(stat -c %g /workspace 2>/dev/null || echo 1000)
else
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
fi
fi
echo "Starting with USER_ID=$USER_ID and GROUP_ID=$GROUP_ID"
# Create the group with specified GID
groupadd -f -g $GROUP_ID -o TSYS-Tools 2>/dev/null || groupmod -g $GROUP_ID -o TSYS-Tools
# Create the user with specified UID and add to the group
useradd -u $USER_ID -g $GROUP_ID -m -s /bin/bash -o TSYS-Tools 2>/dev/null || usermod -u $USER_ID -g $GROUP_ID -o TSYS-Tools
# Add user to sudo group for any necessary operations
usermod -aG sudo TSYS-Tools 2>/dev/null || true
# Make sure workspace directory exists and has proper permissions
mkdir -p /workspace
chown -R $USER_ID:$GROUP_ID /workspace
# Set up proper permissions for .local (if they exist)
mkdir -p /home/TSYS-Tools/.local
chown $USER_ID:$GROUP_ID /home/TSYS-Tools/.local
# Set up proper permissions for Jupyter (if they exist)
mkdir -p /home/TSYS-Tools/.jupyter
chown $USER_ID:$GROUP_ID /home/TSYS-Tools/.jupyter
# If there are additional arguments, run them as the created user
if [ $# -gt 0 ]; then
exec su -p TSYS-Tools -c "$*"
else
# Otherwise start an interactive bash shell as the created user
exec su -p TSYS-Tools -c "/bin/bash"
fi

Docker/TSYS-AIOS-GIS-Tools-Weather-Analysis/Dockerfile (new file)

@@ -0,0 +1,40 @@
FROM tsys-aios-gis-tools-weather-base:latest
# Avoid prompts from apt
ENV DEBIAN_FRONTEND=noninteractive
# Install additional analysis tools
RUN apt-get update && apt-get install -y \
jupyter-notebook \
nodejs \
npm \
octave \
&& rm -rf /var/lib/apt/lists/*
# Install additional Python libraries for weather analysis
RUN pip3 install --break-system-packages \
jupyter \
notebook \
ipykernel \
apache-airflow \
prefect \
folium \
plotly \
cartopy \
geoviews
# Install additional R packages for weather analysis
RUN R --slave -e "install.packages(c('lubridate', 'ggplot2', 'dplyr', 'tidyr', 'forecast', 'RColorBrewer'), repos='https://cran.rstudio.com/', dependencies=TRUE)"
# Generate a default Jupyter config; runtime options (ip, port, token) are passed on the command line at launch
RUN jupyter notebook --generate-config 2>/dev/null || true
# Create a working directory
WORKDIR /workspace
# Add entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Use the entrypoint script to handle user creation
ENTRYPOINT ["/entrypoint.sh"]

Docker/TSYS-AIOS-GIS-Tools-Weather-Analysis/README.md (new file)

@@ -0,0 +1,124 @@
# TSYS-AIOS-GIS-Tools-Weather-Analysis Container
This container is part of the TSYS-AIOS-GIS project and provides advanced weather data analysis capabilities with Jupyter notebooks and specialized meteorological tools.
## Overview
The TSYS-AIOS-GIS-Tools-Weather-Analysis container extends the base weather container with advanced analysis tools, Jupyter notebooks for interactive analysis, and specialized meteorological libraries. This container is designed for in-depth weather data analysis and balloon path prediction work.
## Tools Included
### Extends from Base Container
- All tools from TSYS-AIOS-GIS-Tools-Weather-Base container
- Weather data processing libraries, APIs, bulk download tools
### Advanced Analysis Tools
- **Jupyter Notebook**: Interactive environment for weather analysis
- **Node.js/npm**: JavaScript runtime and package manager
- **IPyKernel**: IPython kernel for Jupyter
- **GNU Octave**: Numerical computations (similar to MATLAB)
### Visualization & Forecasting Libraries
- **Cartopy**: Geospatial processing and visualization
- **Geoviews**: Geospatial data visualization
- **Folium**: Interactive maps with weather data overlays
- **Plotly**: Interactive weather visualizations
- **Forecast R package**: Time series forecasting
### Additional R packages
- **Lubridate**: Time series manipulation
- **Ggplot2/Tidyr/Dplyr**: Data analysis and visualization
- **RColorBrewer**: Color palettes for weather maps
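A quick sanity check of the forecasting stack against R's built-in AirPassengers series (a sketch; any time series object works):
```bash
./docker-compose-wrapper.sh run tsys-weather-analysis \
  Rscript -e 'library(forecast); fit <- auto.arima(AirPassengers); print(forecast(fit, h = 12))'
```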
## Usage
### Building the Weather Analysis Container
```bash
# From this directory
cd /home/localuser/AIWorkspace/TSYS-AIOS-GIS/Docker/TSYS-AIOS-GIS-Tools-Weather-Analysis
# Use the wrapper script to automatically detect and set user IDs
./docker-compose-wrapper.sh build
# Or run commands in the analysis container with automatic user mapping
./docker-compose-wrapper.sh run tsys-weather-analysis [command]
# Example: Start Jupyter notebook server (pass --service-ports so the 8889:8888 mapping applies to `run`)
./docker-compose-wrapper.sh run --service-ports tsys-weather-analysis jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root --notebook-dir=/workspace --no-browser
# Example: Start Octave for numerical computations
./docker-compose-wrapper.sh run tsys-weather-analysis octave
# Example: Start an interactive bash session
./docker-compose-wrapper.sh run tsys-weather-analysis bash
```
### Using with docker-compose directly
```bash
# Set environment variables and run docker-compose directly
LOCAL_USER_ID=$(id -u) LOCAL_GROUP_ID=$(id -g) docker-compose up --build
# Or export variables first
export LOCAL_USER_ID=$(id -u)
export LOCAL_GROUP_ID=$(id -g)
docker-compose up
```
### Using the wrapper script
```bash
# Build and start the analysis container with automatic user mapping
./docker-compose-wrapper.sh up --build
# Start without rebuilding (Jupyter will be available on port 8889)
./docker-compose-wrapper.sh up
# View container status
./docker-compose-wrapper.sh ps
# Stop containers
./docker-compose-wrapper.sh down
```
## Jupyter Notebook Access
When running the container with `docker-compose up`, Jupyter notebook will be available at:
- http://localhost:8889
The notebook server is preconfigured to:
- Use the workspace directory as the notebook directory
- Allow access without authentication (intended for local development only; the port is published to the host)
- Accept connections from any IP address
## User ID Mapping (For File Permissions)
The container automatically detects and uses the host user's UID and GID to ensure proper file permissions. This means:
- Files created inside the container will have the correct ownership on the host
- No more root-owned files after container operations
- Works across different environments (development, production servers)
The container detects the user ID from the mounted workspace volume. If needed, you can override the default values by setting environment variables:
```bash
# Set specific user ID and group ID before running docker-compose
export LOCAL_USER_ID=1000
export LOCAL_GROUP_ID=1000
docker-compose up
```
Or run with inline environment variables:
```bash
LOCAL_USER_ID=1000 LOCAL_GROUP_ID=1000 docker-compose up
```
The container runs as a non-root user named `TSYS-Tools` with the detected host user's UID/GID.
## Weather Analysis Workflows
This container is optimized for:
- Interactive weather data analysis using Jupyter notebooks
- Balloon path prediction using weather data
- Advanced meteorological calculations
- Time series forecasting
- Weather data visualization
- Climate analysis workflows
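For instance, a unit-aware MetPy calculation (inherited from the weather base image) can be run directly (a minimal sketch):
```bash
./docker-compose-wrapper.sh run tsys-weather-analysis python3 -c \
  "from metpy.calc import dewpoint_from_relative_humidity; \
from metpy.units import units; \
print(dewpoint_from_relative_humidity(25 * units.degC, 60 * units.percent))"
```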

Docker/TSYS-AIOS-GIS-Tools-Weather-Analysis/docker-compose-wrapper.sh (new file)

@@ -0,0 +1,67 @@
#!/bin/bash
# docker-compose-wrapper.sh - Wrapper script to detect host UID/GID and run docker-compose
set -e # Exit on any error
# Detect the UID and GID of the user that owns the workspace directory (parent directory)
WORKSPACE_DIR="$(cd "$(dirname "$0")/../.." && pwd)"
echo "Detecting user ID from workspace directory: $WORKSPACE_DIR"
if [ -d "$WORKSPACE_DIR" ]; then
DETECTED_USER_ID=$(stat -c %u "$WORKSPACE_DIR" 2>/dev/null || echo 0)
DETECTED_GROUP_ID=$(stat -c %g "$WORKSPACE_DIR" 2>/dev/null || echo 0)
# If detection failed, try current user
if [ "$DETECTED_USER_ID" = "0" ]; then
DETECTED_USER_ID=$(id -u)
DETECTED_GROUP_ID=$(id -g)
fi
else
# Fallback to current user if workspace directory doesn't exist
DETECTED_USER_ID=$(id -u)
DETECTED_GROUP_ID=$(id -g)
fi
echo "Detected USER_ID=$DETECTED_USER_ID and GROUP_ID=$DETECTED_GROUP_ID"
# Set environment variables for docker-compose
export LOCAL_USER_ID=$DETECTED_USER_ID
export LOCAL_GROUP_ID=$DETECTED_GROUP_ID
# Show usage information
echo ""
echo "Usage: $0 [build|up|run <service> <command>|exec <service> <command>|down|ps]"
echo ""
echo "Examples:"
echo " $0 up # Start services"
echo " $0 build # Build containers"
echo " $0 run tsys-weather-analysis bash # Run command in container"
echo " $0 down # Stop and remove containers"
echo ""
# Check if docker compose (new format) or docker-compose (old format) is available
if command -v docker &> /dev/null && docker compose version &> /dev/null; then
# Use new docker compose format
if [ $# -eq 0 ]; then
echo "No command provided. Running 'docker compose up'..."
docker compose up
else
# Execute the provided docker compose command
echo "Running: docker compose $*"
docker compose "$@"
fi
elif command -v docker-compose &> /dev/null; then
# Fallback to old docker-compose format
if [ $# -eq 0 ]; then
echo "No command provided. Running 'docker-compose up'..."
docker-compose up
else
# Execute the provided docker-compose command
echo "Running: docker-compose $*"
docker-compose "$@"
fi
else
echo "Error: Neither 'docker compose' nor 'docker-compose' command found."
echo "Please install Docker Compose to use this script."
exit 1
fi

Docker/TSYS-AIOS-GIS-Tools-Weather-Analysis/docker-compose.yml (new file)

@@ -0,0 +1,20 @@
version: '3.8'
services:
tsys-weather-analysis:
build:
context: .
dockerfile: Dockerfile
container_name: TSYS-AIOS-GIS-Tools-Weather-Analysis
image: tsys-aios-gis-tools-weather-analysis:latest
volumes:
- ../../../:/workspace:rw
working_dir: /workspace
stdin_open: true
tty: true
ports:
- "8889:8888"
environment:
- LOCAL_USER_ID=${LOCAL_USER_ID:-1000}
- LOCAL_GROUP_ID=${LOCAL_GROUP_ID:-1000}
user: "${LOCAL_USER_ID:-1000}:${LOCAL_GROUP_ID:-1000}"

Docker/TSYS-AIOS-GIS-Tools-Weather-Analysis/entrypoint.sh (new file)

@@ -0,0 +1,49 @@
#!/bin/bash
# entrypoint.sh - Entrypoint script to handle user creation and permission setup at runtime
# Set default values if not provided
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
# If the resolved UID is root's (or detection failed upstream), detect IDs from the workspace volume
if [ "$USER_ID" = "0" ]; then
# Detect the UID and GID of the user that owns the workspace directory
if [ -d "/workspace" ]; then
USER_ID=$(stat -c %u /workspace 2>/dev/null || echo 1000)
GROUP_ID=$(stat -c %g /workspace 2>/dev/null || echo 1000)
else
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
fi
fi
echo "Starting with USER_ID=$USER_ID and GROUP_ID=$GROUP_ID"
# Create the group with specified GID
groupadd -f -g $GROUP_ID -o TSYS-Tools 2>/dev/null || groupmod -g $GROUP_ID -o TSYS-Tools
# Create the user with specified UID and add to the group
useradd -u $USER_ID -g $GROUP_ID -m -s /bin/bash -o TSYS-Tools 2>/dev/null || usermod -u $USER_ID -g $GROUP_ID -o TSYS-Tools
# Add user to sudo group for any necessary operations
usermod -aG sudo TSYS-Tools 2>/dev/null || true
# Make sure workspace directory exists and has proper permissions
mkdir -p /workspace
chown -R $USER_ID:$GROUP_ID /workspace
# Set up proper permissions for .local (if they exist)
mkdir -p /home/TSYS-Tools/.local
chown $USER_ID:$GROUP_ID /home/TSYS-Tools/.local
# Set up proper permissions for Jupyter (if they exist)
mkdir -p /home/TSYS-Tools/.jupyter
chown $USER_ID:$GROUP_ID /home/TSYS-Tools/.jupyter
# If there are additional arguments, run them as the created user
if [ $# -gt 0 ]; then
exec su -p TSYS-Tools -c "$*"
else
# Otherwise start an interactive bash shell as the created user
exec su -p TSYS-Tools -c "/bin/bash"
fi

Docker/TSYS-AIOS-GIS-Tools-Weather-Base/Dockerfile (new file)

@@ -0,0 +1,72 @@
FROM debian:bookworm-slim
# Avoid prompts from apt
ENV DEBIAN_FRONTEND=noninteractive
# Install base packages for weather tools
RUN apt-get update && apt-get install -y \
bash \
curl \
wget \
git \
python3 \
python3-pip \
build-essential \
sudo \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Create symbolic link for python
RUN ln -s /usr/bin/python3 /usr/bin/python
# Install weather data processing tools
RUN apt-get update && apt-get install -y \
ftp \
gfortran \
libgfortran5 \
&& rm -rf /var/lib/apt/lists/*
# Install Python weather libraries (the CDO bindings are published on PyPI as "cdo", not "cdo-python")
RUN pip3 install --break-system-packages \
xarray \
cfgrib \
netcdf4 \
metpy \
siphon \
numpy \
pandas \
scipy \
matplotlib \
seaborn \
requests \
cdo
# Install Climate Data Operators (CDO)
RUN apt-get update && apt-get install -y \
cdo \
&& rm -rf /var/lib/apt/lists/*
# Install additional tools for weather data operations
RUN apt-get update && apt-get install -y \
nco \
ncl-ncarg \
&& rm -rf /var/lib/apt/lists/*
# Install R for statistical analysis, plus the GDAL/PROJ headers R's spatial packages compile against
RUN apt-get update && apt-get install -y \
r-base \
r-base-dev \
libgdal-dev \
libproj-dev \
&& rm -rf /var/lib/apt/lists/*
# Install R weather packages (rgdal was archived on CRAN in 2023; terra is its successor)
RUN R --slave -e "install.packages(c('raster', 'terra', 'ncdf4', 'rasterVis', 'ncmeta'), repos='https://cran.rstudio.com/', dependencies=TRUE)"
# Add entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Create a working directory
WORKDIR /workspace
# Use the entrypoint script to handle user creation
ENTRYPOINT ["/entrypoint.sh"]

Docker/TSYS-AIOS-GIS-Tools-Weather-Base/README.md (new file)

@@ -0,0 +1,121 @@
# TSYS-AIOS-GIS-Tools-Weather-Base Container
This container is part of the TSYS-AIOS-GIS project and provides a base weather data processing environment.
## Overview
The TSYS-AIOS-GIS-Tools-Weather-Base container is designed for weather data processing and analysis tasks. It includes essential tools for handling weather data formats (GRIB, NetCDF), accessing weather APIs, and performing meteorological calculations.
## Tools Included
### Core Tools
- **Base OS**: Debian Bookworm slim
- **Shell**: Bash
- **Programming Languages**:
- Python 3 with meteorological libraries
- R with weather analysis packages
### Weather Data Processing Libraries
- **xarray**: Multi-dimensional data in Python
- **cfgrib**: GRIB format handling
- **netCDF4**: NetCDF file handling
- **MetPy**: Meteorological calculations
- **Siphon**: Access to weather data from various sources
- **Numpy/Pandas/Scipy**: Scientific computing libraries
### Climate Data Operators
- **CDO (Climate Data Operators)**: Tools for climate data processing
- **NCO (NetCDF Operators)**: Operators for NetCDF files
- **NCL (NCAR Command Language)**: For complex climate analysis
### Visualization & Analysis
- **Matplotlib/Seaborn**: Statistical plots
- **Requests**: HTTP library for API access
- **R packages**: raster, terra, ncdf4, rasterVis, ncmeta (rgdal has been archived on CRAN)
### Additional Tools
- **FTP client**: For bulk weather data downloads
- **GFortran**: For compiling Fortran-based weather tools
## Usage
### Building the Weather Base Container
```bash
# From this directory
cd /home/localuser/AIWorkspace/TSYS-AIOS-GIS/Docker/TSYS-AIOS-GIS-Tools-Weather-Base
# Use the wrapper script to automatically detect and set user IDs
./docker-compose-wrapper.sh build
# Or run commands in the weather container with automatic user mapping
./docker-compose-wrapper.sh run tsys-weather-base [command]
# Example: Process a GRIB file with cfgrib
./docker-compose-wrapper.sh run tsys-weather-base python3 -c "import xarray as xr; ds = xr.open_dataset('file.grib', engine='cfgrib'); print(ds)"
# Example: Use CDO to process NetCDF files
./docker-compose-wrapper.sh run tsys-weather-base cdo info /workspace/weather_data.nc
# Example: Start Python with weather libraries
./docker-compose-wrapper.sh run tsys-weather-base python3
```
### Using with docker-compose directly
```bash
# Set environment variables and run docker-compose directly
LOCAL_USER_ID=$(id -u) LOCAL_GROUP_ID=$(id -g) docker-compose up --build
# Or export variables first
export LOCAL_USER_ID=$(id -u)
export LOCAL_GROUP_ID=$(id -g)
docker-compose up
```
### Using the wrapper script
```bash
# Build and start the base weather container with automatic user mapping
./docker-compose-wrapper.sh up --build
# Start without rebuilding
./docker-compose-wrapper.sh up
# View container status
./docker-compose-wrapper.sh ps
# Stop containers
./docker-compose-wrapper.sh down
```
## Weather Data Processing Workflows
This container is designed to handle:
- GRIB data format processing
- NetCDF data analysis
- NOAA and European weather API integration
- Bulk data download via HTTP/FTP
- Meteorological calculations
- Climate data processing with CDO/NCO
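Typical CDO/NCO invocations for these workflows look like this (file and variable names are illustrative):
```bash
# Convert a GRIB file to NetCDF, then summarize it with CDO
./docker-compose-wrapper.sh run tsys-weather-base cdo -f nc copy /workspace/gfs.grib /workspace/gfs.nc
./docker-compose-wrapper.sh run tsys-weather-base cdo sinfon /workspace/gfs.nc

# Extract a single variable with NCO
./docker-compose-wrapper.sh run tsys-weather-base ncks -v t2m /workspace/gfs.nc /workspace/t2m.nc
```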
## User ID Mapping (For File Permissions)
The container automatically detects and uses the host user's UID and GID to ensure proper file permissions. This means:
- Files created inside the container will have the correct ownership on the host
- No more root-owned files after container operations
- Works across different environments (development, production servers)
The container detects the user ID from the mounted workspace volume. If needed, you can override the default values by setting environment variables:
```bash
# Set specific user ID and group ID before running docker-compose
export LOCAL_USER_ID=1000
export LOCAL_GROUP_ID=1000
docker-compose up
```
Or run with inline environment variables:
```bash
LOCAL_USER_ID=1000 LOCAL_GROUP_ID=1000 docker-compose up
```
The container runs as a non-root user named `TSYS-Tools` with the detected host user's UID/GID.

Docker/TSYS-AIOS-GIS-Tools-Weather-Base/docker-compose-wrapper.sh (new file)

@@ -0,0 +1,67 @@
#!/bin/bash
# docker-compose-wrapper.sh - Wrapper script to detect host UID/GID and run docker-compose
set -e # Exit on any error
# Detect the UID and GID of the user that owns the workspace directory (parent directory)
WORKSPACE_DIR="$(cd "$(dirname "$0")/../.." && pwd)"
echo "Detecting user ID from workspace directory: $WORKSPACE_DIR"
if [ -d "$WORKSPACE_DIR" ]; then
DETECTED_USER_ID=$(stat -c %u "$WORKSPACE_DIR" 2>/dev/null || echo 0)
DETECTED_GROUP_ID=$(stat -c %g "$WORKSPACE_DIR" 2>/dev/null || echo 0)
# If detection failed, try current user
if [ "$DETECTED_USER_ID" = "0" ]; then
DETECTED_USER_ID=$(id -u)
DETECTED_GROUP_ID=$(id -g)
fi
else
# Fallback to current user if workspace directory doesn't exist
DETECTED_USER_ID=$(id -u)
DETECTED_GROUP_ID=$(id -g)
fi
echo "Detected USER_ID=$DETECTED_USER_ID and GROUP_ID=$DETECTED_GROUP_ID"
# Set environment variables for docker-compose
export LOCAL_USER_ID=$DETECTED_USER_ID
export LOCAL_GROUP_ID=$DETECTED_GROUP_ID
# Show usage information
echo ""
echo "Usage: $0 [build|up|run <service> <command>|exec <service> <command>|down|ps]"
echo ""
echo "Examples:"
echo " $0 up # Start services"
echo " $0 build # Build containers"
echo " $0 run tsys-weather-base bash # Run command in container"
echo " $0 down # Stop and remove containers"
echo ""
# Check if docker compose (new format) or docker-compose (old format) is available
if command -v docker &> /dev/null && docker compose version &> /dev/null; then
# Use new docker compose format
if [ $# -eq 0 ]; then
echo "No command provided. Running 'docker compose up'..."
docker compose up
else
# Execute the provided docker compose command
echo "Running: docker compose $*"
docker compose "$@"
fi
elif command -v docker-compose &> /dev/null; then
# Fallback to old docker-compose format
if [ $# -eq 0 ]; then
echo "No command provided. Running 'docker-compose up'..."
docker-compose up
else
# Execute the provided docker-compose command
echo "Running: docker-compose $*"
docker-compose "$@"
fi
else
echo "Error: Neither 'docker compose' nor 'docker-compose' command found."
echo "Please install Docker Compose to use this script."
exit 1
fi

Docker/TSYS-AIOS-GIS-Tools-Weather-Base/docker-compose.yml (new file)

@@ -0,0 +1,18 @@
version: '3.8'
services:
tsys-weather-base:
build:
context: .
dockerfile: Dockerfile
container_name: TSYS-AIOS-GIS-Tools-Weather-Base
image: tsys-aios-gis-tools-weather-base:latest
volumes:
- ../../../:/workspace:rw
working_dir: /workspace
stdin_open: true
tty: true
environment:
- LOCAL_USER_ID=${LOCAL_USER_ID:-1000}
- LOCAL_GROUP_ID=${LOCAL_GROUP_ID:-1000}
user: "${LOCAL_USER_ID:-1000}:${LOCAL_GROUP_ID:-1000}"

Docker/TSYS-AIOS-GIS-Tools-Weather-Base/entrypoint.sh (new file)

@@ -0,0 +1,49 @@
#!/bin/bash
# entrypoint.sh - Entrypoint script to handle user creation and permission setup at runtime
# Set default values if not provided
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
# If the resolved UID is root's (or detection failed upstream), detect IDs from the workspace volume
if [ "$USER_ID" = "0" ]; then
# Detect the UID and GID of the user that owns the workspace directory
if [ -d "/workspace" ]; then
USER_ID=$(stat -c %u /workspace 2>/dev/null || echo 1000)
GROUP_ID=$(stat -c %g /workspace 2>/dev/null || echo 1000)
else
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
fi
fi
echo "Starting with USER_ID=$USER_ID and GROUP_ID=$GROUP_ID"
# Create the group with specified GID
groupadd -f -g $GROUP_ID -o TSYS-Tools 2>/dev/null || groupmod -g $GROUP_ID -o TSYS-Tools
# Create the user with specified UID and add to the group
useradd -u $USER_ID -g $GROUP_ID -m -s /bin/bash -o TSYS-Tools 2>/dev/null || usermod -u $USER_ID -g $GROUP_ID -o TSYS-Tools
# Add user to sudo group for any necessary operations
usermod -aG sudo TSYS-Tools 2>/dev/null || true
# Make sure workspace directory exists and has proper permissions
mkdir -p /workspace
chown -R $USER_ID:$GROUP_ID /workspace
# Set up proper permissions for .local (if they exist)
mkdir -p /home/TSYS-Tools/.local
chown $USER_ID:$GROUP_ID /home/TSYS-Tools/.local
# Set up proper permissions for R (if they exist)
mkdir -p /home/TSYS-Tools/R
chown $USER_ID:$GROUP_ID /home/TSYS-Tools/R
# If there are additional arguments, run them as the created user
if [ $# -gt 0 ]; then
exec su -p TSYS-Tools -c "$*"
else
# Otherwise start an interactive bash shell as the created user
exec su -p TSYS-Tools -c "/bin/bash"
fi

GUIDEBOOK/AboutMe.md (new file)

@@ -0,0 +1,12 @@
My full name is Charles N Wyble. I use the online handle @ReachableCEO.
I am a strong believer in digital data sovereignty. I am a firm practitioner of self-hosting (using Cloudron on a Netcup VPS, with Coolify coming soon on a second VPS).
I am 41 years old.
I am a Democrat and believe strongly in the rule of law and the separation of powers.
I actively avoid the media.
I am a solo entrepreneur creating an ecosystem of entities called TSYS Group. (Please see TSYS.md for more on that.)
My professional background is in production technical operations, where I have worked since 2002.
I use many command-line AI agents (Codex, Coder, Qwen, Gemini) and wish to remain agent-agnostic at all times.
I am located in the United States of America. As of October 2025 I am in Central Texas; I will be relocating to Raleigh, North Carolina in April 2026.
I want to streamline my life using AI, relying on it for all aspects of my professional knowledge work.
I prefer relaxed but professional engagement and don't want to be flattered.

GUIDEBOOK/AgentRules.md (new file)

@@ -0,0 +1,26 @@
This file contains rules for you to follow.
Always refer to me as Charles. Please do not ever refer to me as "the human" or "the user".
Do not be a sycophant.
Avoid fluff in your responses.
Use this pattern for workflows:
Question -> Proposal -> Plan -> Prompt -> Implementation
Additional rules:
- When working with Docker containers, minimize root usage as much as possible. Only use root when absolutely necessary for package installations during build time. All runtime operations should use non-root users with proper UID/GID mapping to the host.
- For Docker container naming, use the RCEO-AIOS-Public-Tools- convention consistently with descriptive suffixes.
- Create thin wrapper scripts that detect and handle UID/GID mapping to ensure file permissions work across any host environment.
- Maintain disciplined naming and organization to prevent technical debt as the number of projects grows.
- Keep the repository root directory clean. Place all project-specific files and scripts in appropriate subdirectories rather than at the top level.
- Use conventional commits for all git commits with proper formatting: type(scope): brief description followed by more verbose explanation if needed.
- Commit messages should be beautiful and properly verbose, explaining what was done and why.
- Use the LLM's judgment for when to push and tag - delegate these decisions based on the significance of changes.
- All projects should include a collab/ directory with subdirectories: questions, proposals, plans, prompts, and audit.
- Follow the architectural approach: layered container architecture (base -> specialized layers), consistent security patterns (non-root user with UID/GID mapping), same operational patterns (wrapper scripts), and disciplined naming conventions.
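An example of the expected commit shape (type, scope, and body here are illustrative):
```bash
git commit -m "feat(docker): add weather analysis container" \
  -m "Extends the weather base image with Jupyter, Octave, and forecasting libraries so balloon path prediction work can run in CTO mode."
```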


@@ -0,0 +1,47 @@
# Architectural Approach
This document captures the architectural approach for project development in the AIOS-Public system.
## Container Architecture
### Layered Approach
- Base containers provide foundational tools and libraries
- Specialized containers extend base functionality for specific use cases
- Each layer adds specific capabilities while maintaining consistency
### Naming Convention
- Use `RCEO-AIOS-Public-Tools-` prefix consistently
- Include descriptive suffixes indicating container purpose
- Follow pattern: `RCEO-AIOS-Public-Tools-[domain]-[type]`
### Security Patterns
- Minimize root usage during build and runtime
- Implement non-root users for all runtime operations
- Use UID/GID mapping for proper file permissions across environments
- Detect host user IDs automatically through file system inspection
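The detection pattern above reduces to a few lines of shell (a sketch of what the wrapper scripts do):
```bash
# Prefer the owner of the workspace directory; fall back to the invoking user
WORKSPACE_DIR="$(cd "$(dirname "$0")/../.." && pwd)"
export LOCAL_USER_ID=$(stat -c %u "$WORKSPACE_DIR" 2>/dev/null || id -u)
export LOCAL_GROUP_ID=$(stat -c %g "$WORKSPACE_DIR" 2>/dev/null || id -g)
```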
### Operational Patterns
- Create thin wrapper scripts that handle environment setup
- Use consistent patterns for user ID detection and mapping
- Maintain same operational workflow across all containers
- Provide clear documentation in README files
### Organization Principles
- Separate COO mode (operational tasks) from CTO mode (R&D tasks) containers
- Create individual directories per container type
- Maintain disciplined file organization to prevent technical debt
- Keep repository root clean with project-specific files in subdirectories
## Documentation Requirements
- Each container must have comprehensive README
- Include usage examples and environment setup instructions
- Document security and permission handling
- Provide clear container mapping and purpose
## Implementation Workflow
1. Start with architectural design document
2. Create detailed implementation plan
3. Develop following established patterns
4. Test with sample data/usage
5. Document for end users
6. Commit with conventional commit messages

GUIDEBOOK/StartHere.md (new file)

@@ -0,0 +1,9 @@
This repository is where I will start all of my AI interactions from.
Unless we are making new tools, you won't be doing any work in this repository (other than when I tell you to commit/push anything in the tree).
You will be doing all your work in a new repository that I will tell you about. You will have all of the core knowledge from the GUIDEBOOK directory files, and you will follow the workflow and rules outlined in AgentRules.md in that new project repository.
Think of this repository as the top level of the home directory of a hyper-organized user. These markdown files and Docker containers are, in effect, the dotfiles.
Any work would be done in a subdirectory off of the user's home directory, not at the top level.

GUIDEBOOK/TSYS.md (new file)

@@ -0,0 +1,36 @@
This file documents the TSYS Group.
Legal entities (all filed and domiciled in the great state of Texas):
Turnkey Network Systems LLC (a series LLC)
RackRental.net Operating Company LLC (a standalone LLC) (all consulting and SaaS operations are run from here)
Suborbital Systems Development Company LLC (a standalone LLC) (this is my "moonshot" business and will be where all fundraising is done)
Americans For A Better Network INC (a Texas nonprofit) (plan to be a 501(c)(3)) (want to get a fiscal sponsor by end of 2025)
Side Door Group (a Texas nonprofit) (plan to be a 501(c)(4))
Side Door Solutions Group INC (a Texas nonprofit) (super PAC)
The overall goal of TSYS Group is to solve the digital divide through a combination of:
R&D
Operations
Advocacy/Lobbying/Education
We are fiercely FLO (free/libre/open), and our governance materials are open as well.
We want our operations/business model to be adopted by other passionate, pragmatic individuals to solve big problems (clean water, clean energy, governance, food shortages, etc.). We believe strongly that only a combination of private enterprise and government can solve these issues.
Series of Turnkey Network Systems LLC:
High Flight Network Operating Company (HFNOC) (will be a coop in all states that recognize it; currently in early formation stages). This will be the entity (a collection of sub-entities under this banner) that will own and operate (in coop/collective trust) balloons and ground stations for MorseNet (what we are calling the network we are building).
High Flight Network Finance Company (HFNFC) (will also be a coop, just like HFNOC; also in early formation stages). This will be the entity that handles network finance/construction/loans etc. The idea is to raise financing from Main Street. To the extent Wall Street participates, it is given only a financial interest, not governance.
We will not do security bundling and chase returns. The capital will earn a reasonable rate of return and reinvest into the coop to build more networks and keep debt and interest rates low.
RWSCP
RWFO
AP4AP

README.md (modified)

@@ -1,3 +1,76 @@
# TSYS-AIOS-GIS
TSYS-AIOS for GIS and Weather Data Processing
## Overview
This repository contains the GIS (Geographic Information System) and Weather data processing components of the TSYS AI Operating System. It provides specialized tools and workflows for handling geospatial data and meteorological datasets, particularly for infrastructure planning and balloon path prediction for TSYS Group projects.
## Architecture
This system follows the same disciplined container architecture as the parent AIOS-Public project:
- **Layered containers**: Base containers providing foundational tools with specialized containers extending functionality
- **Security patterns**: Non-root users with UID/GID mapping for proper file permissions
- **Operational patterns**: Wrapper scripts for automatic environment setup
- **Naming convention**: `TSYS-AIOS-GIS-Tools-[domain]-[type]` pattern
- **Organization**: Separate CTO mode (R&D) from COO mode (operational tasks)
## Components
### Docker Containers
#### GIS Containers
- `TSYS-AIOS-GIS-Tools-GIS-Base`: Foundation container with core GIS libraries (GDAL, DuckDB, PostGIS client tools)
- `TSYS-AIOS-GIS-Tools-GIS-Processing`: Advanced processing tools with Jupyter notebooks for ETL workflows
#### Weather Containers
- `TSYS-AIOS-GIS-Tools-Weather-Base`: Foundation container with weather data libraries (xarray, cfgrib, MetPy)
- `TSYS-AIOS-GIS-Tools-Weather-Analysis`: Advanced analysis tools with forecasting libraries
### Collaboration Structure
- **Questions**: Initial questions and requirements gathering
- **Proposals**: Design proposals and architectural decisions
- **Plans**: Implementation roadmaps and timelines
- **Prompts**: AI prompt templates for development
- **Audit**: Compliance and review records
## Usage
### Docker Container Usage
Each container is located in the `Docker/` directory with its own subdirectory:
```bash
# Navigate to a specific container directory
cd Docker/TSYS-AIOS-GIS-Tools-GIS-Base
# Build and start the container with automatic user mapping
./docker-compose-wrapper.sh up --build
# Run a specific command in the container
./docker-compose-wrapper.sh run tsys-gis-base [command]
```
## Design Principles
- **Self-hosted GIS stack**: Privacy and control over geospatial data processing
- **Multi-format support**: Shapefiles, GeoJSON, Parquet, GRIB, NetCDF
- **Database integration**: PostgreSQL/PostGIS client tools for spatial analysis
- **ETL workflows**: Support for both technical (GIS/Weather) and business data processing
- **Documentation integration**: Compatible with existing documentation tools
- **MinIO integration**: Output to MinIO buckets for business use
## Integration with AIOS Ecosystem
- Compatible with existing user management (UID/GID mapping)
- Can be orchestrated with documentation containers when needed
- Follows same naming conventions and wrapper script patterns
- Separate from documentation containers but can work together in CTO mode
## Next Steps
- Phase 1: Deploy and test base GIS container with sample datasets
- Phase 2: Deploy and test weather base container with GRIB support
- Phase 3: Advanced processing containers with visualization and Jupyter support
- Phase 4: Optional fusion container for integrated GIS+weather analysis (balloon path prediction)

View File

@@ -0,0 +1,175 @@
# GIS and Weather Data Processing Container Plan
## Overview
This document outlines the plan for creating Docker containers to handle GIS data processing and weather data analysis. These containers will be used exclusively in CTO mode for R&D and data analysis tasks, with integration to documentation workflows and MinIO for data output.
## Requirements
### GIS Data Processing
- Support for Shapefiles and other GIS formats
- Self-hosted GIS stack (not Google Maps or other commercial services)
- Integration with tools like GDAL, Tippecanoe, DuckDB
- Heavy use of PostGIS database
- Parquet format support for efficient data storage
- Based on reference workflows from:
- https://tech.marksblogg.com/american-solar-farms.html
- https://tech.marksblogg.com/canadas-odb-buildings.html
- https://tech.marksblogg.com/ornl-fema-buildings.html
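A hedged sketch of a tile-generation pipeline in the spirit of those reference posts, assuming the container ships GDAL and Tippecanoe (dataset names are placeholders):

```bash
# Convert a Shapefile to line-delimited GeoJSON, then build vector tiles
ogr2ogr -f GeoJSONSeq solar_farms.geojsonl solar_farms.shp
tippecanoe -o solar_farms.mbtiles -zg --drop-densest-as-needed solar_farms.geojsonl
```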
### Weather Data Processing
- GRIB data format processing
- NOAA and European weather APIs integration
- Bulk data download via HTTP/FTP
- Balloon path prediction system (to be forked/modified)
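As a rough sketch of the bulk-download requirement, GFS GRIB files can be fetched over HTTP from NOAA's NOMADS service. The URL layout below reflects common NOMADS conventions and changes periodically; verify it against current NOAA documentation before relying on it:

```bash
# Fetch a few GFS forecast hours from NOMADS (date/cycle are placeholders)
BASE="https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod"
CYCLE="gfs.20250101/00/atmos"
for fhr in 000 003 006; do
  wget -c "${BASE}/${CYCLE}/gfs.t00z.pgrb2.0p25.f${fhr}"
done
```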
### Shared Requirements
- Python-based with appropriate libraries (GeoPandas, DuckDB, etc.)
- R support for statistical analysis
- Jupyter notebook integration for experimentation
- MinIO bucket integration for data output
- Optional but enabled GPU support for performance
- All visualization types (command-line, web, desktop)
- Flexible ETL capabilities for both GIS/Weather and business workflows
## Proposed Container Structure
### RCEO-AIOS-Public-Tools-GIS-Base
- Foundation container with core GIS libraries
- Python + geospatial stack (GDAL, GEOS, PROJ, DuckDB, Tippecanoe)
- R with spatial packages
- PostGIS client tools
- Parquet support
- File format support (Shapefiles, GeoJSON, etc.)
### RCEO-AIOS-Public-Tools-GIS-Processing
- Extends GIS-Base with advanced processing tools
- Jupyter with GIS extensions
- Specialized ETL libraries
- Performance optimization tools
### RCEO-AIOS-Public-Tools-Weather-Base
- Foundation container with weather data libraries
- GRIB format support (cfgrib)
- NOAA and European API integration tools
- Bulk download utilities (HTTP/FTP)
### RCEO-AIOS-Public-Tools-Weather-Analysis
- Extends Weather-Base with advanced analysis tools
- Balloon path prediction tools
- Forecasting libraries
- Time series analysis
### RCEO-AIOS-Public-Tools-GIS-Weather-Fusion (Optional)
- Combined container for integrated GIS + Weather analysis
- For balloon path prediction using weather data
- High-resource container for intensive tasks
## Technology Stack
### GIS Libraries
- GDAL/OGR for format translation and processing
- GEOS for geometric operations
- PROJ for coordinate transformations
- PostGIS for spatial database operations
- DuckDB for efficient data processing with spatial extensions
- Tippecanoe for tile generation
- Shapely for Python geometric operations
- GeoPandas for Python geospatial data handling
- Rasterio for raster processing in Python
- Leaflet/Mapbox for web visualization
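Several of these pieces compose naturally; for instance, DuckDB's spatial extension can ingest a Shapefile and persist it as Parquet in a few lines (a sketch only; file names are placeholders):

```bash
# Load a Shapefile into DuckDB and export it as Parquet
duckdb gis.duckdb <<'SQL'
INSTALL spatial;
LOAD spatial;
-- Read the Shapefile directly via the spatial extension
CREATE TABLE buildings AS SELECT * FROM ST_Read('buildings.shp');
COPY buildings TO 'buildings.parquet' (FORMAT PARQUET);
SQL
```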
### Data Storage & Processing
- DuckDB with spatial extensions
- Parquet format support
- MinIO client tools for data output
- PostgreSQL client for connecting to external databases
### Weather Libraries
- xarray for multi-dimensional data in Python
- cfgrib for GRIB format handling
- MetPy for meteorological calculations
- Climate Data Operators (CDO) for climate data processing
- R packages: raster, ncdf4, rasterVis (note: rgdal has been retired from CRAN; sf and terra are the current replacements)
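For instance, CDO alone covers quick inspection and GRIB-to-NetCDF conversion, which hands data off cleanly to xarray or R (a sketch; file names are placeholders):

```bash
cdo sinfov forecast.grib                   # summarize variables and grids in a GRIB file
cdo -f nc copy forecast.grib forecast.nc   # convert GRIB -> NetCDF for xarray/R workflows
```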
### Visualization
- Folium for interactive maps
- Plotly for time series visualization
- Matplotlib/Seaborn for statistical plots
- R visualization packages
- Command-line visualization tools
### ETL and Workflow Tools
- Apache Airflow (optional in advanced containers)
- Prefect or similar workflow orchestrators
- DuckDB for ETL operations
- Pandas/Dask for large data processing
## Container Deployment Strategy
### Workstation Prototyping
- Lighter containers for development and testing
- Optional GPU support
- MinIO client for data output testing
### Production Servers
- Full-featured containers with all processing capabilities
- GPU-enabled variants where applicable
- Optimized for large RAM/CPU/disk requirements
## Security & User Management
- Follow same non-root user pattern as documentation containers
- UID/GID mapping for file permissions
- Minimal necessary privileges
- Proper container isolation
- Secure access to MinIO buckets
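A minimal sketch of what the wrapper script pattern might look like; the real script and variable names (`HOST_UID`/`HOST_GID` here) may differ:

```bash
#!/usr/bin/env bash
# docker-compose-wrapper.sh -- minimal sketch of the UID/GID mapping pattern.
# The compose file is assumed to reference these, e.g. user: "${HOST_UID}:${HOST_GID}".
export HOST_UID="$(id -u)"
export HOST_GID="$(id -g)"
exec docker-compose "$@"
```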
## Integration with Existing Stack
- Compatible with existing user management approach
- Can be orchestrated with documentation containers when needed
- Follow same naming conventions
- Use same wrapper script patterns
- Separate from documentation containers but can work together in CTO mode
## Implementation Phases
### Phase 1: Base GIS Container
- Create GIS-Base with GDAL, DuckDB, PostGIS client tools
- Implement Parquet and Shapefile support
- Test with sample datasets from reference posts
- Validate MinIO integration
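A hedged Phase 1 smoke test might look like the following; the directory and service names are assumptions based on the naming convention above:

```bash
cd Docker/RCEO-AIOS-Public-Tools-GIS-Base
./docker-compose-wrapper.sh run gis-base gdalinfo --version
./docker-compose-wrapper.sh run gis-base duckdb :memory: "SELECT 1 AS ok"
./docker-compose-wrapper.sh run gis-base psql --version
```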
### Phase 2: Weather Base Container
- Create Weather-Base with GRIB support
- Integrate NOAA and European API tools
- Implement bulk download capabilities
- Test with weather data sources
### Phase 3: Processing Containers
- Create GIS-Processing container with ETL tools
- Create Weather-Analysis container with prediction tools
- Add visualization and Jupyter support
- Implement optional GPU support
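For the optional GPU support, a hedged sketch assuming the NVIDIA Container Toolkit is installed on the host (the image tag is a placeholder); containers should degrade gracefully when `--gpus` is omitted:

```bash
# Verify GPU passthrough into a processing container
docker run --rm --gpus all gis-processing:latest nvidia-smi
```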
### Phase 4: Optional Fusion Container
- Combined container for balloon path prediction
- Integration of GIS and weather data
- High-complexity, high-resource usage
## Data Flow Architecture
- ETL workflows for processing public datasets
- Output to MinIO buckets for business use
- Integration with documentation tools for CTO mode workflows
- Support for both GIS/Weather ETL (CTO) and business ETL (COO)
## Next Steps
1. Review and approve this enhanced plan
2. Begin Phase 1 implementation
3. Test with sample data from reference workflows
4. Iterate based on findings
## Risks & Considerations
- Large container sizes due to GIS libraries and dependencies
- Complex dependency management, especially with DuckDB and PostGIS
- Computational resource requirements, especially for large datasets
- GPU support implementation complexity
- Bulk data download and processing performance

View File

@@ -0,0 +1,35 @@
# GIS and Weather Data Processing - AI Prompt Template
## Purpose
This prompt template is designed to guide AI agents in implementing GIS and weather data processing containers following established patterns.
## Instructions for AI Agent
When implementing GIS and weather data processing containers:
1. Follow the established container architecture pattern (base -> specialized layers)
2. Maintain consistent naming convention: RCEO-AIOS-Public-Tools-[domain]-[type]
3. Implement non-root user with UID/GID mapping
4. Create appropriate Dockerfiles and docker-compose configurations
5. Include proper documentation and README files
6. Add wrapper scripts for environment management
7. Test with sample data to verify functionality
8. Follow same security and operational patterns as existing containers
## Technical Requirements
- Use Debian Bookworm slim as base OS
- Include appropriate GIS libraries (GDAL, GEOS, PROJ, etc.)
- Include weather data processing libraries (xarray, netCDF4, etc.)
- Implement Jupyter notebook support where appropriate
- Include R and Python stacks as needed
- Add visualization tools (Folium, Plotly, etc.)
## Quality Standards
- Ensure containers build without errors
- Verify file permissions work across environments
- Test with sample datasets
- Document usage clearly
- Follow security best practices
- Maintain consistent user experience with existing containers
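A hedged sketch of how these standards might be spot-checked; the service name is an assumption:

```bash
# Containers build without errors
./docker-compose-wrapper.sh build
# File permissions work across environments (workspace is writable as the mapped user)
./docker-compose-wrapper.sh run gis-base \
  sh -c 'touch /workspace/.permcheck && rm /workspace/.permcheck'
```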

View File

@@ -0,0 +1,64 @@
# GIS and Weather Data Processing Container Proposal
## Proposal Summary
Create specialized Docker containers for GIS data processing and weather data analysis to support CTO-mode R&D activities, particularly for infrastructure planning and balloon path prediction for your TSYS Group projects.
## Business Rationale
As GIS and weather data analysis become increasingly important for your TSYS Group projects (particularly for infrastructure planning like solar farms and building datasets, and balloon path prediction), there's a need for specialized containers that can handle these data types efficiently while maintaining consistency with existing infrastructure patterns. The containers will support:
- Self-hosted GIS stack for privacy and control
- Processing public datasets (NOAA, European APIs, etc.)
- ETL workflows for both technical and business data processing
- Integration with MinIO for data output to business systems
## Technical Approach
- Follow the same disciplined container architecture as the documentation tools
- Use layered approach with base and specialized containers
- Implement same security patterns (non-root user, UID/GID mapping)
- Maintain consistent naming conventions
- Use same operational patterns (wrapper scripts, etc.)
- Include PostGIS, DuckDB, and optional GPU support
- Implement MinIO integration for data output
- Support for prototyping on workstations and production on large servers
## Technology Stack
- **GIS Tools**: GDAL, Tippecanoe, DuckDB with spatial extensions
- **Database**: PostgreSQL/PostGIS client tools
- **Formats**: Shapefiles, Parquet, GRIB, GeoJSON
- **Weather**: cfgrib, xarray, MetPy
- **ETL**: Pandas, Dask, custom workflow tools
- **APIs**: NOAA, European weather APIs
- **Visualization**: Folium, Plotly, command-line tools
## Benefits
- Consistent environment across development (workstations) and production (large servers)
- Proper file permission handling across different systems
- Isolated tools prevent dependency conflicts
- Reproducible analysis environments for GIS and weather data
- Integration with documentation tools for CTO mode workflows
- Support for both technical (GIS/Weather) and business (COO) ETL workflows
- Scalable architecture with optional GPU support
- Data output capability to MinIO buckets for business use
## Resource Requirements
- Development time: 3-4 weeks for complete implementation
- Storage: Additional container images (est. 3-6GB each)
- Compute: Higher requirements for processing (can be isolated to CTO mode)
- Optional: GPU resources for performance-intensive tasks
## Expected Outcomes
- Improved capability for spatial and weather data analysis
- Consistent environments across development and production systems
- Better integration with documentation workflows
- Faster setup for ETL projects (both technical and business)
- Efficient processing of large datasets using DuckDB and Parquet
- Proper data output to MinIO buckets for business use
- Reduced technical debt through consistent patterns
## Implementation Timeline
- Week 1: Base GIS container with PostGIS, DuckDB, and data format support
- Week 2: Base Weather container with GRIB support and API integration
- Week 3: Advanced processing containers with Jupyter and visualization
- Week 4: Optional GPU variants and MinIO integration testing
## Approval Request
Please review and approve this proposal to proceed with implementation of the GIS and weather data processing containers that will support your infrastructure planning and balloon path prediction work.

View File

@@ -0,0 +1,87 @@
# GIS and Weather Data Processing - Initial Questions
## Core Questions
1. What specific GIS formats and operations are most critical for your current projects?
Well, I am not entirely sure. I am guessing that I'll need to pull in Shapefiles. I will be working with an
entirely self-hosted GIS stack (not Google Maps or anything). I know tools like GDAL and Tippecanoe exist,
and formats like Parquet. Maybe DuckDB?
Reference these posts:
https://tech.marksblogg.com/american-solar-farms.html
https://tech.marksblogg.com/canadas-odb-buildings.html
https://tech.marksblogg.com/ornl-fema-buildings.html
for the types of workflows that I would like to run.
Extract patterns, architecture, and approaches, along with the specific reductions to practice.
2. What weather data sources and APIs do you currently use or plan to use?
None currently. But I'll be hacking/forking a system to predict balloon paths. I suspect I'll need to process GRIB data.
I'll also probably use the NOAA and European equivalent APIs, and maybe some bulk HTTP/FTP downloads.
3. Are there any specific performance requirements for processing large datasets?
I suspect I'll do some early prototyping with small data sets on my workstation, and then run the containers with the real data sets on my big RAM/CPU/disk servers.
4. Do you need integration with specific databases (PostGIS, etc.)?
Yes, I will definitely be using PostGIS heavily.
## Technical Questions
1. Should we include both Python and R stacks in the same containers or separate them?
I am not sure. Whatever you think is best.
2. What level of visualization capability is needed (command-line, web-based, desktop)?
All of those I think. I want flexibility.
3. Are there any licensing constraints or requirements to consider?
I will be working only with public data sets.
4. Do you need GPU support for any processing tasks?
Yes, but make it optional. I don't want to be blocked by GPU complexity right now.
## Integration Questions
1. How should GIS/Weather outputs integrate with documentation workflows?
I will be using the GIS/Weather containers in CTO mode only. I will also be using the documentation tools in CTO mode alongside them.
I think, for now, they can be siblings without strong integration.
**ANSWER**: GIS/Weather and documentation containers will operate as siblings in CTO mode, with loose integration for now.
2. Do you need persistent data storage within containers?
I do not think so. I will use Docker Compose to pass in directory paths.
Oh, and I will want to push finished data to MinIO buckets.
I don't know how best to architect my ETL toolbox. I will mostly be doing ETL on GIS/Weather data, but I can also see needing to do other business-type ETL workflows in COO mode.
**ANSWER**: Use Docker compose volume mounts for data input/output. Primary output destination will be MinIO buckets for business use. ETL toolbox should handle both GIS/Weather (CTO) and business (COO) workflows.
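A hedged sketch of that agreed pattern, run inside a container with volume-mounted input; the MinIO alias, endpoint, credentials, and bucket name are all assumptions:

```bash
# Register the MinIO endpoint (alias and credentials are placeholders)
mc alias set minio https://minio.example.internal "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"
# Process mounted input, then push the finished data set to a business bucket
ogr2ogr -f Parquet /workspace/out/roads.parquet /workspace/in/roads.shp
mc cp /workspace/out/roads.parquet minio/business-data/
```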
3. What level of integration with existing documentation containers is desired?
**ANSWER**: Sibling relationship with loose integration. Both will be used in CTO mode but for different purposes.
4. Are there specific deployment environments to target (local, cloud, edge)?
Well, the ultimate goal is for some data sets to be pushed to MinIO buckets for use by various lines of business.
This is all fairly new to me. I am a technical operations/system admin easing my way into DevOps/SRE and SWE.
**ANSWER**: Primarily local deployment (workstation for prototyping, large servers for production). Data output to MinIO for business use. Targeting self-hosted environments for full control and privacy.