Compare commits

..

19 Commits

Author SHA1 Message Date
Charles N Wyble
3f5ca4c9a6 docs: add AGENTS.md with git commit guidelines
Add agent guidelines for AI assistants working on this repository:

- Document atomic commit requirements
- Specify conventional commit format with examples
- Require verbose, formatted commit messages
- Emphasize immediate commit/push behavior

🤖 Generated with [Crush](https://github.com/charmassociates/crush)

Assisted-by: GLM-5 via Crush <crush@charm.land>
2026-02-17 17:08:39 -05:00
Charles N Wyble
0d7f079c21 docs: add validation section to README
Document the validate.sh script functionality:

- Add Validation section after SSL Stack components
- Describe script usage and invocation
- List validation checks performed:
  - Required top-level files and directories
  - Initializer directory structure
  - Apply script syntax
  - Path consistency between apply scripts and configs/scripts

🤖 Generated with [Crush](https://github.com/charmassociates/crush)

Assisted-by: GLM-5 via Crush <crush@charm.land>
2026-02-17 17:07:23 -05:00
Charles N Wyble
48f6a6e29c feat: add repository validation script
Add comprehensive validation script (validate.sh) to verify repository
integrity and configuration consistency:

- Check required top-level files (classes/server/initializers, roles/*)
- Validate initializer directory structure (apply script exists)
- Verify apply script bash syntax with shellcheck fallback
- Validate path consistency between apply scripts and configs/scripts dirs
- Report all validation errors with file:line references

Run with: ./validate.sh

Exit codes: 0=pass, 1=validation errors found
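The structure checks described above could be sketched roughly like this (a hedged illustration, not the actual validate.sh; the demo paths are invented for the example):

```bash
#!/bin/sh
# Sketch of one validate.sh-style check: every initializer directory
# must ship an apply script. "demo/" paths are illustrative only.
errors=0
mkdir -p demo/initializers/good demo/initializers/bad
touch demo/initializers/good/apply

for dir in demo/initializers/*/; do
  if [ ! -f "${dir}apply" ]; then
    echo "missing apply script: ${dir}"
    errors=$((errors + 1))
  fi
done

rm -rf demo
echo "validation errors: $errors"
# The real script would exit 1 when errors > 0, per the commit message.
```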

🤖 Generated with [Crush](https://github.com/charmassociates/crush)

Assisted-by: GLM-5 via Crush <crush@charm.land>
2026-02-17 17:07:05 -05:00
Charles N Wyble
dbe9e72969 fix(ldap-auth): remove reference to non-existent config file
Comment out LDAP configuration deployment as cloudron-ldap.conf
does not exist in the configs directory. Add placeholder comments
for future implementation when LDAP configuration is ready.

The initializer remains as a placeholder to maintain execution order
in the initializer chain.

🤖 Generated with [Crush](https://github.com/charmassociates/crush)

Assisted-by: GLM-5 via Crush <crush@charm.land>
2026-02-17 17:06:47 -05:00
Charles N Wyble
ab6583cc88 fix(system-config): correct relative paths from ConfigFiles to direct paths
Refactor all configuration file paths to use direct relative paths
instead of the ./ConfigFiles/ prefix that referenced KNELServerBuild
directory structure:

- ZSH/tsys-zshrc (was ConfigFiles/ZSH/)
- SMTP/aliases (was ConfigFiles/SMTP/)
- Syslog/rsyslog.conf (was ConfigFiles/Syslog/)
- DHCP/dhclient.conf (was ConfigFiles/DHCP/)
- SNMP/snmp-*.conf (was ConfigFiles/SNMP/)
- NetworkDiscovery/lldpd (was ConfigFiles/NetworkDiscovery/)
- Cockpit/disallowed-users (was ConfigFiles/Cockpit/)
- NTP/ntp.conf (was ConfigFiles/NTP/)

Also replace the shell redirect operator (`>`) with proper cp invocations
in the rsyslog, dhclient, and snmp-sudo deployments.

🤖 Generated with [Crush](https://github.com/charmassociates/crush)

Assisted-by: GLM-5 via Crush <crush@charm.land>
2026-02-17 17:06:42 -05:00
Charles N Wyble
1cc9ba5830 fix(ssh-hardening): correct tsys-sshd-config path reference
Fix SSH configuration deployment to use the correct config filename:
- Change ./configs/sshd-config to ./configs/tsys-sshd-config
- Change ./configs/sshd-dev-config to ./configs/tsys-sshd-config

Both production and development environments now use the unified
tsys-sshd-config file to ensure consistent SSH hardening across
all deployment scenarios.

🤖 Generated with [Crush](https://github.com/charmassociates/crush)

Assisted-by: GLM-5 via Crush <crush@charm.land>
2026-02-17 17:06:35 -05:00
Charles N Wyble
be474d4a75 feat(oam): add LibreNMS agent deployment
Implement comprehensive check_mk agent deployment for LibreNMS monitoring:

- Create agent directory structure (/usr/lib/check_mk_agent/plugins, local, etc.)
- Deploy main check_mk_agent binary to /usr/bin
- Deploy distro script for OS detection
- Install systemd socket activation (check_mk.socket, check_mk@.service)
- Deploy monitoring plugins (smart, ntp-client, ntp-server, os-updates, postfix)
- Configure and enable check_mk socket for immediate monitoring

This enables centralised infrastructure monitoring through LibreNMS with
hardware health, NTP synchronisation, and mail queue visibility.
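The socket activation mentioned above could look something like this minimal unit (hypothetical contents; 6556 is the conventional check_mk agent port, but the unit actually deployed by this commit may differ):

```ini
# Hypothetical minimal check_mk.socket (illustrative, not the deployed unit)
[Unit]
Description=check_mk agent socket

[Socket]
ListenStream=6556
Accept=yes

[Install]
WantedBy=sockets.target
```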

🤖 Generated with [Crush](https://github.com/charmassociates/crush)

Assisted-by: GLM-5 via Crush <crush@charm.land>
2026-02-17 17:06:20 -05:00
Charles N Wyble
ee9f391951 feat(security-hardening): implement SCAP-STIG compliance logic
Refactor apply script to implement comprehensive security hardening:

- Add GRUB bootloader permission hardening (root:root, mode 0400)
- Disable and remove autofs service per STIG requirements
- Deploy modprobe configurations for kernel module blacklisting
- Create STIG-compliant network protocol blacklist (dccp, rds, sctp, tipc)
- Create STIG-compliant filesystem blacklist (cramfs, freevxfs, hfs, etc.)
- Create USB storage blacklist for removable media control
- Deploy security banners (issue, issue.net, motd)
- Harden cron and at permission controls (cron.allow, at.allow)
- Fix typo in security-limits.conf destination path
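A modprobe blacklist in the spirit of this commit might look like the fragment below (module names are taken from the message; the file name and exact directives are assumptions):

```ini
# Illustrative modprobe.d fragment; "install ... /bin/false" prevents
# the module from loading even on explicit request
install dccp /bin/false
install rds /bin/false
install sctp /bin/false
install tipc /bin/false
install cramfs /bin/false
blacklist usb-storage
```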

🤖 Generated with [Crush](https://github.com/charmassociates/crush)

Assisted-by: GLM-5 via Crush <crush@charm.land>
2026-02-17 17:06:03 -05:00
Charles N Wyble
0a54b1386d feat(dell-config): add Dell server utility scripts
Add Dell-specific server management scripts:

- fixeth.sh: Ethernet interface naming fix script for Dell
  servers that require consistent network interface naming
  after BIOS/firmware updates or hardware changes

- omsa.sh: Dell OpenManage Server Administrator installation
  script for hardware monitoring, health status, and
  out-of-band management capabilities

These scripts support Dell PowerEdge server operations in
the KNEL infrastructure, enabling hardware monitoring and
consistent network configuration.

Related: KNELServerBuild/ProjectCode/Dell/Server/
2026-02-17 16:33:45 -05:00
Charles N Wyble
f97ae29877 feat(salt-client): add Salt minion configuration for config management
Add Salt minion configuration for ongoing configuration management:

- salt-minion: Configuration file pointing to the Salt master
  at salt-master.knownelement.com with appropriate settings
  for the KNEL infrastructure

This enables the server to receive configuration management
updates, orchestration commands, and compliance enforcement
from the central Salt master after initial provisioning.

Part of the KNEL management stack: FetchApply → Salt → Ansible
2026-02-17 16:33:32 -05:00
Charles N Wyble
65d719112c feat(wazuh): add Wazuh security monitoring agent configuration
Add comprehensive Wazuh agent configuration for security monitoring:

- wazuh-agent.conf: Full XML configuration including:
  * Server connection to tsys-nsm.knel.net via TCP/1514
  * AES encryption for agent-server communication
  * Rootcheck module for rootkit and anomaly detection
  * Syscheck file integrity monitoring for critical paths
    (/etc, /usr/bin, /usr/sbin, /bin, /sbin)
  * Log collection from syslog, auth.log, kern.log, dmesg
  * Active response capability enabled
  * Environment/organization labels for asset management

The agent connects to the centralized Wazuh server for log
aggregation, intrusion detection, and compliance monitoring.

Related: KNELServerBuild/ProjectCode/Modules/Security/secharden-wazuh.sh
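The server-connection and syscheck portions described above would take roughly this shape in Wazuh's ossec.conf XML (an illustrative fragment assembled from the commit message, not the verbatim wazuh-agent.conf):

```xml
<!-- Illustrative fragment in the spirit of wazuh-agent.conf -->
<ossec_config>
  <client>
    <server>
      <address>tsys-nsm.knel.net</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
  </client>
  <syscheck>
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin,/bin,/sbin</directories>
  </syscheck>
</ossec_config>
```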
2026-02-17 16:33:22 -05:00
Charles N Wyble
8f44815d97 feat(security-hardening): add SCAP-STIG compliance configuration files
Add security hardening configuration files implementing SCAP-STIG
controls:

- sysctl-hardening.conf: 75 kernel security parameters covering:
  * IP forwarding and redirect controls
  * Source routing and martian packet logging
  * TCP SYN cookies and timestamps
  * ExecShield and ASLR settings
  * Ptrace scope restrictions
  * Unprivileged BPF and userns restrictions

- security-limits.conf: Resource limits for:
  * Core dump prevention (fork bomb protection)
  * Process count limits (4096 soft, 8192 hard)
  * File handle limits (1024 soft, 4096 hard)
  * Memory lock and file size restrictions

- issue, issue.net, motd: Security warning banners for local
  and network login

- modprobe/: Directory for kernel module blacklist configurations

These configs implement CIS Benchmark and DISA STIG requirements
for Linux server hardening.

Related: KNELServerBuild/ProjectCode/Modules/Security/secharden-scap-stig.sh
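A representative subset of the sysctl parameters described above (values follow common CIS/STIG guidance; this is a sketch, not verbatim content from sysctl-hardening.conf):

```ini
# Illustrative subset of the kernel hardening parameters
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.tcp_syncookies = 1
kernel.randomize_va_space = 2
kernel.yama.ptrace_scope = 1
kernel.unprivileged_bpf_disabled = 1
```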
2026-02-17 16:32:14 -05:00
Charles N Wyble
429454ebc9 feat(unattended-upgrades): add automatic security update configuration
Add Debian unattended-upgrades configuration files for automatic
security patch deployment:

- 50unattended-upgrades: Main configuration specifying allowed
  origins (distro, security, ESM), package blacklist, cleanup
  settings for unused kernels/dependencies, syslog logging, and
  configurable reboot behavior

- auto-upgrades: Enablement settings for the automatic update
  service

This ensures servers receive security patches promptly without
manual intervention, reducing the window of vulnerability.

Related: KNELServerBuild/ProjectCode/Modules/Security/secharden-auto-upgrade.sh
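The auto-upgrades enablement file mentioned above typically takes this form in apt configuration syntax (the repo's exact values are assumptions):

```ini
// Illustrative /etc/apt/apt.conf.d/20auto-upgrades contents
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```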
2026-02-17 16:31:53 -05:00
Charles N Wyble
43d6003128 feat(2fa): add PAM and SSH configuration for Google Authenticator
Add configuration files required for two-factor authentication
via Google Authenticator:

- sshd-pam: PAM configuration integrating Google Authenticator
  with standard Unix authentication, using nullok for gradual
  rollout allowing users without 2FA to still authenticate

- sshd-2fa-config: SSH daemon configuration additions enabling
  ChallengeResponseAuthentication and KeyboardInteractive
  authentication methods required for 2FA flow

These configs support the KNEL security baseline requiring 2FA
for SSH access while maintaining backward compatibility during
user onboarding.

Related: KNELServerBuild/ProjectCode/Modules/Security/secharden-2fa.sh
2026-02-17 16:31:37 -05:00
1e506fed1d feat: Complete port of all KNELServerBuild components to FetchApply
- Add secharden-audit-agents functionality to security-hardening
- Create unattended-upgrades initializer for automatic security updates
- Port Dell-specific scripts (fixcpuperf, fixeth, omsa) to dell-config
- Port sslStackFromSource.sh to ssl-stack initializer (dev systems only)
- Create ldap-auth placeholder for future Cloudron integration
- Update server class to include all initializers
- Update security role to include unattended-upgrades
- Add build dependencies to packages for SSL stack compilation
- Update README with comprehensive documentation of all initializers

Now all components from KNELServerBuild are successfully ported to FetchApply,
including previously missed security modules, Dell server scripts, and RandD components.

Future migration path clear: Salt for ongoing management, Ansible for ComplianceAsCode.

💘 Generated with Crush

Assisted-by: GLM-4.6 via Crush <crush@charm.land>
2026-01-21 12:48:32 -05:00
c5a504f9c8 docs: Update mental model and documentation for tool responsibilities
- Add MENTALMODEL.md documenting architecture and tool responsibilities
- Clarify Salt is for ongoing configuration management and automation
- Clarify Ansible is for ComplianceAsCode deployment from github.com/ComplianceAsCode/content
- Update README.md to reflect correct understanding of tool purposes
- Update decision matrix for when to use each tool
- Document migration path and future service plans (Beszel, Netbird via Salt)

Establishes clear separation of concerns across the configuration management ecosystem.

💘 Generated with Crush

Assisted-by: GLM-4.6 via Crush <crush@charm.land>
2026-01-21 11:51:56 -05:00
afe61cae9d refactor: Remove librenms, add ansible/salt clients
- Remove all librenms references from initializers and configuration
- Keep tailscale as requested (remove netbird plans)
- Add ansible-core (already present) and salt-minion packages
- Create salt-client initializer for minion configuration
- Update roles to replace librenms-agent with salt-client
- Simplify oam initializer to only handle up2date script
- Update README to reflect new architecture and tools

Prepares infrastructure for migration to Salt configuration management
while maintaining tailscale for VPN connectivity.

💘 Generated with Crush

Assisted-by: GLM-4.6 via Crush <crush@charm.land>
2026-01-21 11:43:35 -05:00
0a7efe5303 Complete server class configurations
- Configure all server classes (physical, virtual, database, webserver, ntp-server, librenms, dev-workstation)
- Set appropriate initializers, modules, and roles for each class
- Define class-specific configurations based on server type
- Standardize configuration across all server types

💘 Generated with Crush

Assisted-by: GLM-4.6 via Crush <crush@charm.land>
2026-01-21 11:10:45 -05:00
09d93e37cd Initial port of KNELServerBuild to FetchApply framework
- Created base FetchApply directory structure with classes, initializers, modules, roles, and variables
- Ported SetupNewSystem.sh functionality to modular FetchApply structure
- Created server classes: physical, virtual, librenms, database, webserver, dev-workstation
- Implemented initializers for system-setup, packages, ssh-keys, and user-configuration
- Created modules for oam, system-config, ssh-hardening, and librenms-agent
- Defined security and monitoring roles
- Copied configuration templates from KNELServerBuild
- Updated README with comprehensive FetchApply usage instructions

💘 Generated with Crush

Assisted-by: GLM-4.6 via Crush <crush@charm.land>
2026-01-21 11:05:17 -05:00
103 changed files with 9012 additions and 2 deletions

AGENTS.md (new file, 45 lines)

@@ -0,0 +1,45 @@
# Agent Guidelines
## Git Commit Requirements
When making changes to this repository, ALWAYS:
1. **Commit atomically**: Each logical change should be its own commit
2. **Use conventional commit format**:
- `feat(scope): description` - New feature
- `fix(scope): description` - Bug fix
- `docs: description` - Documentation changes
- `refactor(scope): description` - Code refactoring
- `test(scope): description` - Test additions/changes
- `chore: description` - Maintenance tasks
3. **Write verbose, beautifully formatted messages**:
- Title line (50 chars max)
- Blank line
- Body explaining WHAT and WHY (not how)
- Reference related files/issues
- Include footer with attribution
## Example Commit
```
feat(security-hardening): implement SCAP-STIG compliance logic
Refactor apply script to implement comprehensive security hardening:
- Add GRUB bootloader permission hardening (root:root, mode 0400)
- Disable and remove autofs service per STIG requirements
- Deploy modprobe configurations for kernel module blacklisting
- Create STIG-compliant network protocol blacklist
This ensures servers meet DoD security requirements for production
deployment.
🤖 Generated with [Crush](https://github.com/charmassociates/crush)
Assisted-by: GLM-5 via Crush <crush@charm.land>
```
## Important
**NEVER wait to be asked to commit and push your work.**
**Commit immediately after each logical unit of work.**

MENTALMODEL.md (new file, 55 lines)

@@ -0,0 +1,55 @@
# KNEL Configuration Management Mental Model
## Architecture Overview
### FetchApply - One-Time Provisioning
- **Purpose:** Initial server setup and basic configuration
- **When:** Runs once at first boot of newly provisioned system
- **What:** System detection, package installation, security hardening, basic monitoring setup
### Salt - Ongoing Configuration Management & Automation
- **Purpose:** Day-to-day system configuration, automation, and orchestration
- **When:** Continuously via Salt master/minion relationship
- **What:**
- Configuration management (file distribution, service management)
- Ad-hoc automation tasks
- System orchestration
- Application deployment
- Beszel client configuration and management
- Netbird client configuration and management (future)
### Ansible - ComplianceAsCode Deployment
- **Purpose:** Deploy and manage compliance as code content
- **When:** Periodically or on-demand compliance deployment
- **What:**
- Deploy https://github.com/ComplianceAsCode/content
- Apply compliance frameworks (CIS, STIG, etc.)
- Compliance validation and remediation
- Documentation generation
### Network Services
- **Tailscale:** Currently active VPN overlay network
- **Netbird:** Future replacement (to be deployed via Salt)
- **Beszel:** Future monitoring replacement (to be deployed via Salt)
## Migration Path
1. **Current State:** FetchApply + Manual Management
2. **Transition State:** FetchApply + Salt + Ansible
3. **Future State:** Salt + Ansible (FetchApply deprecated)
## Tool Responsibilities
| Tool | Primary Responsibility | Secondary Responsibilities |
|-------|-------------------|------------------------|
| FetchApply | Initial provisioning | Foundation setup |
| Salt | Ongoing configuration | Automation, orchestration, client deployment |
| Ansible | Compliance deployment | Documentation, validation |
## Decision Matrix
- **Use Salt for:** System configuration, automation, deployment, ongoing management
- **Use Ansible for:** Compliance as code, security frameworks, documentation
- **Use FetchApply for:** Initial server setup (temporary, to be replaced)
This model ensures clear separation of concerns while providing comprehensive coverage of system lifecycle management.

README.md (236 lines changed)

@@ -1,3 +1,235 @@
# KNEL Configuration Management - FetchApply
This repository contains the KNEL server configuration management system implemented with the FetchApply framework.
**NOTE:** This is a one-time provisioning system. For ongoing configuration management, this uses:
- Salt for system configuration and automation
- Ansible for ComplianceAsCode deployment
## Overview
The KNEL FetchApply system provides automated server provisioning for Linux servers. It uses the FetchApply framework to apply initial configurations and then serves as a foundation for Salt/Ansible-based management.
## Repository Structure
```
.
├── classes/
│   └── server/                  # Single class for all servers
│       ├── initializers         # List of initializers to run
│       └── roles                # List of roles to apply
├── initializers/                # One-time setup scripts
│   ├── system-setup/            # System detection and basic setup
│   ├── packages/                # Package installation with conditional logic
│   ├── oam/                     # Operations and Maintenance setup
│   ├── system-config/           # System configuration files
│   ├── ssh-hardening/           # SSH security hardening
│   ├── ssh-keys/                # SSH authorized key deployment
│   ├── postfix/                 # Email configuration
│   ├── 2fa/                     # Two-factor authentication setup
│   ├── wazuh/                   # Wazuh security monitoring
│   ├── security-hardening/      # SCAP/STIG compliance
│   ├── unattended-upgrades/     # Automatic security updates
│   ├── dell-config/             # Dell server specific configurations
│   ├── ssl-stack/               # SSL stack compilation (dev systems)
│   ├── ldap-auth/               # LDAP authentication (placeholder)
│   ├── salt-client/             # Salt minion configuration
│   └── user-configuration/      # User shell settings
├── roles/                       # Groups of related initializers
│   ├── security                 # Security-related initializers
│   └── monitoring               # Monitoring-related initializers
├── modules/                     # Placeholder for future Ansible modules
└── variables                    # Global configuration variables
```
## Installation
### Prerequisites
- Linux server (Ubuntu 18.04+ or Debian 10+ recommended)
- Root or sudo access
- Internet connectivity for package downloads
### Install FetchApply
First, install FetchApply on your system:
```bash
curl https://source.priveasy.org/Priveasy/fetch-apply/raw/branch/main/install -o /tmp/install
sudo bash /tmp/install --operations-repository-url=https://git.knownelement.com/KNEL/KNELConfigMgmt-FetchApply.git
```
### Usage
Once installed, FetchApply will automatically:
1. Detect system characteristics (physical/virtual, OS, special hosts)
2. Run initializers in sequence to provision the server
3. Apply security hardening and configuration management setup
You can also run FetchApply manually:
```bash
sudo fa
```
## System Detection
The system automatically detects:
- **Physical vs Virtual** - Using dmidecode and virt-what
- **Operating System** - Ubuntu vs Kali detection
- **Special Hosts** - NTP servers, development workstations
- **User Accounts** - Detects localuser and subodev users
- **Raspberry Pi** - Hardware detection for RPi-specific configs
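As a rough sketch of the physical-vs-virtual step (hedged; not the literal FetchApply code, and it assumes `virt-what` is installed):

```bash
#!/bin/sh
# virt-what prints hypervisor names on VMs and nothing on bare metal
# (it generally needs root). 1 = physical, 0 = virtual, matching the
# IS_PHYSICAL_HOST convention used by the dell-config initializer.
if command -v virt-what >/dev/null 2>&1 && [ -n "$(virt-what 2>/dev/null)" ]; then
  IS_PHYSICAL_HOST=0
else
  IS_PHYSICAL_HOST=1
fi
echo "IS_PHYSICAL_HOST=$IS_PHYSICAL_HOST"
```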
## Initializers
### Core Setup
- **system-setup** - System detection and variable setup
- **packages** - Package installation with conditional logic (includes build tools for SSL stack, ansible-core for ComplianceAsCode, salt-minion for ongoing management, tailscale for VPN)
- **user-configuration** - Shell settings and user preferences
### Configuration
- **system-config** - Deploy system configuration files (SNMP, NTP, Cockpit, etc.)
- **ssh-hardening** - SSH security hardening
- **ssh-keys** - Deploy SSH authorized keys
- **postfix** - Configure email delivery
- **salt-client** - Configure Salt minion for ongoing configuration management
### Security
- **2fa** - Set up Google Authenticator for 2FA
- **wazuh** - Deploy Wazuh security monitoring agent
- **security-hardening** - SCAP/STIG compliance hardening (includes auditd, systemd, logrotate configs)
- **unattended-upgrades** - Configure automatic security updates
### Specialized
- **dell-config** - Dell server specific optimizations (CPU performance, OMSA tools)
- **ssl-stack** - Compile OpenSSL, nghttp2, curl, APR, and Apache from source (dev systems only)
- **ldap-auth** - LDAP authentication configuration (placeholder for Cloudron)
### Monitoring
- **oam** - Operations and Maintenance tools (up2date script)
## Configuration Management Tools
The system installs clients for specific management purposes:
- **Ansible Core** - For deploying ComplianceAsCode content from https://github.com/ComplianceAsCode/content
- **Salt Minion** - For ongoing system configuration, automation, and orchestration
- **Tailscale** - VPN connectivity for secure remote access
## Tool Responsibilities
| Tool | Primary Responsibility | When Used |
|-------|-------------------|-----------|
| FetchApply | Initial server provisioning | Once at deployment |
| Salt | Ongoing configuration & automation | Continuously |
| Ansible | ComplianceAsCode deployment | Periodically/on-demand |
## Security Features
- SSH key-based authentication only
- 2FA support via Google Authenticator (gradual rollout)
- Wazuh security monitoring
- SCAP/STIG compliance hardening
- AIDE file integrity monitoring
- Automatic security updates
## Specialized Configurations
### Dell Servers
- Automatic CPU performance tuning
- Dell OpenManage Server Administrator setup
- Ethernet configuration scripts
### Development Workstations
- SSL stack compilation (OpenSSL 1.1.0h, nghttp2, curl, APR, Apache)
- HTTP/2 enabled Apache HTTPd
- Custom SSL installations
### Future Services
- Beszel monitoring (to be deployed via Salt)
- Netbird networking (to be deployed via Salt)
- LDAP authentication (Cloudron integration)
## Migration Path
This system provides a foundation for comprehensive management:
1. **FetchApply** - Initial server provisioning (this repo)
2. **Salt Master** - Ongoing configuration management and automation
3. **Ansible Playbooks** - ComplianceAsCode deployment and management
4. **Future Services** - Beszel monitoring and Netbird networking via Salt
## Compliance Management
Ansible will be used specifically to deploy and manage:
- Compliance frameworks from https://github.com/ComplianceAsCode/content
- Security baselines and hardening rules
- Compliance validation and reporting
- Documentation generation
## SSL Stack Compilation
Available on development workstations or when `COMPILE_SSL_STACK=true`:
- OpenSSL 1.1.0h with weak ciphers enabled (legacy compatibility)
- nghttp2 for HTTP/2 support
- curl with HTTP/2 and custom OpenSSL support
- Apache HTTPd with HTTP/2 enabled
- Custom installations at `/usr/local/custom-ssl/`
## Validation
The repository includes a validation script to verify structure and configuration:
```bash
./validate.sh
```
This checks:
- Required top-level files and directories
- Initializer directory structure
- Apply script syntax
- Path consistency between apply scripts and configs/scripts directories
## Troubleshooting
For detailed status information:
```bash
sudo fa status
```
To run specific initializers:
```bash
sudo fa run <initializer-name>
```
To compile SSL stack:
```bash
COMPILE_SSL_STACK=true sudo fa run ssl-stack
```
To pause automatic runs during maintenance:
```bash
sudo fa pause
```
To resume automatic runs:
```bash
sudo fa resume
```
## Repository Information
**Issues:** https://projects.knownelement.com/project/reachableceo-vptechnicaloperations/timeline
**Discussion:** https://community.turnsys.com/c/chieftechnologyandproductofficer/26
## License
This project is licensed under the terms specified in the LICENSE file.


@@ -0,0 +1,21 @@
# Initializers for all servers (one-time provisioning)
system-setup
packages
oam
system-config
ssh-hardening
ssh-keys
postfix
2fa
wazuh
security-hardening
unattended-upgrades
dell-config
ssl-stack
ldap-auth
salt-client
user-configuration
# Roles for all servers
security
monitoring

classes/server/roles (new file, 3 lines)

@@ -0,0 +1,3 @@
# Roles for all servers
security
monitoring

initializers/2fa/apply (new executable file, 33 lines)

@@ -0,0 +1,33 @@
#!/bin/bash
# KNEL 2FA Module
# Configures two-factor authentication via Google Authenticator
set -euo pipefail
echo "Running 2FA module..."
# Install Google Authenticator for PAM
DEBIAN_FRONTEND="noninteractive" apt-get -y install \
libpam-google-authenticator \
qrencode
# Configure PAM for SSH with 2FA (use nullok for gradual rollout)
if [[ -f ./configs/sshd-pam ]]; then
cp ./configs/sshd-pam /etc/pam.d/sshd
fi
# Configure SSH to allow challenge-response authentication
if [[ -f ./configs/sshd-2fa-config ]]; then
# Backup existing config
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup
# Add 2FA settings to SSH config
cat ./configs/sshd-2fa-config >> /etc/ssh/sshd_config
fi
# Restart SSH service
systemctl restart ssh
echo "2FA module completed"
echo "Note: Users must run 'google-authenticator' to set up their 2FA tokens"


@@ -0,0 +1,11 @@
# KNEL SSH 2FA Configuration Additions
# These settings enable two-factor authentication with SSH keys
# Enable challenge-response authentication for 2FA
ChallengeResponseAuthentication yes
# Enable PAM
UsePAM yes
# Require both publickey AND keyboard-interactive (2FA)
AuthenticationMethods publickey,keyboard-interactive


@@ -0,0 +1,32 @@
# PAM configuration for SSH with 2FA
# Standard Un*x authentication
@include common-auth
# Google Authenticator 2FA
auth required pam_google_authenticator.so nullok
# Standard Un*x authorization
@include common-account
# SELinux needs to be the first session rule
session required pam_selinux.so close
session required pam_loginuid.so
# Standard Un*x session setup and teardown
@include common-session
# Print the message of the day upon successful login
session optional pam_motd.so motd=/run/motd.dynamic
session optional pam_motd.so noupdate
# Print the status of the user's mailbox upon successful login
session optional pam_mail.so standard noenv
# Set up user limits from /etc/security/limits.conf
session required pam_limits.so
# SELinux needs to intervene at login time
session required pam_selinux.so open
# Standard Un*x password updating
@include common-password

initializers/dell-config/apply (new executable file, 51 lines)

@@ -0,0 +1,51 @@
#!/bin/bash
# KNEL Dell Server Configuration Initializer
# Applies Dell-specific optimizations and tools
set -euo pipefail
echo "Running Dell server configuration initializer..."
# Only run on Dell physical servers
if [[ $IS_PHYSICAL_HOST -gt 0 ]]; then
echo "Dell physical hardware detected, applying Dell-specific configurations..."
# CPU performance tuning (from fixcpuperf.sh)
if command -v cpufreq-set >/dev/null 2>&1; then
cpufreq-set -r -g performance
echo "Set CPU performance governor"
fi
if command -v cpupower >/dev/null 2>&1; then
cpupower frequency-set --governor performance
echo "Set CPU frequency governor to performance"
fi
# Copy Dell-specific scripts if they exist
mkdir -p /opt/dell-tools
if [[ -f ./scripts/fixeth.sh ]]; then
cp ./scripts/fixeth.sh /opt/dell-tools/
chmod +x /opt/dell-tools/fixeth.sh
echo "Copied Ethernet fixing script"
fi
if [[ -f ./scripts/omsa.sh ]]; then
cp ./scripts/omsa.sh /opt/dell-tools/
chmod +x /opt/dell-tools/omsa.sh
echo "Copied OMSA setup script"
fi
# Install Dell OpenManage Server Administrator if available
if command -v apt >/dev/null 2>&1; then
# Add Dell repository if available
# This would need to be implemented when Dell repo access is available
echo "Dell OMSA installation would go here (requires Dell repo access)"
fi
else
echo "Not a Dell physical server, skipping Dell-specific configurations"
fi
echo "Dell server configuration initializer completed"


@@ -0,0 +1,10 @@
#!/bin/bash
#Script to set performance.
cpufreq-set -r -g performance
cpupower frequency-set --governor performance


@@ -0,0 +1,20 @@
#!/bin/bash
# Dell Ethernet interface fix script
# Fixes common issues with Dell NICs on Proxmox/Debian systems
echo "Determining management interface..."
export MAIN_INT=$(brctl show|grep vmbr0|awk '{print $NF}'|awk -F '.' '{print $1}')
echo "Management interface is: $MAIN_INT"
echo "Fixing management interface..."
ethtool -K $MAIN_INT tso off
ethtool -K $MAIN_INT gro off
ethtool -K $MAIN_INT gso off
ethtool -K $MAIN_INT tx off
ethtool -K $MAIN_INT rx off
# References:
# https://forum.proxmox.com/threads/e1000-driver-hang.58284/
# https://serverfault.com/questions/616485/e1000e-reset-adapter-unexpectedly-detected-hardware-unit-hang


@@ -0,0 +1,43 @@
#!/bin/bash
# Dell OpenManage Server Administrator (OMSA) installation script
# Installs Dell OMSA for hardware monitoring and management
# Add Dell GPG key
gpg --keyserver hkp://pool.sks-keyservers.net:80 --recv-key 1285491434D8786F
gpg -a --export 1285491434D8786F | apt-key add -
# Add Dell repository
echo "deb https://linux.dell.com/repo/community/openmanage/930/bionic bionic main" > /etc/apt/sources.list.d/linux.dell.com.sources.list
# Download required dependencies
wget https://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/libwsman-curl-client-transport1_2.6.5-0ubuntu3_amd64.deb
wget https://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/libwsman-client4_2.6.5-0ubuntu3_amd64.deb
wget https://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/libwsman1_2.6.5-0ubuntu3_amd64.deb
wget https://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/libwsman-server1_2.6.5-0ubuntu3_amd64.deb
wget https://archive.ubuntu.com/ubuntu/pool/universe/s/sblim-sfcc/libcimcclient0_2.2.8-0ubuntu2_amd64.deb
wget https://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/openwsman_2.6.5-0ubuntu3_amd64.deb
wget https://archive.ubuntu.com/ubuntu/pool/multiverse/c/cim-schema/cim-schema_2.48.0-0ubuntu1_all.deb
wget https://archive.ubuntu.com/ubuntu/pool/universe/s/sblim-sfc-common/libsfcutil0_1.0.1-0ubuntu4_amd64.deb
wget https://archive.ubuntu.com/ubuntu/pool/multiverse/s/sblim-sfcb/sfcb_1.4.9-0ubuntu5_amd64.deb
wget https://archive.ubuntu.com/ubuntu/pool/universe/s/sblim-cmpi-devel/libcmpicppimpl0_2.0.3-0ubuntu2_amd64.deb
# Install dependencies
dpkg -i libwsman-curl-client-transport1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman-client4_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman-server1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libcimcclient0_2.2.8-0ubuntu2_amd64.deb
dpkg -i openwsman_2.6.5-0ubuntu3_amd64.deb
dpkg -i cim-schema_2.48.0-0ubuntu1_all.deb
dpkg -i libsfcutil0_1.0.1-0ubuntu4_amd64.deb
dpkg -i sfcb_1.4.9-0ubuntu5_amd64.deb
dpkg -i libcmpicppimpl0_2.0.3-0ubuntu2_amd64.deb
# Install OMSA
apt update
apt -y install srvadmin-all
touch /opt/dell/srvadmin/lib64/openmanage/IGNORE_GENERATION
echo "OMSA installation complete"
echo "Logout, login, then run: srvadmin-services.sh enable && srvadmin-services.sh start"

27
initializers/ldap-auth/apply Executable file
View File

@@ -0,0 +1,27 @@
#!/bin/bash
# KNEL LDAP Authentication Initializer
# Placeholder for future Cloudron LDAP authentication configuration
set -euo pipefail
echo "Running LDAP authentication initializer..."
# This is a placeholder for future Cloudron LDAP integration
# Currently, auth-cloudron-ldap.sh in KNELServerBuild is empty
# When ready, this would:
# 1. Configure PAM for LDAP authentication
# 2. Set up nsswitch.conf for LDAP user lookups
# 3. Configure SSH to use LDAP authentication
# 4. Test LDAP connectivity
# Create configs directory when ready
# mkdir -p ./configs
# cp ./configs/cloudron-ldap.conf /etc/ldap/ldap.conf
echo "LDAP authentication initializer completed (placeholder - no actual configuration applied)"
echo "To enable Cloudron LDAP when ready:"
echo "1. Configure Cloudron LDAP settings"
echo "2. Update this initializer with actual LDAP configuration"
echo "3. Test authentication against Cloudron LDAP"
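Once the integration is ready, `configs/cloudron-ldap.conf` might look roughly like the sketch below; the URI, port, and base DN are placeholders, not actual Cloudron values:

```
# configs/cloudron-ldap.conf -- placeholder sketch, all values are assumptions
URI         ldap://my.cloudron.example:389
BASE        ou=users,dc=cloudron
TLS_REQCERT demand
```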

76
initializers/oam/apply Executable file
View File

@@ -0,0 +1,76 @@
#!/bin/bash
# KNEL OAM Initializer
# Sets up Operations and Maintenance tools including LibreNMS monitoring agents
set -euo pipefail
echo "Running OAM initializer..."
# Setup up2date script
if [[ -f ./scripts/up2date.sh ]]; then
cp ./scripts/up2date.sh /usr/local/bin/up2date.sh
chmod +x /usr/local/bin/up2date.sh
fi
# Deploy LibreNMS check_mk agent
if [[ -f ./librenms/check_mk_agent ]]; then
# Create agent directories
mkdir -p /usr/lib/check_mk_agent/plugins
mkdir -p /usr/lib/check_mk_agent/local
mkdir -p /etc/check_mk
mkdir -p /var/lib/check_mk_agent
# Deploy main agent
cp ./librenms/check_mk_agent /usr/bin/check_mk_agent
chmod +x /usr/bin/check_mk_agent
# Deploy distro script for OS detection
if [[ -f ./librenms/distro ]]; then
cp ./librenms/distro /usr/bin/distro
chmod +x /usr/bin/distro
fi
# Deploy systemd service files
if [[ -f ./librenms/check_mk.socket ]]; then
cp ./librenms/check_mk.socket /etc/systemd/system/check_mk.socket
fi
if [[ -f ./librenms/check_mk@.service ]]; then
cp ./librenms/check_mk@.service /etc/systemd/system/check_mk@.service
fi
# Deploy plugins
for plugin in ./librenms/*.sh ./librenms/*.py; do
if [[ -f "$plugin" ]]; then
plugin_name=$(basename "$plugin")
cp "$plugin" /usr/lib/check_mk_agent/plugins/
chmod +x "/usr/lib/check_mk_agent/plugins/$plugin_name"
fi
done
# Deploy other plugins (without extensions)
for plugin in ./librenms/smart ./librenms/ntp-client ./librenms/ntp-server.sh \
./librenms/os-updates.sh ./librenms/postfix-queues ./librenms/postfixdetailed \
./librenms/ups-nut.sh; do
if [[ -f "$plugin" ]]; then
plugin_name=$(basename "$plugin")
cp "$plugin" /usr/lib/check_mk_agent/plugins/
chmod +x "/usr/lib/check_mk_agent/plugins/$plugin_name"
fi
done
# Deploy smart config if present
if [[ -f ./librenms/smart.config ]]; then
cp ./librenms/smart.config /etc/check_mk/smart.config
fi
# Reload systemd and enable check_mk socket
systemctl daemon-reload
systemctl enable check_mk.socket
systemctl start check_mk.socket
echo "LibreNMS agent deployed and enabled"
fi
echo "OAM initializer completed"
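The copy-and-chmod plugin loops above can be exercised against scratch directories before touching the live system; this dry-run sketch (file names invented for illustration) mirrors the deployment logic:

```shell
#!/bin/bash
set -euo pipefail
# Dry-run of the plugin deployment loop using temp directories
src=$(mktemp -d)   # stands in for ./librenms
dst=$(mktemp -d)   # stands in for /usr/lib/check_mk_agent/plugins
printf '#!/bin/sh\necho hello\n' > "$src/demo-plugin.sh"
for plugin in "$src"/*.sh; do
  [ -f "$plugin" ] || continue
  name=$(basename "$plugin")
  cp "$plugin" "$dst/"
  chmod +x "$dst/$name"
done
deployed="$dst/demo-plugin.sh"
```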

View File

@@ -0,0 +1,9 @@
[Unit]
Description=Check_MK LibreNMS Agent Socket
[Socket]
ListenStream=6556
Accept=yes
[Install]
WantedBy=sockets.target
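The socket above accepts connections on port 6556 from any source. A hedged hardening sketch, assuming a systemd version with resource-control support, is a drop-in that allow-lists only the LibreNMS poller (the path and address below are placeholders):

```ini
# /etc/systemd/system/check_mk.socket.d/allowlist.conf (hypothetical drop-in)
[Socket]
IPAddressAllow=192.0.2.10
IPAddressDeny=any
```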

View File

@@ -0,0 +1,7 @@
[Unit]
Description=Check_MK LibreNMS Agent Service
After=network.target
[Service]
ExecStart=/usr/bin/check_mk_agent
StandardOutput=socket

View File

@@ -0,0 +1,659 @@
#!/bin/bash
# +------------------------------------------------------------------+
# | ____ _ _ __ __ _ __ |
# | / ___| |__ ___ ___| | __ | \/ | |/ / |
# | | | | '_ \ / _ \/ __| |/ / | |\/| | ' / |
# | | |___| | | | __/ (__| < | | | | . \ |
# | \____|_| |_|\___|\___|_|\_\___|_| |_|_|\_\ |
# | |
# | Copyright Mathias Kettner 2014 mk@mathias-kettner.de |
# +------------------------------------------------------------------+
#
# This file is part of Check_MK.
# The official homepage is at http://mathias-kettner.de/check_mk.
#
# check_mk is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation in version 2. check_mk is distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY; with-
# out even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the GNU General Public License for more de-
# tails. You should have received a copy of the GNU General Public
# License along with GNU Make; see the file COPYING. If not, write
# to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
# Boston, MA 02110-1301 USA.
# Remove locale settings to eliminate localized outputs where possible
export LC_ALL=C
unset LANG
export MK_LIBDIR="/usr/lib/check_mk_agent"
export MK_CONFDIR="/etc/check_mk"
export MK_VARDIR="/var/lib/check_mk_agent"
# Provide information about the remote host. That helps when data
# is being sent only once to each remote host.
if [ "$REMOTE_HOST" ] ; then
export REMOTE=$REMOTE_HOST
elif [ "$SSH_CLIENT" ] ; then
export REMOTE=${SSH_CLIENT%% *}
fi
# Make sure, locally installed binaries are found
PATH=$PATH:/usr/local/bin
# All executables in PLUGINSDIR will simply be executed and their
# output appended to the output of the agent. Plugins define their own
# sections and must output headers with '<<<' and '>>>'
PLUGINSDIR=$MK_LIBDIR/plugins
# All executables in LOCALDIR will be executed and their
# output inserted into the section <<<local>>>. Please
# refer to online documentation for details about local checks.
LOCALDIR=$MK_LIBDIR/local
# All files in SPOOLDIR will simply be appended to the agent
# output if they are not outdated (see below)
SPOOLDIR=$MK_VARDIR/spool
# close standard input (for security reasons) and stderr
if [ "$1" = -d ]
then
set -xv
else
exec </dev/null 2>/dev/null
fi
# Runs a command asynchronously by using a cache file
function run_cached () {
local section=
if [ "$1" = -s ] ; then local section="echo '<<<$2>>>' ; " ; shift ; fi
local NAME=$1
local MAXAGE=$2
shift 2
local CMDLINE="$section$@"
if [ ! -d $MK_VARDIR/cache ]; then mkdir -p $MK_VARDIR/cache ; fi
CACHEFILE="$MK_VARDIR/cache/$NAME.cache"
# Check if the creation of the cache takes suspiciously long and return
# nothing if the age (access time) of $CACHEFILE.new is twice the MAXAGE
local NOW=$(date +%s)
if [ -e "$CACHEFILE.new" ] ; then
local CF_ATIME=$(stat -c %X "$CACHEFILE.new")
if [ $((NOW - CF_ATIME)) -ge $((MAXAGE * 2)) ] ; then
# Kill the process still accessing that file in case
# it is still running. This avoids overlapping processes!
fuser -k -9 "$CACHEFILE.new" >/dev/null 2>&1
rm -f "$CACHEFILE.new"
return
fi
fi
# Check if cache file exists and is recent enough
if [ -s "$CACHEFILE" ] ; then
local MTIME=$(stat -c %Y "$CACHEFILE")
if [ $((NOW - MTIME)) -le $MAXAGE ] ; then local USE_CACHEFILE=1 ; fi
# Output the file in any case, even if it is
# outdated. The new file will not yet be available
cat "$CACHEFILE"
fi
# Cache file outdated and new job not yet running? Start it
if [ -z "$USE_CACHEFILE" -a ! -e "$CACHEFILE.new" ] ; then
echo "set -o noclobber ; exec > \"$CACHEFILE.new\" || exit 1 ; $CMDLINE && mv \"$CACHEFILE.new\" \"$CACHEFILE\" || rm -f \"$CACHEFILE\" \"$CACHEFILE.new\"" | nohup bash >/dev/null 2>&1 &
fi
}
# Make run_cached available for subshells (plugins, local checks, etc.)
export -f run_cached
echo '<<<check_mk>>>'
echo Version: 1.2.6b5
echo AgentOS: linux
echo AgentDirectory: $MK_CONFDIR
echo DataDirectory: $MK_VARDIR
echo SpoolDirectory: $SPOOLDIR
echo PluginsDirectory: $PLUGINSDIR
echo LocalDirectory: $LOCALDIR
# If we are called via xinetd, try to find only_from configuration
if [ -n "$REMOTE_HOST" ]
then
echo -n 'OnlyFrom: '
echo $(sed -n '/^service[[:space:]]*check_mk/,/}/s/^[[:space:]]*only_from[[:space:]]*=[[:space:]]*\(.*\)/\1/p' /etc/xinetd.d/* | head -n1)
fi
# Print out Partitions / Filesystems. (-P gives non-wrapped POSIXed output)
# Heads up: NFS mounts are generally suppressed to avoid agent hangs.
# If hard NFS mounts are configured or you have too large nfs retry/timeout
# settings, accessing those mounts from the agent would leave you with
# thousands of agent processes and, ultimately, a dead monitored system.
# These should generally be monitored on the NFS server, not on the clients.
echo '<<<df>>>'
# The exclusion list is becoming a problem. -l should hide any remote FS but
# does not seem to work reliably.
excludefs="-x smbfs -x cifs -x iso9660 -x udf -x nfsv4 -x nfs -x mvfs -x zfs"
df -PTlk $excludefs | sed 1d
# df inodes information
echo '<<<df>>>'
echo '[df_inodes_start]'
df -PTli $excludefs | sed 1d
echo '[df_inodes_end]'
# Filesystem usage for ZFS
if type zfs > /dev/null 2>&1 ; then
echo '<<<zfsget>>>'
zfs get -Hp name,quota,used,avail,mountpoint,type -t filesystem,volume || \
zfs get -Hp name,quota,used,avail,mountpoint,type
echo '[df]'
df -PTlk -t zfs | sed 1d
fi
# Check NFS mounts by accessing them with stat -f (system
# call statfs()). If this takes more than 2 seconds we
# consider the mount as hanging. We need waitmax.
if type waitmax >/dev/null
then
STAT_VERSION=$(stat --version | head -1 | cut -d" " -f4)
STAT_BROKE="5.3.0"
echo '<<<nfsmounts>>>'
sed -n '/ nfs4\? /s/[^ ]* \([^ ]*\) .*/\1/p' < /proc/mounts |
sed 's/\\040/ /g' |
while read MP
do
if [ $STAT_VERSION != $STAT_BROKE ]; then
waitmax -s 9 2 stat -f -c "$MP ok %b %f %a %s" "$MP" || \
echo "$MP hanging 0 0 0 0"
else
waitmax -s 9 2 stat -f -c "$MP ok %b %f %a %s" "$MP" && \
printf '\n'|| echo "$MP hanging 0 0 0 0"
fi
done
echo '<<<cifsmounts>>>'
sed -n '/ cifs\? /s/[^ ]* \([^ ]*\) .*/\1/p' < /proc/mounts |
sed 's/\\040/ /g' |
while read MP
do
if [ $STAT_VERSION != $STAT_BROKE ]; then
waitmax -s 9 2 stat -f -c "$MP ok %b %f %a %s" "$MP" || \
echo "$MP hanging 0 0 0 0"
else
waitmax -s 9 2 stat -f -c "$MP ok %b %f %a %s" "$MP" && \
printf '\n'|| echo "$MP hanging 0 0 0 0"
fi
done
fi
# Check mount options. Filesystems may switch to 'ro' in case
# of a read error.
echo '<<<mounts>>>'
grep ^/dev < /proc/mounts
# processes including username, without kernel processes
echo '<<<ps>>>'
ps ax -o user,vsz,rss,cputime,pid,command --columns 10000 | sed -e 1d -e 's/ *\([^ ]*\) *\([^ ]*\) *\([^ ]*\) *\([^ ]*\) *\([^ ]*\) */(\1,\2,\3,\4,\5) /'
# Memory usage
echo '<<<mem>>>'
egrep -v '^Swap:|^Mem:|total:' < /proc/meminfo
# Load and number of processes
echo '<<<cpu>>>'
echo "$(cat /proc/loadavg) $(grep -E '^CPU|^processor' < /proc/cpuinfo | wc -l)"
# Uptime
echo '<<<uptime>>>'
cat /proc/uptime
# New variant: Information about speed and state in one section
echo '<<<lnx_if:sep(58)>>>'
sed 1,2d /proc/net/dev
if type ethtool > /dev/null
then
for eth in $(sed -e 1,2d < /proc/net/dev | cut -d':' -f1 | sort)
do
echo "[$eth]"
ethtool $eth | egrep '(Speed|Duplex|Link detected|Auto-negotiation):'
echo -en "\tAddress: " ; cat /sys/class/net/$eth/address ; echo
done
fi
# Current state of bonding interfaces
if [ -e /proc/net/bonding ] ; then
echo '<<<lnx_bonding:sep(58)>>>'
pushd /proc/net/bonding > /dev/null ; head -v -n 1000 * ; popd
fi
# Same for Open vSwitch bonding
if type ovs-appctl > /dev/null ; then
echo '<<<ovs_bonding:sep(58)>>>'
for bond in $(ovs-appctl bond/list | sed -e 1d | cut -f2) ; do
echo "[$bond]"
ovs-appctl bond/show $bond
done
fi
# Number of TCP connections in the various states
echo '<<<tcp_conn_stats>>>'
# waitmax 10 netstat -nt | awk ' /^tcp/ { c[$6]++; } END { for (x in c) { print x, c[x]; } }'
# New implementation: netstat is very slow for large TCP tables
cat /proc/net/tcp /proc/net/tcp6 2>/dev/null | awk ' /:/ { c[$4]++; } END { for (x in c) { print x, c[x]; } }'
# Linux Multipathing
if type multipath >/dev/null ; then
echo '<<<multipath>>>'
multipath -l
fi
# Performance counters: disks
echo '<<<diskstat>>>'
date +%s
egrep ' (x?[shv]d[a-z]*|cciss/c[0-9]+d[0-9]+|emcpower[a-z]+|dm-[0-9]+|VxVM.*|mmcblk.*) ' < /proc/diskstats
if type dmsetup >/dev/null ; then
echo '[dmsetup_info]'
dmsetup info -c --noheadings --separator ' ' -o name,devno,vg_name,lv_name
fi
if [ -d /dev/vx/dsk ] ; then
echo '[vx_dsk]'
stat -c "%t %T %n" /dev/vx/dsk/*/*
fi
# Performance counters: kernel
echo '<<<kernel>>>'
date +%s
cat /proc/vmstat /proc/stat
# Hardware sensors via IPMI (need ipmitool)
if type ipmitool > /dev/null
then
run_cached -s ipmi 300 "ipmitool sensor list | grep -v 'command failed' | sed -e 's/ *| */|/g' -e 's/ /_/g' -e 's/_*"'$'"//' -e 's/|/ /g' | egrep -v '^[^ ]+ na ' | grep -v ' discrete '"
fi
# IPMI data via ipmi-sensors (from freeipmi). Please make sure that, if you
# have installed freeipmi, IPMI is actually supported by your hardware.
if type ipmi-sensors >/dev/null
then
echo '<<<ipmi_sensors>>>'
# Newer ipmi-sensors versions have a new output format; the legacy format can be used
if ipmi-sensors --help | grep -q legacy-output; then
IPMI_FORMAT="--legacy-output"
else
IPMI_FORMAT=""
fi
# At least with ipmi-sensors 0.7.16 this group is Power_Unit instead of "Power Unit"
run_cached -s ipmi_sensors 300 "for class in Temperature Power_Unit Fan
do
ipmi-sensors $IPMI_FORMAT --sdr-cache-directory /var/cache -g "$class" | sed -e 's/ /_/g' -e 's/:_\?/ /g' -e 's@ \([^(]*\)_(\([^)]*\))@ \2_\1@'
# In case of a timeout immediately leave loop.
if [ $? = 255 ] ; then break ; fi
done"
fi
# RAID status of Linux software RAID
echo '<<<md>>>'
cat /proc/mdstat
# RAID status of Linux RAID via device mapper
if type dmraid >/dev/null && DMSTATUS=$(dmraid -r)
then
echo '<<<dmraid>>>'
# Output name and status
dmraid -s | grep -e ^name -e ^status
# Output disk names of the RAID disks
DISKS=$(echo "$DMSTATUS" | cut -f1 -d\:)
for disk in $DISKS ; do
device=$(cat /sys/block/$(basename $disk)/device/model )
status=$(echo "$DMSTATUS" | grep ^${disk})
echo "$status Model: $device"
done
fi
# RAID status of LSI controllers via cfggen
if type cfggen > /dev/null ; then
echo '<<<lsi>>>'
cfggen 0 DISPLAY | egrep '(Target ID|State|Volume ID|Status of volume)[[:space:]]*:' | sed -e 's/ *//g' -e 's/:/ /'
fi
# RAID status of LSI MegaRAID controller via MegaCli. You can download that tool from:
# http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/8.02.16_MegaCLI.zip
if type MegaCli >/dev/null ; then
MegaCli_bin="MegaCli"
elif type MegaCli64 >/dev/null ; then
MegaCli_bin="MegaCli64"
elif type megacli >/dev/null ; then
MegaCli_bin="megacli"
else
MegaCli_bin="unknown"
fi
if [ "$MegaCli_bin" != "unknown" ]; then
echo '<<<megaraid_pdisks>>>'
for part in $($MegaCli_bin -EncInfo -aALL -NoLog < /dev/null \
| sed -rn 's/:/ /g; s/[[:space:]]+/ /g; s/^ //; s/ $//; s/Number of enclosures on adapter ([0-9]+).*/adapter \1/g; /^(Enclosure|Device ID|adapter) [0-9]+$/ p'); do
[ $part = adapter ] && echo ""
[ $part = 'Enclosure' ] && echo -ne "\ndev2enc"
echo -n " $part"
done
echo
$MegaCli_bin -PDList -aALL -NoLog < /dev/null | egrep 'Enclosure|Raw Size|Slot Number|Device Id|Firmware state|Inquiry|Adapter'
echo '<<<megaraid_ldisks>>>'
$MegaCli_bin -LDInfo -Lall -aALL -NoLog < /dev/null | egrep 'Size|State|Number|Adapter|Virtual'
echo '<<<megaraid_bbu>>>'
$MegaCli_bin -AdpBbuCmd -GetBbuStatus -aALL -NoLog < /dev/null | grep -v Exit
fi
# RAID status of 3WARE disk controller (by Radoslaw Bak)
if type tw_cli > /dev/null ; then
for C in $(tw_cli show | awk 'NR < 4 { next } { print $1 }'); do
echo '<<<3ware_info>>>'
tw_cli /$C show all | egrep 'Model =|Firmware|Serial'
echo '<<<3ware_disks>>>'
tw_cli /$C show drivestatus | egrep 'p[0-9]' | sed "s/^/$C\//"
echo '<<<3ware_units>>>'
tw_cli /$C show unitstatus | egrep 'u[0-9]' | sed "s/^/$C\//"
done
fi
# RAID controllers from areca (Taiwan)
# cli64 can be found at ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/Linux/CLI/
if type cli64 >/dev/null ; then
run_cached -s arc_raid_status 300 "cli64 rsf info | tail -n +3 | head -n -2"
fi
# VirtualBox guests. The section must always be output. Otherwise the
# check would not be executed when no guest additions are installed,
# which is exactly what the check wants to detect.
echo '<<<vbox_guest>>>'
if type VBoxControl >/dev/null 2>&1 ; then
VBoxControl -nologo guestproperty enumerate | cut -d, -f1,2
[ ${PIPESTATUS[0]} = 0 ] || echo "ERROR"
fi
# OpenVPN clients. Currently we assume that the configuration is in
# /etc/openvpn. We might find a safer way to find the configuration later.
if [ -e /etc/openvpn/openvpn-status.log ] ; then
echo '<<<openvpn_clients:sep(44)>>>'
sed -n -e '/CLIENT LIST/,/ROUTING TABLE/p' < /etc/openvpn/openvpn-status.log | sed -e 1,3d -e '$d'
fi
# Time synchronization with NTP
if type ntpq > /dev/null 2>&1 ; then
# remove heading, make first column space separated
run_cached -s ntp 30 "waitmax 5 ntpq -np | sed -e 1,2d -e 's/^\(.\)/\1 /' -e 's/^ /%/'"
fi
# Time synchronization with Chrony
if type chronyc > /dev/null 2>&1 ; then
# Force successful exit code. Otherwise section will be missing if daemon not running
run_cached -s chrony 30 "waitmax 5 chronyc tracking || true"
fi
if type nvidia-settings >/dev/null && [ -S /tmp/.X11-unix/X0 ]
then
echo '<<<nvidia>>>'
for var in GPUErrors GPUCoreTemp
do
DISPLAY=:0 waitmax 2 nvidia-settings -t -q $var | sed "s/^/$var: /"
done
fi
if [ -e /proc/drbd ]; then
echo '<<<drbd>>>'
cat /proc/drbd
fi
# Status of CUPS printer queues
if type lpstat > /dev/null 2>&1; then
if pgrep cups > /dev/null 2>&1; then
echo '<<<cups_queues>>>'
CPRINTCONF=/etc/cups/printers.conf
if [ -r "$CPRINTCONF" ] ; then
LOCAL_PRINTERS=$(grep -E "<(Default)?Printer .*>" $CPRINTCONF | awk '{print $2}' | sed -e 's/>//')
lpstat -p | while read LINE
do
PRINTER=$(echo $LINE | awk '{print $2}')
if echo "$LOCAL_PRINTERS" | grep -q "$PRINTER"; then
echo $LINE
fi
done
echo '---'
lpstat -o | while read LINE
do
PRINTER=${LINE%%-*}
if echo "$LOCAL_PRINTERS" | grep -q "$PRINTER"; then
echo $LINE
fi
done
else
lpstat -p
echo '---'
lpstat -o | sort
fi
fi
fi
# Heartbeat monitoring
# Different handling for heartbeat clusters with and without CRM
# for the resource state
if [ -S /var/run/heartbeat/crm/cib_ro -o -S /var/run/crm/cib_ro ] || pgrep crmd > /dev/null 2>&1; then
echo '<<<heartbeat_crm>>>'
crm_mon -1 -r | grep -v ^$ | sed 's/^ //; /^\sResource Group:/,$ s/^\s//; s/^\s/_/g'
fi
if type cl_status > /dev/null 2>&1; then
echo '<<<heartbeat_rscstatus>>>'
cl_status rscstatus
echo '<<<heartbeat_nodes>>>'
for NODE in $(cl_status listnodes); do
if [ $NODE != $(echo $HOSTNAME | tr 'A-Z' 'a-z') ]; then
STATUS=$(cl_status nodestatus $NODE)
echo -n "$NODE $STATUS"
for LINK in $(cl_status listhblinks $NODE 2>/dev/null); do
echo -n " $LINK $(cl_status hblinkstatus $NODE $LINK)"
done
echo
fi
done
fi
# Postfix mailqueue monitoring
#
# Only handle mailq when postfix user is present. The mailq command is also
# available when postfix is not installed. But it produces different outputs
# which are not handled by the check at the moment. So try to filter out the
# systems not using postfix by searching for the postfix user.
#
# We cannot take the whole output. That could produce several MB of agent
# output on blocking queues.
# Only handle the last 6 lines (this includes the summary line at the bottom
# and the last message in the queue). The last message is not used at the
# moment but could be used to get the timestamp of the last message.
if type postconf >/dev/null ; then
echo '<<<postfix_mailq>>>'
postfix_queue_dir=$(postconf -h queue_directory)
postfix_count=$(find $postfix_queue_dir/deferred -type f | wc -l)
postfix_size=$(du -ks $postfix_queue_dir/deferred | awk '{print $1 }')
if [ $postfix_count -gt 0 ]
then
echo -- $postfix_size Kbytes in $postfix_count Requests.
else
echo Mail queue is empty
fi
elif [ -x /usr/sbin/ssmtp ] ; then
echo '<<<postfix_mailq>>>'
mailq 2>&1 | sed 's/^[^:]*: \(.*\)/\1/' | tail -n 6
fi
# Check status of qmail mail queue
if type qmail-qstat >/dev/null
then
echo "<<<qmail_stats>>>"
qmail-qstat
fi
# Check status of OMD sites
if type omd >/dev/null
then
run_cached -s omd_status 60 "omd status --bare --auto"
fi
# Welcome the ZFS check on Linux
# We do not endorse running ZFS on Linux if your vendor doesn't support it ;)
# check zpool status
if type zpool >/dev/null; then
echo "<<<zpool_status>>>"
zpool status -x
fi
# Fileinfo-Check: put patterns for files into /etc/check_mk/fileinfo.cfg
if [ -r "$MK_CONFDIR/fileinfo.cfg" ] ; then
echo '<<<fileinfo:sep(124)>>>'
date +%s
stat -c "%n|%s|%Y" $(cat "$MK_CONFDIR/fileinfo.cfg")
fi
# Get stats about OMD monitoring cores running on this machine.
# Since cd is a shell builtin the check does not affect the performance
# on non-OMD machines.
if cd /omd/sites
then
echo '<<<livestatus_status:sep(59)>>>'
for site in *
do
if [ -S "/omd/sites/$site/tmp/run/live" ] ; then
echo "[$site]"
echo -e "GET status" | waitmax 3 /omd/sites/$site/bin/unixcat /omd/sites/$site/tmp/run/live
fi
done
fi
# Get statistics about monitored jobs. Below the job directory there
# is a sub directory per user that ran a job. That directory must be
# owned by the user so that a symlink or hardlink attack for reading
# arbitrary files can be avoided.
if pushd $MK_VARDIR/job >/dev/null; then
echo '<<<job>>>'
for username in *
do
if [ -d "$username" ] && cd "$username" ; then
su "$username" -c "head -n -0 -v *"
cd ..
fi
done
popd > /dev/null
fi
# Gather thermal information provided e.g. by acpi
# At the moment only supporting thermal sensors
if ls /sys/class/thermal/thermal_zone* >/dev/null 2>&1; then
echo '<<<lnx_thermal>>>'
for F in /sys/class/thermal/thermal_zone*; do
echo -n "${F##*/} "
if [ ! -e $F/mode ] ; then echo -n "- " ; fi
cat $F/{mode,type,temp,trip_point_*} | tr \\n " "
echo
done
fi
# Libelle Business Shadow
if type trd >/dev/null; then
echo "<<<libelle_business_shadow:sep(58)>>>"
trd -s
fi
# MK's Remote Plugin Executor
if [ -e "$MK_CONFDIR/mrpe.cfg" ]
then
echo '<<<mrpe>>>'
grep -Ev '^[[:space:]]*($|#)' "$MK_CONFDIR/mrpe.cfg" | \
while read descr cmdline
do
PLUGIN=${cmdline%% *}
OUTPUT=$(eval "$cmdline")
echo -n "(${PLUGIN##*/}) $descr $? $OUTPUT" | tr \\n \\1
echo
done
fi
# Local checks
echo '<<<local>>>'
if cd $LOCALDIR ; then
for skript in $(ls) ; do
if [ -f "$skript" -a -x "$skript" ] ; then
./$skript
fi
done
# Call some plugins only every X'th minute
for skript in [1-9]*/* ; do
if [ -x "$skript" ] ; then
run_cached local_${skript//\//\\} ${skript%/*} "$skript"
fi
done
fi
# Plugins
if cd $PLUGINSDIR ; then
for skript in $(ls) ; do
if [ -f "$skript" -a -x "$skript" ] ; then
./$skript
fi
done
# Call some plugins only every Xth minute
for skript in [1-9]*/* ; do
if [ -x "$skript" ] ; then
run_cached plugins_${skript//\//\\} ${skript%/*} "$skript"
fi
done
fi
# Agent output snippets created by cronjobs, etc.
if [ -d "$SPOOLDIR" ]
then
pushd "$SPOOLDIR" > /dev/null
now=$(date +%s)
for file in *
do
# output every file in this directory. If the file is prefixed
# with a number, then that number is the maximum age of the
# file in seconds. If the file is older than that, it is ignored.
maxage=""
part="$file"
# Eat away all digits from the front of the filename and
# collect them in the variable maxage.
while [ "${part/#[0-9]/}" != "$part" ]
do
maxage=$maxage${part:0:1}
part=${part:1}
done
# If there is at least one digit, then we honor it.
if [ "$maxage" ] ; then
mtime=$(stat -c %Y "$file")
if [ $((now - mtime)) -gt $maxage ] ; then
continue
fi
fi
# Output the file
cat "$file"
done
popd > /dev/null
fi
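The spool handling at the end of the agent strips a leading number off the filename to get that file's maximum age; a standalone POSIX-sh sketch of that parsing (the example filename is invented):

```shell
#!/bin/sh
# Extract a leading-digits max-age prefix from a spool filename,
# mirroring the loop in the agent's spool section.
file="600nightly_backup"   # max age 600s, rest is the label
maxage=""
part="$file"
while [ "${part#[0-9]}" != "$part" ]; do
  first=${part%"${part#?}"}   # first character of $part
  maxage=$maxage$first
  part=${part#?}              # drop that character
done
```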

View File

@@ -0,0 +1,114 @@
#!/usr/bin/env bash
# Detects the OS and, if it is Linux, which Linux distribution.
OS=`uname -s`
REV=`uname -r`
MACH=`uname -m`
if [ "${OS}" = "SunOS" ] ; then
OS=Solaris
ARCH=`uname -p`
OSSTR="${OS} ${REV}(${ARCH} `uname -v`)"
elif [ "${OS}" = "AIX" ] ; then
OSSTR="${OS} `oslevel` (`oslevel -r`)"
elif [ "${OS}" = "Linux" ] ; then
KERNEL=`uname -r`
if [ -f /etc/fedora-release ]; then
DIST=$(cat /etc/fedora-release | awk '{print $1}')
REV=`cat /etc/fedora-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/redhat-release ] ; then
DIST=$(cat /etc/redhat-release | awk '{print $1}')
if [ "${DIST}" = "CentOS" ]; then
DIST="CentOS"
elif [ "${DIST}" = "Mandriva" ]; then
DIST="Mandriva"
PSEUDONAME=`cat /etc/mandriva-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/mandriva-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/oracle-release ]; then
DIST="Oracle"
else
DIST="RedHat"
fi
PSEUDONAME=`cat /etc/redhat-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/redhat-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/mandrake-release ] ; then
DIST='Mandrake'
PSEUDONAME=`cat /etc/mandrake-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/mandrake-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/devuan_version ] ; then
DIST="Devuan `cat /etc/devuan_version`"
REV=""
elif [ -f /etc/debian_version ] ; then
DIST="Debian `cat /etc/debian_version`"
REV=""
ID=`lsb_release -i | awk -F ':' '{print $2}' | sed 's/ //g'`
if [ "${ID}" = "Raspbian" ] ; then
DIST="Raspbian `cat /etc/debian_version`"
fi
elif [ -f /etc/gentoo-release ] ; then
DIST="Gentoo"
REV=$(tr -d '[[:alpha:]]' </etc/gentoo-release | tr -d " ")
elif [ -f /etc/arch-release ] ; then
DIST="Arch Linux"
REV="" # Omit version since Arch Linux uses rolling releases
IGNORE_LSB=1 # /etc/lsb-release would overwrite $REV with "rolling"
elif [ -f /etc/os-release ] ; then
DIST=$(grep '^NAME=' /etc/os-release | cut -d= -f2- | tr -d '"')
REV=$(grep '^VERSION_ID=' /etc/os-release | cut -d= -f2- | tr -d '"')
elif [ -f /etc/openwrt_version ] ; then
DIST="OpenWrt"
REV=$(cat /etc/openwrt_version)
elif [ -f /etc/pld-release ] ; then
DIST=$(cat /etc/pld-release)
REV=""
elif [ -f /etc/SuSE-release ] ; then
DIST=$(echo SLES $(grep VERSION /etc/SuSE-release | cut -d = -f 2 | tr -d " "))
REV=$(echo SP$(grep PATCHLEVEL /etc/SuSE-release | cut -d = -f 2 | tr -d " "))
fi
if [ -f /etc/lsb-release -a "${IGNORE_LSB}" != 1 ] ; then
LSB_DIST=$(lsb_release -si)
LSB_REV=$(lsb_release -sr)
if [ "$LSB_DIST" != "" ] ; then
DIST=$LSB_DIST
fi
if [ "$LSB_REV" != "" ] ; then
REV=$LSB_REV
fi
fi
if [ "`uname -a | awk '{print $(NF)}'`" = "DD-WRT" ] ; then
DIST="dd-wrt"
fi
if [ -n "${REV}" ]
then
OSSTR="${DIST} ${REV}"
else
OSSTR="${DIST}"
fi
elif [ "${OS}" = "Darwin" ] ; then
if [ -f /usr/bin/sw_vers ] ; then
OSSTR=`/usr/bin/sw_vers|grep -v Build|sed 's/^.*:.//'| tr "\n" ' '`
fi
elif [ "${OS}" = "FreeBSD" ] ; then
OSSTR=`/usr/bin/uname -mior`
fi
echo ${OSSTR}
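The /etc/os-release branch can be checked without touching the real file; this sketch feeds the same grep/cut/tr pipeline a synthetic sample:

```shell
#!/usr/bin/env bash
# Parse NAME and VERSION_ID the same way the distro script does,
# but from a sample os-release written to a temp file.
sample=$(mktemp)
cat > "$sample" <<'EOF'
NAME="Ubuntu"
VERSION_ID="22.04"
EOF
DIST=$(grep '^NAME=' "$sample" | cut -d= -f2- | tr -d '"')
REV=$(grep '^VERSION_ID=' "$sample" | cut -d= -f2- | tr -d '"')
rm -f "$sample"
```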

View File

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
echo '<<<dmi>>>'
# requires dmidecode
for FIELD in bios-vendor bios-version bios-release-date system-manufacturer system-product-name system-version system-serial-number system-uuid baseboard-manufacturer baseboard-product-name baseboard-version baseboard-serial-number baseboard-asset-tag chassis-manufacturer chassis-type chassis-version chassis-serial-number chassis-asset-tag processor-family processor-manufacturer processor-version processor-frequency
do
echo $FIELD="$(dmidecode -s $FIELD | grep -v '^#')"
done

View File

@@ -0,0 +1,22 @@
#!/bin/bash
# Cache the output for 30 minutes.
# If you want to override this, put the command in cron.
# We cache because dpkg-query takes about a second, which is painful for the poller.
if [ -x /usr/bin/dpkg-query ]; then
DATE=$(date +%s)
FILE=/var/cache/librenms/agent-local-dpkg
[ -d /var/cache/librenms ] || mkdir -p /var/cache/librenms
if [ ! -e $FILE ]; then
dpkg-query -W --showformat='${Status} ${Package} ${Version} ${Architecture} ${Installed-Size}\n'|grep " installed "|cut -d\ -f4- > $FILE
fi
FILEMTIME=$(stat -c %Y $FILE)
FILEAGE=$(($DATE-$FILEMTIME))
if [ $FILEAGE -gt 1800 ]; then
dpkg-query -W --showformat='${Status} ${Package} ${Version} ${Architecture} ${Installed-Size}\n'|grep " installed "|cut -d\ -f4- > $FILE
fi
echo "<<<dpkg>>>"
cat $FILE
fi
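The 30-minute refresh hinges on comparing the cache file's mtime with the current time; a self-contained sketch of that freshness test (assuming GNU stat, as the script above does):

```shell
#!/bin/bash
# Decide whether a cache file is stale (older than MAX_AGE seconds),
# mirroring the dpkg cache logic above.
MAX_AGE=1800
cache=$(mktemp)            # freshly created, so it must come out "fresh"
now=$(date +%s)
mtime=$(stat -c %Y "$cache")
age=$((now - mtime))
if [ "$age" -gt "$MAX_AGE" ]; then
  state=stale
else
  state=fresh
fi
rm -f "$cache"
```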

File diff suppressed because it is too large

View File

@@ -0,0 +1,34 @@
#!/bin/sh
# Please make sure the paths below are correct.
# Alternatively you can put them in $0.conf, meaning if you've named
# this script ntp-client then it must go in ntp-client.conf .
#
# NTPQV output version of "ntpq -c rv"
# Version 4 is the most common and up to date version.
#
# If you are unsure which to set, run this script and make sure that
# the JSON output variables match those in "ntpq -c rv".
#
################################################################
# Don't change anything unless you know what you are doing       #
################################################################
BIN_NTPQ='/usr/bin/env ntpq'
BIN_GREP='/usr/bin/env grep'
BIN_AWK='/usr/bin/env awk'
CONFIG=$0".conf"
if [ -f "$CONFIG" ]; then
# shellcheck disable=SC1090
. "$CONFIG"
fi
NTP_OFFSET=$($BIN_NTPQ -c rv | $BIN_GREP "offset" | $BIN_AWK -Foffset= '{print $2}' | $BIN_AWK -F, '{print $1}')
NTP_FREQUENCY=$($BIN_NTPQ -c rv | $BIN_GREP "frequency" | $BIN_AWK -Ffrequency= '{print $2}' | $BIN_AWK -F, '{print $1}')
NTP_SYS_JITTER=$($BIN_NTPQ -c rv | $BIN_GREP "sys_jitter" | $BIN_AWK -Fsys_jitter= '{print $2}' | $BIN_AWK -F, '{print $1}')
NTP_CLK_JITTER=$($BIN_NTPQ -c rv | $BIN_GREP "clk_jitter" | $BIN_AWK -Fclk_jitter= '{print $2}' | $BIN_AWK -F, '{print $1}')
NTP_WANDER=$($BIN_NTPQ -c rv | $BIN_GREP "clk_wander" | $BIN_AWK -Fclk_wander= '{print $2}' | $BIN_AWK -F, '{print $1}')
NTP_VERSION=$($BIN_NTPQ -c rv | $BIN_GREP "version" | $BIN_AWK -F'ntpd ' '{print $2}' | $BIN_AWK -F. '{print $1}')
echo '{"data":{"offset":"'"$NTP_OFFSET"'","frequency":"'"$NTP_FREQUENCY"'","sys_jitter":"'"$NTP_SYS_JITTER"'","clk_jitter":"'"$NTP_CLK_JITTER"'","clk_wander":"'"$NTP_WANDER"'"},"version":"'"$NTP_VERSION"'","error":"0","errorString":""}'
exit 0
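The field extraction can be verified without a running ntpd by piping a canned `ntpq -c rv` line through the same two-stage awk split (the sample values are invented):

```shell
#!/bin/sh
# Extract "offset" from a sample ntpq -c rv response using the same
# awk stages as the plugin above: split on "offset=", then on ",".
SAMPLE='associd=0 status=0615, offset=0.123, frequency=-1.234, sys_jitter=0.456,'
NTP_OFFSET=$(echo "$SAMPLE" | grep "offset" | awk -Foffset= '{print $2}' | awk -F, '{print $1}')
```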

View File

@@ -0,0 +1,89 @@
#!/bin/sh
# Please make sure the paths below are correct.
# Alternatively you can put them in $0.conf, meaning if you've named
# this script ntp-client.sh then it must go in ntp-client.sh.conf .
#
# NTPQV output version of "ntpq -c rv"
# p1 DD-WRT and some other outdated linux distros
# p11 FreeBSD 11 and any linux distro that is up to date
#
# If you are unsure which to set, run this script and make sure that
# the JSON output variables match those in "ntpq -c rv".
#
BIN_NTPD='/usr/bin/env ntpd'
BIN_NTPQ='/usr/bin/env ntpq'
BIN_NTPDC='/usr/bin/env ntpdc'
BIN_GREP='/usr/bin/env grep'
BIN_TR='/usr/bin/env tr'
BIN_CUT='/usr/bin/env cut'
BIN_SED="/usr/bin/env sed"
BIN_AWK='/usr/bin/env awk'
NTPQV="p11"
################################################################
# Don't change anything unless you know what you are doing       #
################################################################
CONFIG=$0".conf"
if [ -f $CONFIG ]; then
. $CONFIG
fi
VERSION=1
STRATUM=`$BIN_NTPQ -c rv | $BIN_GREP -Eow "stratum=[0-9]+" | $BIN_CUT -d "=" -f 2`
# parse the ntpq info that requires version specific info
NTPQ_RAW=`$BIN_NTPQ -c rv | $BIN_GREP jitter | $BIN_SED 's/[[:alpha:]=,_]/ /g'`
if [ $NTPQV = "p11" ]; then
OFFSET=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $3}'`
FREQUENCY=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $4}'`
SYS_JITTER=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $5}'`
CLK_JITTER=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $6}'`
CLK_WANDER=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $7}'`
fi
if [ $NTPQV = "p1" ]; then
OFFSET=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $2}'`
FREQUENCY=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $3}'`
SYS_JITTER=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $4}'`
CLK_JITTER=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $5}'`
CLK_WANDER=`echo $NTPQ_RAW | $BIN_AWK -F ' ' '{print $6}'`
fi
VER=`$BIN_NTPD --version`
if [ "$VER" = '4.2.6p5' ]; then
USECMD=`echo $BIN_NTPDC -c iostats`
else
USECMD=`echo $BIN_NTPQ -c iostats localhost`
fi
CMD2=`$USECMD | $BIN_TR -d ' ' | $BIN_CUT -d : -f 2 | $BIN_TR '\n' ' '`
TIMESINCERESET=`echo $CMD2 | $BIN_AWK -F ' ' '{print $1}'`
RECEIVEDBUFFERS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $2}'`
FREERECEIVEBUFFERS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $3}'`
USEDRECEIVEBUFFERS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $4}'`
LOWWATERREFILLS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $5}'`
DROPPEDPACKETS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $6}'`
IGNOREDPACKETS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $7}'`
RECEIVEDPACKETS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $8}'`
PACKETSSENT=`echo $CMD2 | $BIN_AWK -F ' ' '{print $9}'`
PACKETSENDFAILURES=`echo $CMD2 | $BIN_AWK -F ' ' '{print $10}'`
INPUTWAKEUPS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $11}'`
USEFULINPUTWAKEUPS=`echo $CMD2 | $BIN_AWK -F ' ' '{print $12}'`
echo '{"data":{"offset":"'$OFFSET\
'","frequency":"'$FREQUENCY\
'","sys_jitter":"'$SYS_JITTER\
'","clk_jitter":"'$CLK_JITTER\
'","clk_wander":"'$CLK_WANDER\
'","stratum":"'$STRATUM\
'","time_since_reset":"'$TIMESINCERESET\
'","receive_buffers":"'$RECEIVEDBUFFERS\
'","free_receive_buffers":"'$FREERECEIVEBUFFERS\
'","used_receive_buffers":"'$USEDRECEIVEBUFFERS\
'","low_water_refills":"'$LOWWATERREFILLS\
'","dropped_packets":"'$DROPPEDPACKETS\
'","ignored_packets":"'$IGNOREDPACKETS\
'","received_packets":"'$RECEIVEDPACKETS\
'","packets_sent":"'$PACKETSSENT\
'","packet_send_failures":"'$PACKETSENDFAILURES\
'","input_wakeups":"'$INPUTWAKEUPS\
'","useful_input_wakeups":"'$USEFULINPUTWAKEUPS\
'"},"error":"0","errorString":"","version":"'$VERSION'"}'
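The p11 branch above just slices whitespace-separated fields out of the `ntpq -c rv` jitter line. A minimal offline sketch of that step, fed a canned line instead of a live daemon (the sample values and the leading tc/mintc fields are assumptions about the output layout, not real ntpq output):

```shell
# Parse a canned "ntpq -c rv" jitter line the way the p11 branch does.
SAMPLE='tc=10, mintc=3, offset=0.382, frequency=-4.860, sys_jitter=0.215, clk_jitter=0.092, clk_wander=0.004'
# strip letters, '=', ',' and '_' so only the numbers remain
NTPQ_RAW=$(echo "$SAMPLE" | grep jitter | sed 's/[[:alpha:]=,_]/ /g')
OFFSET=$(echo $NTPQ_RAW | awk -F ' ' '{print $3}')
SYS_JITTER=$(echo $NTPQ_RAW | awk -F ' ' '{print $5}')
echo "offset=$OFFSET sys_jitter=$SYS_JITTER"
```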


@@ -0,0 +1,73 @@
#!/usr/bin/env bash
################################################################
# copy this script to /etc/snmp/ and make it executable: #
# chmod +x /etc/snmp/os-updates.sh #
# ------------------------------------------------------------ #
# edit your snmpd.conf and include: #
# extend osupdate /etc/snmp/os-updates.sh #
#--------------------------------------------------------------#
# restart snmpd and activate the app for desired host #
#--------------------------------------------------------------#
# please make sure you have the path/binaries below #
################################################################
BIN_WC='/usr/bin/wc'
BIN_GREP='/bin/grep'
CMD_GREP='-c'
CMD_WC='-l'
BIN_ZYPPER='/usr/bin/zypper'
CMD_ZYPPER='-q lu'
BIN_YUM='/usr/bin/yum'
CMD_YUM='-q check-update'
BIN_DNF='/usr/bin/dnf'
CMD_DNF='-q check-update'
BIN_APT='/usr/bin/apt-get'
CMD_APT='-qq -s upgrade'
BIN_PACMAN='/usr/bin/pacman'
CMD_PACMAN='-Sup'
################################################################
# Don't change anything unless you know what you are doing #
################################################################
if [ -f $BIN_ZYPPER ]; then
# OpenSUSE
UPDATES=`$BIN_ZYPPER $CMD_ZYPPER | $BIN_WC $CMD_WC`
if [ $UPDATES -ge 2 ]; then
echo $(($UPDATES-2));
else
echo "0";
fi
elif [ -f $BIN_DNF ]; then
# Fedora
UPDATES=`$BIN_DNF $CMD_DNF | $BIN_WC $CMD_WC`
if [ $UPDATES -ge 1 ]; then
echo $(($UPDATES-1));
else
echo "0";
fi
elif [ -f $BIN_PACMAN ]; then
# Arch
UPDATES=`$BIN_PACMAN $CMD_PACMAN | $BIN_WC $CMD_WC`
if [ $UPDATES -ge 1 ]; then
echo $(($UPDATES-1));
else
echo "0";
fi
elif [ -f $BIN_YUM ]; then
# CentOS / Redhat
UPDATES=`$BIN_YUM $CMD_YUM | $BIN_WC $CMD_WC`
if [ $UPDATES -ge 1 ]; then
echo $(($UPDATES-1));
else
echo "0";
fi
elif [ -f $BIN_APT ]; then
# Debian / Devuan / Ubuntu
UPDATES=`$BIN_APT $CMD_APT | $BIN_GREP $CMD_GREP 'Inst'`
if [ $UPDATES -ge 1 ]; then
echo $UPDATES;
else
echo "0";
fi
else
echo "0";
fi
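The Debian/Ubuntu branch above counts `Inst` lines in a simulated upgrade run. The same counting step can be checked offline against canned `apt-get -qq -s upgrade` output (the package names here are invented):

```shell
# Count pending upgrades the way the apt branch above does, using
# canned simulated-upgrade output instead of a live system.
SIMULATED='Inst libfoo [1.0-1] (1.0-2 Debian:stable [amd64])
Conf libfoo (1.0-2 Debian:stable [amd64])
Inst bar [2.1-1] (2.2-1 Debian:stable [amd64])'
UPDATES=$(echo "$SIMULATED" | grep -c 'Inst')
echo "$UPDATES"
```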


@@ -0,0 +1,13 @@
#!/bin/bash
#Written by Valec 2006. Steal and share.
#Get postfix queue lengths
#extend mailq /opt/observer/scripts/getmailq.sh
QUEUES="incoming active deferred hold"
for i in $QUEUES; do
COUNT=$(qshape "$i" | grep TOTAL | awk '{print $2}')
printf '%s\n' "$COUNT"
done
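The extraction inside the loop above boils down to grabbing the second column of qshape's TOTAL row. A sketch against a canned two-line qshape summary (real qshape output has more age-bucket columns than shown here):

```shell
# Pull the queue total out of canned qshape output, as the loop above does.
SAMPLE='                 T  5 10 20 40 80
          TOTAL 42  0  1  3  8 30'
COUNT=$(echo "$SAMPLE" | grep TOTAL | awk '{print $2}')
printf '%s\n' "$COUNT"
```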


@@ -0,0 +1,548 @@
#!/usr/bin/env perl
# add this to your snmpd.conf file as below
# extend postfixdetailed /etc/snmp/postfixdetailed
# The cache file to use.
my $cache='/var/cache/postfixdetailed';
# the location of pflogsumm
my $pflogsumm='/usr/bin/env pflogsumm';
#totals
# 847 received = received
# 852 delivered = delivered
# 0 forwarded = forwarded
# 3 deferred (67 deferrals)= deferred
# 0 bounced = bounced
# 593 rejected (41%) = rejected
# 0 reject warnings = rejectw
# 0 held = held
# 0 discarded (0%) = discarded
# 16899k bytes received = bytesr
# 18009k bytes delivered = bytesd
# 415 senders = senders
# 266 sending hosts/domains = sendinghd
# 15 recipients = recipients
# 9 recipient hosts/domains = recipienthd
######message deferral detail
#Connection refused = deferralcr
#Host is down = deferralhid
########message reject detail
#Client host rejected = chr
#Helo command rejected: need fully-qualified hostname = hcrnfqh
#Sender address rejected: Domain not found = sardnf
#Sender address rejected: not owned by user = sarnobu
#blocked using = bu
#Recipient address rejected: User unknown = raruu
#Helo command rejected: Invalid name = hcrin
#Sender address rejected: need fully-qualified address = sarnfqa
#Recipient address rejected: Domain not found = rardnf
#Recipient address rejected: need fully-qualified address = rarnfqa
#Improper use of SMTP command pipelining = iuscp
#Message size exceeds fixed limit = msefl
#Server configuration error = sce
#Server configuration problem = scp
#unknown reject reason = urr
my $old='';
#reads in the old data if it exists
if ( -f $cache ){
open(my $fh, "<", $cache) or die "Can't open '".$cache."'";
# if this is over 2048, something is most likely wrong
read($fh , $old , 2048);
close($fh);
}
my ( $received,
$delivered,
$forwarded,
$deferred,
$bounced,
$rejected,
$rejectw,
$held,
$discarded,
$bytesr,
$bytesd,
$senders,
$sendinghd,
$recipients,
$recipienthd,
$deferralcr,
$deferralhid,
$chr,
$hcrnfqh,
$sardnf,
$sarnobu,
$bu,
$raruu,
$hcrin,
$sarnfqa,
$rardnf,
$rarnfqa,
$iuscp,
$sce,
$scp,
$urr,
$msefl) = split ( /\n/, $old );
if ( ! defined( $received ) ){ $received=0; }
if ( ! defined( $delivered ) ){ $delivered=0; }
if ( ! defined( $forwarded ) ){ $forwarded=0; }
if ( ! defined( $deferred ) ){ $deferred=0; }
if ( ! defined( $bounced ) ){ $bounced=0; }
if ( ! defined( $rejected ) ){ $rejected=0; }
if ( ! defined( $rejectw ) ){ $rejectw=0; }
if ( ! defined( $held ) ){ $held=0; }
if ( ! defined( $discarded ) ){ $discarded=0; }
if ( ! defined( $bytesr ) ){ $bytesr=0; }
if ( ! defined( $bytesd ) ){ $bytesd=0; }
if ( ! defined( $senders ) ){ $senders=0; }
if ( ! defined( $sendinghd ) ){ $sendinghd=0; }
if ( ! defined( $recipients ) ){ $recipients=0; }
if ( ! defined( $recipienthd ) ){ $recipienthd=0; }
if ( ! defined( $deferralcr ) ){ $deferralcr=0; }
if ( ! defined( $deferralhid ) ){ $deferralhid=0; }
if ( ! defined( $chr ) ){ $chr=0; }
if ( ! defined( $hcrnfqh ) ){ $hcrnfqh=0; }
if ( ! defined( $sardnf ) ){ $sardnf=0; }
if ( ! defined( $sarnobu ) ){ $sarnobu=0; }
if ( ! defined( $bu ) ){ $bu=0; }
if ( ! defined( $raruu ) ){ $raruu=0; }
if ( ! defined( $hcrin ) ){ $hcrin=0; }
if ( ! defined( $sarnfqa ) ){ $sarnfqa=0; }
if ( ! defined( $rardnf ) ){ $rardnf=0; }
if ( ! defined( $rarnfqa ) ){ $rarnfqa=0; }
if ( ! defined( $iuscp ) ){ $iuscp=0; }
if ( ! defined( $msefl ) ){ $msefl=0; }
if ( ! defined( $sce ) ){ $sce=0; }
if ( ! defined( $scp ) ){ $scp=0; }
if ( ! defined( $urr ) ){ $urr=0; }
#init current variables
my $receivedC=0;
my $deliveredC=0;
my $forwardedC=0;
my $deferredC=0;
my $bouncedC=0;
my $rejectedC=0;
my $rejectwC=0;
my $heldC=0;
my $discardedC=0;
my $bytesrC=0;
my $bytesdC=0;
my $sendersC=0;
my $sendinghdC=0;
my $recipientsC=0;
my $recipienthdC=0;
my $deferralcrC=0;
my $deferralhidC=0;
my $hcrnfqhC=0;
my $sardnfC=0;
my $sarnobuC=0;
my $buC=0;
my $raruuC=0;
my $hcrinC=0;
my $sarnfqaC=0;
my $rardnfC=0;
my $rarnfqaC=0;
my $iuscpC=0;
my $mseflC=0;
my $sceC=0;
my $scpC=0;
my $urrC=0;
sub newValue{
my $old=$_[0];
my $new=$_[1];
#if new is undefined, just default to 0... this should never happen
if ( !defined( $new ) ){
warn('New not defined');
return 0;
}
#sets it to 0 if old is not defined
if ( !defined( $old ) ){
warn('Old not defined');
$old=0;
}
#make sure they are both numeric and if not set to zero
if( $old !~ /^[0123456789]*$/ ){
warn('Old not numeric');
$old=0;
}
if( $new !~ /^[0123456789]*$/ ){
warn('New not numeric');
$new=0;
}
#log rotation happened
if ( $old > $new ){
return $new;
};
return $new - $old;
}
my $output=`$pflogsumm /var/log/maillog`;
#holds RBL values till the end when it is compared to the old one
my $buNew=0;
#holds client host rejected values till the end when it is compared to the old one
my $chrNew=0;
# holds recipient address rejected values till the end when it is compared to the old one
my $raruuNew=0;
#holds the current values for checking later
my $current='';
my @outputA=split( /\n/, $output );
my $int=0;
while ( defined( $outputA[$int] ) ){
my $line=$outputA[$int];
$line=~s/^ *//;
$line=~s/ +/ /g;
$line=~s/\)$//;
my $handled=0;
#received line
if ( ( $line =~ /[0123456789] received$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$receivedC=$line;
$received=newValue( $received, $line );
$handled=1;
}
#delivered line
if ( ( $line =~ /[0123456789] delivered$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$deliveredC=$line;
$delivered=newValue( $delivered, $line );
$handled=1;
}
#forward line
if ( ( $line =~ /[0123456789] forwarded$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$forwardedC=$line;
$forwarded=newValue( $forwarded, $line );
$handled=1;
}
#deferred line
if ( ( $line =~ /[0123456789] deferred \(/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$deferredC=$line;
$deferred=newValue( $deferred, $line );
$handled=1;
}
#bounced line
if ( ( $line =~ /[0123456789] bounced$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$bouncedC=$line;
$bounced=newValue( $bounced, $line );
$handled=1;
}
#rejected line
if ( ( $line =~ /[0123456789] rejected \(/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$rejectedC=$line;
$rejected=newValue( $rejected, $line );
$handled=1;
}
#reject warning line
if ( ( $line =~ /[0123456789] reject warnings/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$rejectwC=$line;
$rejectw=newValue( $rejectw, $line );
$handled=1;
}
#held line
if ( ( $line =~ /[0123456789] held$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$heldC=$line;
$held=newValue( $held, $line );
$handled=1;
}
#discarded line
if ( ( $line =~ /[0123456789] discarded \(/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$discardedC=$line;
$discarded=newValue( $discarded, $line );
$handled=1;
}
#bytes received line
if ( ( $line =~ /[0123456789kM] bytes received$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$line=~s/k/000/;
$line=~s/M/000000/;
$bytesrC=$line;
$bytesr=newValue( $bytesr, $line );
$handled=1;
}
#bytes delivered line
if ( ( $line =~ /[0123456789kM] bytes delivered$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$line=~s/k/000/;
$line=~s/M/000000/;
$bytesdC=$line;
$bytesd=newValue( $bytesd, $line );
$handled=1;
}
#senders line
if ( ( $line =~ /[0123456789] senders$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$sendersC=$line;
$senders=newValue( $senders, $line );
$handled=1;
}
#sending hosts/domains line
if ( ( $line =~ /[0123456789] sending hosts\/domains$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$sendinghdC=$line;
$sendinghd=newValue( $sendinghd, $line );
$handled=1;
}
#recipients line
if ( ( $line =~ /[0123456789] recipients$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$recipientsC=$line;
$recipients=newValue( $recipients, $line );
$handled=1;
}
#recipient hosts/domains line
if ( ( $line =~ /[0123456789] recipient hosts\/domains$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$recipienthdC=$line;
$recipienthd=newValue( $recipienthd, $line );
$handled=1;
}
# deferrals connections refused
if ( ( $line =~ /[0123456789] 25\: Connection refused$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$deferralcrC=$line;
$deferralcr=newValue( $deferralcr, $line );
$handled=1;
}
# deferrals Host is down
if ( ( $line =~ /Host is down$/ ) && ( ! $handled ) ){
$line=~s/ .*//;
$deferralhidC=$line;
$deferralhid=newValue( $deferralhid, $line );
$handled=1;
}
# Client host rejected
if ( ( $line =~ /Client host rejected/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$chrNew=$chrNew + $line;
$handled=1;
}
#Helo command rejected: need fully-qualified hostname
if ( ( $line =~ /Helo command rejected\: need fully\-qualified hostname/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$hcrnfqhC=$line;
$hcrnfqh=newValue( $hcrnfqh, $line );
$handled=1;
}
#Sender address rejected: Domain not found
if ( ( $line =~ /Sender address rejected\: Domain not found/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$sardnfC=$line;
$sardnf=newValue( $sardnf, $line );
$handled=1;
}
#Sender address rejected: not owned by user
if ( ( $line =~ /Sender address rejected\: not owned by user/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$sarnobuC=$line;
$sarnobu=newValue( $sarnobu, $line );
$handled=1;
}
#blocked using
# These lines are RBLs so there will be more than one.
# Use $buNew to add them all up.
if ( ( $line =~ /blocked using/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$buNew=$buNew + $line;
$handled=1;
}
#Recipient address rejected: User unknown
if ( ( $line =~ /Recipient address rejected\: User unknown/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$raruuNew=$raruuNew + $line;
$handled=1;
}
#Helo command rejected: Invalid name
if ( ( $line =~ /Helo command rejected\: Invalid name/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$hcrinC=$line;
$hcrin=newValue( $hcrin, $line );
}
#Sender address rejected: need fully-qualified address
if ( ( $line =~ /Sender address rejected\: need fully-qualified address/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$sarnfqaC=$line;
$sarnfqa=newValue( $sarnfqa, $line );
}
#Recipient address rejected: Domain not found
if ( ( $line =~ /Recipient address rejected\: Domain not found/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$rardnfC=$line;
$rardnf=newValue( $rardnf, $line );
}
#Improper use of SMTP command pipelining
if ( ( $line =~ /Improper use of SMTP command pipelining/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$iuscpC=$line;
$iuscp=newValue( $iuscp, $line );
}
#Message size exceeds fixed limit
if ( ( $line =~ /Message size exceeds fixed limit/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$mseflC=$line;
$msefl=newValue( $msefl, $line );
}
#Server configuration error
if ( ( $line =~ /Server configuration error/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$sceC=$line;
$sce=newValue( $sce, $line );
}
#Server configuration problem
if ( ( $line =~ /Server configuration problem/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$scpC=$line;
$scp=newValue( $scp, $line );
}
#unknown reject reason
if ( ( $line =~ /unknown reject reason/ ) && ( ! $handled ) ){
$line=~s/.*\: //g;
$urrC=$line;
$urr=newValue( $urr, $line );
}
$int++;
}
# final client host rejected total
$chr=newValue( $chr, $chrNew );
# final RBL total
$bu=newValue( $bu, $buNew );
# final recipient address rejected total
$raruu=newValue( $raruu, $raruuNew );
my $data=$received."\n".
$delivered."\n".
$forwarded."\n".
$deferred."\n".
$bounced."\n".
$rejected."\n".
$rejectw."\n".
$held."\n".
$discarded."\n".
$bytesr."\n".
$bytesd."\n".
$senders."\n".
$sendinghd."\n".
$recipients."\n".
$recipienthd."\n".
$deferralcr."\n".
$deferralhid."\n".
$chr."\n".
$hcrnfqh."\n".
$sardnf."\n".
$sarnobu."\n".
$bu."\n".
$raruu."\n".
$hcrin."\n".
$sarnfqa."\n".
$rardnf."\n".
$rarnfqa."\n".
$iuscp."\n".
$sce."\n".
$scp."\n".
$urr."\n".
$msefl."\n";
print $data;
$current=$receivedC."\n".
$deliveredC."\n".
$forwardedC."\n".
$deferredC."\n".
$bouncedC."\n".
$rejectedC."\n".
$rejectwC."\n".
$heldC."\n".
$discardedC."\n".
$bytesrC."\n".
$bytesdC."\n".
$sendersC."\n".
$sendinghdC."\n".
$recipientsC."\n".
$recipienthdC."\n".
$deferralcrC."\n".
$deferralhidC."\n".
$chrNew."\n".
$hcrnfqhC."\n".
$sardnfC."\n".
$sarnobuC."\n".
$buNew."\n".
$raruuNew."\n".
$hcrinC."\n".
$sarnfqaC."\n".
$rardnfC."\n".
$rarnfqaC."\n".
$iuscpC."\n".
$sceC."\n".
$scpC."\n".
$urrC."\n".
$mseflC."\n";
open(my $fh, ">", $cache) or die "Can't open '".$cache."'";
print $fh $current;
close($fh);


@@ -0,0 +1,46 @@
#!/bin/bash
#######################################
# please read DOCS to successfully get #
# raspberry sensors into your host #
#######################################
picmd='/usr/bin/vcgencmd'
pised='/bin/sed'
getTemp='measure_temp'
getVoltsCore='measure_volts core'
getVoltsRamC='measure_volts sdram_c'
getVoltsRamI='measure_volts sdram_i'
getVoltsRamP='measure_volts sdram_p'
getFreqArm='measure_clock arm'
getFreqCore='measure_clock core'
getStatusH264='codec_enabled H264'
getStatusMPG2='codec_enabled MPG2'
getStatusWVC1='codec_enabled WVC1'
getStatusMPG4='codec_enabled MPG4'
getStatusMJPG='codec_enabled MJPG'
getStatusWMV9='codec_enabled WMV9'
$picmd $getTemp | $pised 's|[^0-9.]||g'
$picmd "$getVoltsCore" | $pised 's|[^0-9.]||g'
$picmd "$getVoltsRamC" | $pised 's|[^0-9.]||g'
$picmd "$getVoltsRamI" | $pised 's|[^0-9.]||g'
$picmd "$getVoltsRamP" | $pised 's|[^0-9.]||g'
$picmd "$getFreqArm" | $pised 's/frequency([0-9]*)=//g'
$picmd "$getFreqCore" | $pised 's/frequency([0-9]*)=//g'
$picmd "$getStatusH264" | $pised 's/H264=//g'
$picmd "$getStatusMPG2" | $pised 's/MPG2=//g'
$picmd "$getStatusWVC1" | $pised 's/WVC1=//g'
$picmd "$getStatusMPG4" | $pised 's/MPG4=//g'
$picmd "$getStatusMJPG" | $pised 's/MJPG=//g'
$picmd "$getStatusWMV9" | $pised 's/WMV9=//g'
$picmd "$getStatusH264" | $pised 's/enabled/2/g'
$picmd "$getStatusMPG2" | $pised 's/enabled/2/g'
$picmd "$getStatusWVC1" | $pised 's/enabled/2/g'
$picmd "$getStatusMPG4" | $pised 's/enabled/2/g'
$picmd "$getStatusMJPG" | $pised 's/enabled/2/g'
$picmd "$getStatusWMV9" | $pised 's/enabled/2/g'
$picmd "$getStatusH264" | $pised 's/disabled/1/g'
$picmd "$getStatusMPG2" | $pised 's/disabled/1/g'
$picmd "$getStatusWVC1" | $pised 's/disabled/1/g'
$picmd "$getStatusMPG4" | $pised 's/disabled/1/g'
$picmd "$getStatusMJPG" | $pised 's/disabled/1/g'
$picmd "$getStatusWMV9" | $pised 's/disabled/1/g'
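Each line above pipes one `vcgencmd` reading through sed to leave a bare number for SNMP. The filters can be checked offline on canned strings (the sample strings follow the documented vcgencmd output formats; no Pi is needed):

```shell
# What the sed filters above produce, on canned vcgencmd-style output.
TEMP=$(echo "temp=48.3'C" | sed 's|[^0-9.]||g')                    # temperature
VOLTS=$(echo "volt=1.2000V" | sed 's|[^0-9.]||g')                  # voltage
FREQ=$(echo "frequency(48)=700000000" | sed 's/frequency([0-9]*)=//g')  # clock
H264=$(echo "H264=enabled" | sed 's/H264=//g' | sed 's/enabled/2/g')    # codec state
echo "$TEMP $VOLTS $FREQ $H264"
```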


@@ -0,0 +1,929 @@
#!/usr/bin/env perl
#Copyright (c) 2024, Zane C. Bowers-Hadley
#All rights reserved.
#
#Redistribution and use in source and binary forms, with or without modification,
#are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
#THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
#ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
#WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
#IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
#INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
#BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
#DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
#LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
#OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
#THE POSSIBILITY OF SUCH DAMAGE.
=for comment
Add this to snmpd.conf like below.
extend smart /etc/snmp/smart
Then add to root's crontab, if you have more than a few disks.
*/5 * * * * /etc/snmp/smart -u
You will also need to create the config file, which defaults to the same path as the script,
but with .config appended. So if the script is located at /etc/snmp/smart, the config file
will be /etc/snmp/smart.config. Alternatively you can specify a config via -c.
Anything starting with a # is a comment. The format for variables is $variable=$value. Empty
lines are ignored. Spaces and tabs at either the start or end of a line are ignored. Any
line without a matched variable or # is treated as a disk.
#This is a comment
cache=/var/cache/smart
smartctl=/usr/local/sbin/smartctl
useSN=0
ada0
da5 /dev/da5 -d sat
twl0,0 /dev/twl0 -d 3ware,0
twl0,1 /dev/twl0 -d 3ware,1
twl0,2 /dev/twl0 -d 3ware,2
The variables are as below.
cache = The path to the cache file to use. Default: /var/cache/smart
smartctl = The path to use for smartctl. Default: /usr/bin/env smartctl
useSN = If set to 1, it will use the disk's SN for reporting instead of the device name.
1 is the default. 0 will use the device name.
A disk line can be as simple as just a disk name under /dev/. In the config above,
the line "ada0" would resolve to "/dev/ada0" and would be called with no special argument. If
a line has a space in it, everything before the space is treated as the disk name and is what is
used for reporting, and everything after that is used as the argument to be passed to smartctl.
If you want to guess at the configuration, call it with -g and it will print out what it thinks
it should be.
Switches:
-c <config> The config file to use.
-u Update
-p Pretty print the JSON.
-Z GZip+Base64 compress the results.
-g Guess at the config and print it to STDOUT
-C Enable manual checking for guess and cciss.
-S Set useSN to 0 when using -g
-t <test> Run the specified smart self test on all the devices.
-U When calling cciss_vol_status, call it with -u.
-G <modes> Guess modes to use. This is a comma separated list.
Default :: scan-open,cciss_vol_status
Guess Modes:
- scan :: Use "--scan" with smartctl. "scan-open" will take precedence.
- scan-open :: Call smartctl with "--scan-open".
- cciss_vol_status :: FreeBSD/Linux specific. If it sees /dev/sg0 (on Linux) or
/dev/ciss0 (on FreeBSD) it will attempt to find drives via cciss_vol_status,
and then optionally check for disks via smartctl if -C is given. It should be noted
though that -C will not find drives that are currently missing/failed. If -U is given,
cciss_vol_status will be called with -u.
=cut
##
## You should not need to touch anything below here.
##
use warnings;
use strict;
use Getopt::Std;
use JSON;
use MIME::Base64;
use IO::Compress::Gzip qw(gzip $GzipError);
my $cache = '/var/cache/smart';
my $smartctl = '/usr/bin/env smartctl';
my @disks;
my $useSN = 1;
$Getopt::Std::STANDARD_HELP_VERSION = 1;
sub main::VERSION_MESSAGE {
print "SMART SNMP extend 0.3.2\n";
}
sub main::HELP_MESSAGE {
&VERSION_MESSAGE;
print "\n" . "-u Update '" . $cache . "'\n" . '-g Guess at the config and print it to STDOUT
-c <config> The config file to use.
-p Pretty print the JSON.
-Z GZip+Base64 compress the results.
-C Enable manual checking for guess and cciss.
-S Set useSN to 0 when using -g
-t <test> Run the specified smart self test on all the devices.
-U When calling cciss_vol_status, call it with -u.
-G <modes> Guess modes to use. This is a comma separated list.
Default :: scan-open,cciss_vol_status
Scan Modes:
- scan :: Use "--scan" with smartctl. "scan-open" will take precedence.
- scan-open :: Call smartctl with "--scan-open".
- cciss_vol_status :: FreeBSD/Linux specific. If it sees /dev/sg0 (on Linux) or
/dev/ciss0 (on FreeBSD) it will attempt to find drives via cciss_vol_status,
and then optionally check for disks via smartctl if -C is given. It should be noted
though that -C will not find drives that are currently missing/failed. If -U is given,
cciss_vol_status will be called with -u.
';
} ## end sub main::HELP_MESSAGE
#gets the options
my %opts = ();
getopts( 'ugc:pZhvCSGt:U', \%opts );
if ( $opts{h} ) {
&HELP_MESSAGE;
exit;
}
if ( $opts{v} ) {
&VERSION_MESSAGE;
exit;
}
#
# figure out what scan modes to use if -g specified
#
my $scan_modes = {
'scan-open' => 0,
'scan' => 0,
'cciss_vol_status' => 0,
};
if ( $opts{g} ) {
if ( !defined( $opts{G} ) ) {
$opts{G} = 'scan-open,cciss_vol_status';
}
$opts{G} =~ s/[\ \t]//g;
my @scan_modes_split = split( /,/, $opts{G} );
foreach my $mode (@scan_modes_split) {
if ( !defined $scan_modes->{$mode} ) {
die( '"' . $mode . '" is not a recognized scan mode' );
}
$scan_modes->{$mode} = 1;
}
} ## end if ( $opts{g} )
# configure JSON for later usage
# only need to do this if actually running as in -g is not specified
my $json;
if ( !$opts{g} ) {
$json = JSON->new->allow_nonref->canonical(1);
if ( $opts{p} ) {
$json->pretty;
}
}
#
#
# guess if asked
#
#
if ( defined( $opts{g} ) ) {
#get what path to use for smartctl
$smartctl = `which smartctl`;
chomp($smartctl);
if ( $? != 0 ) {
warn("'which smartctl' failed with an exit code of $?");
exit 1;
}
#try to touch the default cache location and warn if it can't be done
system( 'touch ' . $cache . '>/dev/null' );
if ( $? != 0 ) {
$cache = '#Could not touch ' . $cache . ". You will need to manually set it\n" . "cache=?\n";
} else {
system( 'rm -f ' . $cache . '>/dev/null' );
$cache = 'cache=' . $cache . "\n";
}
my $drive_lines = '';
#
#
# scan-open and scan guess mode handling
#
#
if ( $scan_modes->{'scan-open'} || $scan_modes->{'scan'} ) {
# used for checking if a disk has been found more than once
my %found_disks_names;
my @argumentsA;
# use scan-open if it is set, overriding scan if it is also set
my $mode = 'scan';
if ( $scan_modes->{'scan-open'} ) {
$mode = 'scan-open';
}
#have smartctl scan and see if it finds anything not already found
my $scan_output = `$smartctl --$mode`;
my @scan_outputA = split( /\n/, $scan_output );
# remove non-SMART devices sometimes returned
@scan_outputA = grep( !/ses[0-9]/, @scan_outputA ); # not a disk, but may or may not have SMART attributes
@scan_outputA = grep( !/pass[0-9]/, @scan_outputA ); # very likely a duplicate and a disk under another name
@scan_outputA = grep( !/cd[0-9]/, @scan_outputA ); # CD drive
if ( $^O eq 'freebsd' ) {
@scan_outputA = grep( !/sa[0-9]/, @scan_outputA ); # tape drive
@scan_outputA = grep( !/ctl[0-9]/, @scan_outputA ); # CAM target layer
} elsif ( $^O eq 'linux' ) {
@scan_outputA = grep( !/st[0-9]/, @scan_outputA ); # SCSI tape drive
@scan_outputA = grep( !/ht[0-9]/, @scan_outputA ); # ATA tape drive
}
# make the first pass, figuring out what all we have and trimming comments
foreach my $arguments (@scan_outputA) {
my $name = $arguments;
$arguments =~ s/ \#.*//; # trim the comment out of the argument
$name =~ s/ .*//;
$name =~ s/\/dev\///;
if ( defined( $found_disks_names{$name} ) ) {
$found_disks_names{$name}++;
} else {
$found_disks_names{$name} = 0;
}
push( @argumentsA, $arguments );
} ## end foreach my $arguments (@scan_outputA)
# second pass, putting the lines together
my %current_disk;
foreach my $arguments (@argumentsA) {
my $not_virt = 1;
# check to see if we have a virtual device
my @virt_check = split( /\n/, `$smartctl -i $arguments 2> /dev/null` );
foreach my $virt_check_line (@virt_check) {
if ( $virt_check_line =~ /(?i)Product\:.*LOGICAL VOLUME/ ) {
$not_virt = 0;
}
}
my $name = $arguments;
$name =~ s/ .*//;
$name =~ s/\/dev\///;
# only add it if not a virtual RAID drive
# HP RAID virtual disks will show up with very basic but totally useless SMART data
if ($not_virt) {
if ( $found_disks_names{$name} == 0 ) {
# If no other devices, just name it after the base device.
$drive_lines = $drive_lines . $name . " " . $arguments . "\n";
} else {
# if more than one, start at zero and increment, appending a comma and number to the base device name
if ( defined( $current_disk{$name} ) ) {
$current_disk{$name}++;
} else {
$current_disk{$name} = 0;
}
$drive_lines = $drive_lines . $name . "," . $current_disk{$name} . " " . $arguments . "\n";
}
} ## end if ($not_virt)
} ## end foreach my $arguments (@argumentsA)
} ## end if ( $scan_modes->{'scan-open'} || $scan_modes...)
#
#
# scan mode handler for cciss_vol_status
# /dev/sg* devices for cciss on Linux
# /dev/ccis* devices for cciss on FreeBSD
#
#
if ( $scan_modes->{'cciss_vol_status'} && ( $^O eq 'linux' || $^O eq 'freebsd' ) ) {
my $cciss;
if ( $^O eq 'freebsd' ) {
$cciss = 'ciss';
} elsif ( $^O eq 'linux' ) {
$cciss = 'sg';
}
my $uarg = '';
if ( $opts{U} ) {
$uarg = '-u';
}
# generate the initial device path that will be checked
my $sg_int = 0;
my $device = '/dev/' . $cciss . $sg_int;
my $sg_process = 1;
if ( -e $device ) {
my $output = `which cciss_vol_status 2> /dev/null`;
if ( $? != 0 && !$opts{C} ) {
$sg_process = 0;
$drive_lines
= $drive_lines
. "# -C not given, but "
. $device
. " exists and cciss_vol_status is not present\n"
. "# in path or 'cciss_vol_status -V "
. $device
. "' is failing\n";
} ## end if ( $? != 0 && !$opts{C} )
} ## end if ( -e $device )
my $seen_lines = {};
my $ignore_lines = {};
while ( -e $device && $sg_process ) {
my $output = `cciss_vol_status -V $uarg $device 2> /dev/null`;
if ( $? != 0 && $output eq '' && !$opts{C} ) {
# just empty here as we just want to skip it if it fails and there is no C
# warning is above
} elsif ( $? != 0 && $output eq '' && $opts{C} ) {
my $drive_count = 0;
my $continue = 1;
while ($continue) {
my $output = `$smartctl -i $device -d cciss,$drive_count 2> /dev/null`;
if ( $? != 0 ) {
$continue = 0;
} else {
my $add_it = 0;
my $id;
while ( $output =~ /(?i)Serial Number:(.*)/g ) {
$id = $1;
$id =~ s/^\s+|\s+$//g;
}
if ( defined($id) && !defined( $seen_lines->{$id} ) ) {
$add_it = 1;
$seen_lines->{$id} = 1;
}
if ( $continue && $add_it ) {
$drive_lines
= $drive_lines
. $cciss . '0-'
. $drive_count . ' '
. $device
. ' -d cciss,'
. $drive_count . "\n";
}
} ## end else [ if ( $? != 0 ) ]
$drive_count++;
} ## end while ($continue)
} else {
my $drive_count = 0;
# count the connector lines; this makes sure failed drives are counted as well
my $seen_conectors = {};
while ( $output =~ /(connector +\d+[IA]\ +box +\d+\ +bay +\d+.*)/g ) {
my $cciss_drive_line = $1;
my $connector = $cciss_drive_line;
$connector =~ s/(.*\ bay +\d+).*/$1/;
if ( !defined( $seen_lines->{$cciss_drive_line} )
&& !defined( $seen_conectors->{$connector} )
&& !defined( $ignore_lines->{$cciss_drive_line} ) )
{
$seen_lines->{$cciss_drive_line} = 1;
$seen_conectors->{$connector} = 1;
$drive_count++;
} else {
# going to be a connector we've already seen
# which will happen when it is processing replacement drives
# so save this as a device to ignore
$ignore_lines->{$cciss_drive_line} = 1;
}
} ## end while ( $output =~ /(connector +\d+[IA]\ +box +\d+\ +bay +\d+.*)/g)
my $drive_int = 0;
while ( $drive_int < $drive_count ) {
$drive_lines
= $drive_lines
. $cciss
. $sg_int . '-'
. $drive_int . ' '
. $device
. ' -d cciss,'
. $drive_int . "\n";
$drive_int++;
} ## end while ( $drive_int < $drive_count )
} ## end else [ if ( $? != 0 && $output eq '' && !$opts{C})]
$sg_int++;
$device = '/dev/' . $cciss . $sg_int;
} ## end while ( -e $device && $sg_process )
} ## end if ( $scan_modes->{'cciss_vol_status'} && ...)
my $useSN = 1;
if ( $opts{S} ) {
$useSN = 0;
}
print '# scan_modes='
. $opts{G}
. "\nuseSN="
. $useSN . "\n"
. 'smartctl='
. $smartctl . "\n"
. $cache
. $drive_lines;
exit 0;
} ## end if ( defined( $opts{g} ) )
#get which config file to use
my $config = $0 . '.config';
if ( defined( $opts{c} ) ) {
$config = $opts{c};
}
#reads the config file; dies if it can't be opened
my $config_file = '';
open( my $readfh, "<", $config ) or die "Can't open '" . $config . "'";
read( $readfh, $config_file, 1000000 );
close($readfh);
#
#
# parse the config file and remove comments and empty lines
#
#
my @configA = split( /\n/, $config_file );
@configA = grep( !/^$/, @configA );
@configA = grep( !/^\#/, @configA );
@configA = grep( !/^[\s\t]*$/, @configA );
my $configA_int = 0;
while ( defined( $configA[$configA_int] ) ) {
my $line = $configA[$configA_int];
chomp($line);
$line =~ s/^[\t\s]+//;
$line =~ s/[\t\s]+$//;
my ( $var, $val ) = split( /=/, $line, 2 );
my $matched;
if ( $var eq 'cache' ) {
$cache = $val;
$matched = 1;
}
if ( $var eq 'smartctl' ) {
$smartctl = $val;
$matched = 1;
}
if ( $var eq 'useSN' ) {
$useSN = $val;
$matched = 1;
}
if ( !defined($val) ) {
push( @disks, $line );
}
$configA_int++;
} ## end while ( defined( $configA[$configA_int] ) )
#
#
# run the specified self test on all disks if asked
#
#
if ( defined( $opts{t} ) ) {
# make sure we have something that at least appears sane for the test name
my $valid_tests = {
'offline' => 1,
'short' => 1,
'long' => 1,
'conveyance' => 1,
'afterselect,on' => 1,
};
if ( !defined( $valid_tests->{ $opts{t} } ) && $opts{t} !~ /select,(\d+[\-\+]\d+|next|next\+\d+|redo\+\d+)/ ) {
print '"' . $opts{t} . "\" does not appear to be a valid test\n";
exit 1;
}
print "Running the SMART $opts{t} on all devices in the config...\n\n";
foreach my $line (@disks) {
my $disk;
my $name;
if ( $line =~ /\ / ) {
( $name, $disk ) = split( /\ /, $line, 2 );
} else {
$disk = $line;
$name = $line;
}
if ( $disk !~ /\// ) {
$disk = '/dev/' . $disk;
}
print "\n------------------------------------------------------------------\nDoing "
. $smartctl . ' -t '
. $opts{t} . ' '
. $disk
. " ...\n\n";
print `$smartctl -t $opts{t} $disk` . "\n";
} ## end foreach my $line (@disks)
exit 0;
} ## end if ( defined( $opts{t} ) )
#if set to 1, no cache will be written and it will be printed instead
my $noWrite = 0;
#
#
# if no -u, it means we are being called from snmpd
#
#
if ( !defined( $opts{u} ) ) {
# if the cache file exists, print it, otherwise assume one is not being used
if ( -f $cache ) {
my $old = '';
open( my $readfh, "<", $cache ) or die "Can't open '" . $cache . "'";
read( $readfh, $old, 1000000 );
close($readfh);
print $old;
exit 0;
} else {
$opts{u} = 1;
$noWrite = 1;
}
} ## end if ( !defined( $opts{u} ) )
#
#
# Process each disk
#
#
my $to_return = {
data => { disks => {}, exit_nonzero => 0, unhealthy => 0, useSN => $useSN },
version => 1,
error => 0,
errorString => '',
};
foreach my $line (@disks) {
my $disk;
my $name;
if ( $line =~ /\ / ) {
( $name, $disk ) = split( /\ /, $line, 2 );
} else {
$disk = $line;
$name = $line;
}
if ( $disk !~ /\// ) {
$disk = '/dev/' . $disk;
}
my $output = `$smartctl -A $disk`;
my %IDs = (
'5' => 'null',
'10' => 'null',
'173' => 'null',
'177' => 'null',
'183' => 'null',
'184' => 'null',
'187' => 'null',
'188' => 'null',
'190' => 'null',
'194' => 'null',
'196' => 'null',
'197' => 'null',
'198' => 'null',
'199' => 'null',
'231' => 'null',
'232' => 'null',
'233' => 'null',
'9' => 'null',
'disk' => $disk,
'serial' => undef,
'selftest_log' => undef,
'health_pass' => 0,
max_temp => 'null',
exit => $?,
);
$IDs{'disk'} =~ s/^\/dev\///;
# if polling exited non-zero above, no reason running the rest of the checks
my $disk_id = $name;
if ( $IDs{exit} != 0 ) {
$to_return->{data}{exit_nonzero}++;
} else {
my @outputA;
if ( $output =~ /NVMe Log/ ) {
# we have an NVMe drive with annoyingly different output
my %mappings = (
'Temperature' => 194,
'Power Cycles' => 12,
'Power On Hours' => 9,
'Percentage Used' => 231,
);
foreach ( split( /\n/, $output ) ) {
if (/:/) {
my ( $key, $val ) = split(/:/);
$val =~ s/^\s+|\s+$|\D+//g;
if ( exists( $mappings{$key} ) ) {
if ( $mappings{$key} == 231 ) {
$IDs{ $mappings{$key} } = 100 - $val;
} else {
$IDs{ $mappings{$key} } = $val;
}
}
} ## end if (/:/)
} ## end foreach ( split( /\n/, $output ) )
} else {
@outputA = split( /\n/, $output );
my $outputAint = 0;
while ( defined( $outputA[$outputAint] ) ) {
my $line = $outputA[$outputAint];
$line =~ s/^ +//;
$line =~ s/ +/ /g;
if ( $line =~ /^[0123456789]+ / ) {
my @lineA = split( /\ /, $line, 10 );
my $raw = $lineA[9];
my $normalized = $lineA[3];
my $id = $lineA[0];
# Crucial SSD
# 202, Percent_Lifetime_Remain, same as 231, SSD Life Left
if ( $id == 202
&& $line =~ /Percent_Lifetime_Remain/ )
{
$IDs{231} = $raw;
}
# single int raw values
if ( ( $id == 5 )
|| ( $id == 10 )
|| ( $id == 173 )
|| ( $id == 183 )
|| ( $id == 184 )
|| ( $id == 187 )
|| ( $id == 196 )
|| ( $id == 197 )
|| ( $id == 198 )
|| ( $id == 199 ) )
{
my @rawA = split( /\ /, $raw );
$IDs{$id} = $rawA[0];
} ## end if ( ( $id == 5 ) || ( $id == 10 ) || ( $id...))
# single int normalized values
if ( ( $id == 177 )
|| ( $id == 230 )
|| ( $id == 231 )
|| ( $id == 232 )
|| ( $id == 233 ) )
{
# annoying non-standard disk
# WDC WDS500G2B0A
# 230 Media_Wearout_Indicator 0x0032 100 100 --- Old_age Always - 0x002e000a002e
# 232 Available_Reservd_Space 0x0033 100 100 004 Pre-fail Always - 100
# 233 NAND_GB_Written_TLC 0x0032 100 100 --- Old_age Always - 9816
if ( $id == 230
&& $line =~ /Media_Wearout_Indicator/ )
{
$IDs{233} = int($normalized);
} elsif ( $id == 232
&& $line =~ /Available_Reservd_Space/ )
{
$IDs{232} = int($normalized);
} else {
# only set 233 if it has not been set yet
# if it was set already then the above did it and we don't want
# to overwrite it
if ( $id == 233 && $IDs{233} eq "null" ) {
$IDs{$id} = int($normalized);
} elsif ( $id != 233 ) {
$IDs{$id} = int($normalized);
}
} ## end else [ if ( $id == 230 && $line =~ /Media_Wearout_Indicator/)]
} ## end if ( ( $id == 177 ) || ( $id == 230 ) || (...))
# 9, power on hours
if ( $id == 9 ) {
my @runtime = split( /[\ h]/, $raw );
$IDs{$id} = $runtime[0];
}
# 188, Command_Timeout
if ( $id == 188 ) {
my $total = 0;
my @rawA = split( /\ /, $raw );
my $rawAint = 0;
while ( defined( $rawA[$rawAint] ) ) {
$total = $total + $rawA[$rawAint];
$rawAint++;
}
$IDs{$id} = $total;
} ## end if ( $id == 188 )
# 190, airflow temp
# 194, temp
if ( ( $id == 190 )
|| ( $id == 194 ) )
{
my ($temp) = split( /\ /, $raw );
$IDs{$id} = $temp;
}
} ## end if ( $line =~ /^[0123456789]+ / )
# SAS Wrapping
# Section by Cameron Munroe (munroenet[at]gmail.com)
# Elements in Grown Defect List.
# Marking as 5 Reallocated_Sector_Ct
if ( $line =~ "Elements in grown defect list:" ) {
my @lineA = split( /\ /, $line, 10 );
my $raw = $lineA[5];
# Reallocated Sector Count ID
$IDs{5} = $raw;
}
# Current Drive Temperature
# Marking as 194 Temperature_Celsius
if ( $line =~ "Current Drive Temperature:" ) {
my @lineA = split( /\ /, $line, 10 );
my $raw = $lineA[3];
# Temperature C ID
$IDs{194} = $raw;
}
# End of SAS Wrapper
$outputAint++;
} ## end while ( defined( $outputA[$outputAint] ) )
} ## end else [ if ( $output =~ /NVMe Log/ ) ]
#get the selftest logs
$output = `$smartctl -l selftest $disk`;
@outputA = split( /\n/, $output );
my @completed = grep( /Completed/, @outputA );
$IDs{'completed'} = scalar @completed;
my @interrupted = grep( /Interrupted/, @outputA );
$IDs{'interrupted'} = scalar @interrupted;
my @read_failure = grep( /read failure/, @outputA );
$IDs{'read_failure'} = scalar @read_failure;
my @read_failure2 = grep( /Failed in segment/, @outputA );
$IDs{'read_failure'} = $IDs{'read_failure'} + scalar @read_failure2;
my @unknown_failure = grep( /unknown failure/, @outputA );
$IDs{'unknown_failure'} = scalar @unknown_failure;
my @extended = grep( /\d.*\ ([Ee]xtended|[Ll]ong).*(?![Dd]uration)/, @outputA );
$IDs{'extended'} = scalar @extended;
my @short = grep( /[Ss]hort/, @outputA );
$IDs{'short'} = scalar @short;
my @conveyance = grep( /[Cc]onveyance/, @outputA );
$IDs{'conveyance'} = scalar @conveyance;
my @selective = grep( /[Ss]elective/, @outputA );
$IDs{'selective'} = scalar @selective;
my @offline = grep( /(\d|[Bb]ackground|[Ff]oreground)+\ +[Oo]ffline/, @outputA );
$IDs{'offline'} = scalar @offline;
# if we have logs, actually grab the log output
if ( $IDs{'completed'} > 0
|| $IDs{'interrupted'} > 0
|| $IDs{'read_failure'} > 0
|| $IDs{'extended'} > 0
|| $IDs{'short'} > 0
|| $IDs{'conveyance'} > 0
|| $IDs{'selective'} > 0
|| $IDs{'offline'} > 0 )
{
my @headers = grep( /(Num\ +Test.*LBA| Description .*[Hh]ours)/, @outputA );
my @log_lines;
push( @log_lines, @extended, @short, @conveyance, @selective, @offline );
$IDs{'selftest_log'} = join( "\n", @headers, sort(@log_lines) );
} ## end if ( $IDs{'completed'} > 0 || $IDs{'interrupted'...})
# get the drive serial number, if needed
$disk_id = $name;
$output = `$smartctl -i $disk`;
# generally upper case, HP branded drives seem to report with lower case n
while ( $output =~ /(?i)Serial Number:(.*)/g ) {
$IDs{'serial'} = $1;
$IDs{'serial'} =~ s/^\s+|\s+$//g;
}
if ($useSN) {
$disk_id = $IDs{'serial'};
}
while ( $output =~ /(?i)Model Family:(.*)/g ) {
$IDs{'model_family'} = $1;
$IDs{'model_family'} =~ s/^\s+|\s+$//g;
}
while ( $output =~ /(?i)Device Model:(.*)/g ) {
$IDs{'device_model'} = $1;
$IDs{'device_model'} =~ s/^\s+|\s+$//g;
}
while ( $output =~ /(?i)Model Number:(.*)/g ) {
$IDs{'model_number'} = $1;
$IDs{'model_number'} =~ s/^\s+|\s+$//g;
}
while ( $output =~ /(?i)Firmware Version:(.*)/g ) {
$IDs{'fw_version'} = $1;
$IDs{'fw_version'} =~ s/^\s+|\s+$//g;
}
# mainly HP drives
while ( $output =~ /(?i)Vendor:(.*)/g ) {
$IDs{'vendor'} = $1;
$IDs{'vendor'} =~ s/^\s+|\s+$//g;
}
# mainly HP drives
while ( $output =~ /(?i)Product:(.*)/g ) {
$IDs{'product'} = $1;
$IDs{'product'} =~ s/^\s+|\s+$//g;
}
# mainly HP drives
while ( $output =~ /(?i)Revision:(.*)/g ) {
$IDs{'revision'} = $1;
$IDs{'revision'} =~ s/^\s+|\s+$//g;
}
# figure out what to use for the max temp, if there is one
if ( $IDs{'190'} =~ /^\d+$/ ) {
$IDs{max_temp} = $IDs{'190'};
} elsif ( $IDs{'194'} =~ /^\d+$/ ) {
$IDs{max_temp} = $IDs{'194'};
}
if ( $IDs{'194'} =~ /^\d+$/ && defined( $IDs{max_temp} ) && $IDs{'194'} > $IDs{max_temp} ) {
$IDs{max_temp} = $IDs{'194'};
}
$output = `$smartctl -H $disk`;
if ( $output =~ /SMART\ overall\-health\ self\-assessment\ test\ result\:\ PASSED/ ) {
$IDs{'health_pass'} = 1;
} elsif ( $output =~ /SMART\ Health\ Status\:\ OK/ ) {
$IDs{'health_pass'} = 1;
}
if ( !$IDs{'health_pass'} ) {
$to_return->{data}{unhealthy}++;
}
} ## end else [ if ( $IDs{exit} != 0 ) ]
# only bother to save this if useSN is not being used
if ( !$useSN ) {
$to_return->{data}{disks}{$disk_id} = \%IDs;
} elsif ( $IDs{exit} == 0 && defined($disk_id) ) {
$to_return->{data}{disks}{$disk_id} = \%IDs;
}
# smartctl will in some cases exit zero when it can't pull data for cciss
# so if we get a zero exit, but no serial then it means something errored
# and the device is likely dead
if ( $IDs{exit} == 0 && !defined( $IDs{serial} ) ) {
$to_return->{data}{unhealthy}++;
}
} ## end foreach my $line (@disks)
my $toReturn = $json->encode($to_return);
if ( !$opts{p} ) {
$toReturn = $toReturn . "\n";
}
if ( $opts{Z} ) {
my $toReturnCompressed;
gzip \$toReturn => \$toReturnCompressed;
my $compressed = encode_base64($toReturnCompressed);
$compressed =~ s/\n//g;
$compressed = $compressed . "\n";
if ( length($compressed) < length($toReturn) ) {
$toReturn = $compressed;
}
} ## end if ( $opts{Z} )
if ( !$noWrite ) {
open( my $writefh, ">", $cache ) or die "Can't open '" . $cache . "'";
print $writefh $toReturn;
close($writefh);
} else {
print $toReturn;
}


@@ -0,0 +1,3 @@
smartctl=/usr/sbin/smartctl
cache=/var/cache/smart
sda
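This three-line file is the smart script's config: the `smartctl` path, the cache location, and one monitored disk per bare line. As a hedged sketch of how such a script is typically hooked into snmpd (the `/etc/snmp/smart` install path is an assumption, adjust to wherever the script is deployed):

```
# snmpd.conf fragment -- script path is an assumption
extend smart /etc/snmp/smart
```

snmpd then serves the cached JSON, while a periodic run with `-u` (e.g. from cron) refreshes the cache file.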

File diff suppressed because one or more lines are too long


@@ -0,0 +1,45 @@
#!/bin/sh
################################################################
# Instructions: #
# 1. copy this script to /etc/snmp/ and make it executable: #
# chmod +x ups-nut.sh #
# 2. make sure UPS_NAME below matches the name of your UPS #
# 3. edit your snmpd.conf to include this line: #
# extend ups-nut /etc/snmp/ups-nut.sh #
# 4. restart snmpd on the host #
# 5. activate the app for the desired host in LibreNMS #
################################################################
UPS_NAME="${1:-APCUPS}"
PATH=$PATH:/usr/bin:/bin
TMP=$(upsc $UPS_NAME 2>/dev/null)
for value in "battery\.charge: [0-9.]+" "battery\.(runtime\.)?low: [0-9]+" "battery\.runtime: [0-9]+" "battery\.voltage: [0-9.]+" "battery\.voltage\.nominal: [0-9]+" "input\.voltage\.nominal: [0-9.]+" "input\.voltage: [0-9.]+" "ups\.load: [0-9.]+"
do
OUT=$(echo "$TMP" | grep -Eo "$value" | awk '{print $2}' | LANG=C sort | head -n 1)
if [ -n "$OUT" ]; then
echo "$OUT"
else
echo "Unknown"
fi
done
for value in "ups\.status:[A-Z ]{0,}OL" "ups\.status:[A-Z ]{0,}OB" "ups\.status:[A-Z ]{0,}LB" "ups\.status:[A-Z ]{0,}HB" "ups\.status:[A-Z ]{0,}RB" "ups\.status:[A-Z ]{0,}CHRG" "ups\.status:[A-Z ]{0,}DISCHRG" "ups\.status:[A-Z ]{0,}BYPASS" "ups\.status:[A-Z ]{0,}CAL" "ups\.status:[A-Z ]{0,}OFF" "ups\.status:[A-Z ]{0,}OVER" "ups\.status:[A-Z ]{0,}TRIM" "ups\.status:[A-Z ]{0,}BOOST" "ups\.status:[A-Z ]{0,}FSD" "ups\.alarm:[A-Z ]"
do
UNKNOWN=$(echo "$TMP" | grep -Eo "ups\.status:")
if [ -z "$UNKNOWN" ]; then
echo "Unknown"
else
OUT=$(echo "$TMP" | grep -Eo "$value")
if [ -n "$OUT" ]; then
echo "1"
else
echo "0"
fi
fi
done
UPSTEMP="ups\.temperature: [0-9.]+"
OUT=$(echo "$TMP" | grep -Eo "$UPSTEMP" | awk '{print $2}' | LANG=C sort | head -n 1)
[ -n "$OUT" ] && echo "$OUT" || echo "Unknown"
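To illustrate the extraction the loops above perform, here is a minimal sketch run against hypothetical `upsc` output (the values are invented for the example):

```shell
# Hypothetical upsc output for illustration
TMP="battery.charge: 100
battery.runtime: 1200
ups.status: OL CHRG"

# Same pipeline the script uses to pull a numeric value
OUT=$(echo "$TMP" | grep -Eo "battery\.charge: [0-9.]+" | awk '{print $2}' | LANG=C sort | head -n 1)
echo "$OUT"

# Same pattern style the script uses to flag a status token (1 = present)
if echo "$TMP" | grep -Eq "ups\.status:[A-Z ]{0,}CHRG"; then echo 1; else echo 0; fi
```

This mirrors the `grep -Eo | awk | sort | head` chain and the status matching without needing a live UPS.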


@@ -0,0 +1,16 @@
#!/bin/bash
echo "Running apt-get update"
export DEBIAN_FRONTEND="noninteractive" && apt-get -qq --yes update
echo "Running apt-get dist-upgrade"
export DEBIAN_FRONTEND="noninteractive" && apt-get -qq --yes dist-upgrade
echo "Running apt-get upgrade"
export DEBIAN_FRONTEND="noninteractive" && apt-get -qq --yes upgrade
echo "Running apt-get purge"
export DEBIAN_FRONTEND="noninteractive" && apt-get -qq --purge autoremove --yes
export DEBIAN_FRONTEND="noninteractive" && apt-get -qq autoclean --yes

initializers/packages/apply Executable file

@@ -0,0 +1,136 @@
#!/bin/bash
# KNEL Package Installation
# This initializer installs required packages with conditional logic
set -euo pipefail
echo "Installing required packages..."
# Ensure apt is up to date
apt-get update
# Install basic tools first
apt-get install -y git sudo dmidecode curl
# Setup webmin repo (used for RBAC/2FA PAM)
curl https://raw.githubusercontent.com/webmin/webmin/master/webmin-setup-repo.sh >/tmp/webmin-setup.sh
sh /tmp/webmin-setup.sh -f && rm -f /tmp/webmin-setup.sh
# Setup tailscale
curl -fsSL https://tailscale.com/install.sh | sh
# Remove unwanted packages
export DEBIAN_FRONTEND="noninteractive"
apt-get -y --purge remove \
systemd-timesyncd \
chrony \
telnet \
inetutils-telnet \
wpasupplicant \
modemmanager \
nano \
multipath-tools \
|| true
apt-get -y --purge autoremove
# Install desired packages
apt-get -y -o Dpkg::Options::="--force-confold" install \
build-essential \
wget \
gcc \
make \
perl \
libpcre3 \
libpcre3-dev \
zlib1g \
zlib1g-dev \
virt-what \
auditd \
audispd-plugins \
cloud-guest-utils \
aide \
htop \
snmpd \
ncdu \
iftop \
iotop \
cockpit \
cockpit-bridge \
cockpit-doc \
cockpit-networkmanager \
cockpit-packagekit \
cockpit-pcp \
cockpit-sosreport \
cockpit-storaged \
cockpit-system \
cockpit-ws \
nethogs \
sysstat \
ngrep \
acct \
lsb-release \
screen \
tailscale \
tmux \
vim \
command-not-found \
lldpd \
ansible-core \
salt-minion \
net-tools \
dos2unix \
gpg \
molly-guard \
lshw \
fzf \
ripgrep \
sudo \
mailutils \
clamav \
sl \
logwatch \
git \
net-tools \
tshark \
tcpdump \
lynis \
glances \
zsh \
zsh-autosuggestions \
zsh-syntax-highlighting \
fonts-powerline \
webmin \
usermin \
ntpsec \
ntpsec-ntpdate \
tuned \
iptables \
netfilter-persistent \
iptables-persistent \
pflogsumm \
postfix
# Kali-specific packages
if [[ $KALI_CHECK -eq 0 ]]; then
apt-get -y -o Dpkg::Options::="--force-confold" install \
latencytop \
cockpit-tests
fi
# KVM guest specific packages
if [[ $IS_KVM_GUEST -eq 1 ]]; then
apt-get -y install qemu-guest-agent
fi
# Physical host specific packages
if [[ $IS_PHYSICAL_HOST -gt 0 ]]; then
apt-get -y -o Dpkg::Options::="--force-confold" install \
i7z \
thermald \
cpufrequtils \
linux-cpupower
fi
echo "Package installation complete"

initializers/postfix/apply Executable file

@@ -0,0 +1,32 @@
#!/bin/bash
# KNEL Postfix Module
# Configures postfix for email delivery
set -euo pipefail
echo "Running postfix module..."
# Stop postfix
systemctl stop postfix
# Configure postfix for local mail relay
if [[ -f ./configs/postfix_generic ]]; then
cp ./configs/postfix_generic /etc/postfix/generic
postmap /etc/postfix/generic
fi
# Set postfix configuration
postconf -e "inet_protocols = ipv4"
postconf -e "inet_interfaces = 127.0.0.1"
postconf -e "mydestination = 127.0.0.1"
postconf -e "relayhost = tsys-cloudron.knel.net"
postconf -e "smtp_generic_maps = hash:/etc/postfix/generic"
# Restart postfix
systemctl start postfix
# Test mail delivery
echo "Test email from $(hostname)" | mail -s "Test from $(hostname)" root
echo "Postfix module completed"


@@ -0,0 +1 @@
/.*/ tsysrootaccount@knel.net

initializers/salt-client/apply Executable file

@@ -0,0 +1,19 @@
#!/bin/bash
# KNEL Salt Client Initializer
# Configures Salt minion for configuration management
set -euo pipefail
echo "Running Salt client initializer..."
# Configure Salt minion if configuration file exists
if [[ -f ./configs/salt-minion ]]; then
cp ./configs/salt-minion /etc/salt/minion
fi
# Enable and start Salt minion service
systemctl enable salt-minion
systemctl start salt-minion
echo "Salt client initializer completed"


@@ -0,0 +1,53 @@
# KNEL Salt Minion Configuration
# Primary configuration for SaltStack client
# Master server address
master: salt-master.knownelement.com
# Master port
master_port: 4506
# Unique ID for this minion (defaults to hostname)
#id:
# User to run salt-minion as
user: root
# Root directory for minion
root_dir: /
# Directory for PID file
pidfile: /var/run/salt-minion.pid
# Directory for configuration files
conf_file: /etc/salt/minion
# Directory for the minion's PKI keys
pki_dir: /etc/salt/pki/minion
# Cache directory
cachedir: /var/cache/salt/minion
# Append minion_id to the cache directory
append_minionid_configdir: False
# Verify master pubkey on initial connection
verify_master_pubkey_sign: True
# Hours to keep job cache data for
keep_jobs: 24
# Seconds to wait between attempts to have the master accept this minion's key
acceptance_wait_time: 10
# Seconds to wait before retrying DNS resolution of the master
retry_dns: 30
# Logging options
log_file: /var/log/salt/minion
log_level: warning
log_granular_levels:
salt: warning
# Include additional configuration
# include: /etc/salt/minion.d/*.conf


@@ -0,0 +1,132 @@
#!/bin/bash
# KNEL Security Hardening Initializer
# Implements SCAP/STIG security compliance
set -euo pipefail
echo "Running security hardening initializer..."
# Source variables if available
if [[ -f ../../variables ]]; then
source ../../variables
fi
# Enable auditd
systemctl --now enable auditd
# Configure sysctl security parameters
if [[ -f ./configs/sysctl-hardening.conf ]]; then
cp ./configs/sysctl-hardening.conf /etc/sysctl.d/99-security-hardening.conf
sysctl -p /etc/sysctl.d/99-security-hardening.conf
fi
# Configure core dumps and resource limits
if [[ -f ./configs/security-limits.conf ]]; then
cp ./configs/security-limits.conf /etc/security/limits.d/security-hardening.conf
fi
# SCAP-STIG Compliance: Fix GRUB permissions (skip on Raspberry Pi)
if [[ "${IS_RASPI:-0}" != "1" ]] && [[ -f /boot/grub/grub.cfg ]]; then
chown root:root /boot/grub/grub.cfg
chmod og-rwx /boot/grub/grub.cfg
chmod 0400 /boot/grub/grub.cfg
echo "GRUB permissions hardened"
fi
# SCAP-STIG Compliance: Disable auto mounting
systemctl --now disable autofs 2>/dev/null || true
DEBIAN_FRONTEND="noninteractive" apt-get -y --purge remove autofs 2>/dev/null || true
# SCAP-STIG Compliance: Deploy ModProbe security configs
for conf_file in ./configs/modprobe/*.conf; do
if [[ -f "$conf_file" ]]; then
cp "$conf_file" /etc/modprobe.d/
fi
done
# Deploy network filesystem blacklisting
cat > /etc/modprobe.d/stig-network.conf << 'EOF'
# STIG: Disable uncommon network protocols
install dccp /bin/true
install rds /bin/true
install sctp /bin/true
install tipc /bin/true
EOF
# Deploy filesystem blacklisting
cat > /etc/modprobe.d/stig-filesystem.conf << 'EOF'
# STIG: Disable uncommon filesystem types
install cramfs /bin/true
install freevxfs /bin/true
install hfs /bin/true
install hfsplus /bin/true
install jffs2 /bin/true
install squashfs /bin/true
install udf /bin/true
EOF
# Deploy USB storage blacklisting
cat > /etc/modprobe.d/usb_storage.conf << 'EOF'
# STIG: Disable USB storage
install usb-storage /bin/true
EOF
# SCAP-STIG Compliance: Deploy security banners
if [[ -f ./configs/issue ]]; then
cp ./configs/issue /etc/issue
fi
if [[ -f ./configs/issue.net ]]; then
cp ./configs/issue.net /etc/issue.net
fi
if [[ -f ./configs/motd ]]; then
cp ./configs/motd /etc/motd
fi
# SCAP-STIG Compliance: Cron permission hardening
rm -f /etc/cron.deny 2>/dev/null || true
touch /etc/cron.allow
chmod g-wx,o-rwx /etc/cron.allow
chown root:root /etc/cron.allow
chmod og-rwx /etc/crontab
chmod og-rwx /etc/cron.hourly/
chmod og-rwx /etc/cron.daily/
chmod og-rwx /etc/cron.weekly/
chmod og-rwx /etc/cron.monthly/
chown root:root /etc/cron.d/
chmod og-rwx /etc/cron.d/
# SCAP-STIG Compliance: At permission hardening
rm -f /etc/at.deny 2>/dev/null || true
touch /etc/at.allow
chmod g-wx,o-rwx /etc/at.allow
chown root:root /etc/at.allow
# Set file permissions
chmod 644 /etc/passwd
chmod 600 /etc/shadow
chmod 644 /etc/group
chmod 600 /etc/gshadow
# Remove dangerous packages
DEBIAN_FRONTEND="noninteractive" apt-get -y purge \
telnetd \
rsh-server \
rsh-client \
telnet \
|| true
# Install security tools
DEBIAN_FRONTEND="noninteractive" apt-get -y install \
aide \
lynis \
chkrootkit \
rkhunter \
|| true
# Initialize AIDE database
if [[ ! -f /var/lib/aide/aide.db ]]; then
aideinit
fi
echo "Security hardening initializer completed"
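The heredocs above generate `install <module> /bin/true` stanzas. As a quick sanity check, the set of disabled modules can be listed from such a file; a minimal sketch, run here against an inline copy of the `stig-network.conf` contents rather than the deployed file:

```shell
# Inline copy of the generated stig-network.conf, for illustration only
CONF='# STIG: Disable uncommon network protocols
install dccp /bin/true
install rds /bin/true
install sctp /bin/true
install tipc /bin/true'

# Extract the module name from each install directive
DISABLED=$(echo "$CONF" | awk '/^install/ {print $2}' | xargs)
echo "$DISABLED"
```

On a live host, `modprobe -n -v dccp` (dry run, verbose) would show whether the directive is actually picked up; that depends on modprobe configuration order, so treat this listing as a shape check only.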


@@ -0,0 +1,5 @@
This system is the property of Known Element Enterprises LLC.
Authorized uses only. All activity may be monitored and reported.
All activities subject to monitoring/recording/review in real time and/or at a later time.


@@ -0,0 +1,5 @@
This system is the property of Known Element Enterprises LLC.
Authorized uses only. All activity may be monitored and reported.
All activities subject to monitoring/recording/review in real time and/or at a later time.


@@ -0,0 +1,5 @@
This system is the property of Known Element Enterprises LLC.
Authorized uses only. All activity may be monitored and reported.
All activities subject to monitoring/recording/review in real time and/or at a later time.


@@ -0,0 +1,29 @@
# KNEL Security Limits Configuration
# SCAP/STIG compliant resource limits
# Prevent core dumps for all users
* hard core 0
* soft core 0
# Prevent core dumps for root
root hard core 0
root soft core 0
# Limit max processes for users (fork bomb protection)
* soft nproc 4096
* hard nproc 8192
# Limit max file handles
* soft nofile 1024
* hard nofile 4096
# Limit max memory lock
* hard memlock 64
# Limit max file size
* soft fsize 2097152
* hard fsize 4194304
# Stack size limit
* soft stack 8192
* hard stack 65536


@@ -0,0 +1,75 @@
# KNEL Kernel Security Hardening Configuration
# SCAP/STIG compliant sysctl parameters
# Disable IP forwarding
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 0
# Disable send packet redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
# Disable accept source routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0
# Disable accept redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
# Disable secure redirects
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
# Log martian packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
# Enable TCP SYN cookies
net.ipv4.tcp_syncookies = 1
# Enable RFC 1337 protection against TIME-WAIT assassination
net.ipv4.tcp_rfc1337 = 1
# Enable reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
# Ignore broadcast ICMP echo requests and bogus error responses
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Enable TCP timestamps
net.ipv4.tcp_timestamps = 1
# Disable magic sysrq
kernel.sysrq = 0
# Disable core dumps for SUID programs
fs.suid_dumpable = 0
# exec-shield is not available on modern Debian kernels; commented out so
# "sysctl -p" on this file does not fail
# kernel.exec-shield = 1
# Randomize virtual address space
kernel.randomize_va_space = 2
# Disable coredumps
kernel.core_pattern = |/bin/false
# Restrict ptrace scope
kernel.yama.ptrace_scope = 1
# Disable unprivileged BPF
kernel.unprivileged_bpf_disabled = 1
# Restrict user namespaces
kernel.unprivileged_userns_clone = 0
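Note that the hardening apply script runs `sysctl -p` on this fragment under `set -euo pipefail`, so a malformed line (or a key the running kernel does not know) can abort the whole initializer. A minimal, hypothetical pre-flight shape check, run here over an inline sample rather than the deployed file:

```shell
# Inline sample of a sysctl fragment (sketch)
FRAG='net.ipv4.ip_forward = 0
# comment
kernel.sysrq = 0
fs.suid_dumpable = 0'

# Count lines that are neither comments/blanks nor "key = value" shaped
BAD=$(echo "$FRAG" | grep -Ev '^#|^$' | grep -Evc '^[a-z0-9_.-]+ = [^ ]+' || true)
echo "malformed: $BAD"
```

This only checks shape; whether each key exists is kernel-dependent, so `sysctl -p` on the target host remains the authoritative test.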


@@ -0,0 +1,65 @@
#!/bin/bash
# KNEL SSH Hardening Module
# Applies SSH security hardening configurations
set -euo pipefail
echo "Running SSH hardening module..."
# Create SSH directories
mkdir -p $ROOT_SSH_DIR
# Setup root SSH keys
if [[ -f ./configs/root-ssh-authorized-keys ]]; then
cp ./configs/root-ssh-authorized-keys $ROOT_SSH_DIR/authorized_keys
chmod 400 $ROOT_SSH_DIR/authorized_keys
chown root: $ROOT_SSH_DIR/authorized_keys
fi
# Setup localuser SSH keys
if [[ $LOCALUSER_CHECK -gt 0 ]]; then
mkdir -p $LOCALUSER_SSH_DIR
if [[ -f ./configs/localuser-ssh-authorized-keys ]]; then
cp ./configs/localuser-ssh-authorized-keys $LOCALUSER_SSH_DIR/authorized_keys
chmod 400 $LOCALUSER_SSH_DIR/authorized_keys
chown localuser $LOCALUSER_SSH_DIR/authorized_keys
fi
fi
# Setup subodev SSH keys
if [[ $SUBODEV_CHECK -gt 0 ]]; then
mkdir -p $SUBODEV_SSH_DIR
if [[ -f ./configs/localuser-ssh-authorized-keys ]]; then
cp ./configs/localuser-ssh-authorized-keys $SUBODEV_SSH_DIR/authorized_keys
chmod 400 $SUBODEV_SSH_DIR/authorized_keys
chown subodev: $SUBODEV_SSH_DIR/authorized_keys
fi
fi
# Deploy SSH configuration based on environment
if [[ $DEV_WORKSTATION_CHECK -eq 0 ]]; then
# Production SSH configuration
if [[ -f ./configs/tsys-sshd-config ]]; then
cp ./configs/tsys-sshd-config /etc/ssh/sshd_config
fi
else
# Development workstation - more permissive settings
if [[ -f ./configs/tsys-sshd-config ]]; then
cp ./configs/tsys-sshd-config /etc/ssh/sshd_config
fi
fi
# Additional SSH hardening for non-Ubuntu systems
if [[ $UBUNTU_CHECK -ne 1 ]] && [[ -f ./configs/ssh-audit-hardening.conf ]]; then
mkdir -p /etc/ssh/sshd_config.d
cp ./configs/ssh-audit-hardening.conf /etc/ssh/sshd_config.d/ssh-audit_hardening.conf
chmod og-rwx /etc/ssh/sshd_config.d/*
fi
# Secure SSH configuration permissions
chmod og-rwx /etc/ssh/sshd_config
echo "SSH hardening module completed"


@@ -0,0 +1,2 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHaBNuLS+GYGRPc9wne63Ocr+R+/Q01Y9V0FTv0RnG3
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPyMR0lFgiMKhQJ5aqy68nR0BQp1cNzi/wIThyuTV4a8 tsyscto@ultix-control


@@ -0,0 +1,2 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHaBNuLS+GYGRPc9wne63Ocr+R+/Q01Y9V0FTv0RnG3
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPyMR0lFgiMKhQJ5aqy68nR0BQp1cNzi/wIThyuTV4a8 tsyscto@ultix-control


@@ -0,0 +1,19 @@
# Restrict key exchange, cipher, and MAC algorithms, as per sshaudit.com
# hardening guide.
KexAlgorithms sntrup761x25519-sha512,sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,gss-curve25519-sha256-,diffie-hellman-group16-sha512,gss-group16-sha512-,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-gcm@openssh.com,aes128-ctr
MACs hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-128-etm@openssh.com
HostKeyAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
RequiredRSASize 3072
CASignatureAlgorithms sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
GSSAPIKexAlgorithms gss-curve25519-sha256-,gss-group16-sha512-
HostbasedAcceptedAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-256
PubkeyAcceptedAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-256


@@ -0,0 +1,20 @@
Include /etc/ssh/sshd_config.d/*.conf
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
KbdInteractiveAuthentication no
PrintMotd no
PasswordAuthentication no
AllowTcpForwarding no
X11Forwarding no
ChallengeResponseAuthentication no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes
Banner /etc/issue.net
MaxAuthTries 2
MaxStartups 10:30:100
PermitRootLogin prohibit-password
ClientAliveInterval 300
ClientAliveCountMax 3
AllowUsers root localuser subodev
LoginGraceTime 60
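A minimal sketch of a pre-deploy check: confirm the handful of directives this config depends on are present before it replaces `/etc/ssh/sshd_config`. The check runs here against an inline excerpt; on a real host, `sshd -t -f <file>` is the authoritative validation:

```shell
# Inline excerpt of the config, for illustration only
CFG='PasswordAuthentication no
PermitRootLogin prohibit-password
AllowUsers root localuser subodev'

# Count required directives that are missing
MISSING=0
for d in PasswordAuthentication PermitRootLogin AllowUsers; do
  echo "$CFG" | grep -q "^$d " || MISSING=$((MISSING+1))
done
echo "missing: $MISSING"
```

A check like this guards against lockout from a truncated or mis-copied config file before sshd is restarted.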

initializers/ssh-keys/apply Executable file

@@ -0,0 +1,42 @@
#!/bin/bash
# KNEL SSH Keys Initializer
# Sets up SSH authorized keys for users
set -euo pipefail
echo "Running SSH keys initializer..."
# Create SSH directories
mkdir -p $ROOT_SSH_DIR
# Setup root SSH keys
if [[ -f ./configs/root-ssh-authorized-keys ]]; then
cp ./configs/root-ssh-authorized-keys $ROOT_SSH_DIR/authorized_keys
chmod 400 $ROOT_SSH_DIR/authorized_keys
chown root: $ROOT_SSH_DIR/authorized_keys
fi
# Setup localuser SSH keys
if [[ $LOCALUSER_CHECK -gt 0 ]]; then
mkdir -p $LOCALUSER_SSH_DIR
if [[ -f ./configs/localuser-ssh-authorized-keys ]]; then
cp ./configs/localuser-ssh-authorized-keys $LOCALUSER_SSH_DIR/authorized_keys
chmod 400 $LOCALUSER_SSH_DIR/authorized_keys
chown localuser $LOCALUSER_SSH_DIR/authorized_keys
fi
fi
# Setup subodev SSH keys
if [[ $SUBODEV_CHECK -gt 0 ]]; then
mkdir -p $SUBODEV_SSH_DIR
if [[ -f ./configs/localuser-ssh-authorized-keys ]]; then
cp ./configs/localuser-ssh-authorized-keys $SUBODEV_SSH_DIR/authorized_keys
chmod 400 $SUBODEV_SSH_DIR/authorized_keys
chown subodev: $SUBODEV_SSH_DIR/authorized_keys
fi
fi
echo "SSH keys initializer completed"


@@ -0,0 +1,2 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHaBNuLS+GYGRPc9wne63Ocr+R+/Q01Y9V0FTv0RnG3
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPyMR0lFgiMKhQJ5aqy68nR0BQp1cNzi/wIThyuTV4a8 tsyscto@ultix-control


@@ -0,0 +1,2 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHaBNuLS+GYGRPc9wne63Ocr+R+/Q01Y9V0FTv0RnG3
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPyMR0lFgiMKhQJ5aqy68nR0BQp1cNzi/wIThyuTV4a8 tsyscto@ultix-control

initializers/ssl-stack/apply Executable file

@@ -0,0 +1,149 @@
#!/bin/bash
# KNEL SSL Stack Compilation Initializer
# Compiles OpenSSL, nghttp2, curl, APR, and Apache HTTPd from source
# Made from instructions at https://www.tunetheweb.com/performance/http2/
set -euo pipefail
echo "Running SSL stack compilation initializer..."
# Only run on specific systems or if explicitly requested
# This is a resource-intensive operation
if [[ $DEV_WORKSTATION_CHECK -gt 0 ]] || [[ "${COMPILE_SSL_STACK:-}" == "true" ]]; then
echo "Compiling SSL stack from source..."
# Base URLs and files (using original versions from KNELServerBuild)
OPENSSL_URL_BASE="https://www.openssl.org/source/"
OPENSSL_FILE="openssl-1.1.0h.tar.gz"
NGHTTP_URL_BASE="https://github.com/nghttp2/nghttp2/releases/download/v1.31.0/"
NGHTTP_FILE="nghttp2-1.31.0.tar.gz"
APR_URL_BASE="https://archive.apache.org/dist/apr/"
APR_FILE="apr-1.6.3.tar.gz"
APR_UTIL_URL_BASE="https://archive.apache.org/dist/apr/"
APR_UTIL_FILE="apr-util-1.6.1.tar.gz"
APACHE_URL_BASE="https://archive.apache.org/dist/httpd/"
APACHE_FILE="httpd-2.4.33.tar.gz"
CURL_URL_BASE="https://curl.haxx.se/download/"
CURL_FILE="curl-7.60.0.tar.gz"
# Create build directory
BUILD_DIR="/tmp/ssl-stack-build"
mkdir -p "$BUILD_DIR"
cd "$BUILD_DIR"
# Install build dependencies
DEBIAN_FRONTEND="noninteractive" apt-get -y install \
build-essential \
wget \
gcc \
make \
perl \
libpcre3 \
libpcre3-dev \
zlib1g \
zlib1g-dev \
|| true
# Download and compile OpenSSL
echo "Compiling OpenSSL..."
wget $OPENSSL_URL_BASE/$OPENSSL_FILE
tar xzf $OPENSSL_FILE
cd openssl-1.1.0h
./config enable-weak-ssl-ciphers shared zlib-dynamic -DOPENSSL_TLS_SECURITY_LEVEL=0 --prefix=/usr/local/custom-ssl/openssl-1.1.0h
make
make install
ln -sf /usr/local/custom-ssl/openssl-1.1.0h /usr/local/openssl
cd -
# Download and compile nghttp2
echo "Compiling nghttp2..."
wget $NGHTTP_URL_BASE/$NGHTTP_FILE
tar xzf $NGHTTP_FILE
cd nghttp2-1.31.0
./configure --prefix=/usr/local/custom-ssl/nghttp
make
make install
cd -
# Update ldconfig for custom SSL
cat <<EOF > /etc/ld.so.conf.d/custom-ssl.conf
/usr/local/custom-ssl/openssl-1.1.0h/lib
/usr/local/custom-ssl/nghttp/lib
EOF
ldconfig
# Download and compile curl
echo "Compiling curl..."
wget $CURL_URL_BASE/$CURL_FILE
tar xzf $CURL_FILE
cd curl-7.60.0
./configure --prefix=/usr/local/custom-ssl/curl --with-nghttp2=/usr/local/custom-ssl/nghttp/ --with-ssl=/usr/local/custom-ssl/openssl-1.1.0h/
make
make install
cd -
# Download and compile APR
echo "Compiling APR..."
wget "${APR_URL_BASE}${APR_FILE}"
tar xzf $APR_FILE
cd apr-1.6.3
./configure --prefix=/usr/local/custom-ssl/apr
make
make install
cd -
# Download and compile APR-util
echo "Compiling APR-util..."
wget "${APR_UTIL_URL_BASE}${APR_UTIL_FILE}"
tar xzf $APR_UTIL_FILE
cd apr-util-1.6.1
./configure --prefix=/usr/local/custom-ssl/apr-util --with-apr=/usr/local/custom-ssl/apr
make
make install
cd -
# Download and compile Apache HTTPd
echo "Compiling Apache HTTPd..."
wget "${APACHE_URL_BASE}${APACHE_FILE}"
tar xzf $APACHE_FILE
cd httpd-2.4.33
cp -r ../apr-1.6.3 srclib/apr
cp -r ../apr-util-1.6.1 srclib/apr-util
./configure --prefix=/usr/local/custom-ssl/apache \
--with-ssl=/usr/local/custom-ssl/openssl-1.1.0h/ \
--with-pcre=/usr/bin/pcre-config \
--enable-unique-id \
--enable-ssl \
--enable-so \
--with-included-apr \
--enable-http2 \
--with-nghttp2=/usr/local/custom-ssl/nghttp/
make
make install
ln -sf /usr/local/custom-ssl/apache /usr/local/apache
cd -
# Cleanup
cd /
rm -rf "$BUILD_DIR"
echo "SSL stack compilation completed"
echo "Custom installations available at:"
echo " OpenSSL: /usr/local/custom-ssl/openssl-1.1.0h"
echo " nghttp2: /usr/local/custom-ssl/nghttp"
echo " curl: /usr/local/custom-ssl/curl"
echo " APR: /usr/local/custom-ssl/apr"
echo " Apache: /usr/local/custom-ssl/apache"
else
echo "Skipping SSL stack compilation (only runs on dev workstations or when COMPILE_SSL_STACK=true)"
fi
echo "SSL stack compilation initializer completed"
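The build loop above repeats one convention: each tarball is extracted and the script `cd`s into the directory named by the archive minus its `.tar.gz` suffix. A minimal sketch of that naming rule, using the version pins hard-coded in the script:

```shell
# Each `cd` target in the script is the tarball name minus .tar.gz;
# the versions below are the pinned ones from the variables above.
for f in openssl-1.1.0h.tar.gz nghttp2-1.31.0.tar.gz apr-1.6.3.tar.gz \
         apr-util-1.6.1.tar.gz httpd-2.4.33.tar.gz curl-7.60.0.tar.gz; do
    printf '%s -> %s\n' "$f" "${f%.tar.gz}"
done
```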


@@ -0,0 +1,46 @@
#
# Known Element Enterprises Customized Config File
# auditd
# Initial version 2025-06-27
#
local_events = yes
write_logs = yes
log_file = /var/log/audit/audit.log
log_group = adm
log_format = ENRICHED
flush = INCREMENTAL_ASYNC
freq = 50
max_log_file = 8
num_logs = 5
priority_boost = 4
name_format = NONE
max_log_file_action = keep_logs
space_left = 75
space_left_action = email
action_mail_acct = root
admin_space_left_action = halt
disk_full_action = SUSPEND
disk_error_action = SUSPEND
admin_space_left = 50
verify_email = yes
use_libwrap = yes
tcp_listen_queue = 5
tcp_max_per_addr = 1
tcp_client_max_idle = 0
transport = TCP
distribute_network = no
q_depth = 2000
overflow_action = SYSLOG
max_restarts = 10
plugin_dir = /etc/audit/plugins.d
end_of_event_timeout = 2
##tcp_client_ports = 1024-65535
##tcp_listen_port = 60
##krb5_key_file = /etc/audit/audit.key
krb5_principal = auditd
##name = mydomain
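The auditd config above is a flat list of `key = value` pairs, so individual settings are easy to pull out for checks. A minimal sketch, with `/tmp/auditd.conf.demo` as a hypothetical stand-in for the real `/etc/audit/auditd.conf`:

```shell
# Read one "key = value" setting from an auditd.conf-style file.
# /tmp/auditd.conf.demo is a scratch path for illustration only.
CONF=/tmp/auditd.conf.demo
printf 'max_log_file = 8\nnum_logs = 5\n' > "$CONF"
awk -F' = ' '$1 == "max_log_file" { print $2 }' "$CONF"
```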


@@ -0,0 +1,5 @@
This system is the property of Known Element Enterprises LLC.
Authorized uses only. All activity may be monitored and reported.
All activities subject to monitoring/recording/review in real time and/or at a later time.


@@ -0,0 +1,5 @@
This system is the property of Known Element Enterprises LLC.
Authorized uses only. All activity may be monitored and reported.
All activities subject to monitoring/recording/review in real time and/or at a later time.


@@ -0,0 +1,5 @@
This system is the property of Known Element Enterprises LLC.
Authorized uses only. All activity may be monitored and reported.
All activities subject to monitoring/recording/review in real time and/or at a later time.


@@ -0,0 +1,2 @@
#/etc/cockpit/disallowed-users
# List of users which are not allowed to login to Cockpit


@@ -0,0 +1,6 @@
option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
send host-name = gethostname();
request subnet-mask, broadcast-address, time-offset, routers,
domain-name, host-name,
rfc3442-classless-static-routes;


@@ -0,0 +1,23 @@
# see "man logrotate" for details
# global options do not affect preceding include directives
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create 0640 root utmp
# use date as a suffix of the rotated file
#dateext
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# system-specific logs may also be configured here.


@@ -0,0 +1 @@
install cramfs /bin/true


@@ -0,0 +1 @@
install dccp /bin/true


@@ -0,0 +1 @@
install freevxfs /bin/true


@@ -0,0 +1 @@
install hfs /bin/true


@@ -0,0 +1 @@
install hfsplus /bin/true


@@ -0,0 +1 @@
install jffs2 /bin/true


@@ -0,0 +1 @@
install rds /bin/true


@@ -0,0 +1 @@
install sctp /bin/true


@@ -0,0 +1 @@
install squashfs /bin/true


@@ -0,0 +1 @@
install tipc /bin/true


@@ -0,0 +1 @@
install udf /bin/true


@@ -0,0 +1 @@
install usb-storage /bin/true
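The twelve modprobe drop-ins above all share one pattern: `install <module> /bin/true`, which makes any load attempt a no-op. A sketch that regenerates them into a scratch directory (`/tmp/modprobe-demo` is an assumption for illustration; the real files live under `/etc/modprobe.d`):

```shell
# Regenerate the one-line module-disable drop-ins shown above.
# OUT_DIR is a scratch path for illustration, not the deploy target.
OUT_DIR=/tmp/modprobe-demo
mkdir -p "$OUT_DIR"
for mod in cramfs dccp freevxfs hfs hfsplus jffs2 rds sctp squashfs tipc udf usb-storage; do
    printf 'install %s /bin/true\n' "$mod" > "$OUT_DIR/$mod.conf"
done
cat "$OUT_DIR/cramfs.conf"   # prints: install cramfs /bin/true
```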


@@ -0,0 +1,7 @@
driftfile /var/lib/ntp/ntp.drift
leapfile /usr/share/zoneinfo/leap-seconds.list
server pfv-netboot.knel.net
restrict 127.0.0.1
restrict ::1
interface ignore wildcard
interface listen 127.0.0.1


@@ -0,0 +1,2 @@
# Uncomment to start SNMP subagent and enable CDP, SONMP and EDP protocol
DAEMON_ARGS="-x -c -s -e"


@@ -0,0 +1,3 @@
# See man 5 aliases for format
postmaster: root
root: coo@turnsys.com


@@ -0,0 +1 @@
/.*/ tsysrootaccount@knel.net


@@ -0,0 +1 @@
Debian-snmp ALL = NOPASSWD: /bin/cat


@@ -0,0 +1,46 @@
##########################################################################
# snmpd.conf
# Created by CNW on 11/3/2018 via snmpconf wizard and manual post tweaks
###########################################################################
# SECTION: Monitor Various Aspects of the Running Host
#
# disk: Check for disk space usage of a partition.
# The agent can check the amount of available disk space, and make
# sure it is above a set limit.
#
load 3 3 3
rocommunity kn3lmgmt
sysservices 76
#syslocation Rack, Room, Building, City, Country [Lat, Lon]
syslocation R4, Server Room, SITER, Pflugerville, United States
syscontact coo@turnsys.com
#NTP
extend ntp-client /usr/lib/check_mk_agent/local/ntp-client
#SMTP
extend mailq /usr/lib/check_mk_agent/local/postfix-queues
extend postfixdetailed /usr/lib/check_mk_agent/local/postfixdetailed
#OS Distribution Detection
extend distro /usr/local/bin/distro
extend osupdate /usr/lib/check_mk_agent/local/os-updates.sh
#Hardware Detection
extend manufacturer /usr/bin/sudo /usr/bin/cat /sys/devices/virtual/dmi/id/sys_vendor
extend hardware /usr/bin/sudo /usr/bin/cat /sys/devices/virtual/dmi/id/product_name
extend serial /usr/bin/sudo /usr/bin/cat /sys/devices/virtual/dmi/id/product_serial
#SMART
extend smart /usr/lib/check_mk_agent/local/smart
#Temperature
pass_persist .1.3.6.1.4.1.9.9.13.1.3 /usr/local/bin/temper-snmp
# Allow Systems Management Data Engine SNMP to connect to snmpd using SMUX
# smuxpeer .1.3.6.1.4.1.674.10892.1
# LLDP collection
master agentx


@@ -0,0 +1,40 @@
##########################################################################
# snmpd.conf
# Created by CNW on 11/3/2018 via snmpconf wizard and manual post tweaks
###########################################################################
# SECTION: Monitor Various Aspects of the Running Host
#
# disk: Check for disk space usage of a partition.
# The agent can check the amount of available disk space, and make
# sure it is above a set limit.
#
load 3 3 3
rocommunity kn3lmgmt
sysservices 76
#syslocation Rack, Room, Building, City, Country [Lat, Lon]
syslocation SITER, Pflugerville, United States
syscontact coo@turnsys.com
#NTP
extend ntp-client /usr/lib/check_mk_agent/local/ntp-client
#SMTP
extend mailq /usr/lib/check_mk_agent/local/postfix-queues
extend postfixdetailed /usr/lib/check_mk_agent/local/postfixdetailed
#OS Distribution Detection
extend distro /usr/local/bin/distro
extend osupdate /usr/lib/check_mk_agent/local/os-updates.sh
#Hardware Detection
extend hardware /usr/bin/sudo /usr/bin/cat /sys/firmware/devicetree/base/model
extend serial /usr/bin/sudo /usr/bin/cat /sys/firmware/devicetree/base/serial-number
# Allow Systems Management Data Engine SNMP to connect to snmpd using SMUX
# smuxpeer .1.3.6.1.4.1.674.10892.1
# LLDP collection
master agentx


@@ -0,0 +1,44 @@
##########################################################################
# snmpd.conf
# Created by CNW on 11/3/2018 via snmpconf wizard and manual post tweaks
###########################################################################
# SECTION: Monitor Various Aspects of the Running Host
#
# disk: Check for disk space usage of a partition.
# The agent can check the amount of available disk space, and make
# sure it is above a set limit.
#
load 3 3 3
rocommunity kn3lmgmt
sysservices 76
#syslocation Rack, Room, Building, City, Country [Lat, Lon]
syslocation R4, Server Room, SITER, Pflugerville, United States
syscontact coo@turnsys.com
#NTP
extend ntp-client /usr/lib/check_mk_agent/local/ntp-client
#SMTP
extend mailq /usr/lib/check_mk_agent/local/postfix-queues
extend postfixdetailed /usr/lib/check_mk_agent/local/postfixdetailed
#OS Distribution Detection
extend distro /usr/local/bin/distro
extend osupdate /usr/lib/check_mk_agent/local/os-updates.sh
# Socket statistics
extend ss /usr/lib/check_mk_agent/local/ss.py
#Hardware Detection
# (uncomment for x86 platforms)
extend manufacturer /usr/bin/sudo /usr/bin/cat /sys/devices/virtual/dmi/id/sys_vendor
extend hardware /usr/bin/sudo /usr/bin/cat /sys/devices/virtual/dmi/id/product_name
extend serial /usr/bin/sudo /usr/bin/cat /sys/devices/virtual/dmi/id/product_serial
# Allow Systems Management Data Engine SNMP to connect to snmpd using SMUX
# smuxpeer .1.3.6.1.4.1.674.10892.1
# LLDP collection
master agentx


@@ -0,0 +1,2 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHaBNuLS+GYGRPc9wne63Ocr+R+/Q01Y9V0FTv0RnG3
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPyMR0lFgiMKhQJ5aqy68nR0BQp1cNzi/wIThyuTV4a8 tsyscto@ultix-control


@@ -0,0 +1,2 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHaBNuLS+GYGRPc9wne63Ocr+R+/Q01Y9V0FTv0RnG3
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPyMR0lFgiMKhQJ5aqy68nR0BQp1cNzi/wIThyuTV4a8 tsyscto@ultix-control


@@ -0,0 +1,19 @@
# Restrict key exchange, cipher, and MAC algorithms, as per sshaudit.com
# hardening guide.
KexAlgorithms sntrup761x25519-sha512,sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,gss-curve25519-sha256-,diffie-hellman-group16-sha512,gss-group16-sha512-,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-gcm@openssh.com,aes128-ctr
MACs hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-128-etm@openssh.com
HostKeyAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
RequiredRSASize 3072
CASignatureAlgorithms sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
GSSAPIKexAlgorithms gss-curve25519-sha256-,gss-group16-sha512-
HostbasedAcceptedAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-256
PubkeyAcceptedAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-256


@@ -0,0 +1,20 @@
Include /etc/ssh/sshd_config.d/*.conf
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
KbdInteractiveAuthentication no
PrintMotd no
PasswordAuthentication no
AllowTcpForwarding no
X11Forwarding no
ChallengeResponseAuthentication no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes
Banner /etc/issue.net
MaxAuthTries 2
MaxStartups 10:30:100
PermitRootLogin prohibit-password
ClientAliveInterval 300
ClientAliveCountMax 3
AllowUsers root localuser subodev
LoginGraceTime 60
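Before reloading sshd with a file like the one above, it is worth confirming the lockdown directives actually survived deployment. A sanity sketch against a scratch copy (the path is hypothetical; on a live host the authoritative check is `sshd -t` against `/etc/ssh/sshd_config`):

```shell
# Confirm key hardening directives are present in an sshd_config-style file.
# /tmp/sshd_config.demo is a scratch path for illustration only.
CFG=/tmp/sshd_config.demo
printf 'PasswordAuthentication no\nPermitRootLogin prohibit-password\nMaxAuthTries 2\n' > "$CFG"
for d in PasswordAuthentication PermitRootLogin MaxAuthTries; do
    grep -q "^$d " "$CFG" && echo "ok: $d"
done
```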


@@ -0,0 +1,6 @@
module(load="imuxsock") # provides support for local system logging
module(load="imklog") # provides kernel logging support
#module(load="immark") # provides --MARK-- message capability
*.* @tsys-librenms.knel.net:514
:omusrmsg:EOF


@@ -0,0 +1,31 @@
[Journal]
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitIntervalSec=30s
#RateLimitBurst=10000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#SystemMaxFiles=100
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#RuntimeMaxFiles=100
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg
#LineMax=48K
#ReadKMsg=yes
#Audit=no
Storage=persistent
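The journald file above is mostly commented-out defaults; `Storage=persistent` is the only active setting. A sketch that lists the active (non-comment) keys, with a scratch path standing in for the real `/etc/systemd/journald.conf`:

```shell
# List only the active key=value lines of a journald.conf-style file.
# /tmp/journald.conf.demo is a scratch path for illustration only.
CONF=/tmp/journald.conf.demo
printf '[Journal]\n#Compress=yes\nStorage=persistent\n' > "$CONF"
grep -E '^[A-Za-z]+=' "$CONF"   # prints: Storage=persistent
```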


@@ -0,0 +1,258 @@
# ~/.zshrc file for zsh interactive shells.
# see /usr/share/doc/zsh/examples/zshrc for examples
setopt autocd # change directory just by typing its name
#setopt correct # auto correct mistakes
setopt interactivecomments # allow comments in interactive mode
setopt magicequalsubst # enable filename expansion for arguments of the form anything=expression
setopt nonomatch # hide error message if there is no match for the pattern
setopt notify # report the status of background jobs immediately
setopt numericglobsort # sort filenames numerically when it makes sense
setopt promptsubst # enable command substitution in prompt
WORDCHARS=${WORDCHARS//\/} # Don't consider certain characters part of the word
# hide EOL sign ('%')
PROMPT_EOL_MARK=""
# configure key keybindings
bindkey -v # vi key bindings
bindkey ' ' magic-space # do history expansion on space
bindkey '^U' backward-kill-line # ctrl + U
bindkey '^[[3;5~' kill-word # ctrl + Supr
bindkey '^[[3~' delete-char # delete
bindkey '^[[1;5C' forward-word # ctrl + ->
bindkey '^[[1;5D' backward-word # ctrl + <-
bindkey '^[[5~' beginning-of-buffer-or-history # page up
bindkey '^[[6~' end-of-buffer-or-history # page down
bindkey '^[[H' beginning-of-line # home
bindkey '^[[F' end-of-line # end
bindkey '^[[Z' undo # shift + tab undo last action
# enable completion features
autoload -Uz compinit
compinit -d ~/.cache/zcompdump
zstyle ':completion:*:*:*:*:*' menu select
zstyle ':completion:*' auto-description 'specify: %d'
zstyle ':completion:*' completer _expand _complete
zstyle ':completion:*' format 'Completing %d'
zstyle ':completion:*' group-name ''
zstyle ':completion:*' list-colors ''
zstyle ':completion:*' list-prompt %SAt %p: Hit TAB for more, or the character to insert%s
zstyle ':completion:*' matcher-list 'm:{a-zA-Z}={A-Za-z}'
zstyle ':completion:*' rehash true
zstyle ':completion:*' select-prompt %SScrolling active: current selection at %p%s
zstyle ':completion:*' use-compctl false
zstyle ':completion:*' verbose true
zstyle ':completion:*:kill:*' command 'ps -u $USER -o pid,%cpu,tty,cputime,cmd'
# History configurations
HISTFILE=~/.zsh_history
HISTSIZE=10000
SAVEHIST=200000
setopt hist_expire_dups_first # delete duplicates first when HISTFILE size exceeds HISTSIZE
setopt hist_ignore_dups # ignore duplicated commands history list
setopt hist_ignore_space # ignore commands that start with space
setopt hist_verify # show command with history expansion to user before running it
#setopt share_history # share command history data
# force zsh to show the complete history
alias history="history 0"
# configure `time` format
TIMEFMT=$'\nreal\t%E\nuser\t%U\nsys\t%S\ncpu\t%P'
# make less more friendly for non-text input files, see lesspipe(1)
#[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
configure_prompt() {
prompt_symbol=㉿
# Skull emoji for root terminal
#[ "$EUID" -eq 0 ] && prompt_symbol=💀
case "$PROMPT_ALTERNATIVE" in
twoline)
PROMPT=$'%F{%(#.blue.green)}┌──${debian_chroot:+($debian_chroot)─}${VIRTUAL_ENV:+($(basename $VIRTUAL_ENV))─}(%B%F{%(#.red.blue)}%n'$prompt_symbol$'%m%b%F{%(#.blue.green)})-[%B%F{reset}%(6~.%-1~/…/%4~.%5~)%b%F{%(#.blue.green)}]\n└─%B%(#.%F{red}#.%F{blue}$)%b%F{reset} '
# Right-side prompt with exit codes and background processes
#RPROMPT=$'%(?.. %? %F{red}%B%b%F{reset})%(1j. %j %F{yellow}%B⚙%b%F{reset}.)'
;;
oneline)
PROMPT=$'${debian_chroot:+($debian_chroot)}${VIRTUAL_ENV:+($(basename $VIRTUAL_ENV))}%B%F{%(#.red.blue)}%n@%m%b%F{reset}:%B%F{%(#.blue.green)}%~%b%F{reset}%(#.#.$) '
RPROMPT=
;;
backtrack)
PROMPT=$'${debian_chroot:+($debian_chroot)}${VIRTUAL_ENV:+($(basename $VIRTUAL_ENV))}%B%F{red}%n@%m%b%F{reset}:%B%F{blue}%~%b%F{reset}%(#.#.$) '
RPROMPT=
;;
esac
unset prompt_symbol
}
# The following block is surrounded by two delimiters.
# These delimiters must not be modified. Thanks.
# START KALI CONFIG VARIABLES
PROMPT_ALTERNATIVE=twoline
NEWLINE_BEFORE_PROMPT=yes
# STOP KALI CONFIG VARIABLES
if [ "$color_prompt" = yes ]; then
# override default virtualenv indicator in prompt
VIRTUAL_ENV_DISABLE_PROMPT=1
configure_prompt
# enable syntax-highlighting
if [ -f /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh ]; then
. /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
ZSH_HIGHLIGHT_HIGHLIGHTERS=(main brackets pattern)
ZSH_HIGHLIGHT_STYLES[default]=none
ZSH_HIGHLIGHT_STYLES[unknown-token]=underline
ZSH_HIGHLIGHT_STYLES[reserved-word]=fg=cyan,bold
ZSH_HIGHLIGHT_STYLES[suffix-alias]=fg=green,underline
ZSH_HIGHLIGHT_STYLES[global-alias]=fg=green,bold
ZSH_HIGHLIGHT_STYLES[precommand]=fg=green,underline
ZSH_HIGHLIGHT_STYLES[commandseparator]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[autodirectory]=fg=green,underline
ZSH_HIGHLIGHT_STYLES[path]=bold
ZSH_HIGHLIGHT_STYLES[path_pathseparator]=
ZSH_HIGHLIGHT_STYLES[path_prefix_pathseparator]=
ZSH_HIGHLIGHT_STYLES[globbing]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[history-expansion]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[command-substitution]=none
ZSH_HIGHLIGHT_STYLES[command-substitution-delimiter]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[process-substitution]=none
ZSH_HIGHLIGHT_STYLES[process-substitution-delimiter]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[single-hyphen-option]=fg=green
ZSH_HIGHLIGHT_STYLES[double-hyphen-option]=fg=green
ZSH_HIGHLIGHT_STYLES[back-quoted-argument]=none
ZSH_HIGHLIGHT_STYLES[back-quoted-argument-delimiter]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[single-quoted-argument]=fg=yellow
ZSH_HIGHLIGHT_STYLES[double-quoted-argument]=fg=yellow
ZSH_HIGHLIGHT_STYLES[dollar-quoted-argument]=fg=yellow
ZSH_HIGHLIGHT_STYLES[rc-quote]=fg=magenta
ZSH_HIGHLIGHT_STYLES[dollar-double-quoted-argument]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[back-double-quoted-argument]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[back-dollar-quoted-argument]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[assign]=none
ZSH_HIGHLIGHT_STYLES[redirection]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[comment]=fg=black,bold
ZSH_HIGHLIGHT_STYLES[named-fd]=none
ZSH_HIGHLIGHT_STYLES[numeric-fd]=none
ZSH_HIGHLIGHT_STYLES[arg0]=fg=cyan
ZSH_HIGHLIGHT_STYLES[bracket-error]=fg=red,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-1]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-2]=fg=green,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-3]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-4]=fg=yellow,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-5]=fg=cyan,bold
ZSH_HIGHLIGHT_STYLES[cursor-matchingbracket]=standout
fi
else
PROMPT='${debian_chroot:+($debian_chroot)}%n@%m:%~%(#.#.$) '
fi
unset color_prompt force_color_prompt
toggle_oneline_prompt(){
if [ "$PROMPT_ALTERNATIVE" = oneline ]; then
PROMPT_ALTERNATIVE=twoline
else
PROMPT_ALTERNATIVE=oneline
fi
configure_prompt
zle reset-prompt
}
zle -N toggle_oneline_prompt
bindkey ^P toggle_oneline_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*|Eterm|aterm|kterm|gnome*|alacritty)
TERM_TITLE=$'\e]0;${debian_chroot:+($debian_chroot)}${VIRTUAL_ENV:+($(basename $VIRTUAL_ENV))}%n@%m: %~\a'
;;
*)
;;
esac
precmd() {
# Print the previously configured title
print -Pnr -- "$TERM_TITLE"
# Print a new line before the prompt, but only if it is not the first line
if [ "$NEWLINE_BEFORE_PROMPT" = yes ]; then
if [ -z "$_NEW_LINE_BEFORE_PROMPT" ]; then
_NEW_LINE_BEFORE_PROMPT=1
else
print ""
fi
fi
}
# enable color support of ls, less and man, and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
export LS_COLORS="$LS_COLORS:ow=30;44:" # fix ls color for folders with 777 permissions
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
alias diff='diff --color=auto'
alias ip='ip --color=auto'
export LESS_TERMCAP_mb=$'\E[1;31m' # begin blink
export LESS_TERMCAP_md=$'\E[1;36m' # begin bold
export LESS_TERMCAP_me=$'\E[0m' # reset bold/blink
export LESS_TERMCAP_so=$'\E[01;33m' # begin reverse video
export LESS_TERMCAP_se=$'\E[0m' # reset reverse video
export LESS_TERMCAP_us=$'\E[1;32m' # begin underline
export LESS_TERMCAP_ue=$'\E[0m' # reset underline
# Take advantage of $LS_COLORS for completion as well
zstyle ':completion:*' list-colors "${(s.:.)LS_COLORS}"
zstyle ':completion:*:*:kill:*:processes' list-colors '=(#b) #([0-9]#)*=0=01;31'
fi
# some more ls aliases
alias ll='ls -l'
alias la='ls -A'
alias l='ls -CF'
# enable auto-suggestions based on the history
if [ -f /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh ]; then
. /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh
# change suggestion color
ZSH_AUTOSUGGEST_HIGHLIGHT_STYLE='fg=#999'
fi
# enable command-not-found if installed
if [ -f /etc/zsh_command_not_found ]; then
. /etc/zsh_command_not_found
fi


@@ -0,0 +1,96 @@
#!/bin/bash
# KNEL System Configuration Initializer
# Applies system-wide configuration files with conditional logic
set -euo pipefail
echo "Running system configuration initializer..."
# Create necessary directories
mkdir -p "$ROOT_SSH_DIR"
# Deploy system configuration files from copied templates
if [[ -f ./ZSH/tsys-zshrc ]]; then
cp ./ZSH/tsys-zshrc /etc/zshrc
fi
if [[ -f ./SMTP/aliases ]]; then
cp ./SMTP/aliases /etc/aliases
newaliases
fi
if [[ -f ./Syslog/rsyslog.conf ]]; then
cp ./Syslog/rsyslog.conf /etc/rsyslog.conf
fi
# Configure DHCP client
if [[ -f ./DHCP/dhclient.conf ]]; then
cp ./DHCP/dhclient.conf /etc/dhcp/dhclient.conf
fi
# Configure SNMP
systemctl stop snmpd 2>/dev/null || true
/etc/init.d/snmpd stop 2>/dev/null || true
if [[ -f ./SNMP/snmp-sudo.conf ]]; then
cp ./SNMP/snmp-sudo.conf /etc/sudoers.d/Debian-snmp
fi
# Adjust SNMP service for log verbosity
sed -i "s|-Lsd|-LS6d|" /lib/systemd/system/snmpd.service
# Configure SNMP based on system type (with pi-detect)
if command -v vcgencmd >/dev/null 2>&1; then
export IS_RASPI="1"
else
export IS_RASPI="0"
fi
if [[ $IS_RASPI -eq 1 ]] && [[ -f ./SNMP/snmpd-rpi.conf ]]; then
cp ./SNMP/snmpd-rpi.conf /etc/snmp/snmpd.conf
elif [[ ${IS_PHYSICAL_HOST:-0} -eq 1 ]] && [[ -f ./SNMP/snmpd-physicalhost.conf ]]; then
cp ./SNMP/snmpd-physicalhost.conf /etc/snmp/snmpd.conf
elif [[ ${IS_VIRT_GUEST:-0} -eq 1 ]] && [[ -f ./SNMP/snmpd.conf ]]; then
cp ./SNMP/snmpd.conf /etc/snmp/snmpd.conf
fi
# Configure lldpd
if [[ -f ./NetworkDiscovery/lldpd ]]; then
cp ./NetworkDiscovery/lldpd /etc/default/lldpd
systemctl restart lldpd
fi
# Configure Cockpit
if [[ -f ./Cockpit/disallowed-users ]]; then
cp ./Cockpit/disallowed-users /etc/cockpit/disallowed-users
systemctl restart cockpit
fi
# Configure NTP for non-NTP servers
if [[ ${NTP_SERVER_CHECK:-0} -eq 0 ]] && [[ -f ./NTP/ntp.conf ]]; then
cp ./NTP/ntp.conf /etc/ntpsec/ntp.conf
systemctl restart ntpsec.service
fi
# Always install rsyslog (removed librenms conditional)
DEBIAN_FRONTEND="noninteractive" apt-get -qq --yes -o Dpkg::Options::="--force-confold" install rsyslog
systemctl stop rsyslog
systemctl start rsyslog
# Reload systemd and restart SNMP
systemctl daemon-reload
systemctl restart snmpd 2>/dev/null || true
/etc/init.d/snmpd restart 2>/dev/null || true
# Performance tuning based on system type
if [[ ${IS_PHYSICAL_HOST:-0} -gt 0 ]]; then
cpufreq-set -r -g performance 2>/dev/null || true
cpupower frequency-set --governor performance 2>/dev/null || true
fi
if [[ ${IS_VIRT_GUEST:-0} -eq 1 ]]; then
tuned-adm profile virtual-guest 2>/dev/null || true
fi
echo "System configuration initializer completed"
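The SNMP branch in the script (Raspberry Pi first, then physical host, then virtual guest) can be factored into a function so the precedence is testable in isolation. A sketch; the function name and argument order are assumptions for illustration, not part of the script above:

```shell
# Mirrors the snmpd.conf selection precedence from the initializer:
# Raspberry Pi wins over physical host, which wins over virtual guest.
pick_snmpd_conf() {
    local is_raspi="$1" is_physical="$2" is_guest="$3"
    if [ "$is_raspi" -eq 1 ]; then echo snmpd-rpi.conf
    elif [ "$is_physical" -eq 1 ]; then echo snmpd-physicalhost.conf
    elif [ "$is_guest" -eq 1 ]; then echo snmpd.conf
    else echo none
    fi
}
pick_snmpd_conf 1 1 0   # prints snmpd-rpi.conf (the pi check wins)
```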


@@ -0,0 +1,3 @@
# See man 5 aliases for format
postmaster: root
root: coo@turnsys.com


@@ -0,0 +1,6 @@
module(load="imuxsock") # provides support for local system logging
module(load="imklog") # provides kernel logging support
#module(load="immark") # provides --MARK-- message capability
*.* @tsys-librenms.knel.net:514
:omusrmsg:EOF


@@ -0,0 +1,258 @@
# ~/.zshrc file for zsh interactive shells.
# see /usr/share/doc/zsh/examples/zshrc for examples
setopt autocd # change directory just by typing its name
#setopt correct # auto correct mistakes
setopt interactivecomments # allow comments in interactive mode
setopt magicequalsubst # enable filename expansion for arguments of the form anything=expression
setopt nonomatch # hide error message if there is no match for the pattern
setopt notify # report the status of background jobs immediately
setopt numericglobsort # sort filenames numerically when it makes sense
setopt promptsubst # enable command substitution in prompt
WORDCHARS=${WORDCHARS//\/} # Don't consider certain characters part of the word
# hide EOL sign ('%')
PROMPT_EOL_MARK=""
# configure key keybindings
bindkey -v # vi key bindings
bindkey ' ' magic-space # do history expansion on space
bindkey '^U' backward-kill-line # ctrl + U
bindkey '^[[3;5~' kill-word # ctrl + Supr
bindkey '^[[3~' delete-char # delete
bindkey '^[[1;5C' forward-word # ctrl + ->
bindkey '^[[1;5D' backward-word # ctrl + <-
bindkey '^[[5~' beginning-of-buffer-or-history # page up
bindkey '^[[6~' end-of-buffer-or-history # page down
bindkey '^[[H' beginning-of-line # home
bindkey '^[[F' end-of-line # end
bindkey '^[[Z' undo # shift + tab undo last action
# enable completion features
autoload -Uz compinit
compinit -d ~/.cache/zcompdump
zstyle ':completion:*:*:*:*:*' menu select
zstyle ':completion:*' auto-description 'specify: %d'
zstyle ':completion:*' completer _expand _complete
zstyle ':completion:*' format 'Completing %d'
zstyle ':completion:*' group-name ''
zstyle ':completion:*' list-colors ''
zstyle ':completion:*' list-prompt %SAt %p: Hit TAB for more, or the character to insert%s
zstyle ':completion:*' matcher-list 'm:{a-zA-Z}={A-Za-z}'
zstyle ':completion:*' rehash true
zstyle ':completion:*' select-prompt %SScrolling active: current selection at %p%s
zstyle ':completion:*' use-compctl false
zstyle ':completion:*' verbose true
zstyle ':completion:*:kill:*' command 'ps -u $USER -o pid,%cpu,tty,cputime,cmd'
# History configurations
HISTFILE=~/.zsh_history
HISTSIZE=10000
SAVEHIST=200000
setopt hist_expire_dups_first # delete duplicates first when HISTFILE size exceeds HISTSIZE
setopt hist_ignore_dups # ignore duplicated commands history list
setopt hist_ignore_space # ignore commands that start with space
setopt hist_verify # show command with history expansion to user before running it
#setopt share_history # share command history data
# force zsh to show the complete history
alias history="history 0"
# configure `time` format
TIMEFMT=$'\nreal\t%E\nuser\t%U\nsys\t%S\ncpu\t%P'
# make less more friendly for non-text input files, see lesspipe(1)
#[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
configure_prompt() {
prompt_symbol=㉿
# Skull emoji for root terminal
#[ "$EUID" -eq 0 ] && prompt_symbol=💀
case "$PROMPT_ALTERNATIVE" in
twoline)
PROMPT=$'%F{%(#.blue.green)}┌──${debian_chroot:+($debian_chroot)─}${VIRTUAL_ENV:+($(basename $VIRTUAL_ENV))─}(%B%F{%(#.red.blue)}%n'$prompt_symbol$'%m%b%F{%(#.blue.green)})-[%B%F{reset}%(6~.%-1~/…/%4~.%5~)%b%F{%(#.blue.green)}]\n└─%B%(#.%F{red}#.%F{blue}$)%b%F{reset} '
# Right-side prompt with exit codes and background processes
#RPROMPT=$'%(?.. %? %F{red}%B%b%F{reset})%(1j. %j %F{yellow}%B⚙%b%F{reset}.)'
;;
oneline)
PROMPT=$'${debian_chroot:+($debian_chroot)}${VIRTUAL_ENV:+($(basename $VIRTUAL_ENV))}%B%F{%(#.red.blue)}%n@%m%b%F{reset}:%B%F{%(#.blue.green)}%~%b%F{reset}%(#.#.$) '
RPROMPT=
;;
backtrack)
PROMPT=$'${debian_chroot:+($debian_chroot)}${VIRTUAL_ENV:+($(basename $VIRTUAL_ENV))}%B%F{red}%n@%m%b%F{reset}:%B%F{blue}%~%b%F{reset}%(#.#.$) '
RPROMPT=
;;
esac
unset prompt_symbol
}
# The following block is surrounded by two delimiters.
# These delimiters must not be modified. Thanks.
# START KALI CONFIG VARIABLES
PROMPT_ALTERNATIVE=twoline
NEWLINE_BEFORE_PROMPT=yes
# STOP KALI CONFIG VARIABLES
if [ "$color_prompt" = yes ]; then
# override default virtualenv indicator in prompt
VIRTUAL_ENV_DISABLE_PROMPT=1
configure_prompt
# enable syntax-highlighting
if [ -f /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh ]; then
. /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
ZSH_HIGHLIGHT_HIGHLIGHTERS=(main brackets pattern)
ZSH_HIGHLIGHT_STYLES[default]=none
ZSH_HIGHLIGHT_STYLES[unknown-token]=underline
ZSH_HIGHLIGHT_STYLES[reserved-word]=fg=cyan,bold
ZSH_HIGHLIGHT_STYLES[suffix-alias]=fg=green,underline
ZSH_HIGHLIGHT_STYLES[global-alias]=fg=green,bold
ZSH_HIGHLIGHT_STYLES[precommand]=fg=green,underline
ZSH_HIGHLIGHT_STYLES[commandseparator]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[autodirectory]=fg=green,underline
ZSH_HIGHLIGHT_STYLES[path]=bold
ZSH_HIGHLIGHT_STYLES[path_pathseparator]=
ZSH_HIGHLIGHT_STYLES[path_prefix_pathseparator]=
ZSH_HIGHLIGHT_STYLES[globbing]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[history-expansion]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[command-substitution]=none
ZSH_HIGHLIGHT_STYLES[command-substitution-delimiter]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[process-substitution]=none
ZSH_HIGHLIGHT_STYLES[process-substitution-delimiter]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[single-hyphen-option]=fg=green
ZSH_HIGHLIGHT_STYLES[double-hyphen-option]=fg=green
ZSH_HIGHLIGHT_STYLES[back-quoted-argument]=none
ZSH_HIGHLIGHT_STYLES[back-quoted-argument-delimiter]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[single-quoted-argument]=fg=yellow
ZSH_HIGHLIGHT_STYLES[double-quoted-argument]=fg=yellow
ZSH_HIGHLIGHT_STYLES[dollar-quoted-argument]=fg=yellow
ZSH_HIGHLIGHT_STYLES[rc-quote]=fg=magenta
ZSH_HIGHLIGHT_STYLES[dollar-double-quoted-argument]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[back-double-quoted-argument]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[back-dollar-quoted-argument]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[assign]=none
ZSH_HIGHLIGHT_STYLES[redirection]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[comment]=fg=black,bold
ZSH_HIGHLIGHT_STYLES[named-fd]=none
ZSH_HIGHLIGHT_STYLES[numeric-fd]=none
ZSH_HIGHLIGHT_STYLES[arg0]=fg=cyan
ZSH_HIGHLIGHT_STYLES[bracket-error]=fg=red,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-1]=fg=blue,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-2]=fg=green,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-3]=fg=magenta,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-4]=fg=yellow,bold
ZSH_HIGHLIGHT_STYLES[bracket-level-5]=fg=cyan,bold
ZSH_HIGHLIGHT_STYLES[cursor-matchingbracket]=standout
fi
else
PROMPT='${debian_chroot:+($debian_chroot)}%n@%m:%~%(#.#.$) '
fi
unset color_prompt force_color_prompt
toggle_oneline_prompt(){
if [ "$PROMPT_ALTERNATIVE" = oneline ]; then
PROMPT_ALTERNATIVE=twoline
else
PROMPT_ALTERNATIVE=oneline
fi
configure_prompt
zle reset-prompt
}
zle -N toggle_oneline_prompt
bindkey ^P toggle_oneline_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*|Eterm|aterm|kterm|gnome*|alacritty)
TERM_TITLE=$'\e]0;${debian_chroot:+($debian_chroot)}${VIRTUAL_ENV:+($(basename $VIRTUAL_ENV))}%n@%m: %~\a'
;;
*)
;;
esac
precmd() {
# Print the previously configured title
print -Pnr -- "$TERM_TITLE"
# Print a new line before the prompt, but only if it is not the first line
if [ "$NEWLINE_BEFORE_PROMPT" = yes ]; then
if [ -z "$_NEW_LINE_BEFORE_PROMPT" ]; then
_NEW_LINE_BEFORE_PROMPT=1
else
print ""
fi
fi
}
# enable color support of ls, less and man, and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
export LS_COLORS="$LS_COLORS:ow=30;44:" # fix ls color for folders with 777 permissions
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
alias diff='diff --color=auto'
alias ip='ip --color=auto'
export LESS_TERMCAP_mb=$'\E[1;31m' # begin blink
export LESS_TERMCAP_md=$'\E[1;36m' # begin bold
export LESS_TERMCAP_me=$'\E[0m' # reset bold/blink
export LESS_TERMCAP_so=$'\E[01;33m' # begin reverse video
export LESS_TERMCAP_se=$'\E[0m' # reset reverse video
export LESS_TERMCAP_us=$'\E[1;32m' # begin underline
export LESS_TERMCAP_ue=$'\E[0m' # reset underline
# Take advantage of $LS_COLORS for completion as well
zstyle ':completion:*' list-colors "${(s.:.)LS_COLORS}"
zstyle ':completion:*:*:kill:*:processes' list-colors '=(#b) #([0-9]#)*=0=01;31'
fi
# some more ls aliases
alias ll='ls -l'
alias la='ls -A'
alias l='ls -CF'
# enable auto-suggestions based on the history
if [ -f /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh ]; then
. /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh
# change suggestion color
ZSH_AUTOSUGGEST_HIGHLIGHT_STYLE='fg=#999'
fi
# enable command-not-found if installed
if [ -f /etc/zsh_command_not_found ]; then
. /etc/zsh_command_not_found
fi

initializers/system-setup/apply Executable file

@@ -0,0 +1,44 @@
#!/bin/bash
# KNEL System Initialization
# This initializer performs basic system detection and setup
set -euo pipefail
echo "Performing system initialization..."
# Detect system characteristics
# NOTE: grep -c prints "0" AND exits non-zero when nothing matches, so a
# plain "|| echo 0" fallback would capture "0\n0". Capture first, then
# default only when the command produced no output (e.g. file missing).
UBUNTU_CHECK="$(grep -c Ubuntu /etc/os-release 2>/dev/null)" || UBUNTU_CHECK="${UBUNTU_CHECK:-0}"
IS_PHYSICAL_HOST="$(/usr/sbin/dmidecode -t System 2>/dev/null | grep -c Dell)" || IS_PHYSICAL_HOST="${IS_PHYSICAL_HOST:-0}"
SUBODEV_CHECK="$(getent passwd | grep -c subodev)" || SUBODEV_CHECK="${SUBODEV_CHECK:-0}"
LOCALUSER_CHECK="$(getent passwd | grep -c localuser)" || LOCALUSER_CHECK="${LOCALUSER_CHECK:-0}"
KALI_CHECK="$(grep -c kali /etc/os-release 2>/dev/null)" || KALI_CHECK="${KALI_CHECK:-0}"
export UBUNTU_CHECK IS_PHYSICAL_HOST SUBODEV_CHECK LOCALUSER_CHECK KALI_CHECK
# Detect virtualization
if command -v virt-what >/dev/null 2>&1; then
export VIRT_TYPE="$(virt-what 2>/dev/null || echo "")"
IS_VIRT_GUEST="$(echo "$VIRT_TYPE" | grep -E -c 'hyperv|kvm')" || IS_VIRT_GUEST="${IS_VIRT_GUEST:-0}"
IS_KVM_GUEST="$(echo "$VIRT_TYPE" | grep -c 'kvm')" || IS_KVM_GUEST="${IS_KVM_GUEST:-0}"
export IS_VIRT_GUEST IS_KVM_GUEST
else
export VIRT_TYPE=""
export IS_VIRT_GUEST="0"
export IS_KVM_GUEST="0"
fi
# Detect special host types
LIBRENMS_CHECK="$(hostname | grep -c tsys-librenms)" || LIBRENMS_CHECK="${LIBRENMS_CHECK:-0}"
NTP_SERVER_CHECK="$(hostname | grep -E -c 'pfv-netboot|pfvsvrpi')" || NTP_SERVER_CHECK="${NTP_SERVER_CHECK:-0}"
DEV_WORKSTATION_CHECK="$(hostname | grep -E -c 'subopi-dev|CharlesDevServer')" || DEV_WORKSTATION_CHECK="${DEV_WORKSTATION_CHECK:-0}"
export LIBRENMS_CHECK NTP_SERVER_CHECK DEV_WORKSTATION_CHECK
# Raspberry Pi detection
if command -v vcgencmd >/dev/null 2>&1; then
export IS_RASPI="1"
else
export IS_RASPI="0"
fi
# Set current timestamp for logging
export CURRENT_TIMESTAMP="$(date '+%Y-%m-%d %H:%M:%S')"
echo "System initialization complete"
echo "Ubuntu: $UBUNTU_CHECK, Physical: $IS_PHYSICAL_HOST, Virtual: $IS_VIRT_GUEST"
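The detection script leans heavily on `grep -c` counts. One pitfall worth noting: `grep -c` prints `0` and exits non-zero when nothing matches, so a naive `|| echo 0` fallback captures two lines rather than one. A minimal, standalone demonstration (the temp file is throwaway, nothing from the repo):

```shell
tmp="$(mktemp)"
printf 'hello\nworld\n' > "$tmp"

# naive: the fallback fires even though grep already printed a count,
# producing "0" + newline + "0"
naive="$(grep -c Ubuntu "$tmp" || echo 0)"

# robust: capture first, then default only if nothing was produced at all
robust="$(grep -c Ubuntu "$tmp" 2>/dev/null)" || robust="${robust:-0}"

printf 'naive spans %s line(s), robust=%s\n' \
  "$(printf '%s\n' "$naive" | wc -l)" "$robust"
rm -f "$tmp"
```

The robust form keeps each `*_CHECK` variable a single clean integer, which matters later when the values feed `[[ … -gt 0 ]]` tests.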


@@ -0,0 +1,26 @@
#!/bin/bash
# KNEL Unattended Upgrades Initializer
# Configures automatic security updates based on Debian unattended-upgrades
set -euo pipefail
echo "Running unattended upgrades initializer..."
# Install unattended-upgrades
DEBIAN_FRONTEND="noninteractive" apt-get -y install unattended-upgrades
# Configure unattended-upgrades
if [[ -f ./configs/50unattended-upgrades ]]; then
cp ./configs/50unattended-upgrades /etc/apt/apt.conf.d/50unattended-upgrades
fi
# Copy auto-upgrades configuration template
if [[ -f ./configs/auto-upgrades ]]; then
cp ./configs/auto-upgrades /etc/apt/apt.conf.d/auto-upgrades
fi
# Enable unattended-upgrades service
dpkg-reconfigure -f noninteractive unattended-upgrades
echo "Unattended upgrades initializer completed"
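The conditional-copy steps above can be sketched as a small reusable helper. `deploy_config` is a hypothetical name (not part of the repo), and the directories below are throwaway temp dirs:

```shell
# Copy a config into place only when the source exists, logging either way,
# mirroring the "[[ -f ./configs/X ]] && cp" pattern in the apply scripts.
deploy_config() {
  src="$1"; dst="$2"
  if [ -f "$src" ]; then
    cp "$src" "$dst"
    echo "deployed: $dst"
  else
    echo "skipped (missing): $src"
  fi
}

srcdir="$(mktemp -d)"; dstdir="$(mktemp -d)"
printf 'APT::Periodic::Unattended-Upgrade "1";\n' > "$srcdir/auto-upgrades"

deploy_config "$srcdir/auto-upgrades" "$dstdir/auto-upgrades"
deploy_config "$srcdir/50unattended-upgrades" "$dstdir/50unattended-upgrades"
```

Logging the skip explicitly makes a missing config visible in provisioning output instead of silently falling through.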


@@ -0,0 +1,46 @@
// KNEL Unattended-Upgrades Configuration
// Automatically install security updates
Unattended-Upgrade {
// Automatically upgrade packages from these origins
Allowed-Origins {
"${distro_id}:${distro_codename}";
"${distro_id}:${distro_codename}-security";
"${distro_id}ESMApps:${distro_codename}-apps-security";
"${distro_id}ESM:${distro_codename}-infra-security";
};
// Package blacklist - never auto-upgrade these
Package-Blacklist {
};
// Send email to this address for problems or package upgrades.
// Uncomment and set a valid address for notifications. Note that inside
// this Unattended-Upgrade { } block the key is just "Mail"; the fully
// scoped "Unattended-Upgrade::Mail" form is only for top-level use.
//Mail "admin@knownelement.com";
// Remove unused automatically installed kernel-related packages
Remove-Unused-Kernel-Packages "true";
// Do automatic removal of newly unused dependencies after the upgrade
Remove-New-Unused-Dependencies "true";
// Remove unused dependencies
Remove-Unused-Dependencies "true";
// Automatically reboot *WITHOUT CONFIRMATION* if the file
// /var/run/reboot-required is found after the upgrade
Automatic-Reboot "false";
// If automatic reboot is enabled and the system needs to reboot,
// reboot at the specific time instead of immediately
//Automatic-Reboot-Time "02:00";
// Use apt bandwidth limit feature
//Acquire::http::Dl-Limit "70";
// Enable logging to syslog
SyslogEnable "true";
// Syslog facility
SyslogFacility "daemon";
};


@@ -0,0 +1,7 @@
// KNEL Auto-Upgrades Configuration
// Enable unattended-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";


@@ -0,0 +1,26 @@
#!/bin/bash
# KNEL User Configuration Initializer
# Configures user shells and other user-specific settings
set -euo pipefail
echo "Running user configuration initializer..."
# Change shell to zsh for root
chsh -s "$(which zsh)" root
# Change shell to zsh for localuser if exists
if [[ ${LOCALUSER_CHECK:-0} -gt 0 ]]; then
chsh -s "$(which zsh)" localuser
fi
# Change shell to zsh for subodev if exists
if [[ ${SUBODEV_CHECK:-0} -gt 0 ]]; then
chsh -s "$(which zsh)" subodev
fi
# Enable accounting
/usr/sbin/accton on
echo "User configuration initializer completed"
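The per-user guards could equally query `getent` directly rather than relying on counts exported by an earlier initializer. A hedged sketch — `set_shell_if_present` is a hypothetical helper, and it only prints the `chsh` it would run, since `chsh` itself requires root:

```shell
# Change a user's shell only if the account exists in the passwd database.
set_shell_if_present() {
  user="$1"; shell="$2"
  if getent passwd "$user" >/dev/null; then
    echo "would run: chsh -s $shell $user"
  else
    echo "no such user: $user"
  fi
}

set_shell_if_present root /usr/bin/zsh
set_shell_if_present no-such-user-xyz /usr/bin/zsh
```

This keeps the user-config initializer self-contained: it no longer breaks under `set -u` if system-setup did not run first.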

initializers/wazuh/apply Executable file

@@ -0,0 +1,44 @@
#!/bin/bash
# KNEL Wazuh Security Module
# Deploys and configures Wazuh security monitoring
set -euo pipefail
echo "Running Wazuh security module..."
# Check if this is the Wazuh server
TSYS_NSM_CHECK="$(hostname | grep -c tsys-nsm)" || TSYS_NSM_CHECK="${TSYS_NSM_CHECK:-0}"
export TSYS_NSM_CHECK
if [[ $TSYS_NSM_CHECK -eq 0 ]]; then
echo "Setting up Wazuh agent..."
# Remove existing keyring if present
if [[ -f /usr/share/keyrings/wazuh.gpg ]]; then
rm -f /usr/share/keyrings/wazuh.gpg
fi
# Add Wazuh repository
curl -fsSL https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import
chmod 644 /usr/share/keyrings/wazuh.gpg
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" > /etc/apt/sources.list.d/wazuh.list
# Install Wazuh agent
apt-get update
DEBIAN_FRONTEND="noninteractive" apt-get -y install wazuh-agent
# Configure Wazuh agent
if [[ -f ./configs/wazuh-agent.conf ]]; then
cp ./configs/wazuh-agent.conf /var/ossec/etc/ossec.conf
fi
# Start and enable Wazuh agent
systemctl daemon-reload
systemctl enable wazuh-agent
systemctl restart wazuh-agent
else
echo "This is a Wazuh server, skipping agent setup"
fi
echo "Wazuh security module completed"
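The module's server/agent branch hinges on the hostname: any host whose name contains `tsys-nsm` is treated as the Wazuh server, everything else becomes an agent. That decision can be sketched in isolation (`role_for_host` is a hypothetical helper, not part of the module):

```shell
# Classify a host as Wazuh server or agent the same way the module does,
# defaulting the count safely when grep finds nothing.
role_for_host() {
  host="$1"
  n="$(printf '%s\n' "$host" | grep -c tsys-nsm)" || n="${n:-0}"
  if [ "$n" -eq 0 ]; then
    echo agent
  else
    echo server
  fi
}

role_for_host tsys-nsm.knel.net   # server
role_for_host web01.knel.net      # agent
```

Pulling the check into a function makes the branch testable without touching the Wazuh repositories or services.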


@@ -0,0 +1,118 @@
<!-- KNEL Wazuh Agent Configuration -->
<ossec_config>
<client>
<server>
<address>tsys-nsm.knel.net</address>
<port>1514</port>
<protocol>tcp</protocol>
</server>
<config-profile>ubuntu, ubuntu20, ubuntu20.04</config-profile>
<notify_time>10</notify_time>
<time-reconnect>60</time-reconnect>
<auto_restart>yes</auto_restart>
<crypto_method>aes</crypto_method>
</client>
<client_buffer>
<!-- Agent buffer options -->
<disabled>no</disabled>
<queue_size>5000</queue_size>
<events_per_second>500</events_per_second>
</client_buffer>
<!-- Policy monitoring -->
<rootcheck>
<disabled>no</disabled>
<check_files>yes</check_files>
<check_trojans>yes</check_trojans>
<check_dev>yes</check_dev>
<check_sys>yes</check_sys>
<check_pids>yes</check_pids>
<check_ports>yes</check_ports>
<check_unixaudit>yes</check_unixaudit>
<frequency>43200</frequency>
</rootcheck>
<!-- File integrity monitoring -->
<syscheck>
<disabled>no</disabled>
<frequency>43200</frequency>
<scan_on_start>yes</scan_on_start>
<alert_new_files>yes</alert_new_files>
<auto_ignore>no</auto_ignore>
<!-- Directories to monitor -->
<directories check_all="yes">/etc,/usr/bin,/usr/sbin,/bin,/sbin</directories>
<directories check_all="yes">/usr/local/bin,/usr/local/sbin</directories>
<!-- Files to monitor -->
<files>/etc/passwd,/etc/shadow,/etc/group,/etc/gshadow</files>
<files>/etc/ssh/sshd_config,/etc/ssh/ssh_config</files>
<!-- Ignore these files -->
<ignore>/etc/mtab</ignore>
<ignore>/etc/hosts.deny</ignore>
<ignore>/etc/mail/statistics</ignore>
<ignore>/etc/random-seed</ignore>
<ignore>/etc/adjtime</ignore>
<ignore>/etc/httpd/logs</ignore>
<ignore>/etc/utmpx</ignore>
<ignore>/etc/wtmpx</ignore>
<ignore>/etc/cups/certs</ignore>
<ignore>/etc/dumpdates</ignore>
<ignore>/etc/svc/volatile</ignore>
<!-- File types to ignore -->
<nodiff>/etc/ssl/private.key</nodiff>
</syscheck>
<!-- Log analysis -->
<localfile>
<log_format>COMMAND</log_format>
<command>df -P</command>
<frequency>360</frequency>
</localfile>
<localfile>
<log_format>full_command</log_format>
<command>netstat -tulpn | sed 's/:::/:/g' | sed 's/::/:/g' | sed 's/0\.0\.0\.0/:/g' | sed 's/127\.0\.0\.1/:/g' | sort</command>
<frequency>360</frequency>
</localfile>
<localfile>
<log_format>full_command</log_format>
<command>last -n 20</command>
<frequency>360</frequency>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/syslog</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/auth.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/kern.log</location>
</localfile>
<localfile>
<log_format>syslog</log_format>
<location>/var/log/dmesg</location>
</localfile>
<!-- Active response -->
<active-response>
<disabled>no</disabled>
</active-response>
<!-- Labels -->
<labels>
<label key="environment">production</label>
<label key="organization">KnownElement</label>
</labels>
</ossec_config>

modules/README.md Normal file

@@ -0,0 +1,4 @@
# Modules directory is intentionally empty: all functionality has moved to initializers for one-time provisioning.
# Future: modules will be created for ongoing management once the repository transitions to Ansible/Salt.
# This directory is kept as a placeholder for the eventual Ansible module structure.

roles/monitoring Normal file

@@ -0,0 +1,5 @@
# Monitoring Role
# Combines monitoring-related initializers
oam
salt-client

Some files were not shown because too many files have changed in this diff.