Introduction
Hirte is a deterministic multi-node service controller designed for highly regulated industries with functional safety requirements. Unlike general-purpose orchestrators such as Kubernetes, Hirte provides the predictable behavior that safety systems require while keeping a lightweight footprint. It manages service state across multiple nodes by integrating with systemd through its D-Bus API and relaying D-Bus messages over TCP for multi-node support.
This guide provides a comprehensive overview of Hirte’s architecture, implementation, and practical deployment on Rocky Linux 9.
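The control flow just described — a manager process relaying unit commands to per-node agents over TCP — can be sketched with a toy manager/agent pair. This is purely illustrative: real Hirte speaks D-Bus over the socket, not the ad-hoc strings used here, and the port and message format below are made up for the demo.

```python
import socket
import threading

def run_agent(srv):
    """Toy agent: accept one command and report a deterministic result."""
    conn, _ = srv.accept()
    with conn:
        cmd = conn.recv(1024).decode()        # e.g. "start nginx.service"
        # A real agent would invoke systemd's StartUnit via D-Bus here.
        conn.sendall(f"{cmd}: done".encode())

def manager_send(port, command):
    """Toy manager: relay a command to the agent and return its reply."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(command.encode())
        return sock.recv(1024).decode()

srv = socket.create_server(("127.0.0.1", 0))   # ephemeral port for the demo
port = srv.getsockname()[1]
t = threading.Thread(target=run_agent, args=(srv,))
t.start()
reply = manager_send(port, "start nginx.service")
t.join()
srv.close()
print(reply)  # start nginx.service: done
```

The determinism Hirte advertises has the same shape: one request, one relayed systemd call, one definite reply, with no background reconciliation loop.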
What is Hirte?
Hirte (German for “shepherd”) is a minimalist service orchestrator that prioritizes:
- Deterministic behavior: Critical for safety-certified systems
 - Lightweight architecture: Minimal resource overhead
 - Direct systemd integration: Leverages existing Linux infrastructure
 - Multi-node coordination: TCP-based D-Bus message relay
 - Container support: Native integration with Podman and Quadlet
 
Key Features
- Predictable State Management: Every action has a deterministic outcome
 - Minimal Dependencies: Built on standard Linux components
 - Safety-First Design: Suitable for FuSa (Functional Safety) environments
 - Simple API: Easy integration with existing tools
 - Container-Native: First-class support for containerized workloads
 
Architecture Overview
Hirte’s architecture is elegantly simple yet powerful:
```mermaid
graph TB
    subgraph "Control Node"
        CLI[hirtectl CLI]
        HM[Hirte Manager<br/>Port 2020]
        DBUS1[D-Bus API]
        SD1[systemd]
        AGENT1[Hirte Agent]
    end

    subgraph "Worker Node 1"
        AGENT2[Hirte Agent]
        DBUS2[D-Bus API]
        SD2[systemd]
        SVC1[Services/<br/>Containers]
    end

    subgraph "Worker Node 2"
        AGENT3[Hirte Agent]
        DBUS3[D-Bus API]
        SD3[systemd]
        SVC2[Services/<br/>Containers]
    end

    CLI --> HM
    HM --> AGENT1
    AGENT1 --> DBUS1
    DBUS1 --> SD1

    HM -.TCP 2020.-> AGENT2
    AGENT2 --> DBUS2
    DBUS2 --> SD2
    SD2 --> SVC1

    HM -.TCP 2020.-> AGENT3
    AGENT3 --> DBUS3
    DBUS3 --> SD3
    SD3 --> SVC2

    style HM fill:#4a90e2,color:#fff
    style SD1 fill:#50e3c2,color:#000
    style SD2 fill:#50e3c2,color:#000
    style SD3 fill:#50e3c2,color:#000
```

Implementation Workflow
The implementation process follows a structured approach:
```mermaid
sequenceDiagram
    participant Admin
    participant hirtectl
    participant Manager
    participant Agent
    participant DBus
    participant systemd
    participant Service

    Admin->>hirtectl: start node1 nginx.service
    hirtectl->>Manager: Service start request
    Manager->>Manager: Validate request
    Manager->>Agent: Relay D-Bus command
    Agent->>DBus: D-Bus method call
    DBus->>systemd: Start unit
    systemd->>Service: Execute service
    Service-->>systemd: Running status
    systemd-->>DBus: Success response
    DBus-->>Agent: Method response
    Agent-->>Manager: Operation result
    Manager-->>hirtectl: Success confirmation
    hirtectl-->>Admin: Service started
```

Container Integration with Podman
Hirte seamlessly integrates with Podman through Quadlet:
```mermaid
graph TB
    subgraph "Container Management Flow"
        Q[Quadlet Files<br/>.container/.pod]
        SG[systemd-generator]
        SU[systemd Units]
        H[Hirte Control]
        P[Podman Runtime]
        C[Containers]
    end

    Q --> SG
    SG --> SU
    H --> SU
    SU --> P
    P --> C

    style Q fill:#f9a825,color:#000
    style H fill:#4a90e2,color:#fff
    style P fill:#7b1fa2,color:#fff
```

Hirte vs Kubernetes Comparison
Understanding when to use Hirte versus Kubernetes:
```mermaid
graph LR
    subgraph "Hirte Characteristics"
        H1[Deterministic]
        H2[Lightweight<br/>~10MB]
        H3[Direct systemd]
        H4[Simple State]
        H5[FuSa Ready]
    end

    subgraph "Kubernetes Characteristics"
        K1[Declarative]
        K2[Heavy<br/>~1GB+]
        K3[Abstract Layers]
        K4[Complex State]
        K5[Cloud Native]
    end

    subgraph "Use Cases"
        U1[Safety Critical ✓]
        U2[Edge Computing ✓]
        U3[Regulated Industry ✓]
        U4[Large Scale ✗]
        U5[Multi-Cloud ✗]
    end

    H1 --> U1
    H2 --> U2
    H5 --> U3
    K4 --> U4
    K5 --> U5

    style H1 fill:#4caf50,color:#fff
    style H5 fill:#4caf50,color:#fff
    style K4 fill:#2196f3,color:#fff
    style K5 fill:#2196f3,color:#fff
```

Setup Instructions
Prerequisites
For this implementation, you’ll need:
- Two Rocky Linux 9 servers (one primary, one agent node)
 - Network connectivity between nodes
 - Root or sudo access
 - Basic understanding of systemd
 
Step 1: Install Hirte
Since Hirte isn’t in default repositories, we’ll use the COPR repository.
On the primary node:
```bash
# Enable the COPR repository
sudo dnf install -y dnf-plugins-core
sudo dnf copr enable mperina/hirte-snapshot el9

# Install Hirte components
sudo dnf install -y hirte hirte-agent hirtectl

# Verify installation
hirtectl --version
```

On the agent node:
```bash
# Enable the COPR repository
sudo dnf install -y dnf-plugins-core
sudo dnf copr enable mperina/hirte-snapshot el9

# Install only the agent
sudo dnf install -y hirte-agent
```

Step 2: Configure Hirte
On the primary node (replace 192.168.1.10 with your primary node’s IP):
```bash
# Create Hirte manager configuration
sudo tee /etc/hirte/hirte.conf << EOF
[hirte]
ManagerPort=2020
AllowedNodeNames=primary,agent
LogLevel=info
LogTarget=journald
EOF

# Create agent configuration for the local agent
sudo tee /etc/hirte/agent.conf << EOF
[hirte-agent]
NodeName=primary
ManagerHost=127.0.0.1
ManagerPort=2020
LogLevel=info
LogTarget=journald
HeartbeatInterval=5
EOF
```

On the agent node (replace 192.168.1.10 with your primary node's IP):

```bash
# Create agent configuration
sudo tee /etc/hirte/agent.conf << EOF
[hirte-agent]
NodeName=agent
ManagerHost=192.168.1.10
ManagerPort=2020
LogLevel=info
LogTarget=journald
HeartbeatInterval=5
ReconnectInterval=10
EOF
```

Step 3: Configure Firewall
On the primary node:
```bash
# Open the Hirte manager port
sudo firewall-cmd --permanent --add-port=2020/tcp
sudo firewall-cmd --reload

# Verify firewall rules
sudo firewall-cmd --list-ports
```

Step 4: Start Services
On the primary node:
```bash
# Start and enable services
sudo systemctl start hirte hirte-agent
sudo systemctl enable hirte hirte-agent

# Verify services are running
sudo systemctl status hirte hirte-agent
```

On the agent node:

```bash
# Start and enable the agent
sudo systemctl start hirte-agent
sudo systemctl enable hirte-agent

# Verify the agent is running
sudo systemctl status hirte-agent
```

Step 5: Verify Connectivity
On the primary node:
```bash
# Check logs for the agent connection
sudo journalctl -lfu hirte

# Expected output should include:
# "Agent 'agent' connected from 192.168.1.20"

# List connected nodes
sudo hirtectl list-nodes

# Expected output:
# NODE      STATE     LAST SEEN
# primary   online    now
# agent     online    2s ago
```

Testing Functionality
Basic Service Control
Let’s test controlling services across nodes:
```bash
# List all units on all nodes
sudo hirtectl list-units

# List units on a specific node
sudo hirtectl list-units agent

# Install a test service on the agent node
ssh agent-node "sudo dnf install -y httpd"

# Control the service from the primary node
sudo hirtectl start agent httpd.service
sudo hirtectl status agent httpd.service
sudo hirtectl stop agent httpd.service
sudo hirtectl restart agent httpd.service

# Enable the service at boot
sudo hirtectl enable agent httpd.service
```

Container Management with Podman
First, install Podman on the agent node:
```bash
# On the agent node
sudo dnf install -y podman

# Create a Quadlet container file
sudo mkdir -p /etc/containers/systemd
sudo tee /etc/containers/systemd/webserver.container << EOF
[Unit]
Description=Nginx Web Server
After=network-online.target

[Container]
Image=docker.io/nginx:latest
PublishPort=8080:80
Volume=/var/www:/usr/share/nginx/html:ro
Environment=NGINX_HOST=example.com

[Service]
Restart=always
TimeoutStartSec=900

[Install]
WantedBy=default.target
EOF
```
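Conceptually, Quadlet's systemd generator expands the `[Container]` keys in a file like this into an ordinary unit whose `ExecStart` runs podman. The sketch below is a rough illustration of that translation, not the real generator (which ships with Podman, handles many more keys, supports repeated keys, and uses systemd specifiers):

```python
import configparser

# A minimal .container file, as in the example above (one key each;
# the real format allows repeated keys, which configparser rejects).
QUADLET = """
[Unit]
Description=Nginx Web Server

[Container]
Image=docker.io/nginx:latest
PublishPort=8080:80
Environment=NGINX_HOST=example.com
"""

def quadlet_to_unit(text):
    """Illustrative translation of a [Container] section into a
    podman run command line for a generated [Service] section."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # keep key case as written
    cfg.read_string(text)
    c = cfg["Container"]
    args = ["/usr/bin/podman", "run", "--replace", "--name=%N"]
    if "PublishPort" in c:
        args.append(f"--publish={c['PublishPort']}")
    if "Environment" in c:
        args.append(f"--env={c['Environment']}")
    args.append(c["Image"])  # image goes last
    return "[Service]\nExecStart=" + " ".join(args) + "\n"

unit = quadlet_to_unit(QUADLET)
print(unit)
```

After the daemon-reload step, the actual generated unit can be inspected with `systemctl cat webserver.service`.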
```bash
# Reload systemd to generate the service
sudo systemctl daemon-reload
```

Now control the container from the primary node:
```bash
# Start container
sudo hirtectl start agent webserver.service

# Check container status
sudo hirtectl status agent webserver.service

# View container logs
ssh agent-node "sudo journalctl -u webserver.service"

# Stop container
sudo hirtectl stop agent webserver.service
```

Multi-Container Pod Example
Create a pod with multiple containers:
```bash
# On the agent node - create the pod file
sudo tee /etc/containers/systemd/webapp.pod << EOF
[Unit]
Description=Web Application Pod

[Pod]
PodName=webapp
Network=bridge
PublishPort=8080:80
PublishPort=9090:9090
EOF

# Create the application container
sudo tee /etc/containers/systemd/webapp-app.container << EOF
[Unit]
Description=Web Application
After=webapp.pod

[Container]
Image=docker.io/myapp:latest
Pod=webapp
Environment=DB_HOST=localhost
Environment=DB_PORT=5432

[Service]
Restart=always

[Install]
WantedBy=default.target
EOF

# Create the database container
sudo tee /etc/containers/systemd/webapp-db.container << EOF
[Unit]
Description=PostgreSQL Database
After=webapp.pod

[Container]
Image=docker.io/postgres:14
Pod=webapp
Environment=POSTGRES_PASSWORD=secret
Environment=POSTGRES_DB=webapp
Volume=webapp-db-data:/var/lib/postgresql/data

[Service]
Restart=always

[Install]
WantedBy=default.target
EOF
```
```bash
# Reload systemd
sudo systemctl daemon-reload
```

Control the pod from the primary node:
```bash
# Start the entire pod
sudo hirtectl start agent webapp.service
sudo hirtectl start agent webapp-app.service
sudo hirtectl start agent webapp-db.service

# Check pod status
sudo hirtectl list-units agent | grep webapp
```

Advanced Features
Service Dependencies
Leverage systemd’s dependency model:
```bash
# Create dependent services on the agent node
sudo tee /etc/systemd/system/app-backend.service << EOF
[Unit]
Description=Application Backend
After=network.target postgresql.service
Requires=postgresql.service

[Service]
Type=simple
ExecStart=/usr/bin/app-backend
Restart=on-failure
User=appuser
Group=appgroup

[Install]
WantedBy=multi-user.target
EOF

sudo tee /etc/systemd/system/app-frontend.service << EOF
[Unit]
Description=Application Frontend
After=app-backend.service
Requires=app-backend.service

[Service]
Type=simple
ExecStart=/usr/bin/app-frontend
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd
sudo systemctl daemon-reload

# Start the frontend (automatically starts the backend)
sudo hirtectl start agent app-frontend.service
```

Monitoring Integration
Create a monitoring setup:
```bash
# Create monitoring script
sudo tee /usr/local/bin/hirte-monitor.sh << 'EOF'
#!/bin/bash

# Monitor all nodes
for node in $(hirtectl list-nodes | tail -n +2 | awk '{print $1}'); do
    echo "=== Node: $node ==="

    # Get failed units
    failed=$(hirtectl list-units $node | grep -c failed) || failed=0
    if [ $failed -gt 0 ]; then
        echo "WARNING: $failed failed units on $node"
        hirtectl list-units $node | grep failed
    fi

    # Check node connectivity
    last_seen=$(hirtectl list-nodes | grep $node | awk '{print $3, $4}')
    echo "Last seen: $last_seen"

    echo ""
done
EOF

sudo chmod +x /usr/local/bin/hirte-monitor.sh
```
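The monitor script scrapes hirtectl's tabular output with grep and awk; if you need structured data for another tool, the same parsing can be done in Python. A small sketch, assuming the column layout matches the `list-nodes` output shown earlier (which may differ between versions):

```python
def parse_list_nodes(output):
    """Parse 'hirtectl list-nodes' tabular output into a list of dicts.
    The three-column layout (NODE, STATE, LAST SEEN) is an assumption."""
    rows = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) >= 3:
            rows.append({
                "node": parts[0],
                "state": parts[1],
                "last_seen": " ".join(parts[2:]),
            })
    return rows

# Sample output, as shown in the verification step
sample = """NODE      STATE     LAST SEEN
primary   online    now
agent     online    2s ago"""

nodes = parse_list_nodes(sample)
print(nodes)
# [{'node': 'primary', 'state': 'online', 'last_seen': 'now'},
#  {'node': 'agent', 'state': 'online', 'last_seen': '2s ago'}]
```

In a real script you would feed `subprocess.run(["hirtectl", "list-nodes"], ...)` output into the same function, as the dashboard example later in this guide does.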
```bash
# Create a systemd timer for monitoring
sudo tee /etc/systemd/system/hirte-monitor.service << EOF
[Unit]
Description=Hirte Monitoring

[Service]
Type=oneshot
ExecStart=/usr/local/bin/hirte-monitor.sh
StandardOutput=journal
EOF

sudo tee /etc/systemd/system/hirte-monitor.timer << EOF
[Unit]
Description=Run Hirte Monitoring every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
EOF

# Enable monitoring
sudo systemctl daemon-reload
sudo systemctl enable --now hirte-monitor.timer
```

Multi-Node Orchestration
Create orchestration scripts for complex deployments:
```bash
# Orchestration script
sudo tee /usr/local/bin/deploy-stack.sh << 'EOF'
#!/bin/bash
set -e

echo "Deploying application stack..."

# Start database on the primary node
echo "Starting database..."
hirtectl start primary postgresql.service
sleep 5

# Wait for the database to be ready (hirtectl has no exec command,
# so query the primary node directly over SSH)
until ssh primary-node "pg_isready -U postgres"; do
    echo "Waiting for database..."
    sleep 2
done

# Start backend services on the agent node
echo "Starting backend services..."
hirtectl start agent app-backend.service
hirtectl start agent cache.service

# Wait for backends
sleep 10

# Start frontend services
echo "Starting frontend services..."
hirtectl start agent nginx.service
hirtectl start agent app-frontend.service

echo "Stack deployment complete!"

# Show status
hirtectl list-units primary | grep -E "postgresql|running"
hirtectl list-units agent | grep -E "app-|nginx|cache|running"
EOF

sudo chmod +x /usr/local/bin/deploy-stack.sh
```

Security Considerations
TLS Encryption
While Hirte doesn’t natively support TLS, you can add it using stunnel:
```bash
# Install stunnel (on both nodes)
sudo dnf install -y stunnel

# On primary node - create stunnel server config
sudo tee /etc/stunnel/hirte-server.conf << EOF
[hirte-manager]
accept = 0.0.0.0:2021
connect = 127.0.0.1:2020
cert = /etc/stunnel/hirte.pem
EOF

# On agent node - create stunnel client config
sudo tee /etc/stunnel/hirte-client.conf << EOF
client = yes

[hirte-agent]
accept = 127.0.0.1:2020
connect = 192.168.1.10:2021
EOF

# Generate key and certificate (on primary), then combine them into
# the single PEM file stunnel expects
sudo openssl req -new -x509 -days 365 -nodes \
    -out /etc/stunnel/hirte-cert.pem \
    -keyout /etc/stunnel/hirte-key.pem \
    -subj "/C=US/ST=State/L=City/O=Organization/CN=hirte.local"
sudo sh -c 'cat /etc/stunnel/hirte-key.pem /etc/stunnel/hirte-cert.pem > /etc/stunnel/hirte.pem'
sudo chmod 600 /etc/stunnel/hirte.pem

# Start stunnel services
sudo systemctl enable --now stunnel@hirte-server    # on primary
sudo systemctl enable --now stunnel@hirte-client    # on agent

# Update agent config to use the local stunnel endpoint
sudo sed -i 's/ManagerHost=.*/ManagerHost=127.0.0.1/' /etc/hirte/agent.conf
sudo systemctl restart hirte-agent
```

Access Control
Implement basic access control:
```bash
# Create hirte group
sudo groupadd hirte-operators

# Add users to the group
sudo usermod -a -G hirte-operators operator1

# Restrict hirtectl access
sudo chown root:hirte-operators /usr/bin/hirtectl
sudo chmod 750 /usr/bin/hirtectl

# Create sudo rules for specific operations
sudo tee /etc/sudoers.d/hirte << EOF
# Allow hirte-operators to use hirtectl
%hirte-operators ALL=(root) NOPASSWD: /usr/bin/hirtectl list-units *
%hirte-operators ALL=(root) NOPASSWD: /usr/bin/hirtectl status * *
%hirte-operators ALL=(root) NOPASSWD: /usr/bin/hirtectl start * *.service
%hirte-operators ALL=(root) NOPASSWD: /usr/bin/hirtectl stop * *.service
%hirte-operators ALL=(root) NOPASSWD: /usr/bin/hirtectl restart * *.service
EOF
```

Audit Logging
Enable comprehensive audit logging:
```bash
# Configure auditd rules
sudo tee -a /etc/audit/rules.d/hirte.rules << EOF
# Audit Hirte operations
-w /usr/bin/hirtectl -p x -k hirte_commands
-w /etc/hirte/ -p wa -k hirte_config
-w /var/log/hirte/ -p wa -k hirte_logs

# Audit systemctl executions alongside Hirte
-w /usr/bin/systemctl -p x -k systemd_hirte
EOF

# Reload audit rules
sudo augenrules --load
sudo systemctl restart auditd

# Create log aggregation
sudo tee /usr/local/bin/hirte-audit-report.sh << 'EOF'
#!/bin/bash
echo "Hirte Audit Report - $(date)"
echo "=========================="
echo ""
echo "Recent Hirte Commands:"
ausearch -k hirte_commands -ts recent | aureport -x
echo ""
echo "Configuration Changes:"
ausearch -k hirte_config -ts recent | aureport -f
echo ""
echo "Service Operations:"
journalctl -u hirte -u hirte-agent --since "1 hour ago" | grep -E "start|stop|restart"
EOF

sudo chmod +x /usr/local/bin/hirte-audit-report.sh
```

Troubleshooting
Common Issues and Solutions
Connection Failures
```bash
# Check if the manager is listening
sudo ss -tlnp | grep 2020

# Test connectivity from the agent
telnet 192.168.1.10 2020

# Check firewall on both nodes
sudo firewall-cmd --list-all

# Verify DNS resolution
ping -c 1 primary-node

# Check agent logs
sudo journalctl -u hirte-agent -f
```

Service Control Issues
```bash
# Verify local systemd control works
sudo systemctl status httpd

# Check D-Bus connectivity
sudo busctl status

# Test D-Bus method calls
sudo busctl call org.freedesktop.systemd1 \
    /org/freedesktop/systemd1 \
    org.freedesktop.systemd1.Manager \
    GetUnit s "httpd.service"

# Check Hirte agent D-Bus permissions
sudo -u hirte-agent busctl status
```

Performance Issues
```bash
# Monitor Hirte resource usage
sudo systemctl status hirte hirte-agent

# Check connections on the manager port
sudo ss -tnp | grep 2020

# Monitor D-Bus traffic
sudo busctl monitor org.freedesktop.systemd1

# Check system load
top -b -n 1 | head -20
```

Debug Mode
Enable verbose logging for troubleshooting:
```bash
# Enable debug logging
sudo sed -i 's/LogLevel=.*/LogLevel=debug/' /etc/hirte/hirte.conf
sudo sed -i 's/LogLevel=.*/LogLevel=debug/' /etc/hirte/agent.conf

# Restart services
sudo systemctl restart hirte hirte-agent

# Watch debug logs
sudo journalctl -f -u hirte -u hirte-agent

# Disable debug when done
sudo sed -i 's/LogLevel=.*/LogLevel=info/' /etc/hirte/hirte.conf
sudo sed -i 's/LogLevel=.*/LogLevel=info/' /etc/hirte/agent.conf
sudo systemctl restart hirte hirte-agent
```

Best Practices
1. Service Organization
```
# Use consistent naming conventions
webapp-frontend.service
webapp-backend.service
webapp-database.service

# Group related services
infrastructure-*.service   # Infrastructure services
app-*.service              # Application services
monitor-*.service          # Monitoring services
```

2. Health Checks
```bash
# Create health check script
sudo tee /usr/local/bin/check-hirte-health.sh << 'EOF'
#!/bin/bash

ERRORS=0

# Check Hirte services
for service in hirte hirte-agent; do
    if ! systemctl is-active --quiet $service; then
        echo "ERROR: $service is not running"
        ((ERRORS++))
    fi
done

# Check node connectivity
NODES=$(hirtectl list-nodes | tail -n +2 | wc -l)
if [ $NODES -lt 2 ]; then
    echo "ERROR: Expected at least 2 nodes, found $NODES"
    ((ERRORS++))
fi

# Check for failed units
FAILED=$(hirtectl list-units | grep -c failed || true)
if [ $FAILED -gt 0 ]; then
    echo "ERROR: $FAILED failed units detected"
    ((ERRORS++))
fi

exit $ERRORS
EOF

sudo chmod +x /usr/local/bin/check-hirte-health.sh

# Add to monitoring
echo "*/5 * * * * root /usr/local/bin/check-hirte-health.sh || logger -t hirte-health 'Health check failed'" | sudo tee -a /etc/crontab
```

3. Backup and Recovery
```bash
# Backup Hirte configuration
sudo tee /usr/local/bin/backup-hirte.sh << 'EOF'
#!/bin/bash

BACKUP_DIR="/var/backups/hirte"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR

# Backup configurations
tar -czf $BACKUP_DIR/hirte-config-$TIMESTAMP.tar.gz \
    /etc/hirte/ \
    /etc/systemd/system/*.service \
    /etc/containers/systemd/

# Backup node state
hirtectl list-nodes > $BACKUP_DIR/nodes-$TIMESTAMP.txt
hirtectl list-units > $BACKUP_DIR/units-$TIMESTAMP.txt

# Keep only the last 7 days
find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete
find $BACKUP_DIR -name "*.txt" -mtime +7 -delete

echo "Backup completed: $BACKUP_DIR/*-$TIMESTAMP.*"
EOF

sudo chmod +x /usr/local/bin/backup-hirte.sh

# Schedule daily backups
echo "0 2 * * * root /usr/local/bin/backup-hirte.sh" | sudo tee -a /etc/crontab
```

Production Deployment
High Availability Setup
For production environments, implement HA for the Hirte manager:
```bash
# Install keepalived on both manager nodes
sudo dnf install -y keepalived

# On the primary manager
sudo tee /etc/keepalived/keepalived.conf << EOF
vrrp_instance HIRTE_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret123
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    notify_master "/usr/local/bin/hirte-failover.sh master"
    notify_backup "/usr/local/bin/hirte-failover.sh backup"
}
EOF

# On the backup manager
sudo tee /etc/keepalived/keepalived.conf << EOF
vrrp_instance HIRTE_VIP {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret123
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    notify_master "/usr/local/bin/hirte-failover.sh master"
    notify_backup "/usr/local/bin/hirte-failover.sh backup"
}
EOF

# Create failover script
sudo tee /usr/local/bin/hirte-failover.sh << 'EOF'
#!/bin/bash

STATE=$1

case $STATE in
    master)
        systemctl start hirte
        logger -t hirte-ha "Became master, starting Hirte manager"
        ;;
    backup)
        systemctl stop hirte
        logger -t hirte-ha "Became backup, stopping Hirte manager"
        ;;
esac
EOF

sudo chmod +x /usr/local/bin/hirte-failover.sh

# Start keepalived
sudo systemctl enable --now keepalived

# Configure agents to use the VIP
sudo sed -i 's/ManagerHost=.*/ManagerHost=192.168.1.100/' /etc/hirte/agent.conf
sudo systemctl restart hirte-agent
```

Monitoring Dashboard
Create a simple monitoring dashboard:
```bash
# Install dependencies
sudo dnf install -y python3 python3-flask python3-requests

# Create dashboard application
sudo tee /opt/hirte-dashboard.py << 'EOF'
#!/usr/bin/env python3

from flask import Flask, render_template_string
import subprocess

app = Flask(__name__)

DASHBOARD_TEMPLATE = '''
<!DOCTYPE html>
<html>
<head>
    <title>Hirte Dashboard</title>
    <meta http-equiv="refresh" content="30">
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        table { border-collapse: collapse; width: 100%; margin: 20px 0; }
        th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
        th { background-color: #4CAF50; color: white; }
        .online { color: green; }
        .offline { color: red; }
        .running { background-color: #e8f5e9; }
        .failed { background-color: #ffebee; }
    </style>
</head>
<body>
    <h1>Hirte Service Controller Dashboard</h1>

    <h2>Nodes</h2>
    <table>
        <tr><th>Node</th><th>State</th><th>Last Seen</th></tr>
        {% for node in nodes %}
        <tr>
            <td>{{ node.name }}</td>
            <td class="{{ node.state }}">{{ node.state }}</td>
            <td>{{ node.last_seen }}</td>
        </tr>
        {% endfor %}
    </table>

    <h2>Services</h2>
    <table>
        <tr><th>Node</th><th>Service</th><th>State</th><th>Description</th></tr>
        {% for service in services %}
        <tr class="{{ service.state }}">
            <td>{{ service.node }}</td>
            <td>{{ service.name }}</td>
            <td>{{ service.state }}</td>
            <td>{{ service.description }}</td>
        </tr>
        {% endfor %}
    </table>

    <p>Last updated: {{ timestamp }}</p>
</body>
</html>
'''

@app.route('/')
def dashboard():
    from datetime import datetime

    # Get nodes
    nodes = []
    result = subprocess.run(['sudo', 'hirtectl', 'list-nodes'],
                            capture_output=True, text=True)
    if result.returncode == 0:
        lines = result.stdout.strip().split('\n')[1:]  # Skip header
        for line in lines:
            parts = line.split()
            if len(parts) >= 3:
                nodes.append({
                    'name': parts[0],
                    'state': parts[1],
                    'last_seen': ' '.join(parts[2:])
                })

    # Get services
    services = []
    for node in nodes:
        result = subprocess.run(['sudo', 'hirtectl', 'list-units', node['name']],
                                capture_output=True, text=True)
        if result.returncode == 0:
            lines = result.stdout.strip().split('\n')[1:]  # Skip header
            for line in lines:
                parts = line.split(None, 3)
                if len(parts) >= 3:
                    services.append({
                        'node': node['name'],
                        'name': parts[0],
                        'state': parts[1],
                        'description': parts[3] if len(parts) > 3 else ''
                    })

    return render_template_string(DASHBOARD_TEMPLATE,
                                  nodes=nodes,
                                  services=services,
                                  timestamp=datetime.now().strftime('%Y-%m-%d %H:%M:%S'))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
EOF
```
```bash
sudo chmod +x /opt/hirte-dashboard.py

# Create systemd service
sudo tee /etc/systemd/system/hirte-dashboard.service << EOF
[Unit]
Description=Hirte Dashboard
After=network.target hirte.service

[Service]
Type=simple
ExecStart=/opt/hirte-dashboard.py
Restart=always
User=nobody
Group=nobody

[Install]
WantedBy=multi-user.target
EOF

# Add sudo permissions for the dashboard
echo "nobody ALL=(root) NOPASSWD: /usr/bin/hirtectl list-nodes, /usr/bin/hirtectl list-units *" | sudo tee -a /etc/sudoers.d/hirte-dashboard

# Start dashboard
sudo systemctl daemon-reload
sudo systemctl enable --now hirte-dashboard

# Open firewall port
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
```

Conclusion
Hirte provides a unique solution for deterministic service orchestration across multiple nodes, making it ideal for:
- Safety-critical systems requiring predictable behavior
 - Regulated environments where Kubernetes complexity is prohibitive
 - Edge computing scenarios with resource constraints
 - Legacy system integration leveraging existing systemd infrastructure
 - Container orchestration without the overhead of k8s
 
Key Advantages
- Deterministic Behavior: Every operation has predictable outcomes
 - Minimal Footprint: ~10MB memory usage vs gigabytes for k8s
 - Native Integration: Direct systemd control without abstraction layers
 - Simple Architecture: Easy to understand, audit, and validate
 - Container Support: First-class Podman/Quadlet integration
 
When to Use Hirte
Choose Hirte when you need:
- Functional safety compliance (ISO 26262, IEC 61508)
 - Predictable real-time behavior
 - Minimal resource overhead
 - Simple multi-node coordination
 - Direct systemd integration
 
When NOT to Use Hirte
Consider alternatives when you need:
- Large-scale deployments (1000+ nodes)
 - Complex scheduling algorithms
 - Multi-cloud portability
 - Extensive ecosystem of operators
 - Dynamic auto-scaling
 
Hirte fills a critical gap in the orchestration landscape, providing a safety-focused alternative to cloud-native solutions while maintaining the simplicity and reliability that regulated industries require.