Automation Tools for Developers: Real Workflows Without AI - CLI, Scripts & Open Source
Master free automation tools for developers. Learn to automate repetitive tasks, workflows, deployments, monitoring, and operations. Build custom automation pipelines with open-source tools—no AI needed.
Introduction: Automation is Your Multiplier
Most developers waste 10-20 hours per week on repetitive tasks.
Deploying code manually. Running tests by hand. Checking system status. Updating documentation. Resetting test databases. Rebuilding Docker containers. Running backups. Scanning logs for errors.
Hours and hours of mindless repetition.
Meanwhile, some developers do the same work in a fraction of the time.
Why?
They automated.
Not with AI. Not with expensive tools.
With free, open-source automation tools that are simple, reliable, and actually work.
Here’s the truth: Automation is the most practical superpower a developer can have.
Every hour you automate is an hour you get back. Forever. Not once. Forever.
Automate a 30-minute weekly task? That’s 26 hours per year. Over a 10-year career, that’s 260 hours. That’s 6+ weeks of work.
Automate correctly, and you live in a different world.
This guide teaches you the automation tools that matter. Not the trendy ones. Not the ones that require AI or clouds or subscriptions.
The free ones. The reliable ones. The ones that actually solve problems.
Chapter 1: Understanding Automation Levels
Before diving into tools, understand what we’re automating.
Level 1: Simple Task Automation
You have a task that takes 5 minutes. You do it once a day.
Example: Backup your database to S3.
Without automation:
# Every morning you manually run
mysqldump -u user -p db_name | gzip > backup_$(date +%Y%m%d).sql.gz
aws s3 cp backup_$(date +%Y%m%d).sql.gz s3://backups/
Time: 5 minutes. Frequency: Daily.
With automation (cron job):
# Set once, runs forever (in crontab, % must be escaped as \%; keep the MySQL password in ~/.my.cnf rather than passing -p, which would prompt)
0 2 * * * mysqldump -u user db_name | gzip > /backups/backup_$(date +\%Y\%m\%d).sql.gz && aws s3 cp /backups/backup_$(date +\%Y\%m\%d).sql.gz s3://backups/
Time: 0 minutes (after setup). Frequency: Automatic.
Result: 5 min × 365 days = ~30 hours per year saved.
Level 2: Workflow Automation
You have a multi-step process. Steps depend on each other.
Example: Deploy new code to production.
Without automation:
- Pull latest code (2 min)
- Run tests (10 min)
- Build Docker image (5 min)
- Push to registry (3 min)
- Update Kubernetes (2 min)
- Verify deployment (3 min)
Total: 25 minutes. Happens 2-3 times per day.
With automation (CI/CD pipeline):
# GitHub Actions example
name: Deploy
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm test
      - run: docker build -t app:${{ github.sha }} .
      - run: docker push app:${{ github.sha }}
      - run: kubectl set image deployment/app app=app:${{ github.sha }}
Time: 0 minutes (automatic). Frequency: Every push.
Result: 25 min × 600 deployments/year = 250 hours per year saved.
Level 3: Intelligent Automation
Your automation detects conditions and responds intelligently.
Example: Monitor system health and take action.
Without automation:
- Manually check CPU/memory (1 min)
- Check error logs (5 min)
- If bad, restart service (2 min)
- Notify team (1 min)
Time: up to 9 minutes per incident. Frequency: 3-5 times per week ≈ 30-40 hours per year.
With automation (monitoring + alerting):
# Monitor continuously
while true; do
  cpu=$(top -bn1 | grep "Cpu(s)" | awk '{print 100 - $8}')
  if (( $(echo "$cpu > 80" | bc -l) )); then
    # Restart service
    systemctl restart myapp
    # Send alert
    curl -X POST https://hooks.slack.com/... -d "{\"text\": \"High CPU, restarted app\"}"
  fi
  sleep 60
done
Time: 0 minutes (running in background). Frequency: Continuous.
Result: 30-40 hours per year saved + better reliability.
Chapter 2: Cron Jobs - Scheduled Task Automation
The most basic automation is cron. It runs tasks on a schedule.
Cron Basics
Cron runs commands at specified times.
Format:
MIN HOUR DAY MONTH DAY_OF_WEEK COMMAND
0 2 * * * /path/to/script.sh
Fields:
- MIN (0-59): Minute
- HOUR (0-23): Hour (24-hour format)
- DAY (1-31): Day of month
- MONTH (1-12): Month
- DAY_OF_WEEK (0-6): Day of week (0=Sunday, 6=Saturday)
Common patterns:
# Every minute
* * * * * command
# Every hour
0 * * * * command
# Daily at 2:00 AM
0 2 * * * command
# Weekly on Monday at 9:00 AM
0 9 * * 1 command
# Monthly on 1st at 3:00 AM
0 3 1 * * command
# Every 15 minutes
*/15 * * * * command
# Monday-Friday at 9:00 AM
0 9 * * 1-5 command
# Twice daily (9 AM and 9 PM)
0 9,21 * * * command
Setting Up Cron Jobs
Edit your crontab:
crontab -e
Add your job:
# Backup database daily
0 2 * * * /home/user/backup-db.sh
# Check disk space every hour
0 * * * * /home/user/check-disk.sh
# Deploy to staging every Sunday at midnight
0 0 * * 0 /home/user/deploy-staging.sh
View your crons:
crontab -l
Real Example: Automated Backups
Script: /home/user/backup-db.sh
#!/bin/bash
set -o pipefail
# Configuration (DB_PASS is expected in the environment; alternatively use ~/.my.cnf and drop -p)
DB_NAME="production_db"
DB_USER="backup_user"
BACKUP_DIR="/var/backups/database"
S3_BUCKET="s3://company-backups"
RETENTION_DAYS=30
# Create backup
BACKUP_FILE="$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S).sql.gz"
mysqldump -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" | gzip > "$BACKUP_FILE"
DUMP_STATUS=$?
# Upload to S3
aws s3 cp "$BACKUP_FILE" "$S3_BUCKET/"
UPLOAD_STATUS=$?
# Delete old backups (local)
find "$BACKUP_DIR" -name "backup_*.sql.gz" -mtime +$RETENTION_DAYS -delete
# Old backups in S3 are best expired with an S3 lifecycle rule on the bucket
# (aws s3 rm has no age filter)
# Send notification
if [ $DUMP_STATUS -eq 0 ] && [ $UPLOAD_STATUS -eq 0 ]; then
  echo "Backup successful: $BACKUP_FILE" | mail -s "DB Backup Success" admin@company.com
else
  echo "Backup failed!" | mail -s "DB Backup FAILED" admin@company.com
fi
Add to crontab:
0 2 * * * /home/user/backup-db.sh >> /var/log/backup.log 2>&1
Result: Automatic daily backups, retention management, error notifications.
Time saved: 30 min/week ≈ 26 hours per year, or about 260 hours over 10 years.
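A backup you have never restored is an untested backup. A minimal sketch of a monthly restore check (the scratch database restore_test and the script path are illustrative; MySQL credentials are assumed to come from ~/.my.cnf):
#!/bin/bash
# restore-test.sh - load the newest backup into a throwaway database
set -o pipefail
BACKUP_DIR="/var/backups/database"
SCRATCH_DB="restore_test" # throwaway database, assumed to already exist
LATEST=$(ls -t "$BACKUP_DIR"/backup_*.sql.gz | head -1)
if gunzip -c "$LATEST" | mysql "$SCRATCH_DB"; then
  echo "Restore test passed: $LATEST"
else
  echo "Restore test FAILED: $LATEST" | mail -s "Backup restore test FAILED" admin@company.com
fi
Schedule it monthly:
0 5 1 * * /usr/local/bin/restore-test.sh >> /var/log/restore-test.log 2>&1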
Chapter 3: Shell Scripts - Custom Automation
Cron handles scheduling. Shell scripts handle the logic.
Basic Shell Script Structure
#!/bin/bash
# Configuration at the top
CONFIG_FILE="/etc/myapp/config.conf"
LOG_FILE="/var/log/myapp.log"
# Functions
log_info() {
  echo "[$(date +'%Y-%m-%d %H:%M:%S')] INFO: $1" >> "$LOG_FILE"
}
log_error() {
  echo "[$(date +'%Y-%m-%d %H:%M:%S')] ERROR: $1" >> "$LOG_FILE"
}
# Main logic
main() {
  log_info "Starting automated task"
  # Do something
  if some_command; then
    log_info "Task completed successfully"
  else
    log_error "Task failed!"
    exit 1
  fi
  log_info "Exiting"
}
# Run
main "$@"
Real Example: Health Check & Recovery
Script: /usr/local/bin/app-health-check.sh
#!/bin/bash
APP_PORT=8080
APP_PID_FILE="/var/run/myapp.pid"
APP_RESTART_CMD="systemctl restart myapp"
SLACK_WEBHOOK="https://hooks.slack.com/..."
health_check() {
  # Check if app is responding
  http_code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:$APP_PORT/health)
  if [ "$http_code" != "200" ]; then
    return 1
  fi
  return 0
}
notify_slack() {
  local message=$1
  curl -X POST "$SLACK_WEBHOOK" \
    -H 'Content-type: application/json' \
    -d "{\"text\": \"$message\"}"
}
recover_app() {
  notify_slack "🚨 App health check failed. Attempting restart..."
  $APP_RESTART_CMD
  sleep 5
  if health_check; then
    notify_slack "✅ App recovered after restart"
    return 0
  else
    notify_slack "❌ App recovery failed. Manual intervention needed."
    return 1
  fi
}
# Main
if ! health_check; then
  recover_app
  exit $?
fi
exit 0
Add to crontab (every 5 minutes):
*/5 * * * * /usr/local/bin/app-health-check.sh
Result: Automatic recovery if app goes down. Slack notifications.
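One caveat with a 5-minute schedule: if a check or restart ever takes longer than the interval, cron starts a second copy on top of the first. A common guard is to wrap the cron entry in flock (the lock file path here is arbitrary):
# -n makes an overlapping run exit immediately instead of queueing
*/5 * * * * flock -n /tmp/app-health-check.lock /usr/local/bin/app-health-check.sh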
Chapter 4: GNU Make - Build Automation
Make is ancient, powerful, and perfect for automating build tasks.
Make Basics
Create a Makefile (recipe lines must be indented with a tab, not spaces):
# Variables
PYTHON := python3
NODE := node
DOCKER := docker

# Targets
.PHONY: help install test build deploy clean

help:
	@echo "Available commands:"
	@echo "  make install - Install dependencies"
	@echo "  make test    - Run tests"
	@echo "  make build   - Build application"
	@echo "  make deploy  - Deploy to production"
	@echo "  make clean   - Clean build artifacts"

install:
	pip install -r requirements.txt
	npm install

test:
	pytest tests/
	npm test

build:
	$(DOCKER) build -t myapp:latest .

deploy: build
	$(DOCKER) push myapp:latest
	kubectl apply -f deployment.yaml

clean:
	rm -rf __pycache__
	rm -rf build/
	rm -rf dist/
Use it:
make install # Runs install target
make test # Runs test target
make deploy # Builds then deploys
Real Example: Multi-Step Deployment
# Variables
SERVICE_NAME := payment-api
REGISTRY := docker.io/company
VERSION := $(shell git rev-parse --short HEAD)
ENVIRONMENT := production

# Targets
.PHONY: build push deploy verify rollback clean

build:
	@echo "Building $(SERVICE_NAME):$(VERSION)"
	docker build -t $(SERVICE_NAME):$(VERSION) .
	docker tag $(SERVICE_NAME):$(VERSION) $(SERVICE_NAME):latest

push: build
	@echo "Pushing to registry"
	docker tag $(SERVICE_NAME):$(VERSION) $(REGISTRY)/$(SERVICE_NAME):$(VERSION)
	docker tag $(SERVICE_NAME):latest $(REGISTRY)/$(SERVICE_NAME):latest
	docker push $(REGISTRY)/$(SERVICE_NAME):$(VERSION)
	docker push $(REGISTRY)/$(SERVICE_NAME):latest

deploy: push
	@echo "Deploying to $(ENVIRONMENT)"
	kubectl set image deployment/$(SERVICE_NAME) \
		$(SERVICE_NAME)=$(REGISTRY)/$(SERVICE_NAME):$(VERSION) \
		--namespace=$(ENVIRONMENT)
	kubectl rollout status deployment/$(SERVICE_NAME) -n $(ENVIRONMENT)

verify:
	@echo "Verifying deployment"
	@./verify-deployment.sh $(SERVICE_NAME) $(ENVIRONMENT)

rollback:
	@echo "Rolling back to previous version"
	kubectl rollout undo deployment/$(SERVICE_NAME) -n $(ENVIRONMENT)

clean:
	docker rmi $(SERVICE_NAME):$(VERSION) $(SERVICE_NAME):latest
Use it:
make deploy # Build, push, and deploy
make rollback # Rollback if needed
make verify # Verify it's working
Chapter 5: Task Runners - Workflow Orchestration
For more complex workflows, use task runners like just or task.
Task (go-task)
Install:
# macOS
brew install go-task/tap/go-task
# Linux (snap; also available via the install script at taskfile.dev)
sudo snap install task --classic
Create Taskfile.yml:
version: "3"

vars:
  SERVICE: payment-api
  REGISTRY: docker.io/company
  ENVIRONMENT: production
  VERSION:
    sh: git rev-parse --short HEAD

tasks:
  build:
    desc: Build Docker image
    cmds:
      - docker build -t {{.SERVICE}}:{{.VERSION}} .
      - docker tag {{.SERVICE}}:{{.VERSION}} {{.SERVICE}}:latest

  test:
    desc: Run test suite
    cmds:
      - pytest tests/
      - npm test

  push:
    desc: Push image to registry
    deps: [build]
    cmds:
      - docker tag {{.SERVICE}}:latest {{.REGISTRY}}/{{.SERVICE}}:latest
      - docker push {{.REGISTRY}}/{{.SERVICE}}:latest

  deploy:
    desc: Deploy to production
    deps: [push]
    cmds:
      - kubectl set image deployment/{{.SERVICE}} {{.SERVICE}}={{.REGISTRY}}/{{.SERVICE}}:latest -n {{.ENVIRONMENT}}
      - kubectl rollout status deployment/{{.SERVICE}} -n {{.ENVIRONMENT}}

  stop:
    desc: Stop all containers
    cmds:
      - docker stop $(docker ps -q)

  logs:
    desc: View service logs
    cmds:
      - kubectl logs -f deployment/{{.SERVICE}} -n {{.ENVIRONMENT}}

  default:
    desc: Build and test
    deps: [build, test]
Use it:
task build # Build
task test # Test
task deploy # Build, push, deploy (via dependencies)
task logs # View logs
Chapter 6: Watch Tools - File Monitoring & Auto-execution
Automatically run commands when files change.
entr - Execute on File Change
Install:
# macOS
brew install entr
# Linux
apt install entr
Use it:
# Run tests when .py files change
find . -name "*.py" | entr pytest tests/
# Rebuild when Go files change
find . -name "*.go" | entr go build
# Restart server when code changes
find . -name "*.rs" | entr cargo run
# Run multiple commands
ls src/*.js | entr sh -c 'npm test && npm run build'
inotify-tools - File System Monitoring
Install:
sudo apt install inotify-tools # Linux only
Use it:
#!/bin/bash
# Trigger action when file is modified
inotifywait -m -e modify src/ |
while read path action file; do
  echo "Detected change: $file"
  npm test
done
Real Example: Auto-Deployment on Git Push
#!/bin/bash
REPO_PATH="/opt/myapp"
DEPLOY_SCRIPT="/opt/deploy.sh"
# Watch for changes in repository
cd "$REPO_PATH" || exit 1
while true; do
  git fetch origin
  UPSTREAM=$(git rev-parse origin/main)
  LOCAL=$(git rev-parse HEAD)
  if [ "$UPSTREAM" != "$LOCAL" ]; then
    echo "New commits detected. Deploying..."
    git pull origin main
    $DEPLOY_SCRIPT
  fi
  sleep 60 # Check every minute
done
Run as service:
# /etc/systemd/system/auto-deploy.service
[Unit]
Description=Auto Deploy on Git Push
After=network.target
[Service]
Type=simple
User=deploy
ExecStart=/usr/local/bin/auto-deploy.sh
Restart=always
[Install]
WantedBy=multi-user.target
Enable:
sudo systemctl enable auto-deploy
sudo systemctl start auto-deploy
Chapter 7: Log Analysis & Monitoring Automation
Automate log analysis and alerting.
Basic Log Analysis with grep & awk
#!/bin/bash
LOG_FILE="/var/log/myapp.log"
ERROR_THRESHOLD=10
RECENT_LINES=1000 # how much of the log tail to inspect each run
# Count errors in the most recent part of the log
error_count=$(tail -n $RECENT_LINES "$LOG_FILE" | grep -c "ERROR")
if [ "$error_count" -gt "$ERROR_THRESHOLD" ]; then
  echo "Alert: $error_count errors detected"
  tail -50 "$LOG_FILE" | mail -s "App Alert" admin@company.com
fi
Real Example: Performance Degradation Detection
#!/bin/bash
LOG_FILE="/var/log/api.log"
THRESHOLD_MS=1000
SLACK_WEBHOOK="https://hooks.slack.com/..."
# Count requests slower than the threshold in the previous clock hour
# (assumes log lines start with a "YYYY-MM-DD HH:MM:SS" timestamp and end with the response time in ms)
grep "$(date -d '1 hour ago' +'%Y-%m-%d %H')" "$LOG_FILE" | \
  awk '{print $NF}' | \
  awk -v threshold=$THRESHOLD_MS '$1 > threshold {count++} END {print count+0}' | \
  while read slow_requests; do
    if [ "$slow_requests" -gt 100 ]; then
      curl -X POST "$SLACK_WEBHOOK" \
        -H 'Content-type: application/json' \
        -d "{\"text\": \"🚨 Performance degradation: $slow_requests slow requests in last hour\"}"
    fi
  done
Add to crontab (every hour):
0 * * * * /usr/local/bin/check-performance.sh
Chapter 8: Database Maintenance Automation
Automate routine database maintenance.
Example: PostgreSQL Maintenance
#!/bin/bash
DB_NAME="production_db"
DB_USER="postgres"
LOG_FILE="/var/log/db-maintenance.log"
log_msg() {
  echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}
# Vacuum (garbage collection)
log_msg "Starting VACUUM"
psql -U $DB_USER -d $DB_NAME -c "VACUUM ANALYZE;"
log_msg "VACUUM completed"
# Reindex
log_msg "Starting REINDEX"
psql -U $DB_USER -d $DB_NAME -c "REINDEX DATABASE $DB_NAME;"
log_msg "REINDEX completed"
# Update table statistics
log_msg "Updating statistics"
psql -U $DB_USER -d $DB_NAME -c "ANALYZE;"
log_msg "Statistics updated"
# Report the largest tables (table-to-total size ratio, a rough bloat indicator)
log_msg "Checking table sizes"
psql -U $DB_USER -d $DB_NAME << EOF >> $LOG_FILE
SELECT
schemaname,
tablename,
round(100 * pg_relation_size(schemaname||'.'||tablename) /
pg_total_relation_size(schemaname||'.'||tablename)) AS table_ratio,
pg_total_relation_size(schemaname||'.'||tablename) AS total_size
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY total_size DESC;
EOF
log_msg "Maintenance complete"
Schedule in crontab (Sunday at 3 AM):
0 3 * * 0 /usr/local/bin/db-maintenance.sh
Chapter 9: Notification Automation
Send alerts through multiple channels automatically.
Slack Notifications
#!/bin/bash
notify_slack() {
  local message=$1
  local channel=${2:-#alerts}
  local webhook=$SLACK_WEBHOOK
  curl -X POST "$webhook" \
    -H 'Content-type: application/json' \
    -d "{
      \"channel\": \"$channel\",
      \"text\": \"$message\"
    }"
}
# Example: Deployment notification
notify_slack "🚀 Deployment to production complete. Version: v1.2.3"
# Example: Error alert with color
curl -X POST $SLACK_WEBHOOK \
  -H 'Content-type: application/json' \
  -d '{
    "attachments": [
      {
        "color": "danger",
        "title": "Critical Error",
        "text": "Database connection failed",
        "fields": [
          { "title": "Environment", "value": "production", "short": true },
          { "title": "Time", "value": "'$(date)'", "short": true }
        ]
      }
    ]
  }'
Email Notifications
#!/bin/bash
notify_email() {
  local subject=$1
  local body=$2
  local recipient=$3
  echo "$body" | mail \
    -s "$subject" \
    -r "automation@company.com" \
    "$recipient"
}
# Example
notify_email "Backup Complete" "Daily backup completed successfully" "admin@company.com"
Webhook Notifications
#!/bin/bash
notify_webhook() {
  local url=$1
  local event=$2
  local data=$3
  curl -X POST "$url" \
    -H 'Content-type: application/json' \
    -d "{\"event\": \"$event\", \"data\": $data}"
}
# Example
notify_webhook "https://api.company.com/webhooks/events" "deployment" "{\"version\": \"1.2.3\", \"environment\": \"prod\"}"
Chapter 10: Advanced Workflow Automation
Combine everything into sophisticated automation pipelines.
Real Example: Complete Release Automation
Taskfile.yml:
version: '3'

vars:
  SERVICE: myapp
  REGISTRY: docker.io/company
  SLACK_WEBHOOK: https://hooks.slack.com/...

tasks:
  release:
    desc: Complete release workflow (steps run in order)
    cmds:
      - task: test
      - task: build
      - task: push
      - task: deploy
      - task: verify
      - task: notify

  test:
    desc: Run all tests
    cmds:
      - echo "Running tests..."
      - pytest tests/
      - npm test

  build:
    desc: Build application
    cmds:
      - echo "Building {{.SERVICE}}..."
      - docker build -t {{.SERVICE}}:latest .

  push:
    desc: Push to registry
    cmds:
      - docker tag {{.SERVICE}}:latest {{.REGISTRY}}/{{.SERVICE}}:latest
      - docker push {{.REGISTRY}}/{{.SERVICE}}:latest

  deploy:
    desc: Deploy to production
    cmds:
      - kubectl set image deployment/{{.SERVICE}} {{.SERVICE}}={{.REGISTRY}}/{{.SERVICE}}:latest -n production
      - kubectl rollout status deployment/{{.SERVICE}} -n production

  verify:
    desc: Verify deployment health
    cmds:
      - sleep 30
      - kubectl get pods -n production | grep {{.SERVICE}}

  notify:
    desc: Send notifications
    cmds:
      - curl -X POST {{.SLACK_WEBHOOK}} -H 'Content-type: application/json' -d '{"text": "✅ Release complete for {{.SERVICE}}"}'
Crontab entry:
# Release every Friday at 6 PM
0 18 * * 5 cd /opt/myapp && task release
Real Example: Data Pipeline Automation
#!/bin/bash
# Configuration
DATA_SOURCE="https://api.datasource.com/export"
STAGING_DIR="/data/staging"
ARCHIVE_DIR="/data/archive"
PROCESSING_SCRIPT="/opt/process-data.py"
SLACK_WEBHOOK="$SLACK_WEBHOOK" # provided via the environment
log_info() {
  echo "[$(date +'%Y-%m-%d %H:%M:%S')] INFO: $1" >> /var/log/data-pipeline.log
}
notify() {
  curl -X POST "$SLACK_WEBHOOK" -d "{\"text\": \"$1\"}"
}
# Step 1: Download data
log_info "Downloading data..."
curl -o $STAGING_DIR/data_$(date +%Y%m%d).csv $DATA_SOURCE
# Step 2: Validate format
log_info "Validating data..."
if ! head -1 $STAGING_DIR/data_$(date +%Y%m%d).csv | grep -q "expected_column"; then
log_info "Data validation failed!"
notify "❌ Data pipeline failed: validation error"
exit 1
fi
# Step 3: Process data
log_info "Processing data..."
python $PROCESSING_SCRIPT $STAGING_DIR/data_$(date +%Y%m%d).csv
# Step 4: Archive
log_info "Archiving..."
gzip $STAGING_DIR/data_$(date +%Y%m%d).csv
mv $STAGING_DIR/data_$(date +%Y%m%d).csv.gz $ARCHIVE_DIR/
# Step 5: Cleanup old files
log_info "Cleaning up old archives..."
find $ARCHIVE_DIR -mtime +30 -delete
log_info "Pipeline complete"
notify "✅ Data pipeline completed successfully"
Schedule (daily at 1 AM):
0 1 * * * /usr/local/bin/data-pipeline.sh
Chapter 11: Building Your Automation System
Don’t try to automate everything at once. Build gradually.
Week 1: Task-Level Automation
Automate simple, repetitive tasks.
# Backup databases
0 2 * * * /usr/local/bin/backup-db.sh
# Update system packages
0 3 * * 0 /usr/local/bin/system-update.sh
# Check disk space
0 */4 * * * /usr/local/bin/check-disk.sh
Estimated time savings: 5 hours/week
Week 2: Workflow Automation
Automate multi-step processes.
# Automated tests on git push
# Deploy to staging on successful tests
# Notify team of results
Estimated time savings: 10 hours/week
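As a rough sketch, that workflow in GitHub Actions might look like this (deploy-staging.sh and the SLACK_WEBHOOK secret are placeholders for your own deploy step and webhook):
name: Staging
on: [push]
jobs:
  staging:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm test
      - run: ./deploy-staging.sh # your staging deploy step, runs only if tests passed
      - if: always() # notify the team whether the pipeline passed or failed
        run: curl -X POST "${{ secrets.SLACK_WEBHOOK }}" -H 'Content-type: application/json' -d '{"text":"Staging pipeline for ${{ github.sha }} finished: ${{ job.status }}"}'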
Week 3: Monitoring Automation
Add automated health checks and recovery.
# Health checks every 5 minutes
# Auto-restart on failure
# Alert on critical issues
Estimated time savings: 15 hours/week
Week 4: Intelligence
Make automation smarter with decision logic.
# Detect anomalies
# Predict failures
# Suggest optimizations
Estimated time savings: 20+ hours/week
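Decision logic does not require AI. A sketch of a simple anomaly check that compares today's error count against a rolling baseline kept in a plain text file (the paths, the 3x threshold, and the 90/10 average are illustrative choices):
#!/bin/bash
LOG_FILE="/var/log/myapp.log"
BASELINE_FILE="/var/lib/automation/error-baseline" # holds one number: the rolling average
TODAY_ERRORS=$(grep -c "ERROR" "$LOG_FILE")
BASELINE=$(cat "$BASELINE_FILE" 2>/dev/null || echo 0)
# Alert if today's errors exceed 3x the historical baseline
if [ "$BASELINE" -gt 0 ] && [ "$TODAY_ERRORS" -gt $((BASELINE * 3)) ]; then
  echo "Anomaly: $TODAY_ERRORS errors today vs baseline $BASELINE" | mail -s "Error anomaly detected" admin@company.com
fi
# Update the baseline as a simple moving average (90% old, 10% new)
echo $(( (BASELINE * 9 + TODAY_ERRORS) / 10 )) > "$BASELINE_FILE"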
Automation ROI Calculator
Time per task: 15 minutes
Frequency: Daily (5 days/week)
Annual time: 15 min × 5 days × 52 weeks = 3,900 minutes = 65 hours
Setup time: 2 hours
Maintenance: 30 min/month = 6 hours/year
Net savings year 1: 65 - 2 - 6 = 57 hours
Net savings year 2+: 65 - 6 = 59 hours/year
Over 5 years: 57 + (59 × 4) = 293 hours saved
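The same arithmetic as a small shell function, if you want to evaluate candidates quickly (the function name and argument order are illustrative):
#!/bin/bash
# automation_roi MINUTES_PER_RUN RUNS_PER_WEEK SETUP_HOURS MAINTENANCE_HOURS_PER_YEAR
automation_roi() {
  local minutes=$1 runs_per_week=$2 setup=$3 maintenance=$4
  local annual_hours=$(( minutes * runs_per_week * 52 / 60 ))
  echo "Annual time on task:   ${annual_hours} hours"
  echo "Net savings (year 1):  $(( annual_hours - setup - maintenance )) hours"
  echo "Net savings (year 2+): $(( annual_hours - maintenance )) hours/year"
}
automation_roi 15 5 2 6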
Appendix A: Essential Automation Tools Checklist
Scheduling:
- ✅ Cron (built-in)
- ✅ Systemd timers (see the sketch after this list)
- ✅ at (one-off scheduled jobs)
- ✅ Anacron (cron for non-24/7 systems)
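Cron covers most cases, but systemd timers add journalctl logging and can catch up on runs missed while the machine was off. A minimal sketch for the backup script from Chapter 2 (the unit names are illustrative):
# /etc/systemd/system/backup-db.service
[Unit]
Description=Database backup
[Service]
Type=oneshot
ExecStart=/home/user/backup-db.sh
# /etc/systemd/system/backup-db.timer
[Unit]
Description=Run database backup daily at 02:00
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
[Install]
WantedBy=timers.target
Enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now backup-db.timer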
Scripting:
- ✅ Bash/Zsh
- ✅ Python
- ✅ Go
- ✅ Node.js
Task Running:
- ✅ GNU Make
- ✅ Task (go-task)
- ✅ Just
File Monitoring:
- ✅ entr
- ✅ inotify-tools
- ✅ watchman
- ✅ nodemon (for Node.js)
Monitoring & Alerting:
- ✅ Prometheus (metrics)
- ✅ AlertManager (alerting)
- ✅ Grafana (visualization)
- ✅ ELK Stack (logging)
- ✅ Telegraf (system metrics)
CI/CD:
- ✅ Jenkins (self-hosted)
- ✅ GitLab CI (if using GitLab)
- ✅ GitHub Actions (if using GitHub)
- ✅ Drone CI (lightweight)
- ✅ Tekton (Kubernetes-native)
Deployment Automation:
- ✅ Ansible (configuration management)
- ✅ Terraform (infrastructure as code)
- ✅ Docker (containerization)
- ✅ Kubernetes (orchestration)
- ✅ Helm (package management)
Notifications:
- ✅ Slack webhooks
- ✅ Email (mail command)
- ✅ PagerDuty (incident response)
- ✅ Telegram bots
- ✅ Discord webhooks (see the sketch after this list)
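Telegram and Discord follow the same webhook pattern as Slack. A sketch (the webhook URL, bot token, and chat ID variables are placeholders you would export yourself):
#!/bin/bash
# Discord: an incoming webhook expects a JSON "content" field
notify_discord() {
  curl -s -H 'Content-Type: application/json' \
    -d "{\"content\": \"$1\"}" \
    "$DISCORD_WEBHOOK_URL"
}
# Telegram: send a message through a bot via the Bot API
notify_telegram() {
  curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
    -d chat_id="$TELEGRAM_CHAT_ID" \
    -d text="$1"
}
notify_discord "✅ Nightly backup finished"
notify_telegram "✅ Nightly backup finished"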
Appendix B: Quick Automation Setup Script
setup-automation.sh:
#!/bin/bash
# Automation Setup Script
# Creates basic automation infrastructure
AUTOMATION_DIR="/opt/automation"
SCRIPTS_DIR="$AUTOMATION_DIR/scripts"
LOGS_DIR="/var/log/automation"
CRON_DIR="$AUTOMATION_DIR/cron"
echo "Setting up automation infrastructure..."
# Create directories (and hand them to the current user so the cat > ... steps below work without sudo)
sudo mkdir -p $SCRIPTS_DIR $LOGS_DIR $CRON_DIR
sudo chown -R "$USER" $AUTOMATION_DIR $LOGS_DIR
# Create log rotation config
sudo tee /etc/logrotate.d/automation > /dev/null <<'EOF'
/var/log/automation/*.log {
  daily
  rotate 14
  compress
  delaycompress
  notifempty
  create 0644 root root
  sharedscripts
  postrotate
    systemctl reload rsyslog > /dev/null 2>&1 || true
  endscript
}
EOF
# Create helper script
cat > $SCRIPTS_DIR/helper.sh <<'EOF'
#!/bin/bash
log_info() {
  echo "[$(date +'%Y-%m-%d %H:%M:%S')] INFO: $1" >> $LOG_FILE
}
log_error() {
  echo "[$(date +'%Y-%m-%d %H:%M:%S')] ERROR: $1" >> $LOG_FILE
}
notify_slack() {
  local message=$1
  curl -X POST $SLACK_WEBHOOK \
    -H 'Content-type: application/json' \
    -d "{\"text\": \"$message\"}" 2>/dev/null
}
retry_command() {
  local max_attempts=3
  local attempt=1
  while [ $attempt -le $max_attempts ]; do
    if "$@"; then
      return 0
    fi
    attempt=$((attempt + 1))
    sleep 5
  done
  return 1
}
EOF
chmod +x $SCRIPTS_DIR/helper.sh
# Create example scripts
cat > $SCRIPTS_DIR/health-check.sh <<'EOF'
#!/bin/bash
source "$(dirname "$0")/helper.sh"
export LOG_FILE="/var/log/automation/health-check.log"
log_info "Starting health check..."
# Add your health check logic here
log_info "Health check complete"
EOF
chmod +x $SCRIPTS_DIR/health-check.sh
# Create crontab template
cat > $CRON_DIR/crontab-template <<'EOF'
# Automation Crontab
# Edit and install with: crontab < crontab-template
# Environment variables (must come before the job lines they apply to)
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
SLACK_WEBHOOK=https://hooks.slack.com/...
# Health checks (every 5 minutes)
*/5 * * * * /opt/automation/scripts/health-check.sh
# Daily maintenance (3 AM)
0 3 * * * /opt/automation/scripts/maintenance.sh
# Weekly cleanup (Sunday at 4 AM)
0 4 * * 0 /opt/automation/scripts/cleanup.sh
EOF
echo "✅ Automation infrastructure ready!"
echo ""
echo "Next steps:"
echo "1. Edit crontab template: $CRON_DIR/crontab-template"
echo "2. Install crontab: crontab < $CRON_DIR/crontab-template"
echo "3. Add scripts to: $SCRIPTS_DIR"
echo "4. View logs: tail -f /var/log/automation/*.log"
Run it:
chmod +x setup-automation.sh
./setup-automation.sh
Appendix C: Automation Checklist
Planning:
- ✅ Identify repetitive tasks (>15 min/week)
- ✅ Calculate ROI (time savings vs setup effort)
- ✅ Determine automation level needed
- ✅ Plan error handling and notifications
Implementation:
- ✅ Write automation script
- ✅ Test in non-production
- ✅ Add logging and error handling
- ✅ Add notifications (Slack/email)
- ✅ Test failure scenarios
- ✅ Document process
- ✅ Deploy to production
Maintenance:
- ✅ Monitor automation logs weekly
- ✅ Review success rate
- ✅ Fix failures immediately
- ✅ Optimize over time
- ✅ Update documentation
- ✅ Review for obsolete automations (quarterly)
Scaling:
- ✅ Consolidate similar scripts
- ✅ Extract common functions
- ✅ Create libraries and helpers
- ✅ Build dashboard to visualize automation
- ✅ Document patterns for team reuse
Conclusion: The Automation Multiplier
Every hour you automate is an hour you get back. Forever.
Most developers work their entire careers without internalizing this.
They see automation as something “for DevOps” or “for operations.”
Wrong.
Automation is for anyone who does repetitive work.
And that’s every developer.
Start small. Automate one task. See the time savings. Get hooked.
In a year, you’ll have 100+ hours back.
In a career, you’ll have thousands.
That’s the power of automation.
Not AI. Not magic.
Just simple, reliable, free tools working for you in the background.
Building that system is one of the best investments you’ll make.
Start today.