Building Automation Services with Go: Practical Tools & Real-World Solutions

Learn to build useful automation services and tools with Go: production-ready services that solve real problems, from log processors and API monitors to deployment tools and data pipelines.

By Omar Flores

Introduction: Go Services That Solve Real Problems

Most developers think of building automation as a side project. A script. Something quick and temporary.

Wrong.

The best automation is built as a service. A tool. Something that runs continuously, reliably, and solves a real problem.

Go is perfect for this. Not just because it’s fast and concurrent. But because it lets you build services that:

  • Deploy as a single binary
  • Run with minimal resources
  • Scale horizontally
  • Integrate with existing tools
  • Require zero infrastructure

This guide teaches you to build real automation services with Go. Services you’d actually use. Services you’d deploy to production.

Not theoretical examples. Not hello-world toys.

Real tools that solve real problems.


Chapter 1: The Anatomy of an Automation Service

Before we build, understand what makes a good automation service.

Essential Components

Component 1: Configuration Management

Your service needs to be configurable. Environment variables, config files, command-line flags.

type Config struct {
	APIKey      string
	Port        int
	DatabaseURL string
	LogLevel    string
}

func loadConfig() Config {
	return Config{
		APIKey:      os.Getenv("API_KEY"),
		Port:        getEnvInt("PORT", 8080),
		DatabaseURL: os.Getenv("DATABASE_URL"),
		LogLevel:    getEnv("LOG_LEVEL", "info"),
	}
}

// getEnv returns the variable's value, or a default when it is unset.
func getEnv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

// getEnvInt parses an integer variable, falling back to a default.
func getEnvInt(key string, def int) int {
	v, err := strconv.Atoi(os.Getenv(key))
	if err != nil {
		return def
	}
	return v
}
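Flags layer naturally on top of environment variables. A minimal sketch of the precedence logic, where a flag the user actually passed beats the environment, which beats the default (`resolvePort` and the `-port` flag are illustrative names):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// resolvePort picks the port with flag > environment > default precedence.
// flagSet reports whether the user explicitly passed the flag.
func resolvePort(flagVal int, flagSet bool, envVal string, def int) int {
	if flagSet {
		return flagVal
	}
	if envVal != "" {
		var p int
		if _, err := fmt.Sscanf(envVal, "%d", &p); err == nil {
			return p
		}
	}
	return def
}

func main() {
	portFlag := flag.Int("port", 0, "listen port (overrides PORT env var)")
	flag.Parse()

	flagSet := false
	flag.Visit(func(f *flag.Flag) { // Visit only walks flags that were set
		if f.Name == "port" {
			flagSet = true
		}
	})

	port := resolvePort(*portFlag, flagSet, os.Getenv("PORT"), 8080)
	fmt.Println("listening on port", port)
}
```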

Component 2: Structured Logging

Your service must be observable. You need logs, not print statements.

import "github.com/sirupsen/logrus"

func main() {
	log := logrus.New()
	log.SetFormatter(&logrus.JSONFormatter{})

	log.WithFields(logrus.Fields{
		"service": "api-monitor",
		"version": "1.0.0",
	}).Info("Service started")
}

Component 3: Health Checks

Services need to report their status. A /health endpoint.

func healthCheckHandler(w http.ResponseWriter, r *http.Request) {
	health := map[string]string{
		"status":    "healthy",
		"timestamp": time.Now().Format(time.RFC3339),
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(health)
}

Component 4: Graceful Shutdown

Services should shut down cleanly. Not abruptly kill connections.

sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)

go func() {
	<-sigChan
	log.Info("Shutdown signal received, closing gracefully...")
	// Shutdown drains in-flight requests; Close would drop them abruptly.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	server.Shutdown(ctx)
}()

Component 5: Metrics & Monitoring

Services should expose metrics. For debugging, alerting, and observability. Define them, register them (for example with prometheus.MustRegister), and serve them over HTTP at /metrics.

import "github.com/prometheus/client_golang/prometheus"

var (
	requestsProcessed = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "requests_processed_total",
		Help: "Total number of requests processed",
	})

	processingTime = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name: "request_duration_seconds",
		Help: "Time spent processing request",
	})
)

Chapter 2: Service 1 - API Health Monitor

Monitor multiple APIs and alert on failures.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
	"sync"
	"time"

	"github.com/sirupsen/logrus"
)

type APICheck struct {
	Name     string
	URL      string
	Interval time.Duration
	Timeout  time.Duration
}

type Monitor struct {
	checks map[string]*APICheck
	logger *logrus.Logger
	mu     sync.RWMutex
}

func NewMonitor() *Monitor {
	logger := logrus.New()
	logger.SetFormatter(&logrus.JSONFormatter{})
	return &Monitor{
		checks: make(map[string]*APICheck),
		logger: logger,
	}
}

func (m *Monitor) AddCheck(check APICheck) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.checks[check.Name] = &check
}

func (m *Monitor) Start() {
	m.mu.RLock()
	checks := make([]*APICheck, 0, len(m.checks))
	for _, check := range m.checks {
		checks = append(checks, check)
	}
	m.mu.RUnlock()

	for _, check := range checks {
		go m.monitorAPI(check)
	}
}

func (m *Monitor) monitorAPI(check *APICheck) {
	ticker := time.NewTicker(check.Interval)
	defer ticker.Stop()

	m.performCheck(check) // check immediately rather than waiting a full interval

	for range ticker.C {
		m.performCheck(check)
	}
}

func (m *Monitor) performCheck(check *APICheck) {
	start := time.Now()
	client := &http.Client{
		Timeout: check.Timeout,
	}

	resp, err := client.Get(check.URL)
	duration := time.Since(start)

	if err != nil {
		m.logger.WithFields(logrus.Fields{
			"api":      check.Name,
			"status":   "down",
			"duration": duration,
			"error":    err.Error(),
		}).Error("API check failed")

		m.alertSlack(fmt.Sprintf("🚨 %s is DOWN: %v", check.Name, err))
		return
	}
	defer resp.Body.Close()

	// Drain the body so the underlying connection can be reused.
	io.Copy(io.Discard, resp.Body)

	if resp.StatusCode != http.StatusOK {
		m.logger.WithFields(logrus.Fields{
			"api":       check.Name,
			"status":    "unhealthy",
			"http_code": resp.StatusCode,
			"duration":  duration,
		}).Warn("API returned non-200 status")

		m.alertSlack(fmt.Sprintf("⚠️ %s returned %d", check.Name, resp.StatusCode))
		return
	}

	m.logger.WithFields(logrus.Fields{
		"api":      check.Name,
		"status":   "healthy",
		"duration": duration,
	}).Debug("API check passed")
}

func (m *Monitor) alertSlack(message string) {
	webhook := os.Getenv("SLACK_WEBHOOK")
	if webhook == "" {
		return
	}

	payload := fmt.Sprintf(`{"text": %q}`, message)
	resp, err := http.Post(webhook, "application/json", strings.NewReader(payload))
	if err != nil {
		m.logger.WithError(err).Error("Failed to send Slack alert")
		return
	}
	resp.Body.Close()
}

func main() {
	monitor := NewMonitor()

	// Add checks
	monitor.AddCheck(APICheck{
		Name:     "API Server",
		URL:      "http://localhost:8080/health",
		Interval: 30 * time.Second,
		Timeout:  5 * time.Second,
	})

	monitor.AddCheck(APICheck{
		Name:     "Database API",
		URL:      "http://localhost:8081/health",
		Interval: 60 * time.Second,
		Timeout:  5 * time.Second,
	})

	monitor.Start()

	// Keep running
	select {}
}

Deploy as service:

# Build
go build -o api-monitor

# Run
SLACK_WEBHOOK="https://hooks.slack.com/..." ./api-monitor

# Or in Docker
docker build -t api-monitor .
docker run -e SLACK_WEBHOOK="..." api-monitor

Chapter 3: Service 2 - Log Processor & Analyzer

Process logs in real-time, extract metrics, detect patterns.

package main

import (
	"bufio"
	"flag"
	"fmt"
	"io"
	"log"
	"os"
	"regexp"
	"strings"
	"sync"
	"sync/atomic"
	"time"
)

type LogProcessor struct {
	errorCount      int64
	warningCount    int64
	successCount    int64
	slowRequests    int64
	slowThreshold   int
	errorPatterns   map[string]int64
	mu              sync.RWMutex
	started         time.Time
}

func NewLogProcessor(slowThreshold int) *LogProcessor {
	return &LogProcessor{
		slowThreshold: slowThreshold,
		errorPatterns: make(map[string]int64),
		started:       time.Now(),
	}
}

func (lp *LogProcessor) ProcessStream(reader io.Reader) error {
	scanner := bufio.NewScanner(reader)

	for scanner.Scan() {
		line := scanner.Text()
		lp.processLine(line)
	}

	return scanner.Err()
}

// Matches fields like "duration=1234ms"; compiled once, not once per line.
var durationRe = regexp.MustCompile(`duration=(\d+)ms`)

func (lp *LogProcessor) processLine(line string) {
	// Example log format: [2026-02-17T10:30:45] ERROR Database connection failed

	if strings.Contains(line, "ERROR") {
		atomic.AddInt64(&lp.errorCount, 1)
		lp.recordErrorPattern(line)
	} else if strings.Contains(line, "WARN") {
		atomic.AddInt64(&lp.warningCount, 1)
	} else if strings.Contains(line, "SUCCESS") || strings.Contains(line, "INFO") {
		atomic.AddInt64(&lp.successCount, 1)
	}

	// Detect slow requests
	if match := durationRe.FindStringSubmatch(line); len(match) > 1 {
		var duration int
		fmt.Sscanf(match[1], "%d", &duration)
		if duration > lp.slowThreshold {
			atomic.AddInt64(&lp.slowRequests, 1)
		}
	}
}

func (lp *LogProcessor) recordErrorPattern(line string) {
	// Extract error message
	parts := strings.SplitN(line, "ERROR", 2)
	if len(parts) < 2 {
		return
	}

	pattern := strings.TrimSpace(parts[1])
	// Keep only first 50 chars to group similar errors
	if len(pattern) > 50 {
		pattern = pattern[:50]
	}

	lp.mu.Lock()
	lp.errorPatterns[pattern]++
	lp.mu.Unlock()
}

func (lp *LogProcessor) Report() {
	elapsed := time.Since(lp.started)

	fmt.Printf("\n=== Log Analysis Report ===\n")
	fmt.Printf("Time: %.0f seconds\n", elapsed.Seconds())
	fmt.Printf("Errors: %d\n", atomic.LoadInt64(&lp.errorCount))
	fmt.Printf("Warnings: %d\n", atomic.LoadInt64(&lp.warningCount))
	fmt.Printf("Successes: %d\n", atomic.LoadInt64(&lp.successCount))
	fmt.Printf("Slow Requests (>%dms): %d\n", lp.slowThreshold, atomic.LoadInt64(&lp.slowRequests))

	if atomic.LoadInt64(&lp.errorCount) > 0 {
		fmt.Printf("\nTop Error Patterns:\n")
		lp.mu.RLock()
		for pattern, count := range lp.errorPatterns {
			if count > 0 {
				fmt.Printf("  %d × %s\n", count, pattern)
			}
		}
		lp.mu.RUnlock()
	}

	// Guard against division by zero when no requests were seen.
	total := atomic.LoadInt64(&lp.errorCount) + atomic.LoadInt64(&lp.successCount)
	if total > 0 {
		errorRate := float64(atomic.LoadInt64(&lp.errorCount)) / float64(total) * 100
		fmt.Printf("\nError Rate: %.2f%%\n", errorRate)
	}
}

func main() {
	slowThreshold := flag.Int("threshold", 1000, "Slow request threshold in milliseconds")
	flag.Parse()

	processor := NewLogProcessor(*slowThreshold)

	// Process stdin or file
	var input io.Reader
	if len(flag.Args()) > 0 {
		file, err := os.Open(flag.Args()[0])
		if err != nil {
			log.Fatal(err)
		}
		defer file.Close()
		input = file
	} else {
		input = os.Stdin
	}

	if err := processor.ProcessStream(input); err != nil {
		log.Fatal(err)
	}

	processor.Report()
}

Use it:

# Build
go build -o log-analyzer

# Process file
./log-analyzer --threshold 500 app.log

# Process live logs
tail -f app.log | ./log-analyzer

# Process from stdin
cat production.log | ./log-analyzer --threshold 1000
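The slow-request detection comes down to one parsing step. Pulled out as a standalone helper (mirroring the regex used in processLine), it's trivial to verify:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// durationRe matches fields like "duration=1500ms".
var durationRe = regexp.MustCompile(`duration=(\d+)ms`)

// parseDurationMs extracts the request duration in milliseconds from a log
// line, reporting false when the line carries no duration field.
func parseDurationMs(line string) (int, bool) {
	match := durationRe.FindStringSubmatch(line)
	if len(match) < 2 {
		return 0, false
	}
	ms, err := strconv.Atoi(match[1])
	if err != nil {
		return 0, false
	}
	return ms, true
}

func main() {
	ms, ok := parseDurationMs("[2026-02-17T10:30:45] INFO GET /api/users duration=1500ms")
	fmt.Println(ms, ok) // 1500 true
}
```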

Chapter 4: Service 3 - Scheduled Task Runner

Run tasks on a schedule, with retries and notifications.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"os/signal"
	"sync"
	"syscall"
	"time"

	"github.com/robfig/cron/v3"
)

type Task struct {
	Name     string
	Schedule string
	Func     func(ctx context.Context) error
	MaxRetry int
}

type TaskRunner struct {
	cron      *cron.Cron
	tasks     map[string]*Task
	mu        sync.RWMutex
	logger    func(string, ...interface{})
	onError   func(string, error)
	onSuccess func(string, time.Duration)
}

func NewTaskRunner() *TaskRunner {
	return &TaskRunner{
		cron:   cron.New(),
		tasks:  make(map[string]*Task),
		logger: log.Printf,
	}
}

func (tr *TaskRunner) AddTask(task Task) error {
	tr.mu.Lock()
	defer tr.mu.Unlock()

	// Schedule the task
	_, err := tr.cron.AddFunc(task.Schedule, func() {
		tr.executeTask(task)
	})
	if err != nil {
		return fmt.Errorf("failed to schedule task: %w", err)
	}

	tr.tasks[task.Name] = &task
	tr.logger("Task scheduled: %s (%s)", task.Name, task.Schedule)

	return nil
}

func (tr *TaskRunner) executeTask(task Task) {
	var lastErr error
	start := time.Now()

	for attempt := 1; attempt <= task.MaxRetry; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		err := task.Func(ctx)
		cancel()

		if err == nil {
			duration := time.Since(start)
			tr.logger("✓ %s completed in %v", task.Name, duration)
			if tr.onSuccess != nil {
				tr.onSuccess(task.Name, duration)
			}
			return
		}

		lastErr = err
		if attempt < task.MaxRetry {
			tr.logger("⚠ %s attempt %d/%d failed: %v, retrying...",
				task.Name, attempt, task.MaxRetry, err)
			time.Sleep(time.Duration(attempt*5) * time.Second)
		}
	}

	tr.logger("✗ %s failed after %d attempts: %v", task.Name, task.MaxRetry, lastErr)
	if tr.onError != nil {
		tr.onError(task.Name, lastErr)
	}
}

func (tr *TaskRunner) Start() {
	tr.cron.Start()
	tr.logger("Task runner started with %d tasks", len(tr.tasks))
}

func (tr *TaskRunner) Stop() {
	// cron.Stop returns a context that is done once running jobs finish.
	<-tr.cron.Stop().Done()
	tr.logger("Task runner stopped")
}

// Example tasks
func backupDatabase(ctx context.Context) error {
	log.Println("Backing up database...")
	// Your backup logic
	time.Sleep(2 * time.Second)
	return nil
}

func cleanupTempFiles(ctx context.Context) error {
	log.Println("Cleaning up temp files...")
	// Your cleanup logic
	return nil
}

func sendDailyReport(ctx context.Context) error {
	log.Println("Sending daily report...")
	// Your report logic
	return nil
}

func main() {
	runner := NewTaskRunner()

	// Add tasks
	runner.AddTask(Task{
		Name:     "Database Backup",
		Schedule: "0 2 * * *", // 2 AM daily
		Func:     backupDatabase,
		MaxRetry: 3,
	})

	runner.AddTask(Task{
		Name:     "Cleanup Temp Files",
		Schedule: "@daily",
		Func:     cleanupTempFiles,
		MaxRetry: 2,
	})

	runner.AddTask(Task{
		Name:     "Daily Report",
		Schedule: "0 9 * * 1-5", // 9 AM weekdays
		Func:     sendDailyReport,
		MaxRetry: 3,
	})

	// Handle errors
	runner.onError = func(name string, err error) {
		fmt.Printf("Task %s failed: %v\n", name, err)
		// Send alert to Slack, email, etc.
	}

	runner.Start()

	// Graceful shutdown
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
	<-sigChan

	runner.Stop()
}

Chapter 5: Service 4 - Data Pipeline

Transform and process data from multiple sources.

package main

import (
	"context"
	"encoding/csv"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"sync"
	"time"
)

type Pipeline struct {
	sources      []Source
	transformers []Transformer
	sinks        []Sink
	logger       *log.Logger
}

type Source interface {
	Name() string
	Fetch(ctx context.Context) ([]map[string]interface{}, error)
}

type Transformer interface {
	Transform(data []map[string]interface{}) []map[string]interface{}
}

type Sink interface {
	Write(data []map[string]interface{}) error
}

// Example source: API
type APISource struct {
	name string
	url  string
}

func (s *APISource) Name() string { return s.name }

func (s *APISource) Fetch(ctx context.Context) ([]map[string]interface{}, error) {
	req, err := http.NewRequestWithContext(ctx, "GET", s.url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("%s: unexpected status %d", s.url, resp.StatusCode)
	}

	var data []map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		return nil, err
	}
	return data, nil
}

// Example transformer: Filter
type FilterTransformer struct {
	field string
	value interface{}
}

func (t *FilterTransformer) Transform(data []map[string]interface{}) []map[string]interface{} {
	result := make([]map[string]interface{}, 0)
	for _, item := range data {
		if val, ok := item[t.field]; ok && val == t.value {
			result = append(result, item)
		}
	}
	return result
}

// Example sink: CSV file
type CSVSink struct {
	filename string
	mu       sync.Mutex
}

func (s *CSVSink) Write(data []map[string]interface{}) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	file, err := os.Create(s.filename)
	if err != nil {
		return err
	}
	defer file.Close()

	writer := csv.NewWriter(file)
	defer writer.Flush()

	// Write headers. Map iteration order is random, so column order varies
	// between runs; sort the headers first if a stable layout matters.
	if len(data) > 0 {
		headers := make([]string, 0, len(data[0]))
		for k := range data[0] {
			headers = append(headers, k)
		}
		writer.Write(headers)

		// Write rows
		for _, item := range data {
			row := make([]string, len(headers))
			for i, h := range headers {
				row[i] = fmt.Sprintf("%v", item[h])
			}
			writer.Write(row)
		}
	}

	return nil
}

func NewPipeline() *Pipeline {
	return &Pipeline{
		logger: log.New(os.Stdout, "[PIPELINE] ", log.LstdFlags),
	}
}

func (p *Pipeline) AddSource(s Source) {
	p.sources = append(p.sources, s)
}

func (p *Pipeline) AddTransformer(t Transformer) {
	p.transformers = append(p.transformers, t)
}

func (p *Pipeline) AddSink(s Sink) {
	p.sinks = append(p.sinks, s)
}

func (p *Pipeline) Run(ctx context.Context) error {
	p.logger.Println("Pipeline starting")
	start := time.Now()

	// Fetch from all sources
	var allData []map[string]interface{}
	for _, source := range p.sources {
		p.logger.Printf("Fetching from %s...", source.Name())
		data, err := source.Fetch(ctx)
		if err != nil {
			p.logger.Printf("Error fetching from %s: %v", source.Name(), err)
			continue
		}
		allData = append(allData, data...)
		p.logger.Printf("Fetched %d records from %s", len(data), source.Name())
	}

	// Apply transformations
	for _, transformer := range p.transformers {
		p.logger.Println("Applying transformation...")
		allData = transformer.Transform(allData)
		p.logger.Printf("After transformation: %d records", len(allData))
	}

	// Write to sinks
	for _, sink := range p.sinks {
		p.logger.Println("Writing to sink...")
		if err := sink.Write(allData); err != nil {
			p.logger.Printf("Error writing to sink: %v", err)
		}
	}

	p.logger.Printf("Pipeline completed in %v", time.Since(start))
	return nil
}

func main() {
	pipeline := NewPipeline()

	// Add source
	pipeline.AddSource(&APISource{
		name: "API",
		url:  "https://api.example.com/data",
	})

	// Add transformer
	pipeline.AddTransformer(&FilterTransformer{
		field: "status",
		value: "active",
	})

	// Add sink
	pipeline.AddSink(&CSVSink{
		filename: "output.csv",
	})

	// Run
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	if err := pipeline.Run(ctx); err != nil {
		log.Fatal(err)
	}
}
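New stages slot in by implementing one small interface. A second transformer sketch, shown standalone with the Transformer interface restated (the user_name/username fields are hypothetical), that renames keys so sources with different schemas converge:

```go
package main

import "fmt"

type Transformer interface {
	Transform(data []map[string]interface{}) []map[string]interface{}
}

// RenameTransformer rewrites selected keys, e.g. "user_name" -> "username",
// leaving all other fields untouched.
type RenameTransformer struct {
	renames map[string]string // old key -> new key
}

func (t *RenameTransformer) Transform(data []map[string]interface{}) []map[string]interface{} {
	result := make([]map[string]interface{}, 0, len(data))
	for _, item := range data {
		out := make(map[string]interface{}, len(item))
		for k, v := range item {
			if newKey, ok := t.renames[k]; ok {
				k = newKey
			}
			out[k] = v
		}
		result = append(result, out)
	}
	return result
}

func main() {
	var tr Transformer = &RenameTransformer{renames: map[string]string{"user_name": "username"}}
	rows := tr.Transform([]map[string]interface{}{{"user_name": "ada", "status": "active"}})
	fmt.Println(rows[0]["username"], rows[0]["status"]) // ada active
}
```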

Chapter 6: Service 5 - Deployment Orchestrator

Deploy to multiple environments with status tracking.

package main

import (
	"fmt"
	"os/exec"
	"sync"
	"time"
)

type Deployment struct {
	ID          string
	Service     string
	Environment string
	Version     string
	Status      string
	StartTime   time.Time
	EndTime     time.Time
	Output      string
}

type DeploymentOrchestrator struct {
	deployments map[string]*Deployment
	mu          sync.RWMutex
	logger      func(string, ...interface{})
}

func NewDeploymentOrchestrator() *DeploymentOrchestrator {
	return &DeploymentOrchestrator{
		deployments: make(map[string]*Deployment),
		// fmt.Printf's return values don't match the logger type, so wrap it.
		logger: func(format string, args ...interface{}) {
			fmt.Printf(format, args...)
		},
	}
}

func (do *DeploymentOrchestrator) Deploy(service, environment, version string) (*Deployment, error) {
	id := fmt.Sprintf("%s-%s-%d", service, environment, time.Now().Unix())

	deployment := &Deployment{
		ID:          id,
		Service:     service,
		Environment: environment,
		Version:     version,
		Status:      "pending",
		StartTime:   time.Now(),
	}

	do.mu.Lock()
	do.deployments[id] = deployment
	do.mu.Unlock()

	go do.executeDeployment(deployment)

	return deployment, nil
}

func (do *DeploymentOrchestrator) executeDeployment(d *Deployment) {
	// Guard status writes with the orchestrator lock; main reads them
	// concurrently via GetStatus/GetAll.
	setStatus := func(status string, done bool) {
		do.mu.Lock()
		d.Status = status
		if done {
			d.EndTime = time.Now()
		}
		do.mu.Unlock()
	}

	fail := func(stage string, err error) {
		d.Output = err.Error()
		setStatus("failed", true)
		do.logger("%s failed: %v\n", stage, err)
	}

	setStatus("running", false)
	do.logger("Starting deployment %s\n", d.ID)

	// Step 1: Build
	if err := do.buildService(d); err != nil {
		fail("Build", err)
		return
	}

	// Step 2: Test
	if err := do.testService(d); err != nil {
		fail("Tests", err)
		return
	}

	// Step 3: Deploy
	if err := do.deployToEnvironment(d); err != nil {
		fail("Deployment", err)
		return
	}

	// Step 4: Verify
	if err := do.verifyDeployment(d); err != nil {
		fail("Verification", err)
		return
	}

	setStatus("success", true)
	do.logger("Deployment %s completed successfully\n", d.ID)
}

func (do *DeploymentOrchestrator) buildService(d *Deployment) error {
	do.logger("Building %s:%s\n", d.Service, d.Version)
	cmd := exec.Command("go", "build", "-o", d.Service)
	output, err := cmd.CombinedOutput()
	d.Output += string(output)
	return err
}

func (do *DeploymentOrchestrator) testService(d *Deployment) error {
	do.logger("Testing %s\n", d.Service)
	cmd := exec.Command("go", "test", "./...")
	output, err := cmd.CombinedOutput()
	d.Output += string(output)
	return err
}

func (do *DeploymentOrchestrator) deployToEnvironment(d *Deployment) error {
	do.logger("Deploying to %s\n", d.Environment)
	cmd := exec.Command("kubectl", "set", "image",
		fmt.Sprintf("deployment/%s", d.Service),
		fmt.Sprintf("%s=registry/%s:%s", d.Service, d.Service, d.Version),
		"-n", d.Environment,
	)
	output, err := cmd.CombinedOutput()
	d.Output += string(output)
	return err
}

func (do *DeploymentOrchestrator) verifyDeployment(d *Deployment) error {
	do.logger("Verifying deployment of %s\n", d.Service)

	for i := 0; i < 30; i++ {
		cmd := exec.Command("kubectl", "rollout", "status",
			fmt.Sprintf("deployment/%s", d.Service),
			"-n", d.Environment,
			"--timeout=10s", // bound each attempt so the loop actually retries
		)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}

	return fmt.Errorf("deployment verification timeout")
}

func (do *DeploymentOrchestrator) GetStatus(id string) *Deployment {
	do.mu.RLock()
	defer do.mu.RUnlock()
	return do.deployments[id]
}

func (do *DeploymentOrchestrator) GetAll() []*Deployment {
	do.mu.RLock()
	defer do.mu.RUnlock()

	deployments := make([]*Deployment, 0, len(do.deployments))
	for _, d := range do.deployments {
		deployments = append(deployments, d)
	}
	return deployments
}

func main() {
	orchestrator := NewDeploymentOrchestrator()

	// Deploy multiple services
	deploys := []struct {
		service     string
		environment string
		version     string
	}{
		{"api-service", "staging", "v1.2.3"},
		{"web-service", "staging", "v1.2.3"},
		{"worker-service", "staging", "v1.2.3"},
	}

	for _, d := range deploys {
		deployment, _ := orchestrator.Deploy(d.service, d.environment, d.version)
		fmt.Printf("Started deployment: %s\n", deployment.ID)
	}

	// Monitor progress
	for {
		time.Sleep(5 * time.Second)

		allDeployments := orchestrator.GetAll()
		pending := 0
		for _, d := range allDeployments {
			if d.Status == "pending" || d.Status == "running" {
				pending++
			}
		}

		if pending == 0 {
			fmt.Println("All deployments complete")
			break
		}

		fmt.Printf("Still running: %d deployments\n", pending)
	}

	// Print final status
	for _, d := range orchestrator.GetAll() {
		duration := d.EndTime.Sub(d.StartTime)
		fmt.Printf("%s: %s (%.0fs)\n", d.ID, d.Status, duration.Seconds())
	}
}

Chapter 7: Deploying Services

Make your services production-ready.

Docker

Dockerfile:

FROM golang:1.21-alpine AS builder

WORKDIR /build
# Copy module files first so dependency downloads are cached between builds.
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o service .

FROM alpine:latest

RUN apk --no-cache add ca-certificates curl

COPY --from=builder /build/service /usr/local/bin/

HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1

EXPOSE 8080

ENTRYPOINT ["service"]

Build and run:

docker build -t my-service:v1 .
docker run -p 8080:8080 my-service:v1

Kubernetes

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:v1
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL
              value: "info"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 30

Deploy:

kubectl apply -f deployment.yaml
kubectl get pods -l app=my-service
kubectl logs -f deployment/my-service

Chapter 8: Monitoring Services

Keep your services healthy and observable.

Prometheus Metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	tasksDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "tasks_duration_seconds",
			Help: "Time taken to execute tasks",
		},
		[]string{"task_name", "status"},
	)

	tasksTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "tasks_total",
			Help: "Total tasks executed",
		},
		[]string{"task_name", "status"},
	)
)

func init() {
	prometheus.MustRegister(tasksDuration, tasksTotal)
}

Health Endpoint

var startTime = time.Now()

func healthHandler(w http.ResponseWriter, r *http.Request) {
	health := map[string]interface{}{
		"status":    "healthy",
		"uptime":    time.Since(startTime).Seconds(),
		"timestamp": time.Now().Format(time.RFC3339),
		"version":   "1.0.0",
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(health)
}

Appendix A: Common Patterns

Error Handling:

if err != nil {
	log.WithError(err).Error("Operation failed")
	return fmt.Errorf("failed operation: %w", err)
}

Timeouts:

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

Graceful Shutdown:

sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM)
<-sigChan
cleanup()

Concurrency:

var wg sync.WaitGroup
for _, item := range items {
	wg.Add(1)
	go func(i Item) {
		defer wg.Done()
		process(i)
	}(item)
}
wg.Wait()

Appendix B: Service Deployment Checklist

Before Deploying:

  • ✅ Unit tests pass
  • ✅ Integration tests pass
  • ✅ Docker image builds
  • ✅ Health endpoint responds
  • ✅ Metrics exposed
  • ✅ Logging configured
  • ✅ Configuration in environment variables
  • ✅ Error handling complete

After Deploying:

  • ✅ Service starts without errors
  • ✅ Health checks passing
  • ✅ Metrics available
  • ✅ Logs appearing
  • ✅ Can handle graceful shutdown
  • ✅ Alerts configured
  • ✅ Monitoring dashboard set up

Conclusion: Services That Matter

Go gives you the tools to build automation services that:

  • Deploy easily (single binary)
  • Scale reliably (concurrent by default)
  • Monitor clearly (metrics and logging)
  • Run safely (graceful shutdown)
  • Integrate cleanly (HTTP, gRPC, CLI)

These aren’t hobby projects. They’re production services that run your operations.

Start with one. Build it right. Deploy it safely.

Then build the next one.

Tags

#go #automation #services #cli-tools #monitoring #data-processing #deployment #real-world #practical #production #golang #systems #workflows #tools