Go: From Zero to Deep Internals — The Complete Senior Developer Guide

The most complete Go guide ever written. From absolute zero to compiler internals, GMP scheduler, garbage collector, memory model, goroutines, CPU/RAM interaction, standard library anatomy, and senior Go mindset.

By Omar Flores

The Language That Refused to Be Clever

In 2007, three engineers at Google were waiting for a C++ program to compile.

Robert Griesemer, Rob Pike, and Ken Thompson had thirty to forty-five minutes to kill while the compiler ran. They started talking. They talked about what they did not like about the languages they used every day.

They did not like C++’s build times. They did not like Java’s verbosity and the cognitive overhead of its type system. They did not like Python’s lack of type safety in large codebases. They did not like that every language seemed designed to show off what it could do, rather than to let programmers do what they needed to do.

By the time the compiler finished, they had the beginning of an idea.

The language they eventually created — Go — is famous for what it left out. No generics for the first thirteen years. No exceptions. No inheritance. No operator overloading. No implicit conversions. No macros.

Every time someone suggested adding a feature, the question was: does this make Go simpler, or more complex? If more complex, it was rejected.

This produced something unusual. A language that is genuinely easy to read at any level of experience. A language where code written by a junior developer and code written by a senior developer look remarkably similar, because the language does not reward cleverness.

But under that plain surface, Go is doing some of the most sophisticated systems programming work in any language runtime. The garbage collector, the goroutine scheduler, the concurrency primitives — these are engineering masterpieces hiding behind a simple syntax.

This guide uncovers all of it.

We start from absolute zero. We end at the deepest internals. Every step is explained, not assumed.


Part 1 — Starting From Zero: What Go Actually Is

A Compiled, Statically Typed Language

Before we write any code, we need a mental model.

Go is a compiled language. This means that before your program runs, a compiler reads your source code and produces a binary file — an executable that the operating system can run directly. No interpreter is involved at runtime. No virtual machine starts up. The binary runs and talks directly to the operating system.

Compare this to Python or JavaScript. When you run a Python script, the Python interpreter reads your source code and executes it statement by statement (modern JavaScript engines JIT-compile hot code, but an engine still sits between your source and the CPU). This is flexible but slower.

Go sits between C (compiled, no runtime, manual memory management) and Java (compiled to bytecode, runs on JVM, managed memory).

Go is compiled like C — it produces native machine code. But it has managed memory like Java — you do not call malloc and free. The Go runtime handles memory for you.

This combination is rare. Fast execution, safe memory, simple code.

The Hello World, Explained in Full

Every Go program begins with a package declaration. Here is the simplest possible Go program, with every line explained.

// Package main tells the Go compiler this file belongs to the "main" package.
// The main package is special: it produces an executable binary.
// All other packages produce libraries, not executables.
package main

// Import declares which packages this file uses.
// "fmt" is the format package from the standard library.
// It provides functions for formatted input and output.
import "fmt"

// func main() is the entry point of every Go executable.
// When the binary starts, the Go runtime calls main() first.
// No arguments. No return value.
func main() {
    // fmt.Println prints a line to standard output.
    // Println adds a newline character at the end automatically.
    fmt.Println("Hello, Go")
}

To run this:

# Method 1: Run directly (compile + execute in one step)
go run main.go

# Method 2: Compile to binary, then run
go build -o hello main.go
./hello

# Method 3: Install to your PATH
go install .

Each method uses the same compiler. They differ only in where the resulting binary ends up: go run builds to a temporary location and discards it, go build writes it to the current directory, and go install places it in $GOPATH/bin (or $GOBIN).

Variables and Types

Go is statically typed. Every variable has a type that is fixed at compile time.

package main

import "fmt"

func main() {
    // Explicit declaration with type annotation.
    // Syntax: var name type = value
    var name string = "Alice"

    // Short declaration. The compiler infers the type from the value.
    // This is the most common style in Go.
    // := is only available inside functions.
    age := 30

    // Multiple variable declaration.
    // Useful when declaring related variables together.
    var (
        height float64 = 1.75
        weight float64 = 70.0
    )

    // Constants. Evaluated at compile time. Cannot be changed.
    const maxRetries = 3

    fmt.Println(name, age, height, weight, maxRetries)
}

The basic types in Go:

Boolean:        bool
Integers:       int, int8, int16, int32, int64
                uint, uint8, uint16, uint32, uint64, uintptr
Floating point: float32, float64
Complex:        complex64, complex128
Text:           string, rune (alias for int32, represents a Unicode code point)
Byte:           byte (alias for uint8)

The int type is special. Its size depends on the platform. On a 64-bit system, int is 64 bits. On a 32-bit system, it is 32 bits. When you need a specific size, use int32 or int64 explicitly. When you just need a number for general use, int is almost always the right choice.

The Zero Value

In Go, every variable has a zero value. When you declare a variable without assigning to it, it gets its type’s zero value automatically.

var i int       // i = 0
var f float64   // f = 0.0
var b bool      // b = false
var s string    // s = "" (empty string)
var p *int      // p = nil (nil pointer)

This is deliberately designed. There is no concept of an “uninitialized” variable in Go. Every variable is always in a valid state from the moment it is declared.

This eliminates an entire class of bugs that plague C and C++ programs: reading uninitialized memory.


Part 2 — The Building Blocks: Functions, Structs, and Interfaces

Functions: First Class Citizens

Functions in Go are values. You can assign them to variables, pass them as arguments, and return them from other functions.

// A simple function. Takes two ints, returns an int.
// In Go, the type comes after the name, not before.
// This is not convention — it is the syntax.
func add(a int, b int) int {
    return a + b
}

// Multiple return values. This is idiomatic Go.
// The second return value is typically an error.
func divide(a, b float64) (float64, error) {
    if b == 0 {
        // fmt.Errorf creates an error value with a formatted message.
        return 0, fmt.Errorf("cannot divide by zero")
    }
    return a / b, nil // nil means "no error"
}

// Named return values. The return variables are declared in the signature.
// A bare "return" returns the current values of named variables.
// This is useful for documentation, but bare returns hurt readability in long functions.
func minMax(numbers []int) (min int, max int) {
    min, max = numbers[0], numbers[0] // panics on an empty slice; guard in real code
    for _, n := range numbers {
        if n < min {
            min = n
        }
        if n > max {
            max = n
        }
    }
    return // Returns the current values of min and max
}

// Variadic function. Accepts any number of int arguments.
// Inside the function, nums is a slice of int.
func sum(nums ...int) int {
    total := 0
    for _, n := range nums {
        total += n
    }
    return total
}
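To make “functions are values” concrete, here is a short sketch: a function stored in a variable, passed as an argument, and returned from another function as a closure.

```go
package main

import "fmt"

// apply takes a function as an argument — functions are ordinary values.
func apply(op func(int, int) int, a, b int) int {
    return op(a, b)
}

// makeCounter returns a closure. The returned function captures and
// keeps mutating the local variable n, even after makeCounter returns.
func makeCounter() func() int {
    n := 0
    return func() int {
        n++
        return n
    }
}

func main() {
    add := func(a, b int) int { return a + b } // a function assigned to a variable
    fmt.Println(apply(add, 2, 3))              // 5

    next := makeCounter()
    fmt.Println(next(), next(), next()) // 1 2 3
}
```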

Structs: Go’s Version of Classes

Go has no classes. Instead, it has structs: named collections of fields.

Behavior is added to structs through methods — functions with a receiver.

// A struct is a type definition.
// It describes a shape of data.
type Employee struct {
    ID         int
    FirstName  string
    LastName   string
    Department string
    Salary     float64
    Active     bool
}

// A method is a function with a receiver.
// The receiver appears between "func" and the method name.
// (e Employee) is a VALUE receiver. The method receives a copy of the struct.
func (e Employee) FullName() string {
    return e.FirstName + " " + e.LastName
}

// (e *Employee) is a POINTER receiver. The method receives a pointer to the struct.
// Use pointer receivers when you need to modify the struct,
// or when the struct is large (to avoid copying).
func (e *Employee) GiveRaise(percent float64) {
    e.Salary *= (1 + percent/100)
}

func main() {
    // Struct literal initialization.
    // Field names make the order irrelevant and the code self-documenting.
    emp := Employee{
        ID:         1001,
        FirstName:  "Alice",
        LastName:   "Johnson",
        Department: "Engineering",
        Salary:     95000.0,
        Active:     true,
    }

    fmt.Println(emp.FullName()) // "Alice Johnson"

    // GiveRaise has a pointer receiver, so it modifies emp.Salary in place.
    // We could write (&emp).GiveRaise(10), but Go takes the address
    // automatically when calling a pointer method on an addressable value.
    emp.GiveRaise(10) // emp.Salary is now 104500.0

    fmt.Printf("New salary: %.2f\n", emp.Salary)
}

Interfaces: The Cornerstone of Go’s Design

An interface in Go defines a set of method signatures. Any type that implements those methods satisfies the interface — implicitly, without declaring “implements.”

This is called structural typing or duck typing with static verification.

// Writer is an interface. Any type with a Write method that matches
// this signature satisfies the Writer interface.
type Writer interface {
    Write(data []byte) (int, error)
}

// Logger is a more complete interface.
type Logger interface {
    Log(level, message string)
    SetLevel(level string)
}

// ConsoleLogger is a concrete type.
type ConsoleLogger struct {
    level string
}

// ConsoleLogger implements Logger because it has Log and SetLevel methods.
// There is no "implements Logger" declaration. Go checks this at compile time.
func (l *ConsoleLogger) Log(level, message string) {
    if l.shouldLog(level) {
        fmt.Printf("[%s] %s\n", level, message)
    }
}

func (l *ConsoleLogger) SetLevel(level string) {
    l.level = level
}

func (l *ConsoleLogger) shouldLog(level string) bool {
    // Simplified: always log
    return true
}

// FileLogger also implements Logger.
type FileLogger struct {
    filename string
    level    string
}

func (l *FileLogger) Log(level, message string) {
    // Write to a file instead of the console.
    // Error handling omitted for clarity.
    fmt.Printf("[FILE:%s] [%s] %s\n", l.filename, level, message)
}

func (l *FileLogger) SetLevel(level string) {
    l.level = level
}

// This function accepts any Logger.
// It does not care whether it is a ConsoleLogger or FileLogger.
// This is the power of interfaces: decoupling.
func startService(logger Logger) {
    logger.SetLevel("INFO")
    logger.Log("INFO", "Service started")
    logger.Log("DEBUG", "Connecting to database")
}

func main() {
    // Both types satisfy the Logger interface.
    // Either can be passed to startService.
    console := &ConsoleLogger{}
    file := &FileLogger{filename: "app.log"}

    startService(console)
    startService(file)
}

The empty interface, interface{} (or any since Go 1.18), is satisfied by every type. It is the escape hatch for when you genuinely need to work with unknown types.

// any is an alias for interface{}.
// Use it when you truly cannot know the type at compile time.
func printAnything(value any) {
    fmt.Printf("Type: %T, Value: %v\n", value, value)
}

// Type assertions let you recover the original type from an interface.
func processValue(value any) {
    // Type switch: checks multiple types.
    switch v := value.(type) {
    case int:
        fmt.Printf("Integer: %d\n", v)
    case string:
        fmt.Printf("String: %q\n", v)
    case bool:
        fmt.Printf("Boolean: %v\n", v)
    default:
        fmt.Printf("Unknown type: %T\n", v)
    }
}

Part 3 — The Go Toolchain: More Than Just a Compiler

The go Command

When you install Go, you get one command: go. This single command is the entire toolchain.

go build    # Compile packages and dependencies
go run      # Compile and run
go test     # Test packages
go get      # Add dependencies
go mod      # Module management
go fmt      # Format source code
go vet      # Static analysis — catches common mistakes
go install  # Compile and install to $GOPATH/bin
go clean    # Remove compiled files
go doc      # Show documentation
go generate # Run code generation tools
go env      # Print Go environment information
go list     # List packages and modules
go work     # Workspace management (multi-module)

go build: What Actually Happens

When you run go build, the toolchain performs these steps in order.

Source files (.go)


  1. Parsing
     (lex tokens, build AST)


  2. Type Checking
     (semantic analysis, type inference)


  3. Escape Analysis
     (decide what lives on heap vs stack)


  4. Optimization
     (inlining, dead code elimination)


  5. SSA Generation
     (Static Single Assignment intermediate form)


  6. Machine Code Generation
     (emit native instructions for target architecture)


  7. Linking
     (combine object files into final binary)


  Binary executable

This entire process is remarkably fast. A 100,000-line Go project compiles in under ten seconds on modern hardware. Google’s multi-million-line monorepo builds each service independently in seconds.

Speed was a design requirement. Go’s import system is structured so that each package is compiled once, and its compiled form is cached. If nothing changed in a dependency, it is not recompiled.

go fmt: The End of Style Wars

Go has one official code formatter. Run go fmt, and your code conforms to Go’s canonical style.

No arguments. No configuration file. No options.

Teams do not argue about brace style, indentation, or line length. The formatter decides. The discussion is over.

This might seem rigid. In practice, it is liberating. Code looks the same everywhere. Reading someone else’s Go code feels like reading your own. New team members immediately fit in stylistically.

# Format all files in the current directory and subdirectories
go fmt ./...

# See what would change without actually changing it
gofmt -d main.go

# List files that need formatting
gofmt -l ./...

go vet: Catching Bugs Before They Run

go vet runs a suite of static analyzers on your code. It catches bugs that the compiler allows but that are almost certainly wrong.

// Bad: Printf with wrong argument type
fmt.Printf("%d", "hello") // go vet catches this

// Bad: Unreachable code
func bad() int {
    return 1
    fmt.Println("never runs") // go vet catches this
}

// Bad: Copying a sync.Mutex by value (breaks the mutex)
var mu sync.Mutex
mu2 := mu // go vet catches this

Run go vet ./... as part of every CI pipeline. It is free and catches real bugs.

Modules: Go’s Dependency Management

A Go module is a collection of packages with a version. Every project should have a go.mod file.

# Initialize a new module
go mod init github.com/username/projectname

# This creates go.mod:
# module github.com/username/projectname
# go 1.21

When you add a dependency:

go get github.com/gin-gonic/gin@latest

Go adds two files:

go.mod — declares direct dependencies with version requirements.

go.sum — contains the cryptographic hashes of all dependency contents. Immutable. Prevents supply chain attacks.

# go.mod example
module github.com/username/myapp

go 1.21

require (
    github.com/gin-gonic/gin v1.9.1
    github.com/lib/pq v1.10.9
    go.uber.org/zap v1.26.0
)

Cross Compilation: One Build for Every Platform

Go can compile for any target platform from any source platform. This is built-in, not a plugin.

# Compile for Linux on 64-bit Intel/AMD from any OS
GOOS=linux GOARCH=amd64 go build -o app-linux-amd64 .

# Compile for Windows on 64-bit Intel/AMD
GOOS=windows GOARCH=amd64 go build -o app-windows.exe .

# Compile for Apple Silicon (ARM64)
GOOS=darwin GOARCH=arm64 go build -o app-mac-arm64 .

# Compile for Raspberry Pi (ARM 32-bit)
GOOS=linux GOARCH=arm GOARM=7 go build -o app-raspberrypi .

# See all supported GOOS/GOARCH combinations
go tool dist list

No cross-compilation toolchain needed. No special setup. The Go standard library and runtime are implemented for every supported platform. One command, any platform.


Part 4 — The Go Compiler: What It Does to Your Code

Abstract Syntax Trees

When the compiler reads your source code, its first job is to understand the structure. It does this by building an Abstract Syntax Tree (AST).

Consider this function:

func multiply(a, b int) int {
    return a * b
}

The compiler does not see text. It sees a tree:

FuncDecl
├── Name: "multiply"
├── Params
│   ├── Field: a int
│   └── Field: b int
├── Results
│   └── Field: int
└── Body
    └── ReturnStmt
        └── BinaryExpr (operator: *)
            ├── Ident: a
            └── Ident: b

This tree structure makes transformations straightforward. The compiler walks the tree and applies rules. Type checking, optimization, code generation — all tree transformations.

You can inspect this AST yourself:

// This is a Go program that parses and prints the AST of Go source code.
// The go/parser and go/ast packages are part of the standard library.
package main

import (
    "fmt"
    "go/ast"
    "go/parser"
    "go/token"
)

func main() {
    src := `
package main

func multiply(a, b int) int {
    return a * b
}
`
    // Parse the source into an AST.
    fset := token.NewFileSet()
    f, err := parser.ParseFile(fset, "", src, parser.AllErrors)
    if err != nil {
        panic(err)
    }

    // Print the AST structure.
    ast.Print(fset, f)

    // Walk the AST to find all function declarations.
    ast.Inspect(f, func(n ast.Node) bool {
        if fn, ok := n.(*ast.FuncDecl); ok {
            fmt.Printf("Found function: %s\n", fn.Name.Name)
        }
        return true
    })
}

Function Inlining

One of the most important compiler optimizations is function inlining. When a function is small, the compiler replaces calls to it with the function body itself.

Before inlining:

func double(x int) int {
    return x * 2
}

func main() {
    y := double(5) // This is a function call: push args, jump, return
    fmt.Println(y)
}

After inlining (what the compiler actually generates):

func main() {
    y := 5 * 2 // No function call. The body was pasted in place.
    fmt.Println(y)
}

This eliminates the overhead of the function call: no argument passing, no return address setup, no stack frame creation.

You can see what the compiler decided to inline:

go build -gcflags="-m" ./...

# Output:
# main.go:4:6: can inline double
# main.go:9:12: inlining call to double

The -gcflags="-m" flag prints all inlining decisions. Every line that says “can inline” is a function the compiler will inline when called. Every “inlining call to” line shows where an actual inlining happened.

Escape Analysis: The Most Important Optimization

Escape analysis decides whether a variable is allocated on the stack or the heap.

This is critical for performance.

Stack allocation is extremely fast. The stack grows and shrinks automatically as functions are called and return. Freeing stack memory is free — the stack pointer just moves.

Heap allocation is slower. The garbage collector must track heap-allocated objects and eventually reclaim their memory. Heap allocation involves metadata, synchronization, and GC pressure.

Go’s compiler performs escape analysis: it determines whether a variable “escapes” the function that created it. If it does not escape, it can live on the stack. If it does escape, it must live on the heap.

// noEscape: x is used only within this function.
// The compiler will allocate x on the stack.
func noEscape() int {
    x := 42
    return x // Returning the VALUE. x itself does not escape.
}

// doesEscape: x's address is returned.
// The caller will hold a pointer to x even after this function returns.
// x must be on the heap — it must outlive the stack frame.
func doesEscape() *int {
    x := 42
    return &x // Returning the ADDRESS. x escapes to the heap.
}

// interfaceEscape: storing a value in an interface causes heap allocation
// because the interface must hold a pointer to the concrete value.
func interfaceEscape() interface{} {
    x := 42
    return x // x escapes because it is stored in an interface{}
}

You can see the escape analysis decisions:

go build -gcflags="-m -m" ./...

# Output:
# main.go:8:2: x does not escape
# main.go:14:2: &x escapes to heap
# main.go:20:2: x escapes to heap (because of interface{})

Understanding escape analysis is how Go senior developers write zero-allocation code in performance-critical paths. The goal is to keep hot objects on the stack and minimize heap pressure.
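You can also measure escapes empirically. testing.AllocsPerRun reports the average number of heap allocations per call; in this sketch, a value that stays local costs zero allocations, while one stored in a package-level variable must escape:

```go
package main

import (
    "fmt"
    "testing"
)

type point struct{ x, y int }

// A package-level sink: anything stored here outlives every stack
// frame, so the compiler must heap-allocate it.
var sink *point

func main() {
    stack := testing.AllocsPerRun(1000, func() {
        p := point{1, 2} // local only: stays on the stack
        _ = p
    })
    heap := testing.AllocsPerRun(1000, func() {
        sink = &point{1, 2} // escapes: one heap allocation per call
    })
    fmt.Println("stack allocs:", stack) // 0
    fmt.Println("heap allocs: ", heap)  // 1
}
```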

Static Single Assignment (SSA)

Before generating machine code, the compiler converts your program to SSA form. In SSA, every variable is assigned exactly once. Multiple assignments create new versions.

Original code:

x := 1
x = x + 2
x = x * 3

SSA form:

x1 := 1
x2 := x1 + 2
x3 := x2 * 3

SSA makes many optimizations straightforward. Dead code is easy to identify (a variable assigned but never used). Constant propagation is clear (replace x2 with 3 if we know x1 is 1).

You can print the SSA form your code compiles to:

GOSSAFUNC=multiply go build .
# This creates a file: ssa.html
# Open it in a browser to see the full SSA pass-by-pass transformation.

This is one of the most illuminating tools in the entire Go ecosystem. You can see exactly what the compiler does to your function at every optimization stage.


Part 5 — Memory: Stack, Heap, and How Go Manages Both

The Stack

Every goroutine in Go has its own stack. The stack is a contiguous block of memory used for:

  • Local variables of functions
  • Function arguments and return values
  • The return address (where to jump when the function returns)

Stacks are LIFO (Last In, First Out). When you call a function, a new stack frame is pushed. When the function returns, its frame is popped.

Stack growth during function calls:

main()

  ├── frame: main
  │     age = 30
  │     name = "Alice"

  └── calls connect()

        ├── frame: connect
        │     host = "localhost"
        │     port = 5432

        └── calls query()

              └── frame: query
                    sql = "SELECT..."
                    rows = ...

When query returns, its frame disappears. When connect returns, its frame disappears. This is why returning a pointer to a local variable is dangerous in C but handled safely by Go’s escape analysis.

Go stacks start small and grow. A new goroutine starts with a 2KB stack (the initial size was 8KB until Go 1.4 reduced it). As the goroutine calls functions that need more space, the stack grows. The Go runtime handles this transparently through a mechanism called stack copying.

When the stack needs to grow, Go allocates a new, larger stack, copies all frames from the old stack to the new stack, and adjusts all pointers. This is why Go can start thousands of goroutines: each starts with a tiny stack, not a fixed 1-8MB stack like most operating system threads.
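You can watch this work by recursing far deeper than 2KB of frames could ever fit; the runtime grows the stack silently:

```go
package main

import "fmt"

// Each call pushes a frame holding a 128-byte array. Ten thousand
// frames need well over a megabyte — far beyond the initial ~2KB
// stack — and the runtime grows the stack by copying, transparently.
func depth(n int, buf [128]byte) int {
    if n == 0 {
        return int(buf[0])
    }
    return depth(n-1, buf)
}

func main() {
    var buf [128]byte
    fmt.Println(depth(10000, buf)) // prints 0 — and no stack overflow
}
```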

The Heap

The heap is the region of memory used for objects that outlive the function that created them. Heap objects are managed by the garbage collector.

When you call make([]int, 1000), the slice’s backing array is on the heap. When you call new(Employee), the resulting pointer points to heap-allocated memory. When a variable escapes (as we saw in escape analysis), it goes to the heap.

The Go runtime manages a pool of memory for heap allocation. It uses a span-based allocator with size classes.

Size classes work like this: objects of similar sizes are grouped. A request for 24 bytes goes to the 24-byte size class. A request for 25 bytes goes to the 32-byte size class (next size up). This reduces fragmentation.

Size classes in Go (simplified):
  8, 16, 24, 32, 48, 64, 80, 96, 112, 128, 144, ...
  up to 32KB (objects larger than 32KB are allocated directly)
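The rounding logic can be sketched like this. This is an illustrative toy, not the runtime’s real table (which has dozens of classes, defined in runtime/sizeclasses.go):

```go
package main

import "fmt"

// A toy subset of size classes, for illustration only.
var classes = []int{8, 16, 24, 32, 48, 64, 80, 96, 112, 128}

// roundUp returns the smallest size class that fits a request.
func roundUp(size int) int {
    for _, c := range classes {
        if size <= c {
            return c
        }
    }
    return size // beyond the table: allocated directly (in the real runtime, >32KB)
}

func main() {
    fmt.Println(roundUp(24)) // 24 — exact fit
    fmt.Println(roundUp(25)) // 32 — next class up
}
```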

The heap allocator is designed to be fast and to minimize lock contention. Each P (processor, which we will explain shortly) has its own local memory cache called an mcache. Small allocations happen from the mcache without any locking. Only when the mcache needs to be refilled does a global lock occur.

Pointers in Go

A pointer holds the memory address of a value. In Go, pointers are safer than in C: you cannot do pointer arithmetic, and the garbage collector tracks all pointers.

// Declare a value.
x := 42

// Get a pointer to x using the address-of operator &.
// p holds the memory address where x lives.
p := &x

// Dereference the pointer to access the value it points to.
// * before a pointer variable dereferences it.
fmt.Println(*p) // 42

// Modifying through a pointer changes the original.
*p = 100
fmt.Println(x)  // 100

// new() allocates a zero-initialized value on the heap
// and returns a pointer to it.
n := new(int)   // n is *int, pointing to a heap-allocated 0
*n = 55
fmt.Println(*n) // 55

// Nil pointer. A pointer with no address.
// Dereferencing a nil pointer is a runtime panic.
var nilPtr *int
fmt.Println(nilPtr) // <nil>
// fmt.Println(*nilPtr) // panic: nil pointer dereference

When should you use pointers versus values?

Use a pointer when:

  • The struct is large and copying would be expensive
  • You need to modify the original value
  • You want to represent “optional” or “nullable” (the pointer can be nil)
  • The type has a mutex or other type that must not be copied

Use a value when:

  • The type is small (a few fields)
  • You want copies to be independent
  • The type is immutable by design (e.g., a value object)
  • You want to avoid heap allocation (values on stack are faster)
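A sketch of both guidelines in one program: a mutex-holding type that must use pointer receivers, and a small immutable type that is happier as a value:

```go
package main

import (
    "fmt"
    "sync"
)

// Counter holds a mutex, so it must never be copied:
// methods use pointer receivers, and callers share one *Counter.
type Counter struct {
    mu sync.Mutex
    n  int
}

func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.n++
}

// Point is small and immutable by design: value semantics are cheap,
// and every copy is independent.
type Point struct{ X, Y int }

func (p Point) Translate(dx, dy int) Point {
    return Point{p.X + dx, p.Y + dy}
}

func main() {
    c := &Counter{}
    c.Inc()
    c.Inc()
    fmt.Println(c.n) // 2

    p := Point{1, 1}
    q := p.Translate(2, 3)
    fmt.Println(p, q) // {1 1} {3 4}
}
```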

Part 6 — The Garbage Collector: A Masterpiece of Engineering

The Problem: Memory Management Without Manual Free

In C, the programmer manages memory. You call malloc to allocate, free to release. Get it wrong and you have a memory leak (allocated memory never freed) or a use-after-free bug (accessing memory after freeing it).

In Go, the garbage collector (GC) manages heap memory automatically. It tracks which heap objects are still reachable from your program and frees the ones that are not.

The challenge: doing this without stopping the program for more than a millisecond at a time.

Tricolor Mark and Sweep

Go uses a tricolor concurrent mark-and-sweep algorithm. This is a sophisticated design that allows GC work to happen concurrently with your program (the “mutator” in GC terminology).

Every heap object is one of three colors:

White — Not yet visited. At the start of GC, everything is white. At the end, white objects are unreachable garbage to be collected.

Gray — Discovered but not fully processed. Gray objects have been found to be reachable, but we have not yet checked what they point to.

Black — Fully processed. Black objects are reachable, and all objects they point to have been checked.

The algorithm:

Phase 1: Mark Setup (STW)
  Stop the world briefly (microseconds).
  Enable write barriers.
  Snapshot the set of goroutine stacks.
  Restart the world.

Phase 2: Concurrent Marking
  While your program runs, mark goroutine stacks as roots.
  Color root objects gray.
  Loop: pick a gray object, scan its pointers,
        color referenced objects gray,
        color the processed object black.
  Continue until no gray objects remain.

Phase 3: Mark Termination (STW)
  Stop the world briefly (microseconds).
  Flush write barrier buffers.
  Disable write barriers.
  Restart the world.

Phase 4: Concurrent Sweeping
  While your program runs, free white (unreachable) spans.
  Reclaim memory.

The “Stop the World” (STW) pauses in modern Go (1.14+) are typically under 500 microseconds, often under 100 microseconds. This is orders of magnitude better than the GC in older JVM versions.

Write Barriers

There is a subtle problem with concurrent marking. While the GC marks objects gray and black, your program is running and modifying pointers. What if your program moves a pointer from a black object to a new object, making the new object unreachable from all gray objects? The GC would incorrectly collect it.

Write barriers solve this. A write barrier is a small piece of code inserted by the compiler around every pointer write. When your program writes a pointer to a heap object, the write barrier ensures the GC is notified.

// You write:
obj.Field = newValue

// The compiler turns this into approximately:
writeBarrier(obj, &obj.Field, newValue)
obj.Field = newValue

The write barrier is extremely cheap — just a few instructions. But it is not zero. This is the main overhead of having a GC in Go. Hot code that performs many pointer writes will have slightly higher overhead than equivalent C code.

This is why a common performance trick in Go is to minimize pointer writes in hot paths — use value types instead of pointers, prefer arrays over linked lists.

GC Tuning

Go exposes one tuning parameter: GOGC.

GOGC controls when the next GC cycle triggers. By default, GOGC=100, meaning GC triggers when heap size doubles from the previous collection.

  • GOGC=50 — GC more frequently. Lower memory, more CPU.
  • GOGC=200 — GC less frequently. Higher memory, less CPU.
  • GOGC=off — Disable GC entirely. Only for special cases (batch jobs).

Since Go 1.19, there is also GOMEMLIMIT, which sets a soft limit on the total memory the Go runtime will use. The GC will work harder to stay under this limit.

# Set GC target to 50% heap growth
GOGC=50 ./myapp

# Set memory limit to 500MB
GOMEMLIMIT=500MiB ./myapp

# Disable GC (not recommended for long-running services)
GOGC=off ./myapp

You can observe GC behavior in real time:

GODEBUG=gctrace=1 ./myapp

# Example output line:
# gc 4 @1.204s 0%: 0.018+1.2+0.021 ms clock, 0.14+0.3/1.1/2.9+0.17 ms cpu, 4->4->2 MB, 5 MB goal, 8 P
#
# gc 4             = GC cycle number
# @1.204s          = seconds since the program started
# 0%               = fraction of total CPU time spent in GC so far
# 0.018+1.2+0.021  = wall-clock ms: STW sweep termination + concurrent mark + STW mark termination
# 4->4->2 MB       = heap at GC start -> heap at GC end -> live heap
# 5 MB goal        = heap size that will trigger the next cycle
# 8 P              = number of Ps (logical processors)

Part 7 — Goroutines: The Killer Feature

Why Goroutines Are Different From Threads

An operating system thread is expensive. Creating one allocates a fixed stack (1-8MB on most systems), involves a kernel system call, and the OS scheduler has no knowledge of your program’s logic.

A goroutine is a lightweight execution unit managed by the Go runtime, not the operating system.

Differences:

OS Thread vs Goroutine

OS Thread:
  Stack size:     1-8 MB (fixed at creation)
  Creation cost:  ~17 microseconds + kernel involvement
  Switch cost:    ~1-2 microseconds (kernel context switch)
  Max practical:  ~10,000 per process

Goroutine:
  Stack size:     ~2 KB initial (grows as needed)
  Creation cost:  ~0.3 microseconds (no kernel call)
  Switch cost:    ~0.1 microseconds (user-space only)
  Max practical:  Millions per process

Starting a goroutine is just a function call with go in front:

package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done() // Decrement counter when this function returns
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second) // Simulate work
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup

    // Start 100 goroutines. This is trivially cheap.
    for i := 1; i <= 100; i++ {
        wg.Add(1)                // Increment counter before launching goroutine
        go worker(i, &wg)        // Start goroutine. Returns immediately.
    }

    // Wait for all goroutines to finish.
    wg.Wait()
    fmt.Println("All workers done")
}

The GMP Scheduler: The Heart of Goroutine Execution

Go’s runtime implements its own scheduler. Understanding it is key to writing high-performance concurrent Go.

The scheduler has three actors:

G — Goroutine. The unit of work. Carries its stack, its program counter, and its current state.

M — Machine (OS Thread). The actual OS-level thread that executes code on the CPU.

P — Processor. A virtual processor. Holds a local run queue of goroutines. Mediates between G and M.

GMP Model:

  P1 ── M1 ── CPU Core 1
   └─ RunQueue: [G1, G2, G3]

  P2 ── M2 ── CPU Core 2
   └─ RunQueue: [G4, G5]

  P3 ── M3 ── CPU Core 3   (M3 is blocked on a syscall)
   └─ RunQueue: [G6, G7, G8]

  Global RunQueue: [G9, G10, ...]

The number of Ps is controlled by GOMAXPROCS, which defaults to the number of CPU cores on the machine. It caps how many goroutines can execute Go code in parallel.

When you call runtime.GOMAXPROCS(n), you set how many Ps exist. With GOMAXPROCS=1, only one goroutine executes at any instant (though many can be in flight concurrently). With GOMAXPROCS=8 on an 8-core machine, up to 8 goroutines run in parallel.

How the Scheduler Works

Every P has a local run queue (a ring buffer of Gs). When a goroutine is runnable, it gets added to the current P’s local queue.

When an M runs out of work (its P’s run queue is empty), it:

  1. Tries to steal work from other Ps’ queues (work stealing)
  2. Tries to take work from the global run queue
  3. Goes to sleep if no work is available anywhere

Work stealing is the key innovation. It keeps all CPUs busy without requiring a global lock for every scheduling decision.

Goroutine states:

  Runnable ◀──(preempted)─── Running ──(blocks)──▶ Waiting
      │                         ▲                     │
      └───────(scheduled)───────┘                     │
      ▲                                               │
      └────────────────(event ready)──────────────────┘

  Waiting = blocked on a channel, mutex, syscall, or sleep.

Preemption: before Go 1.14, preemption was cooperative: a goroutine could only be preempted at safe points such as function calls, so a tight CPU-bound loop could starve others. Since Go 1.14, the runtime also performs asynchronous preemption: it sends a signal to interrupt a goroutine that has run too long (on the order of 10ms), even if it never calls a function.

System calls: When a goroutine makes a blocking system call (reading from disk, waiting on a network socket), the M running it gets detached from its P. Another M takes that P and continues running other goroutines. When the system call completes, the goroutine becomes runnable again and looks for a P to run on.

This is how Go achieves high concurrency even when many goroutines are blocked on I/O. The CPUs are never idle waiting for one goroutine to finish a system call.

Goroutine Lifecycle in Detail

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    // At startup: G-main is created. GOMAXPROCS Ps are created.
    fmt.Printf("CPUs: %d, GOMAXPROCS: %d\n",
        runtime.NumCPU(), runtime.GOMAXPROCS(0))

    var wg sync.WaitGroup
    results := make([]int, 10)

    for i := 0; i < 10; i++ {
        wg.Add(1)
        i := i // Capture loop variable. IMPORTANT: see explanation below.
        go func() {
            defer wg.Done()
            results[i] = i * i
        }()
    }

    wg.Wait()
    fmt.Println(results) // [0 1 4 9 16 25 36 49 64 81]
}

The i := i line is essential on Go versions before 1.22. Without it, every goroutine would capture the same loop variable i. By the time a goroutine runs, the loop may have advanced or finished: each goroutine would read whatever value i held at that moment, and one that ran after the loop ended would see i == 10 and panic writing results[10], which is out of range.

With i := i, each goroutine captures its own copy of i at the time of its creation. This is a classic Go gotcha.

(Note: Go 1.22 changed this. Variables declared by a for loop — both the three-clause form and for range — are now scoped per iteration, so each goroutine captures its own copy automatically. The explicit copy remains harmless and makes intent obvious in older codebases.)


Part 8 — Channels: Communication, Not Shared State

The Philosophy

Go’s concurrency philosophy, from Rob Pike:

“Do not communicate by sharing memory. Share memory by communicating.”

In most concurrent programming, you protect shared data with mutexes: you lock before reading or writing, unlock after. This works but is error-prone. Forget to lock: data race. Lock in the wrong order: deadlock.

Channels are Go’s alternative. Instead of multiple goroutines accessing the same variable, one goroutine sends data through a channel and another receives it. Only one goroutine touches the data at a time.

Creating and Using Channels

// make(chan Type) creates an unbuffered channel.
// An unbuffered channel synchronizes sender and receiver:
// the sender blocks until a receiver is ready,
// and the receiver blocks until a sender is ready.
ch := make(chan int)

// make(chan Type, capacity) creates a buffered channel.
// The sender can send up to capacity values without blocking.
// It only blocks when the buffer is full.
buffered := make(chan string, 10)

// Sending a value: arrow points to the channel.
ch <- 42

// Receiving a value: arrow points from the channel.
value := <-ch

// Close signals that no more values will be sent.
// Receivers get remaining buffered values, then the zero value.
close(ch)

// Range over a channel. Terminates when the channel is closed.
for v := range ch {
    fmt.Println(v)
}

A Real Producer-Consumer Pattern

package main

import (
    "fmt"
    "sync"
)

// producer generates work and sends it to the jobs channel.
// It signals completion by closing the channel.
func producer(jobs chan<- int, count int) {
    for i := 0; i < count; i++ {
        jobs <- i
    }
    close(jobs) // Signal: no more jobs.
}

// consumer processes jobs from the jobs channel and sends results.
func consumer(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
    defer wg.Done()
    for job := range jobs {
        // Process the job (square it).
        result := job * job
        results <- result
        fmt.Printf("Worker %d: job=%d result=%d\n", id, job, result)
    }
}

func main() {
    const numJobs    = 20
    const numWorkers = 4

    jobs    := make(chan int, numJobs)
    results := make(chan int, numJobs)

    // Start workers.
    var wg sync.WaitGroup
    for w := 1; w <= numWorkers; w++ {
        wg.Add(1)
        go consumer(w, jobs, results, &wg)
    }

    // Start producer.
    go producer(jobs, numJobs)

    // Wait for all workers, then close results.
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect results.
    var sum int
    for r := range results {
        sum += r
    }

    fmt.Printf("Sum of squares: %d\n", sum)
}

The select Statement: Multiplexing Channels

select is Go’s way of waiting on multiple channel operations simultaneously. It picks whichever one is ready. If multiple are ready, it picks one at random.

package main

import (
    "fmt"
    "time"
)

func main() {
    ticker := time.NewTicker(500 * time.Millisecond)
    timeout := time.After(2 * time.Second)
    done    := make(chan bool)

    go func() {
        time.Sleep(1500 * time.Millisecond)
        done <- true
    }()

    // Event loop using select.
    for {
        select {
        case t := <-ticker.C:
            fmt.Println("Tick at", t.Format("15:04:05.000"))

        case <-done:
            fmt.Println("Done signal received. Stopping.")
            ticker.Stop()
            return

        case <-timeout:
            fmt.Println("Timeout! Stopping.")
            ticker.Stop()
            return
        }
    }
}

Channel Internals

A channel is a data structure in the runtime: runtime/chan.go.

Under the hood, a channel contains:

  • A circular ring buffer (for buffered channels)
  • A mutex to protect the buffer
  • A send queue: goroutines blocked waiting to send
  • A receive queue: goroutines blocked waiting to receive
  • The buffer’s current count and capacity

Buffered channel (cap=3, len=2):

  ┌────────────────────────────────────────────────┐
  │ hchan {                                        │
  │   buf:      [42, 17, _]     (ring buffer)      │
  │   sendx:    2               (next write pos)   │
  │   recvx:    0               (next read pos)    │
  │   qcount:   2               (items in buf)     │
  │   dataqsiz: 3               (buffer capacity)  │
  │   sendq:    (empty)         (blocked senders)  │
  │   recvq:    (empty)         (blocked receivers)│
  │   lock:     (unlocked)                         │
  │ }                                              │
  └────────────────────────────────────────────────┘

When a goroutine sends to a full channel, it is added to sendq and descheduled (moved to Waiting state). When a receiver removes an item and room becomes available, it wakes the first goroutine in sendq.

The efficiency comes from this: blocked goroutines are simply parked with no CPU cost. They resume only when the channel operation can complete.


Part 9 — Concurrency Patterns for Senior Developers

Context: Cancellation and Deadlines

The context package is fundamental in production Go code. It provides a way to carry deadlines, cancellation signals, and request-scoped values across API boundaries.

package main

import (
    "context"
    "fmt"
    "time"
)

// fetchData simulates an operation that respects context cancellation.
// Every long-running operation should accept a context.
func fetchData(ctx context.Context, url string) (string, error) {
    // Simulate slow work with a timer.
    done := make(chan string, 1)
    go func() {
        time.Sleep(2 * time.Second) // Simulated latency
        done <- "data from " + url
    }()

    // Wait for either the work to complete or the context to be cancelled.
    select {
    case result := <-done:
        return result, nil
    case <-ctx.Done():
        // ctx.Err() tells you WHY the context was done:
        // context.DeadlineExceeded or context.Canceled
        return "", fmt.Errorf("fetchData cancelled: %w", ctx.Err())
    }
}

func main() {
    // WithTimeout creates a context that cancels after 1 second.
    ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    defer cancel() // Always defer cancel to release resources.

    result, err := fetchData(ctx, "https://example.com/api")
    if err != nil {
        fmt.Println("Error:", err) // Error: fetchData cancelled: context deadline exceeded
        return
    }
    fmt.Println("Result:", result)
}

Context propagation: every function that does I/O, calls an external service, or runs for more than a few milliseconds should accept a context.Context as its first argument.

// Idiomatic Go: context is always the first parameter.
func (r *Repository) GetUser(ctx context.Context, userID string) (*User, error)
func (s *Service) ProcessOrder(ctx context.Context, order Order) error
func (c *Client) SendRequest(ctx context.Context, req Request) (Response, error)

The Pipeline Pattern

A pipeline is a sequence of stages where each stage takes input, processes it, and passes the result to the next stage. Each stage runs in its own goroutine.

package main

import "fmt"

// generate creates a channel of numbers.
// The channel is closed when all numbers are sent.
func generate(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}

// square reads from in, squares each value, sends to out.
func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * n
        }
        close(out)
    }()
    return out
}

// filter keeps only values that pass the predicate.
func filter(in <-chan int, pred func(int) bool) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            if pred(n) {
                out <- n
            }
        }
        close(out)
    }()
    return out
}

func main() {
    // Build the pipeline:
    // generate(1..10) → square → filter(even) → print
    nums    := generate(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    squares := square(nums)
    evens   := filter(squares, func(n int) bool { return n%2 == 0 })

    for n := range evens {
        fmt.Print(n, " ") // 4 16 36 64 100
    }
    fmt.Println()
}

Fan-Out and Fan-In

Fan-out: distribute work across multiple goroutines. Fan-in: merge multiple channels into one.

package main

import (
    "fmt"
    "sync"
)

// fanOut distributes work from one input channel to multiple output channels.
func fanOut(in <-chan int, outs ...chan<- int) {
    var wg sync.WaitGroup
    for _, out := range outs {
        out := out // Capture for goroutine
        wg.Add(1)
        go func() {
            defer wg.Done()
            for n := range in {
                out <- n
            }
        }()
    }
    go func() {
        wg.Wait()
        for _, out := range outs {
            close(out)
        }
    }()
}

// fanIn merges multiple input channels into one output channel.
func fanIn(ins ...<-chan int) <-chan int {
    var wg sync.WaitGroup
    merged := make(chan int, 100)

    output := func(c <-chan int) {
        defer wg.Done()
        for n := range c {
            merged <- n
        }
    }

    wg.Add(len(ins))
    for _, in := range ins {
        go output(in)
    }

    go func() {
        wg.Wait()
        close(merged)
    }()

    return merged
}

sync.Mutex and sync.RWMutex

Sometimes shared state is unavoidable. A sync.Mutex protects it.

package main

import (
    "fmt"
    "sync"
)

// Cache is a thread-safe map.
// Multiple goroutines can safely call Get, Set, and Delete concurrently.
type Cache struct {
    mu    sync.RWMutex      // RWMutex: multiple readers, one writer
    store map[string]string
}

func NewCache() *Cache {
    return &Cache{store: make(map[string]string)}
}

// Get acquires a read lock. Multiple goroutines can read simultaneously.
func (c *Cache) Get(key string) (string, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    val, ok := c.store[key]
    return val, ok
}

// Set acquires a write lock. Only one goroutine can write at a time.
// All readers are blocked during a write.
func (c *Cache) Set(key, value string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.store[key] = value
}

// Delete removes a key.
func (c *Cache) Delete(key string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    delete(c.store, key)
}

func main() {
    cache := NewCache()

    var wg sync.WaitGroup

    // Start 50 concurrent writers.
    for i := 0; i < 50; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            key := fmt.Sprintf("key-%d", i)
            cache.Set(key, fmt.Sprintf("value-%d", i))
        }(i)
    }

    // Start 100 concurrent readers.
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            key := fmt.Sprintf("key-%d", i%50)
            val, _ := cache.Get(key)
            _ = val
        }(i)
    }

    wg.Wait()
    fmt.Println("Done. No race conditions.")
}

The sync/atomic Package

For simple numeric operations, sync/atomic provides lock-free operations that map to single CPU instructions.

import "sync/atomic"

var counter int64

// atomic.AddInt64 is equivalent to counter++ but thread-safe.
// No mutex needed. Faster than a mutex for simple counters.
atomic.AddInt64(&counter, 1)

// Load reads the current value atomically.
current := atomic.LoadInt64(&counter)

// Store writes a value atomically.
atomic.StoreInt64(&counter, 0)

// CompareAndSwap: only writes if the current value matches expected.
// Returns true if the swap happened.
swapped := atomic.CompareAndSwapInt64(&counter, 100, 200)

atomic.Value (available since Go 1.4) offers atomic access to arbitrary values, at the cost of a type assertion on every Load. Go 1.19 added typed atomics — atomic.Int64, atomic.Bool, atomic.Pointer[T], and friends — which make the same pattern type-safe:

var config atomic.Pointer[Config]

// Store a new configuration atomically.
// Safe to call from multiple goroutines.
config.Store(&Config{Timeout: 30})

// Load the current configuration. No type assertion needed.
current := config.Load() // *Config

Part 10 — The Standard Library: A Universe of Capabilities

The Standard Library Is Not “Basic”

Go’s standard library is one of its greatest strengths. It is large, coherent, well-documented, and sufficient for most production needs.

A Go binary linked against only the standard library can serve HTTP, parse JSON, interact with databases, handle TLS, run tests, write structured logs, do cryptography, manage files, and much more — with no external dependencies.

Let us tour the most important packages.

net/http: A Production-Ready HTTP Server

package main

import (
    "encoding/json"
    "log"
    "net/http"
    "time"
)

// User is the domain model for this example.
type User struct {
    ID       int       `json:"id"`
    Name     string    `json:"name"`
    Email    string    `json:"email"`
    JoinedAt time.Time `json:"joined_at"`
}

// usersHandler handles GET /users and POST /users.
func usersHandler(w http.ResponseWriter, r *http.Request) {
    // Set response content type.
    w.Header().Set("Content-Type", "application/json")

    switch r.Method {
    case http.MethodGet:
        users := []User{
            {ID: 1, Name: "Alice", Email: "alice@example.com", JoinedAt: time.Now()},
            {ID: 2, Name: "Bob",   Email: "bob@example.com",   JoinedAt: time.Now()},
        }
        json.NewEncoder(w).Encode(users)

    case http.MethodPost:
        var user User
        if err := json.NewDecoder(r.Body).Decode(&user); err != nil {
            http.Error(w, "invalid request body", http.StatusBadRequest)
            return
        }
        // Assign ID (in real code, this comes from a database).
        user.ID = 3
        user.JoinedAt = time.Now()

        w.WriteHeader(http.StatusCreated)
        json.NewEncoder(w).Encode(user)

    default:
        http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
    }
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/users", usersHandler)
    mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        w.Write([]byte(`{"status":"ok"}`))
    })

    server := &http.Server{
        Addr:         ":8080",
        Handler:      mux,
        ReadTimeout:  10 * time.Second,  // Max time to read request headers
        WriteTimeout: 30 * time.Second,  // Max time to write response
        IdleTimeout:  60 * time.Second,  // Max time for keep-alive connections
    }

    log.Printf("Server starting on %s", server.Addr)
    if err := server.ListenAndServe(); err != http.ErrServerClosed {
        log.Fatal(err)
    }
}

The net/http server is production-ready without any framework. Every incoming request runs in its own goroutine. The server handles keep-alive, TLS, content negotiation, and connection management automatically.

encoding/json: Marshaling and Unmarshaling

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

// JSON struct tags control serialization.
// "json:\"name\"" sets the JSON key name.
// "omitempty" skips the field if it is the zero value.
// "-" excludes the field from JSON entirely.
type Order struct {
    ID          int       `json:"id"`
    CustomerID  string    `json:"customer_id"`
    Total       float64   `json:"total"`
    Currency    string    `json:"currency"`
    Status      string    `json:"status"`
    Notes       string    `json:"notes,omitempty"`   // Omit if empty string
    InternalRef string    `json:"-"`                  // Never in JSON
    CreatedAt   time.Time `json:"created_at"`
}

func main() {
    order := Order{
        ID:         1001,
        CustomerID: "cust-abc",
        Total:      299.99,
        Currency:   "USD",
        Status:     "confirmed",
        CreatedAt:  time.Now(),
        // Notes is empty, will be omitted.
        // InternalRef is excluded always.
    }

    // Marshal: Go value → JSON bytes
    data, err := json.MarshalIndent(order, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(data))

    // Unmarshal: JSON bytes → Go value
    jsonInput := `{"id":2002,"customer_id":"cust-xyz","total":149.99,"currency":"EUR","status":"pending","created_at":"2026-02-18T00:00:00Z"}`
    var received Order
    if err := json.Unmarshal([]byte(jsonInput), &received); err != nil {
        panic(err)
    }
    fmt.Printf("Received order: %+v\n", received)

    // Streaming JSON decoder — efficient for large payloads or network streams
    // json.NewDecoder(r.Body).Decode(&target) reads from io.Reader
    // Much better than json.Unmarshal for HTTP request bodies
}

database/sql: The Database Abstraction

database/sql is Go’s standard database interface. It defines a common API for all databases. Actual database support comes from drivers.

package main

import (
    "context"
    "database/sql"
    "fmt"
    "log"
    "time"

    _ "github.com/lib/pq"  // PostgreSQL driver. The blank import runs init(),
                           // which registers the driver with database/sql.
)

type Product struct {
    ID          int
    Name        string
    Price       float64
    Stock       int
    LastUpdated time.Time
}

// DB wraps sql.DB with our domain methods.
type DB struct {
    conn *sql.DB
}

// NewDB opens a connection pool to PostgreSQL.
func NewDB(dsn string) (*DB, error) {
    conn, err := sql.Open("postgres", dsn)
    if err != nil {
        return nil, fmt.Errorf("sql.Open: %w", err)
    }

    // Connection pool settings.
    conn.SetMaxOpenConns(25)                 // Max simultaneous connections
    conn.SetMaxIdleConns(10)                 // Connections kept idle in pool
    conn.SetConnMaxLifetime(5 * time.Minute) // Recycle connections after 5 min

    // Verify connectivity.
    if err := conn.PingContext(context.Background()); err != nil {
        return nil, fmt.Errorf("db ping: %w", err)
    }

    return &DB{conn: conn}, nil
}

// GetProduct fetches one product by ID.
func (db *DB) GetProduct(ctx context.Context, id int) (*Product, error) {
    const query = `
        SELECT id, name, price, stock, last_updated
        FROM products
        WHERE id = $1
    `

    var p Product
    err := db.conn.QueryRowContext(ctx, query, id).Scan(
        &p.ID, &p.Name, &p.Price, &p.Stock, &p.LastUpdated,
    )
    if err == sql.ErrNoRows {
        return nil, fmt.Errorf("product %d not found", id)
    }
    if err != nil {
        return nil, fmt.Errorf("query product: %w", err)
    }

    return &p, nil
}

// ListProducts returns all products with stock above threshold.
func (db *DB) ListProducts(ctx context.Context, minStock int) ([]Product, error) {
    const query = `
        SELECT id, name, price, stock, last_updated
        FROM products
        WHERE stock >= $1
        ORDER BY name ASC
    `

    rows, err := db.conn.QueryContext(ctx, query, minStock)
    if err != nil {
        return nil, fmt.Errorf("query products: %w", err)
    }
    defer rows.Close() // ALWAYS close rows to return connection to pool.

    var products []Product
    for rows.Next() {
        var p Product
        if err := rows.Scan(&p.ID, &p.Name, &p.Price, &p.Stock, &p.LastUpdated); err != nil {
            return nil, fmt.Errorf("scan product: %w", err)
        }
        products = append(products, p)
    }

    // rows.Err() returns any error that occurred during iteration.
    // Always check this — errors can occur after rows.Next() starts.
    if err := rows.Err(); err != nil {
        return nil, fmt.Errorf("rows error: %w", err)
    }

    return products, nil
}

// UpdateStock updates product stock within a transaction.
// Transactions ensure that either all changes succeed or none do.
func (db *DB) TransferStock(ctx context.Context, fromID, toID, quantity int) error {
    // Begin a transaction.
    tx, err := db.conn.BeginTx(ctx, nil)
    if err != nil {
        return fmt.Errorf("begin tx: %w", err)
    }
    // defer Rollback is a safety net. If Commit succeeds, Rollback is a no-op.
    defer tx.Rollback()

    // Decrement source stock.
    _, err = tx.ExecContext(ctx, `
        UPDATE products SET stock = stock - $1 WHERE id = $2 AND stock >= $1
    `, quantity, fromID)
    if err != nil {
        return fmt.Errorf("decrement stock: %w", err)
    }

    // Increment destination stock.
    _, err = tx.ExecContext(ctx, `
        UPDATE products SET stock = stock + $1 WHERE id = $2
    `, quantity, toID)
    if err != nil {
        return fmt.Errorf("increment stock: %w", err)
    }

    // Commit. If this returns an error, defer Rollback will clean up.
    return tx.Commit()
}

io: The Composition Foundation

The io package defines two interfaces that underpin the entire I/O model in Go:

// Reader: anything that can provide bytes.
type Reader interface {
    Read(p []byte) (n int, err error)
}

// Writer: anything that can accept bytes.
type Writer interface {
    Write(p []byte) (n int, err error)
}

These two interfaces power composable I/O:

// Copy reads from src until EOF and writes to dst.
// Works with ANY Reader and ANY Writer:
// files, networks, buffers, HTTP request/response bodies, etc.
io.Copy(dst, src)

// Example: copy an HTTP response body to stdout.
resp, _ := http.Get("https://example.com")
defer resp.Body.Close()
io.Copy(os.Stdout, resp.Body)

// Example: compress a file using composition.
inFile,  _ := os.Open("data.txt")
outFile, _ := os.Create("data.txt.gz")
gzWriter    := gzip.NewWriter(outFile)
defer gzWriter.Close()

// This single line copies data through the gzip compression pipeline.
io.Copy(gzWriter, inFile)

The power of io.Reader and io.Writer is that you can chain transformations without reading the entire input into memory. A 10GB file can be compressed and uploaded to S3 using constant memory because data flows through the pipeline in chunks.


Part 11 — CPU, RAM, and System Interaction

How Go Talks to the Operating System

Go programs run on top of the operating system. When they need OS services — reading files, writing to the network, allocating memory, creating threads — they make system calls.

On Linux, a system call transfers control to the kernel, which performs the operation and returns. This round trip (user space → kernel space → user space) costs roughly 100-400 nanoseconds before the operation itself does any work.

For every system call Go makes, it goes through the runtime:

Go code
   │
   ▼
runtime syscall wrapper
   │
   ▼
Operating System (kernel)
   │
   ▼
Hardware

The runtime’s syscall wrappers do two important things. First, they handle goroutine parking: when a goroutine makes a blocking syscall, the runtime parks it so the M (OS thread) can run other goroutines. Second, they handle signals: the runtime listens for OS signals (SIGTERM, SIGINT) and routes them to Go code.

Memory Layout

A running Go process has a well-defined memory layout:

High address
┌─────────────────────────────────┐
│  Stack                          │  OS thread stacks
│  (grows downward)               │  Not goroutine stacks
├─────────────────────────────────┤
│  Memory-mapped files            │  mmap() regions
├─────────────────────────────────┤
│  Heap                           │  Managed by Go GC
│  (grows upward)                 │  Contains all escaping values
├─────────────────────────────────┤
│  BSS                            │  Uninitialized global variables
├─────────────────────────────────┤
│  Data                           │  Initialized global variables
├─────────────────────────────────┤
│  Text                           │  Compiled machine code (read-only)
│  (instructions)                 │  Constants, string literals
└─────────────────────────────────┘
Low address

Go goroutine stacks are allocated on the heap (managed by the Go runtime), not on the OS thread stack. This is how they can start at 2KB and grow.

CPU Cache Efficiency

Modern CPUs have multiple levels of cache (L1, L2, L3) between the processor and main memory. L1 cache access is ~1ns. Main memory access is ~100ns. Writing cache-efficient code can make a 100x performance difference.

The key principle: access memory sequentially. A CPU fetches data in cache lines (typically 64 bytes). Sequential access maximizes cache line utilization.

// SLOW: Column-major access pattern. Poor cache performance.
// We skip 1000 elements between each access.
// Every access is likely a cache miss.
func sumColMajor(matrix [1000][1000]int) int {
    sum := 0
    for col := 0; col < 1000; col++ {
        for row := 0; row < 1000; row++ {
            sum += matrix[row][col] // Jump 1000 ints forward each time
        }
    }
    return sum
}

// FAST: Row-major access pattern. Excellent cache performance.
// We read 1000 consecutive elements, then move to the next row.
// Each cache line is fully utilized.
func sumRowMajor(matrix [1000][1000]int) int {
    sum := 0
    for row := 0; row < 1000; row++ {
        for col := 0; col < 1000; col++ {
            sum += matrix[row][col] // Consecutive 8-byte ints, read in order
        }
    }
    return sum
}

The row-major version is typically 3-5x faster on modern hardware. The same computation. The same number of operations. The only difference is memory access pattern.

Slices: The Details That Matter

A slice in Go is a three-field descriptor:

type SliceHeader struct {
    Data uintptr // Pointer to backing array
    Len  int     // Number of elements in use
    Cap  int     // Capacity of backing array
}

Understanding this structure explains several common Go behaviors:

// When append exceeds capacity, a new backing array is allocated.
// The old backing array remains until GC collects it.
s := make([]int, 0, 5) // len=0, cap=5, Data→[_,_,_,_,_]
s = append(s, 1, 2, 3) // len=3, cap=5, Data→[1,2,3,_,_]
s = append(s, 4, 5, 6) // len=6, cap=10 (doubled!), Data→NEW ARRAY

// Slices of the same backing array share memory.
a := []int{1, 2, 3, 4, 5}
b := a[1:3] // b = [2, 3], same backing array as a

b[0] = 99   // Modifies a[1] too!
fmt.Println(a) // [1 99 3 4 5]

// Use copy to get an independent slice.
c := make([]int, len(a))
copy(c, a)
c[0] = 100  // Does not modify a
fmt.Println(a) // [1 99 3 4 5] — unchanged

Growth factor: when append outgrows the capacity, the runtime allocates a new backing array — roughly doubling for small slices, growing by about 25% per step once the slice reaches a few hundred elements — and copies every element. A single grow is O(n), but because capacity grows geometrically, repeated appends are amortized O(1).

If you know the final size in advance, always pre-allocate:

// Slow: many small reallocations as items grow.
var items []Item
for i := 0; i < 10000; i++ {
    items = append(items, generateItem(i))
}

// Fast: one allocation, no reallocation needed.
items := make([]Item, 0, 10000)
for i := 0; i < 10000; i++ {
    items = append(items, generateItem(i))
}

Maps: Under the Hood

Go maps are hash tables. Keys are hashed to find the bucket they belong to.

m := make(map[string]int)
m["alice"] = 1
m["bob"]   = 2

Under the hood:

hash("alice") → bucket 3
hash("bob")   → bucket 7

Bucket 3: [("alice", 1), ...]
Bucket 7: [("bob",   2), ...]

Important map behaviors:

// Iteration order is randomized by design.
// Go deliberately randomizes it to prevent programs from relying on map order.
for k, v := range m {
    fmt.Println(k, v) // Order varies each run
}

// The zero value of a map is nil. Reading from nil map is safe (returns zero value).
// Writing to nil map panics.
var nilMap map[string]int
_ = nilMap["key"]       // Safe: returns 0
nilMap["key"] = 1       // PANIC: assignment to entry in nil map

// Check if a key exists.
val, ok := m["alice"]
if ok {
    fmt.Println("Found:", val)
}

// Delete a key.
delete(m, "alice")

// Maps are NOT safe for concurrent use.
// Use sync.Map or protect with a mutex for concurrent access.

Maps are not safe for concurrent use. Multiple goroutines can read simultaneously, but if any goroutine writes while others read or write, it is a data race. The runtime has lightweight built-in checks that crash the program with fatal error: concurrent map writes even in normal builds, and building with -race pinpoints the race precisely.


Part 12 — Generics: Type Parameters in Go

What Generics Solve

Before Go 1.18, writing generic code meant using interface{} and losing type safety:

// Before generics: type-unsafe.
// A real implementation needs a type switch or reflection to compare,
// repeated for every supported type — and callers must type-assert
// the result. Ugly and error-prone.
func Max(a, b interface{}) interface{} {
    // ... type switch over a and b ...
    return a
}

// After generics: type-safe
func Max[T constraints.Ordered](a, b T) T {
    if a > b {
        return a
    }
    return b
}

// The compiler instantiates the function for each distinct type argument
// (internally grouped by "GC shape"), so every call stays fully type-checked.
iMax := Max(3, 7)         // T = int, returns int
fMax := Max(3.14, 2.71)   // T = float64, returns float64
sMax := Max("apple", "banana") // T = string, returns string

A Generic Data Structure

package main

import "fmt"

// Stack[T] is a generic stack that works with any type.
// T is a type parameter.
type Stack[T any] struct {
    items []T
}

func (s *Stack[T]) Push(item T) {
    s.items = append(s.items, item)
}

func (s *Stack[T]) Pop() (T, bool) {
    if len(s.items) == 0 {
        var zero T // zero value of type T
        return zero, false
    }
    last := s.items[len(s.items)-1]
    s.items = s.items[:len(s.items)-1]
    return last, true
}

func (s *Stack[T]) Len() int {
    return len(s.items)
}

func main() {
    // A stack of ints.
    intStack := &Stack[int]{}
    intStack.Push(1)
    intStack.Push(2)
    intStack.Push(3)

    for intStack.Len() > 0 {
        v, _ := intStack.Pop()
        fmt.Print(v, " ") // 3 2 1
    }
    fmt.Println()

    // A stack of strings. Same code, different type.
    strStack := &Stack[string]{}
    strStack.Push("go")
    strStack.Push("is")
    strStack.Push("great")

    for strStack.Len() > 0 {
        v, _ := strStack.Pop()
        fmt.Print(v, " ") // great is go
    }
    fmt.Println()
}

Type Constraints

Constraints define which types are valid for a type parameter.

import "golang.org/x/exp/constraints"

// constraints.Ordered is any type that supports <, >, <=, >= —
// all integer types, float types, and string.
// (Since Go 1.21, cmp.Ordered in the standard library is equivalent.)

// Number is a custom constraint: any integer or float type.
type Number interface {
    constraints.Integer | constraints.Float
}

// Sum adds up any slice of numbers.
func Sum[T Number](nums []T) T {
    var total T
    for _, n := range nums {
        total += n
    }
    return total
}

// Filter returns elements where the predicate returns true.
// Works with any slice type.
func Filter[T any](items []T, pred func(T) bool) []T {
    result := make([]T, 0, len(items))
    for _, item := range items {
        if pred(item) {
            result = append(result, item)
        }
    }
    return result
}

// Map transforms each element using a function.
func Map[T, U any](items []T, fn func(T) U) []U {
    result := make([]U, len(items))
    for i, item := range items {
        result[i] = fn(item)
    }
    return result
}
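These helpers compose naturally. A self-contained sketch (the functions are re-declared with an -Of suffix so it compiles on its own, without the x/exp dependency) that also shows a detail worth knowing: because constraints.Integer and constraints.Float are defined with ~ ("underlying type") elements, named types such as a Celsius defined on float64 satisfy the Number constraint too:

```go
package main

import "fmt"

// NumberLike mirrors the Number constraint above, written out with ~ elements.
type NumberLike interface {
	~int | ~int64 | ~float64
}

func SumOf[T NumberLike](nums []T) T {
	var total T
	for _, n := range nums {
		total += n
	}
	return total
}

func FilterOf[T any](items []T, pred func(T) bool) []T {
	result := make([]T, 0, len(items))
	for _, item := range items {
		if pred(item) {
			result = append(result, item)
		}
	}
	return result
}

// Celsius has underlying type float64, so it satisfies ~float64.
type Celsius float64

func main() {
	nums := []int{1, 2, 3, 4, 5}
	evens := FilterOf(nums, func(n int) bool { return n%2 == 0 })
	fmt.Println(SumOf(evens)) // 6

	temps := []Celsius{20.5, 21.5}
	fmt.Println(SumOf(temps)) // 42
}
```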

Part 13 — Profiling and Observability: Seeing Inside Your Program

pprof: The Standard Profiler

pprof is Go’s built-in profiling tool. It can profile CPU usage, memory allocation, goroutine blocking, mutex contention, and goroutine stacks.

Add one import to enable the HTTP profiling endpoint:

import _ "net/http/pprof"
// This blank import registers profiling handlers at /debug/pprof/

func main() {
    // Start a profiling server on a separate port.
    // Never expose this on production's public port.
    go func() {
        log.Println(http.ListenAndServe(":6060", nil))
    }()

    // Your actual server.
    // ...
}

Now you can profile your running service:

# CPU profile: what is the CPU spending time on?
# Samples the program every 10ms for 30 seconds.
go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'

# Memory profile: what is on the heap?
go tool pprof http://localhost:6060/debug/pprof/heap

# Goroutine profile: how many goroutines? Where are they blocked?
go tool pprof http://localhost:6060/debug/pprof/goroutine

# Mutex profile: which mutexes are contended?
go tool pprof http://localhost:6060/debug/pprof/mutex

# Block profile: where are goroutines blocking on channels?
go tool pprof http://localhost:6060/debug/pprof/block

Inside pprof’s interactive shell:

(pprof) top10             # Top 10 functions by CPU time
(pprof) top10 -cum        # Top 10 by cumulative (including called functions)
(pprof) list FunctionName # Show annotated source for a function
(pprof) web               # Render an SVG call graph in the browser
(pprof) tree              # Show call tree
# For interactive flame graphs, start the web UI instead:
# go tool pprof -http=:8081 <profile>

Writing Benchmarks

Go’s testing package includes a benchmark runner.

// benchmarks_test.go
package main

import (
    "strings"
    "testing"
)

// A benchmark function starts with BenchmarkXxx and takes *testing.B.
func BenchmarkStringConcat(b *testing.B) {
    // b.N is set by the benchmark runner.
    // It runs the loop enough times to get a stable measurement.
    for i := 0; i < b.N; i++ {
        s := ""
        for j := 0; j < 100; j++ {
            s += "x" // Slow: creates 100 intermediate strings
        }
        _ = s
    }
}

func BenchmarkStringBuilder(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var sb strings.Builder
        for j := 0; j < 100; j++ {
            sb.WriteString("x") // Fast: one allocation, writes in-place
        }
        _ = sb.String()
    }
}

// Run benchmarks:
// go test -bench=. -benchmem
//
// -bench=.       runs all benchmarks
// -benchmem      reports allocations per operation
// -benchtime=10s runs for 10 seconds instead of 1
// -count=5       runs each benchmark 5 times (for stability)

Benchmark output:

BenchmarkStringConcat-8       200000    7541 ns/op    5296 B/op    99 allocs/op
BenchmarkStringBuilder-8     5000000     321 ns/op    1024 B/op     3 allocs/op

The strings.Builder version is roughly 23x faster and makes 97% fewer allocations (3 instead of 99). This is the kind of insight benchmarks reveal.

go test -race: Finding Data Races

The race detector instruments every memory access and every goroutine synchronization. At runtime, it detects when two goroutines access the same memory location concurrently without synchronization.

// This has a data race:
var counter int

func increment() {
    counter++ // Not atomic. Read-modify-write with no protection.
}

func main() {
    go increment()
    go increment()
    time.Sleep(time.Millisecond)
    fmt.Println(counter)
}
go run -race main.go
# Output:
# WARNING: DATA RACE
# Write at 0x... by goroutine 7:
#   main.increment()
#       main.go:7
# Previous write at 0x... by goroutine 6:
#   main.increment()
#       main.go:7

Run the race detector on every test and benchmark in CI:

go test -race ./...

The race detector typically costs 2-20x in execution time and 5-10x in memory. Use it in development and CI, not in production.

Execution Tracer

The execution tracer captures a detailed timeline of everything the Go runtime does: goroutine scheduling, GC pauses, system calls, network events.

# Collect a 5-second trace.
curl 'http://localhost:6060/debug/pprof/trace?seconds=5' > trace.out

# Open the trace viewer.
go tool trace trace.out

The trace viewer shows a Gantt chart of all goroutines across time. You can see exactly when each goroutine ran, when it was preempted, when the GC ran, and where blocking occurred.

This is the most powerful debugging tool in Go for latency issues. It can reveal problems that no other tool can.


Part 14 — External Tools: The Professional Toolkit

golangci-lint: The Meta-Linter

golangci-lint runs dozens of linters simultaneously. It is the industry standard for Go code quality automation.

# Install
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

# Run on entire project
golangci-lint run ./...

# Run with a specific config file
golangci-lint run --config .golangci.yml ./...

A production .golangci.yml:

linters:
  enable:
    - govet # go vet checks
    - errcheck # Check that errors are handled
    - staticcheck # Static analysis from staticcheck.io
    - gosimple # Simplification suggestions
    - ineffassign # Ineffective assignments
    - unused # Unused code
    - gofmt # Formatting
    - goimports # Import organization
    - gocritic # Additional code style checks
    - godot # Comment formatting
    - misspell # Spelling errors in comments and strings
    - revive # Fast, configurable linter (replaces golint)
    - exhaustive # Exhaustiveness of enum switches
    - bodyclose # HTTP response body close
    - noctx # Missing context in HTTP requests
    - sqlclosecheck # Unclosed database rows and statements

linters-settings:
  errcheck:
    check-type-assertions: true
    check-blank: true

  govet:
    enable-all: true

issues:
  exclude-rules:
    - path: "_test.go"
      linters:
        - errcheck # Less strict in tests

dlv: The Debugger

dlv (Delve) is Go’s debugger. It understands goroutines, knows about defer, and works with Go’s runtime.

# Install
go install github.com/go-delve/delve/cmd/dlv@latest

# Compile and debug a program from source
dlv debug ./cmd/server

# Attach to a running process
dlv attach $(pgrep myapp)

# Post-mortem debug a core dump
dlv core ./myapp core.dump

Inside dlv:

(dlv) break main.processOrder    # Set breakpoint at function entry
(dlv) break server.go:145        # Set breakpoint at specific line
(dlv) continue                   # Run until next breakpoint
(dlv) next                       # Step to next line
(dlv) step                       # Step into function call
(dlv) stepout                    # Step out of current function
(dlv) print variable             # Print variable value
(dlv) locals                     # Print all local variables
(dlv) goroutines                 # List all goroutines
(dlv) goroutine 7 bt             # Stack trace for goroutine 7
(dlv) watch variable             # Break when variable changes

mockery: Generating Mocks From Interfaces

go install github.com/vektra/mockery/v2@latest

# Generate a mock for the UserRepository interface
# in package repository, output to mocks/
mockery --name=UserRepository --output=mocks --outpkg=mocks

Generated mock:

// mocks/UserRepository.go (auto-generated — simplified here; do not edit)
type MockUserRepository struct {
    mock.Mock
}

func (m *MockUserRepository) GetByID(ctx context.Context, id string) (*User, error) {
    args := m.Called(ctx, id)
    return args.Get(0).(*User), args.Error(1)
}

Using the mock in tests:

func TestOrderService_PlaceOrder(t *testing.T) {
    mockUserRepo := new(mocks.MockUserRepository)

    // Set expectation: when GetByID is called with "user-1",
    // return this user and no error.
    mockUserRepo.On("GetByID", mock.Anything, "user-1").
        Return(&User{ID: "user-1", Name: "Alice"}, nil)

    service := NewOrderService(mockUserRepo)
    err := service.PlaceOrder(context.Background(), "user-1", order)

    assert.NoError(t, err)
    mockUserRepo.AssertExpectations(t) // Verify all expectations were met
}

Wire: Compile-Time Dependency Injection

go install github.com/google/wire/cmd/wire@latest

Wire generates dependency injection code at compile time, not runtime. No reflection. No container. Just generated Go code.

// wire.go — the provider declarations
//go:build wireinject

package main

import (
    "github.com/google/wire"
)

// InitializeServer uses Wire to build the dependency graph.
func InitializeServer(cfg Config) (*Server, error) {
    wire.Build(
        NewDatabase,
        NewUserRepository,
        NewOrderRepository,
        NewOrderService,
        NewHTTPServer,
    )
    return nil, nil // Wire replaces this with generated code
}

Wire analyzes the provider function signatures, determines which providers satisfy which dependencies, and generates plain Go code that wires everything together.

sqlc: Type-Safe SQL

sqlc generates type-safe Go code from SQL queries. You write SQL, sqlc generates the Go functions.

go install github.com/sqlc-dev/sqlc/cmd/sqlc@latest
sqlc generate

SQL query file:

-- name: GetProduct :one
SELECT id, name, price, stock FROM products WHERE id = $1;

-- name: ListActiveProducts :many
SELECT id, name, price, stock FROM products
WHERE stock > 0 ORDER BY name ASC;

-- name: UpdateProductStock :exec
UPDATE products SET stock = $2 WHERE id = $1;

Generated Go code (what sqlc produces):

// db.go (auto-generated)
func (q *Queries) GetProduct(ctx context.Context, id int32) (Product, error) {
    row := q.db.QueryRowContext(ctx, getProduct, id)
    var i Product
    err := row.Scan(&i.ID, &i.Name, &i.Price, &i.Stock)
    return i, err
}

func (q *Queries) ListActiveProducts(ctx context.Context) ([]Product, error) {
    rows, err := q.db.QueryContext(ctx, listActiveProducts)
    // ... (full implementation generated)
}

No more writing Scan calls by hand. No more typos in column names. Compile-time verification that your SQL and Go code match.


Part 15 — How Senior Go Developers Think

The Mental Model of Simplicity

A junior Go developer learns syntax. A mid-level Go developer learns idioms. A senior Go developer has internalized the philosophy.

Go’s philosophy can be summarized in three words: clarity over cleverness.

When a senior Go developer reviews code, their first questions are:

“Can I understand what this does in thirty seconds?”

“What is the simplest implementation that is also correct?”

“Am I using complexity to solve a real problem, or am I being clever for its own sake?”

When someone proposes adding a design pattern, a senior developer asks: “What problem does this solve that a simple function or struct does not?” If the answer is “flexibility” without a concrete scenario, the pattern probably does not belong.

// Junior: This seems more "object-oriented" and "proper"
type UserServiceInterface interface { GetUser(id string) (*User, error) }
type UserServiceImpl struct { db *sql.DB }
func (s *UserServiceImpl) GetUser(id string) (*User, error) { /* ... */ }
type UserServiceFactory struct { db *sql.DB }
func (f *UserServiceFactory) Create() UserServiceInterface { return &UserServiceImpl{f.db} }

// Senior: This is what the problem actually requires
func getUser(ctx context.Context, db *sql.DB, id string) (*User, error) {
    // Just a function. No interface unless other implementations exist.
    // No factory. No factory factory.
}

The senior developer writes the second version and introduces the interface only when a second implementation is needed (testing mock, alternative backend).

Error Handling: The Go Way

In Go, errors are values. They are returned from functions, not thrown.

// This is not Go error handling:
func badPattern(id string) User {
    user, err := db.GetUser(id)
    if err != nil {
        panic(err) // WRONG: panics crash the server
    }
    return user
}

// This is Go error handling:
func goodPattern(ctx context.Context, id string) (*User, error) {
    user, err := db.GetUser(ctx, id)
    if err != nil {
        // Wrap the error with context. %w enables errors.Is and errors.As.
        return nil, fmt.Errorf("getUser(id=%s): %w", id, err)
    }
    return user, nil
}

Wrapping errors with %w builds an error chain. The caller can inspect it:

err := processOrder(ctx, orderID)
if err != nil {
    // errors.Is checks if any error in the chain matches the target.
    if errors.Is(err, ErrOrderNotFound) {
        return http.StatusNotFound, "order not found"
    }
    // errors.As extracts a specific error type from the chain.
    var dbErr *DatabaseError
    if errors.As(err, &dbErr) {
        log.Printf("database error code %d: %v", dbErr.Code, err)
        return http.StatusInternalServerError, "database error"
    }
    return http.StatusInternalServerError, "internal error"
}

Concurrency: When to Use What

Senior developers choose the right concurrency primitive without hesitation.

Decision tree for Go concurrency:

Do you need to communicate data between goroutines?
├── YES → Use a channel
│         ├── One sender, one receiver → unbuffered channel
│         ├── Producer faster than consumer → buffered channel
│         └── Fan-out/fan-in → channel pipeline
└── NO  → You need to protect shared state
          ├── Simple counter/flag → sync/atomic
          ├── Single reader/writer → sync.Mutex
          ├── Many readers, few writers → sync.RWMutex
          ├── One-time initialization → sync.Once
          └── Read-mostly map, or goroutines with disjoint keys → sync.Map
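One leaf of the tree deserves a concrete sketch: sync.Once for one-time initialization. The Config type and "production" value here are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

type Config struct{ Env string }

var (
	once     sync.Once
	instance *Config
)

// GetConfig lazily builds the singleton exactly once, even when
// called concurrently from many goroutines.
func GetConfig() *Config {
	once.Do(func() {
		instance = &Config{Env: "production"}
	})
	return instance
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = GetConfig() // only the first caller runs the init func
		}()
	}
	wg.Wait()
	fmt.Println(GetConfig().Env) // production
}
```

sync.Once also blocks concurrent callers until the first Do completes, so no goroutine can observe a half-initialized value.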

Writing Zero-Dependency Services

A common senior Go practice is shipping services with minimal or zero dependencies. The standard library is comprehensive enough for most needs.

A senior developer reaches for a third-party library only when:

  1. The standard library genuinely cannot do the task (e.g., complex CLI argument parsing, protocol buffers)
  2. The third-party library has a significantly better API for the task
  3. The library is mature, well-maintained, and widely trusted

For everything else, the standard library is preferred. It is always available, always maintained, never has breaking changes, and requires no go get.

The Table-Driven Test

Senior Go developers test everything with table-driven tests. They write one test function that covers many cases with a data table.

func TestValidateEmail(t *testing.T) {
    // Each row is a test case.
    tests := []struct {
        name    string
        input   string
        wantErr bool
    }{
        {"valid simple",      "alice@example.com",      false},
        {"valid with dots",   "alice.bob@example.com",  false},
        {"valid with plus",   "alice+tag@example.com",  false},
        {"missing @",         "aliceexample.com",       true},
        {"missing domain",    "alice@",                 true},
        {"empty string",      "",                       true},
        {"only whitespace",   "   ",                    true},
        {"double @",          "alice@@example.com",     true},
        {"very long",         strings.Repeat("a", 300) + "@b.com", true},
    }

    for _, tt := range tests {
        // t.Run creates a sub-test for each case.
        // Run a specific sub-test with: go test -run TestValidateEmail/valid_simple
        t.Run(tt.name, func(t *testing.T) {
            err := ValidateEmail(tt.input)
            if tt.wantErr && err == nil {
                t.Errorf("ValidateEmail(%q) = nil, want error", tt.input)
            }
            if !tt.wantErr && err != nil {
                t.Errorf("ValidateEmail(%q) = %v, want nil", tt.input, err)
            }
        })
    }
}

Adding a new test case is one line. Every case gets its own name in the test output, making failures immediately identifiable.


Part 16 — Advanced Patterns: What Separates Good From Great

The Functional Options Pattern

When a function has many optional parameters, the functional options pattern provides a clean, extensible API.

// Server has many configurable options.
type Server struct {
    addr         string
    readTimeout  time.Duration
    writeTimeout time.Duration
    maxConns     int
    logger       *log.Logger
    tls          bool
}

// Option is a function that modifies a Server.
type Option func(*Server)

// Each option is a constructor that returns an Option function.
func WithAddress(addr string) Option {
    return func(s *Server) { s.addr = addr }
}

func WithReadTimeout(d time.Duration) Option {
    return func(s *Server) { s.readTimeout = d }
}

func WithMaxConnections(n int) Option {
    return func(s *Server) { s.maxConns = n }
}

func WithTLS(enabled bool) Option {
    return func(s *Server) { s.tls = enabled }
}

// NewServer applies options to a default configuration.
func NewServer(opts ...Option) *Server {
    s := &Server{
        addr:         ":8080",           // default
        readTimeout:  10 * time.Second,  // default
        writeTimeout: 30 * time.Second,  // default
        maxConns:     1000,              // default
    }
    for _, opt := range opts {
        opt(s)
    }
    return s
}

// Usage: clean, readable, extensible without breaking existing callers.
srv := NewServer(
    WithAddress(":9090"),
    WithReadTimeout(5 * time.Second),
    WithTLS(true),
)

Adding a new option never breaks existing code. This is backward compatibility by design.

The Builder Pattern for SQL Queries

// QueryBuilder builds SQL queries programmatically.
// Prevents SQL injection through parameterization.
type QueryBuilder struct {
    table      string
    conditions []string
    args       []interface{}
    orderBy    string
    limit      int
    offset     int
}

func NewQuery(table string) *QueryBuilder {
    return &QueryBuilder{table: table}
}

func (qb *QueryBuilder) Where(condition string, args ...interface{}) *QueryBuilder {
    qb.conditions = append(qb.conditions, condition)
    qb.args = append(qb.args, args...)
    return qb
}

func (qb *QueryBuilder) OrderBy(field string) *QueryBuilder {
    qb.orderBy = field
    return qb
}

func (qb *QueryBuilder) Limit(n int) *QueryBuilder {
    qb.limit = n
    return qb
}

func (qb *QueryBuilder) Offset(n int) *QueryBuilder {
    qb.offset = n
    return qb
}

func (qb *QueryBuilder) Build() (string, []interface{}) {
    query := "SELECT * FROM " + qb.table
    if len(qb.conditions) > 0 {
        query += " WHERE " + strings.Join(qb.conditions, " AND ")
    }
    if qb.orderBy != "" {
        query += " ORDER BY " + qb.orderBy
    }
    if qb.limit > 0 {
        query += fmt.Sprintf(" LIMIT %d", qb.limit)
    }
    if qb.offset > 0 {
        query += fmt.Sprintf(" OFFSET %d", qb.offset)
    }
    return query, qb.args
}

// Usage:
q, args := NewQuery("products").
    Where("stock > $1", 0).
    Where("price < $2", 100.0).
    OrderBy("name").
    Limit(20).
    Build()
// q    = "SELECT * FROM products WHERE stock > $1 AND price < $2 ORDER BY name LIMIT 20"
// args = [0, 100.0]

The Repository Pattern with Interfaces

// UserRepository defines the contract for user data access.
// It is an application-layer interface — defined where it is used, not where it is implemented.
type UserRepository interface {
    GetByID(ctx context.Context, id string) (*User, error)
    GetByEmail(ctx context.Context, email string) (*User, error)
    Save(ctx context.Context, user *User) error
    Delete(ctx context.Context, id string) error
    List(ctx context.Context, filter UserFilter) ([]User, error)
}

// PostgresUserRepository is the production implementation.
type PostgresUserRepository struct {
    db *sql.DB
}

func (r *PostgresUserRepository) GetByID(ctx context.Context, id string) (*User, error) {
    // Real SQL query
}

// InMemoryUserRepository is the test implementation.
type InMemoryUserRepository struct {
    mu    sync.RWMutex
    users map[string]*User
}

func (r *InMemoryUserRepository) GetByID(ctx context.Context, id string) (*User, error) {
    r.mu.RLock()
    defer r.mu.RUnlock()
    user, ok := r.users[id]
    if !ok {
        return nil, ErrUserNotFound
    }
    return user, nil
}

// The service uses UserRepository, not PostgresUserRepository.
// It is completely decoupled from the database.
type UserService struct {
    users UserRepository
}

func NewUserService(users UserRepository) *UserService {
    return &UserService{users: users}
}

func (s *UserService) GetProfile(ctx context.Context, id string) (*UserProfile, error) {
    user, err := s.users.GetByID(ctx, id)
    if err != nil {
        return nil, err
    }
    return buildProfile(user), nil
}

Part 17 — Modern Go: Features From 1.18 to 1.25

Go 1.18 — The Generics Release

Generics (type parameters) were Go’s most requested feature for years. They arrived in 1.18.

Also in 1.18: fuzzing. Go’s testing package now includes a fuzzer that generates random inputs to find bugs.

// fuzz_test.go
func FuzzParseDate(f *testing.F) {
    // Seed corpus: known-good inputs.
    f.Add("2026-02-18")
    f.Add("01/15/2025")
    f.Add("")

    f.Fuzz(func(t *testing.T, input string) {
        // ParseDate must not panic on any input.
        // If it panics, the fuzzer found a bug.
        result, err := ParseDate(input)
        if err == nil && result.IsZero() {
            t.Errorf("ParseDate(%q): returned zero time without error", input)
        }
    })
}

Run the fuzzer:

go test -fuzz=FuzzParseDate -fuzztime=60s

The fuzzer mutates inputs and looks for panics, crashes, or test failures. It saves any failing input to a corpus file so you can reproduce and fix the bug.

Go 1.21 — slog and Built-In Min/Max

Go 1.21 added log/slog, a structured logging package to the standard library.

import "log/slog"

// Log with alternating key-value pairs.
// The default handler writes human-readable key=value text via the log
// package; attach a JSONHandler (created below) for JSON output.
slog.Info("request processed",
    "method", "POST",
    "path",   "/api/orders",
    "status", 201,
    "latency_ms", 42,
)
// With a JSON handler:
// {"time":"2026-02-18T10:00:00Z","level":"INFO","msg":"request processed","method":"POST","path":"/api/orders","status":201,"latency_ms":42}

// Create a custom logger
logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: slog.LevelDebug,
}))

// Add persistent attributes to a logger (for request-scoped logging)
requestLogger := logger.With(
    "request_id", "req-abc-123",
    "user_id",    "user-456",
)
requestLogger.Info("processing order")
requestLogger.Debug("validating payment method")

Go 1.21 also added min and max as built-in functions, removing the need for custom helper functions for those common operations.
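The built-ins work across any ordered types, as long as the arguments share one type. A quick sketch (requires Go 1.21+):

```go
package main

import "fmt"

func main() {
	fmt.Println(min(3, 7))     // 3
	fmt.Println(max(1.5, 2.5)) // 2.5
	fmt.Println(min("b", "a")) // a

	// A common use: clamping a value into a range.
	v := 150
	fmt.Println(max(0, min(v, 100))) // 100
}
```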

Go 1.22 — Loop Variable Semantics

Go 1.22 changed loop variable semantics. Each iteration now gets its own variable:

// Before Go 1.22: this was a notorious bug.
// All goroutines would print the SAME value (10) because
// they all shared the same loop variable i.
for i := 0; i < 10; i++ {
    go func() {
        fmt.Println(i) // All print 10
    }()
}

// After Go 1.22: each goroutine gets its own i.
// They print 0 through 9 (in some order).
for i := 0; i < 10; i++ {
    go func() {
        fmt.Println(i) // Each prints its own i
    }()
}

This change eliminated an entire class of bugs. Code that was previously correct (with the i := i capture pattern) continues to work. Code that was previously buggy is now correct automatically.


The Architecture of a Production Go Service

Let us see how all of these concepts fit together in a real production service.

cmd/
  server/
    main.go              ← Entry point. Parse flags. Wire dependencies. Start server.

internal/
  domain/
    user.go              ← User entity: struct, methods, validation
    order.go             ← Order entity: aggregate root
    errors.go            ← Domain error types (ErrUserNotFound, etc.)

  application/
    user_service.go      ← Use cases: CreateUser, GetUser, UpdateUser
    order_service.go     ← Use cases: PlaceOrder, CancelOrder
    ports.go             ← Repository and external service interfaces

  infrastructure/
    postgres/
      user_repo.go       ← PostgreSQL implementation of UserRepository
      order_repo.go      ← PostgreSQL implementation of OrderRepository
      migrations/        ← SQL migration files

    http/
      server.go          ← HTTP server setup, middleware, graceful shutdown
      user_handler.go    ← HTTP handlers for user endpoints
      order_handler.go   ← HTTP handlers for order endpoints
      middleware.go      ← Auth, logging, rate limiting middleware

    cache/
      redis.go           ← Redis-based cache implementation

config/
  config.go              ← Configuration from environment variables

pkg/
  validator/             ← Reusable validation utilities
  pagination/            ← Reusable pagination helpers

The main.go wires everything together:

func main() {
    // Load configuration from environment.
    cfg := config.Load()

    // Infrastructure: connect to external systems.
    db, err := postgres.Connect(cfg.DatabaseURL)
    if err != nil {
        log.Fatal("database connection failed:", err)
    }
    defer db.Close()

    // Repositories: implement the ports defined in application layer.
    userRepo  := postgres.NewUserRepository(db)
    orderRepo := postgres.NewOrderRepository(db)

    // Application services: business logic, no infrastructure dependency.
    userService  := application.NewUserService(userRepo)
    orderService := application.NewOrderService(orderRepo, userService)

    // HTTP layer: adapters that translate HTTP to application layer calls.
    // (the internal http package is imported as apphttp so it does not
    // collide with net/http, used below)
    router := apphttp.NewRouter(
        apphttp.NewUserHandler(userService),
        apphttp.NewOrderHandler(orderService),
    )

    // Server with graceful shutdown.
    server := &http.Server{
        Addr:         cfg.ListenAddr,
        Handler:      router,
        ReadTimeout:  10 * time.Second,
        WriteTimeout: 30 * time.Second,
    }

    // Start server in a goroutine.
    go func() {
        log.Printf("Server starting on %s", cfg.ListenAddr)
        if err := server.ListenAndServe(); !errors.Is(err, http.ErrServerClosed) {
            log.Fatal("server error:", err)
        }
    }()

    // Wait for shutdown signal (SIGTERM or SIGINT from Ctrl+C or k8s).
    quit := make(chan os.Signal, 1)
    signal.Notify(quit, syscall.SIGTERM, syscall.SIGINT)
    <-quit

    log.Println("Shutting down gracefully...")
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    if err := server.Shutdown(ctx); err != nil {
        log.Fatal("server forced to shutdown:", err)
    }
    log.Println("Server stopped.")
}

The Hierarchy of Go Knowledge

Level 1: Can write Go code
  - Understands syntax
  - Can read documentation
  - Can build simple programs

Level 2: Can write idiomatic Go
  - Understands interfaces
  - Handles errors properly
  - Writes table-driven tests
  - Uses goroutines and channels correctly

Level 3: Can design Go systems
  - Understands dependency injection
  - Designs good package APIs
  - Uses context correctly
  - Benchmarks and profiles code

Level 4: Understands Go internals
  - Understands GMP scheduler
  - Understands escape analysis
  - Understands GC behavior
  - Can read goroutine stack traces
  - Knows when to use sync/atomic vs mutex

Level 5: Can use Go at its full potential
  - Writes zero-allocation hot paths
  - Designs for cache efficiency
  - Understands tradeoffs between all approaches
  - Contributes to others' deep understanding
  - Reads the Go runtime source code comfortably

Most developers who code in Go for years stay at Level 2-3. Levels 4-5 require deliberate study — reading the runtime source, writing benchmarks, staring at pprof profiles.

But the payoff is immense. A Level 4 developer can look at a 50-line function and immediately say “this allocates on every call, here is why, here is how to fix it.” They can see performance problems before they happen. They can explain exactly why one version of code is 10x faster than another.

That is the goal of this guide.


Conclusion: Go Grows With You

What makes Go unusual is not any single feature. It is the combination.

Fast compilation means tight feedback loops. You try something, build, run, and see results in seconds.

Simplicity means code is readable years later. The function you wrote in 2023 still reads clearly in 2026 because the language did not change underneath it and it does not rely on magic.

The runtime means you get goroutines, GC, and a professional profiling toolchain without any configuration.

The standard library means you can build real systems without evaluating, choosing, and trusting third-party packages for the basics.

A programmer who starts with Go today and studies it seriously — who goes from Hello World to understanding the GMP scheduler, who profiles their code and reads escape analysis output, who designs clean interfaces and tests them properly — that programmer will be productive, efficient, and clear-headed.

Not because Go is the best language in every dimension. It is not.

But because Go is the language most designed to make you productive across your entire career — from your first week to your fifteenth year.

The gopher is patient. The gopher is consistent. The gopher ships.


Part 18 — The Complete Native Standard Library: Every Package Explained

The Go standard library ships with every Go installation. No download required, no version conflicts, no license questions. It is one of the most comprehensive and well-designed standard libraries in any programming language.

This chapter is a complete reference. Every package, every purpose, every key function. Think of it as the library card catalog you always wished existed.


Package archive/tar

Reads and writes tar archives — the same format used by Linux tar.

What it does: Create archives that bundle multiple files and directories into one stream. Common in build systems, deployment pipelines, and Docker layer internals.

Key types:

  • tar.Reader — reads entries from a tar stream
  • tar.Writer — writes entries to a tar stream
  • tar.Header — metadata for each entry (name, size, permissions, timestamps)
import "archive/tar"

// Create a tar archive in memory.
var buf bytes.Buffer
tw := tar.NewWriter(&buf)

// Add a file to the archive.
content := []byte("Hello from tar")
hdr := &tar.Header{
    Name: "hello.txt",
    Mode: 0600,
    Size: int64(len(content)),
}
tw.WriteHeader(hdr)
tw.Write(content)
tw.Close()

// Read back the archive.
tr := tar.NewReader(&buf)
for {
    hdr, err := tr.Next()
    if err == io.EOF { break }
    data, _ := io.ReadAll(tr)
    fmt.Printf("File: %s, Content: %s\n", hdr.Name, data)
}

Use when: Building deployment tools, inspecting Docker images, creating backup utilities.


Package archive/zip

Reads and writes ZIP archives, the most common archive format on Windows and widely used everywhere else.

Key types:

  • zip.Reader — opens a zip for reading
  • zip.Writer — creates a zip
  • zip.File — represents one entry in the archive
import "archive/zip"

// Create a zip file.
f, _ := os.Create("bundle.zip")
zw := zip.NewWriter(f)

// Add a file inside the zip.
w, _ := zw.Create("config.json")
w.Write([]byte(`{"env":"production"}`))
zw.Close()
f.Close()

// Read a zip file.
r, _ := zip.OpenReader("bundle.zip")
defer r.Close()
for _, f := range r.File {
    rc, _ := f.Open()
    data, _ := io.ReadAll(rc)
    rc.Close()
    fmt.Printf("%s: %s\n", f.Name, data)
}

Use when: Serving downloadable file bundles, reading .xlsx or .docx files (they are ZIP archives internally), distributing plugin packages.


Package bufio

Adds buffering to any io.Reader or io.Writer. Buffering dramatically reduces the number of system calls when reading or writing one small piece at a time.

Key types:

  • bufio.Reader — buffered reading with helper methods
  • bufio.Writer — buffered writing, flushes at buffer capacity
  • bufio.Scanner — scans input line by line or by custom split function
import "bufio"

// Read a file line by line without loading it entirely into memory.
// Works on files of any size.
f, _ := os.Open("large.log")
defer f.Close()

scanner := bufio.NewScanner(f)
lineNum := 0
for scanner.Scan() {
    lineNum++
    line := scanner.Text() // One line, newline stripped
    if strings.Contains(line, "ERROR") {
        fmt.Printf("Line %d: %s\n", lineNum, line)
    }
}

// Buffered writer: accumulate writes, flush once.
bw := bufio.NewWriterSize(os.Stdout, 64*1024) // 64KB buffer
for i := 0; i < 10000; i++ {
    fmt.Fprintf(bw, "record %d\n", i) // No syscall yet
}
bw.Flush() // One syscall for all 10000 writes

// Read words from stdin.
scanner2 := bufio.NewScanner(os.Stdin)
scanner2.Split(bufio.ScanWords) // Default is ScanLines
for scanner2.Scan() {
    fmt.Println(scanner2.Text())
}

Key methods on bufio.Reader: ReadLine, ReadString(delim), ReadBytes(delim), Peek(n) (look ahead without consuming), UnreadByte, UnreadRune.

Use when: Parsing large files, implementing line-based protocols, any I/O where you need to read one delimiter at a time.


Package bytes

Manipulates byte slices ([]byte). Mirrors much of the strings package but operates on []byte, which avoids string allocations in performance-critical code.

Key functions:

  • bytes.Contains(b, sub) — reports whether sub is within b
  • bytes.Split(b, sep) — splits b around sep
  • bytes.Join(slices, sep) — joins byte slices with separator
  • bytes.Replace(b, old, new, n) — replace occurrences
  • bytes.TrimSpace(b) — strips leading/trailing whitespace
  • bytes.ToUpper(b) / bytes.ToLower(b) — case conversion
  • bytes.Index(b, sub) — find index of sub in b
  • bytes.Equal(a, b) — compare two byte slices

Key type: bytes.Buffer — a resizable byte buffer implementing io.Reader, io.Writer, and io.ByteWriter. The workhorse for building byte content incrementally.

import "bytes"

// Build a CSV row efficiently with a Buffer.
var buf bytes.Buffer
fields := []string{"Alice", "30", "Engineering"}
for i, f := range fields {
    if i > 0 { buf.WriteByte(',') }
    buf.WriteString(f)
}
buf.WriteByte('\n')
fmt.Print(buf.String()) // Alice,30,Engineering

// Parse a log entry.
entry := []byte("2026-02-18 ERROR connection refused")
parts := bytes.SplitN(entry, []byte(" "), 3)
date    := parts[0] // "2026-02-18"
level   := parts[1] // "ERROR"
message := parts[2] // "connection refused"
fmt.Printf("Date: %s, Level: %s, Message: %s\n", date, level, message)

Use when: Parsing binary protocols, building HTTP request bodies, processing large text without string allocation overhead.


Package compress/*

A family of compression packages. All implement io.Reader/io.Writer interfaces so they compose with any I/O stream.

Sub-packages:

compress/gzip — gzip format (widely used, moderate compression)
compress/zlib — zlib/deflate (used inside PNG, PDF, HTTP)
compress/flate — raw deflate algorithm (underpins gzip and zlib)
compress/bzip2 — bzip2 decompression (read-only; write via external tool)
compress/lzw — Lempel-Ziv-Welch (used in GIF and TIFF)

import (
    "compress/gzip"
    "bytes"
)

// Compress data in memory.
var compressed bytes.Buffer
gw := gzip.NewWriter(&compressed)
gw.Write([]byte("This will be compressed"))
gw.Close()

fmt.Printf("Original: 23 bytes, Compressed: %d bytes\n", compressed.Len())

// Decompress.
gr, _ := gzip.NewReader(&compressed)
defer gr.Close()
decompressed, _ := io.ReadAll(gr)
fmt.Println(string(decompressed))

// Compress a file on disk.
in,  _ := os.Open("data.json")
out, _ := os.Create("data.json.gz")
gzw    := gzip.NewWriter(out)
io.Copy(gzw, in)
gzw.Close()
out.Close()

Use when: Compressing HTTP responses (gzip is the standard web compression), storing large datasets, reading .gz log archives.


Package context

Carries deadlines, cancellation signals, and request-scoped key-value pairs across API boundaries. Essential in every production Go service.

Key functions:

  • context.Background() — the root context. Starting point for all context trees.
  • context.TODO() — placeholder when you have not decided the context yet.
  • context.WithCancel(parent) — returns a child context and a cancel function.
  • context.WithTimeout(parent, duration) — cancels after duration.
  • context.WithDeadline(parent, time) — cancels at a specific time.
  • context.WithValue(parent, key, value) — attaches a value to the context.
import "context"

// Pattern: propagate cancellation through a call tree.
func handleRequest(w http.ResponseWriter, r *http.Request) {
    // The request's context is already timeout-aware from the HTTP server.
    ctx := r.Context()

    // Add a tighter deadline for this specific operation.
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()

    result, err := fetchFromDatabase(ctx)
    if errors.Is(err, context.DeadlineExceeded) {
        http.Error(w, "database timeout", http.StatusGatewayTimeout)
        return
    }
    // ...
}

// context.WithValue: attach request-scoped data.
// Convention: use unexported types as keys to avoid collisions between packages.
type contextKey string
const requestIDKey contextKey = "requestID"

func withRequestID(ctx context.Context, id string) context.Context {
    return context.WithValue(ctx, requestIDKey, id)
}

func getRequestID(ctx context.Context) string {
    id, _ := ctx.Value(requestIDKey).(string)
    return id
}

Rule: Pass context.Context as the first argument to every function that does I/O, makes a network call, or might run for more than a few milliseconds.


Package crypto/*

A comprehensive cryptography toolkit. Go’s crypto packages are battle-tested and match OpenSSL’s feature set for most use cases.

Sub-packages:

crypto/aes — AES symmetric block cipher (128/192/256-bit keys)
crypto/cipher — Block cipher modes (GCM, CBC, CTR, OFB, CFB)
crypto/rsa — RSA public-key cryptography (encrypt, sign, verify)
crypto/ecdsa — Elliptic Curve Digital Signature Algorithm
crypto/ed25519 — Edwards-curve signatures (fast, modern, used in SSH/TLS)
crypto/sha256 / crypto/sha512 — SHA hash functions
crypto/sha1 — SHA-1 (for legacy compatibility only; not secure for new code)
crypto/md5 — MD5 (legacy; not for security)
crypto/hmac — Hash-based Message Authentication Code
crypto/rand — Cryptographically secure random number generator
crypto/tls — TLS 1.2/1.3 implementation
crypto/x509 — X.509 certificates (used in TLS)

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/hmac"
    "crypto/rand"
    "crypto/sha256"
)

// Generate a random 32-byte AES key. Always use crypto/rand, never math/rand.
key := make([]byte, 32)
rand.Read(key)

// AES-GCM: authenticated encryption. The standard choice for symmetric encryption.
block, _ := aes.NewCipher(key)
gcm, _   := cipher.NewGCM(block)

nonce := make([]byte, gcm.NonceSize())
rand.Read(nonce)

plaintext := []byte("sensitive data")
ciphertext := gcm.Seal(nonce, nonce, plaintext, nil)

// Decrypt.
nonceSize := gcm.NonceSize()
decrypted, err := gcm.Open(nil, ciphertext[:nonceSize], ciphertext[nonceSize:], nil)
if err != nil { panic("authentication failed") }
fmt.Println(string(decrypted))

// HMAC-SHA256: verify message integrity and authenticity.
mac := hmac.New(sha256.New, []byte("secret-key"))
mac.Write([]byte("message to authenticate"))
signature := mac.Sum(nil)
fmt.Printf("HMAC: %x\n", signature)

// Verify.
mac2 := hmac.New(sha256.New, []byte("secret-key"))
mac2.Write([]byte("message to authenticate"))
valid := hmac.Equal(signature, mac2.Sum(nil))
fmt.Println("Valid:", valid) // true

Use when: Encrypting stored secrets, signing JWT tokens, building secure communication between services, verifying webhook payloads.


Package database/sql

The standard database abstraction layer. Defines interfaces that every database driver implements. (Covered in depth in Part 10, but here is the complete picture.)

Key types:

  • sql.DB — a connection pool. Not a single connection. Thread-safe.
  • sql.Tx — an active transaction. Methods mirror sql.DB.
  • sql.Rows — cursor over a query result set. Always defer rows.Close().
  • sql.Row — result of a single-row query.
  • sql.Stmt — a prepared statement. Reuse for repeated identical queries.
  • sql.NullString, sql.NullInt64, etc. — handle nullable database columns.

Key functions:

  • sql.Open(driver, dsn) — create a connection pool (does not connect yet)
  • db.QueryContext(ctx, query, args...) — execute query returning rows
  • db.QueryRowContext(ctx, query, args...) — expect exactly one row
  • db.ExecContext(ctx, query, args...) — INSERT/UPDATE/DELETE (no rows returned)
  • db.PrepareContext(ctx, query) — create a reusable prepared statement
  • db.BeginTx(ctx, opts) — start a transaction
// Handle nullable columns with sql.Null types.
type User struct {
    ID       int
    Name     string
    Bio      sql.NullString // Bio can be NULL in the database
    Age      sql.NullInt64
}

var u User
err := db.QueryRowContext(ctx,
    "SELECT id, name, bio, age FROM users WHERE id = $1", 42,
).Scan(&u.ID, &u.Name, &u.Bio, &u.Age)

if u.Bio.Valid {
    fmt.Println("Bio:", u.Bio.String)
} else {
    fmt.Println("Bio: not provided")
}

// Prepared statement: compile SQL once, execute many times.
stmt, _ := db.PrepareContext(ctx,
    "INSERT INTO events(type, payload, created_at) VALUES($1, $2, $3)",
)
defer stmt.Close()

for _, event := range events {
    stmt.ExecContext(ctx, event.Type, event.Payload, time.Now())
}

Package encoding/*

A family of packages for encoding and decoding various data formats.

encoding/json — JSON. The most-used encoding package. Covered in Part 10.

encoding/xml — XML marshaling and unmarshaling.

import "encoding/xml"

type Person struct {
    XMLName xml.Name `xml:"person"`
    Name    string   `xml:"name"`
    Age     int      `xml:"age"`
}

p := Person{Name: "Alice", Age: 30}
data, _ := xml.MarshalIndent(p, "", "  ")
fmt.Println(string(data))
// <person>
//   <name>Alice</name>
//   <age>30</age>
// </person>

var p2 Person
xml.Unmarshal(data, &p2)

encoding/csv — CSV (comma-separated values).

import "encoding/csv"

// Write CSV.
w := csv.NewWriter(os.Stdout)
records := [][]string{
    {"Name", "Age", "City"},
    {"Alice", "30", "New York"},
    {"Bob",   "25", "London"},
}
w.WriteAll(records)

// Read CSV.
r := csv.NewReader(strings.NewReader("Alice,30,NY\nBob,25,London\n"))
r.FieldsPerRecord = 3
for {
    record, err := r.Read()
    if err == io.EOF { break }
    fmt.Println(record[0], record[1], record[2])
}

encoding/base64 — Base64 encoding (used in HTTP Basic Auth, JWT, email attachments).

import "encoding/base64"

encoded := base64.StdEncoding.EncodeToString([]byte("binary data"))
decoded, _ := base64.StdEncoding.DecodeString(encoded)
// URL-safe variant (no + or / characters): base64.URLEncoding

encoding/hex — Hexadecimal encoding/decoding.

import "encoding/hex"

encoded := hex.EncodeToString([]byte{0xDE, 0xAD, 0xBE, 0xEF})
fmt.Println(encoded) // "deadbeef"

decoded, _ := hex.DecodeString("deadbeef")

encoding/gob — Go’s own binary serialization format. Fast, but Go-specific (not interoperable with other languages).

import "encoding/gob"

// Encode.
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
enc.Encode(MyStruct{Name: "Alice", Score: 100})

// Decode.
var result MyStruct
dec := gob.NewDecoder(&buf)
dec.Decode(&result)

encoding/binary — Read/write fixed-size binary values. Essential for binary protocols (network packets, file formats).

import "encoding/binary"

// Write a uint32 in big-endian byte order.
var buf bytes.Buffer
binary.Write(&buf, binary.BigEndian, uint32(1024))

// Read it back.
var n uint32
binary.Read(&buf, binary.BigEndian, &n)
fmt.Println(n) // 1024

Package errors

The standard errors package, dramatically expanded in Go 1.13 with wrapping, unwrapping, and error chain inspection.

Key functions:

  • errors.New(text) — create a simple sentinel error
  • errors.Is(err, target) — check if any error in the chain matches target
  • errors.As(err, target) — extract a specific error type from the chain
  • errors.Unwrap(err) — get the next error in the chain
import "errors"

// Sentinel errors: predefined error values for comparison.
var (
    ErrNotFound   = errors.New("not found")
    ErrPermission = errors.New("permission denied")
    ErrTimeout    = errors.New("operation timed out")
)

// Wrapping: add context without losing the original error.
func readConfig(path string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return fmt.Errorf("readConfig(%q): %w", path, err)
        // %w wraps the error, making it inspectable with errors.Is/As
    }
    _ = data
    return nil
}

// Custom error type with additional fields.
type ValidationError struct {
    Field   string
    Message string
}

func (e *ValidationError) Error() string {
    return fmt.Sprintf("validation error: %s: %s", e.Field, e.Message)
}

// Caller can inspect the chain:
err := processForm(input)
var valErr *ValidationError
if errors.As(err, &valErr) {
    // valErr.Field and valErr.Message are available here.
    fmt.Printf("Invalid field %q: %s\n", valErr.Field, valErr.Message)
}

Package flag

Command-line flag parsing. Simple, batteries-included, no third-party library needed for most CLI tools.

import "flag"

// Define flags with name, default value, and usage description.
host    := flag.String("host",    "localhost", "server hostname")
port    := flag.Int("port",       8080,        "server port")
verbose := flag.Bool("verbose",   false,       "enable verbose output")
timeout := flag.Duration("timeout", 30*time.Second, "request timeout")

// Parse must be called after defining all flags and before using them.
flag.Parse()

// Remaining non-flag arguments (positional args).
args := flag.Args()

fmt.Printf("Starting on %s:%d (verbose=%v, timeout=%v)\n",
    *host, *port, *verbose, *timeout)
fmt.Println("Extra args:", args)

Run with: ./app -host=example.com -port=9090 -verbose file1.txt file2.txt

Use when: Writing CLI tools that need configuration. For complex CLIs with subcommands, consider cobra (external) instead.


Package fmt

Formatted I/O. One of the most-used packages in Go.

Verbs reference:

%v     default format (structs: {field1 field2})
%+v    struct with field names ({Name:Alice Age:30})
%#v    Go syntax representation (main.Person{Name:"Alice", Age:30})
%T     type name (main.Person)
%d     integer (decimal)
%b     integer (binary)
%x     integer (hex, lowercase) / bytes (hex)
%X     integer (hex, uppercase)
%o     integer (octal)
%f     float (default precision)
%.2f   float (2 decimal places)
%e     float (scientific: 1.23e+02)
%s     string or []byte
%q     quoted string ("hello\nworld")
%p     pointer address (0xc0000b4010)
%t     boolean (true/false)
%c     character (Unicode code point)
%w     wrap an error (only valid in fmt.Errorf)

Key functions:

  • fmt.Println(a...) — print with spaces, newline
  • fmt.Printf(format, a...) — formatted print
  • fmt.Fprintf(w, format, a...) — formatted print to any Writer
  • fmt.Sprintf(format, a...) — format to a string (no I/O)
  • fmt.Errorf(format, a...) — create formatted error
  • fmt.Scan(a...) / fmt.Scanf(format, a...) — read from stdin
  • fmt.Sscanf(str, format, a...) — parse values from a string
// Sprintf: format into a string (very commonly used)
url := fmt.Sprintf("https://api.example.com/users/%d/orders?limit=%d", userID, limit)

// Fprintf: write to any Writer (files, HTTP responses, buffers)
fmt.Fprintf(w, `{"id":%d,"status":%q}`, order.ID, order.Status)

// Stringer interface: a type with String() method is printed by its method.
type Point struct { X, Y int }
func (p Point) String() string { return fmt.Sprintf("(%d, %d)", p.X, p.Y) }
p := Point{3, 4}
fmt.Println(p) // (3, 4)

Package hash/*

Non-cryptographic and cryptographic hash functions. All implement hash.Hash, a superset of io.Writer.

hash/fnv — FNV hash. Extremely fast, non-cryptographic. Used for hash tables, cache keys, sharding.

import "hash/fnv"

h := fnv.New64a()
h.Write([]byte("my-cache-key"))
sum := h.Sum64() // uint64 hash value
shardIndex := sum % uint64(numShards)

hash/crc32 / hash/crc64 — CRC checksums. Used for error detection (Ethernet, ZIP, PNG).

import "hash/crc32"

checksum := crc32.ChecksumIEEE([]byte("data to check"))

hash/adler32 — Adler-32 checksum. Used in zlib.

Cryptographic hashes live in crypto/sha256, crypto/sha512, crypto/md5, crypto/sha1 — covered in the crypto section.


Package html and html/template

html — HTML escaping only.

import "html"

safe := html.EscapeString("<script>alert('xss')</script>")
// &lt;script&gt;alert(&#39;xss&#39;)&lt;/script&gt;

html/template — Context-aware HTML templating that automatically escapes values to prevent XSS. Use this instead of text/template when generating HTML.

import "html/template"

const tmpl = `
<html><body>
  <h1>Hello, {{.Name}}!</h1>
  <p>Your score: {{.Score}}</p>
  {{if .IsAdmin}}<a href="/admin">Admin Panel</a>{{end}}
  <ul>
  {{range .Items}}<li>{{.}}</li>{{end}}
  </ul>
</body></html>`

type Data struct {
    Name    string
    Score   int
    IsAdmin bool
    Items   []string
}

t := template.Must(template.New("page").Parse(tmpl))
t.Execute(os.Stdout, Data{
    Name:    "<Alice>", // Will be escaped to &lt;Alice&gt;
    Score:   100,
    IsAdmin: true,
    Items:   []string{"Item 1", "Item 2"},
})

Template actions:

  • {{.Field}} — output field value (auto-escaped)
  • {{if .Condition}}...{{end}} — conditional
  • {{range .Slice}}...{{end}} — loop
  • {{with .Value}}...{{end}} — conditional scope
  • {{template "name" .}} — include named template
  • {{block "name" .}}...{{end}} — define overridable block

Package io

Core I/O primitives. Every I/O operation in Go ultimately uses these interfaces.

Key interfaces:

  • io.Reader — Read(p []byte) (n int, err error)
  • io.Writer — Write(p []byte) (n int, err error)
  • io.Closer — Close() error
  • io.Seeker — Seek(offset int64, whence int) (int64, error)
  • io.ReadWriter — Reader + Writer
  • io.ReadCloser — Reader + Closer (returned by http.Response.Body)
  • io.WriteCloser — Writer + Closer
  • io.ReadWriteSeeker — Reader + Writer + Seeker (like *os.File)
  • io.ByteReader — ReadByte() (byte, error)
  • io.RuneReader — ReadRune() (rune, int, error)

Key functions:

  • io.Copy(dst, src) — copy until EOF
  • io.CopyN(dst, src, n) — copy exactly n bytes
  • io.ReadAll(r) — read until EOF into a []byte
  • io.ReadFull(r, buf) — fill buf completely (error if fewer bytes)
  • io.LimitReader(r, n) — wrap reader to return at most n bytes
  • io.TeeReader(r, w) — reads from r, simultaneously writes to w
  • io.MultiReader(readers...) — concatenate multiple readers
  • io.MultiWriter(writers...) — fan-out writes to multiple writers
  • io.Pipe() — synchronous in-memory pipe; returns *io.PipeReader and *io.PipeWriter
  • io.Discard — a Writer that discards everything written to it
// TeeReader: read AND simultaneously write to a log.
var logBuf bytes.Buffer
tee := io.TeeReader(r.Body, &logBuf) // Read from tee → reads from Body AND writes to logBuf
var payload MyPayload
json.NewDecoder(tee).Decode(&payload)
fmt.Println("Raw body was:", logBuf.String())

// Pipe: connect a writer to a reader without an intermediate buffer.
pr, pw := io.Pipe()
go func() {
    json.NewEncoder(pw).Encode(myData)
    pw.Close()
}()
http.Post("https://api.example.com/data", "application/json", pr)

// LimitReader: protect against huge uploads.
limited := io.LimitReader(r.Body, 1<<20) // Max 1MB
data, err := io.ReadAll(limited)

Package io/fs

Defines interfaces for file system access, allowing code to work with any file system — real disk, embedded, in-memory, zip archives — through a common interface.

Key interface: fs.FS — a file system. Open, read, iterate.

import "io/fs"

// Functions that accept fs.FS work with any filesystem.
func countMarkdownFiles(fsys fs.FS) (int, error) {
    count := 0
    err := fs.WalkDir(fsys, ".", func(path string, d fs.DirEntry, err error) error {
        if err != nil { return err }
        if !d.IsDir() && strings.HasSuffix(path, ".md") {
            count++
        }
        return nil
    })
    // Read count only after the walk has finished. Writing
    // "return count, fs.WalkDir(...)" would capture count before the
    // walk runs and always return 0.
    return count, err
}

// Use with the real OS filesystem.
count, _ := countMarkdownFiles(os.DirFS("/home/user/docs"))

// Use with embedded files (embed.FS also implements fs.FS).
//go:embed static
var staticFiles embed.FS
count2, _ := countMarkdownFiles(staticFiles)

Package log

Simple logging. Writes to stderr by default. Each log message is timestamped. Fatal calls os.Exit(1). Panic logs then panics.

import "log"

log.Println("Server started")               // 2026/02/18 10:00:00 Server started
log.Printf("Listening on port %d", 8080)
log.Fatal("could not connect to database") // Logs then calls os.Exit(1)

// Custom logger with prefix and flags.
logger := log.New(os.Stdout, "[APP] ", log.Ldate|log.Ltime|log.Lshortfile)
logger.Println("Custom logger")
// [APP] 2026/02/18 10:00:00 main.go:42: Custom logger

// Flags for formatting:
// log.Ldate     — date (2009/01/23)
// log.Ltime     — time (01:23:23)
// log.Lmicroseconds — microsecond resolution
// log.Llongfile — full file path and line number
// log.Lshortfile — just file name and line number
// log.LUTC      — use UTC instead of local time
// log.Lmsgprefix — move prefix to before the message

For structured, leveled logging in production, use log/slog (Go 1.21+) or the go.uber.org/zap library.


Package log/slog

Go 1.21+. Structured, leveled logging. The modern replacement for the log package in production services.

import "log/slog"

// Default handler: human-readable key=value text via the standard log
// package (stderr). Use a JSON handler explicitly for machine-readable logs.
slog.Info("request received",
    "method",     "GET",
    "path",       "/api/users",
    "request_id", "req-abc-123",
)

// Create a custom handler.
logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level:     slog.LevelDebug,
    AddSource: true, // Include file:line in every log entry
}))

// Add permanent attributes to a logger (e.g., service name, version).
appLogger := logger.With("service", "order-service", "version", "1.4.2")

// Use throughout the app.
appLogger.Info("order placed", "order_id", 1001, "amount", 299.99)
appLogger.Error("payment failed", "order_id", 1001, "error", err)

// Group attributes.
appLogger.Info("user action",
    slog.Group("user",
        slog.String("id",    "usr-42"),
        slog.String("email", "alice@example.com"),
    ),
    slog.String("action", "login"),
)

Package math

Mathematical functions for float64. All the standard mathematical operations.

Key constants: math.Pi, math.E, math.Phi, math.Sqrt2, math.MaxFloat64, math.SmallestNonzeroFloat64, math.MaxInt, math.MinInt

Key functions:

import "math"

math.Abs(-5.5)          // 5.5
math.Ceil(1.2)          // 2.0
math.Floor(1.9)         // 1.0
math.Round(1.5)         // 2.0
math.Sqrt(16.0)         // 4.0
math.Pow(2, 10)         // 1024.0
math.Log(math.E)        // 1.0
math.Log2(1024)         // 10.0
math.Log10(1000)        // 3.0
math.Sin(math.Pi / 2)   // 1.0
math.Cos(0)             // 1.0
math.Hypot(3, 4)        // 5.0 (√(3²+4²))
math.Min(3.0, 5.0)      // 3.0
math.Max(3.0, 5.0)      // 5.0
math.Mod(10.5, 3.0)     // 1.5
math.IsNaN(math.NaN())  // true
math.IsInf(math.Inf(1), 1) // true
math.Inf(1)             // +Infinity
math.Inf(-1)            // -Infinity
math.NaN()              // Not a Number

math/big — Arbitrary-precision integers, floats, and rationals. For financial calculations, cryptography, and numbers that overflow int64.

import "math/big"

// Big integer: no overflow ever.
a := new(big.Int).SetInt64(1000000000)
b := new(big.Int).SetInt64(1000000000)
product := new(big.Int).Mul(a, b)
fmt.Println(product) // 1000000000000000000

// Arbitrary-precision floats. Note: big.Float is still binary floating
// point, so decimals like 99.99 are not exact. For money, prefer big.Rat
// or an integer count of cents.
price, _ := new(big.Float).SetString("99.99") // SetString returns (value, ok)
tax, _   := new(big.Float).SetString("0.08")
total := new(big.Float).Mul(price, tax)
fmt.Printf("Tax: %.2f\n", total) // 8.00

math/rand — Pseudo-random number generation. NOT cryptographically secure. Use crypto/rand for security-sensitive applications.

import "math/rand/v2" // Go 1.22+ — improved API, better algorithm

// Global functions use a shared source (thread-safe in Go 1.20+).
n := rand.IntN(100)      // Random int in [0, 100)
f := rand.Float64()      // Random float in [0.0, 1.0)

// Shuffle a slice.
items := []string{"a", "b", "c", "d", "e"}
rand.Shuffle(len(items), func(i, j int) {
    items[i], items[j] = items[j], items[i]
})

Package mime

MIME type detection and manipulation. Used in HTTP servers and email handling.

import "mime"

// Detect MIME type from file extension.
mimeType := mime.TypeByExtension(".json")  // "application/json"
mimeType  = mime.TypeByExtension(".html")  // "text/html; charset=utf-8"
mimeType  = mime.TypeByExtension(".png")   // "image/png"

// Parse a Content-Type header.
mediaType, params, _ := mime.ParseMediaType("text/html; charset=utf-8")
fmt.Println(mediaType)        // "text/html"
fmt.Println(params["charset"]) // "utf-8"

mime/multipart — Read multipart/form-data bodies. Used for file upload handling.

import "mime/multipart"

func handleUpload(w http.ResponseWriter, r *http.Request) {
    r.ParseMultipartForm(32 << 20) // 32MB max memory
    file, header, err := r.FormFile("upload")
    if err != nil {
        http.Error(w, "no file", 400)
        return
    }
    defer file.Close()
    fmt.Printf("Received file: %s (%d bytes)\n", header.Filename, header.Size)

    // filepath.Base strips any directory components from the client-supplied
    // name, preventing path traversal (requires "path/filepath").
    dst, _ := os.Create("/uploads/" + filepath.Base(header.Filename))
    defer dst.Close()
    io.Copy(dst, file)
}

Package net

Low-level network I/O. TCP, UDP, Unix sockets, DNS lookups. The foundation under net/http.

import "net"

// TCP server: accept connections, handle in goroutines.
ln, _ := net.Listen("tcp", ":9000")
for {
    conn, err := ln.Accept()
    if err != nil { continue }
    go handleConn(conn)
}

func handleConn(conn net.Conn) {
    defer conn.Close()
    buf := make([]byte, 4096)
    for {
        n, err := conn.Read(buf)
        if err != nil { return }
        conn.Write(buf[:n]) // Echo server
    }
}

// TCP client.
conn, _ := net.Dial("tcp", "example.com:80")
defer conn.Close()
conn.Write([]byte("GET / HTTP/1.0\r\n\r\n"))
response, _ := io.ReadAll(conn)

// UDP.
addr, _ := net.ResolveUDPAddr("udp", ":8053")
conn2, _ := net.ListenUDP("udp", addr)
buf := make([]byte, 1024)
n, remoteAddr, _ := conn2.ReadFromUDP(buf)
conn2.WriteToUDP(buf[:n], remoteAddr)

// DNS lookup.
addrs, _ := net.LookupHost("google.com")
fmt.Println(addrs) // ["142.250.80.46", ...]

ips, _ := net.LookupIP("google.com")
for _, ip := range ips {
    fmt.Println(ip, ip.IsPrivate(), ip.IsLoopback())
}

Package net/http

Go’s production-ready HTTP/1.1 and HTTP/2 client and server. No framework needed for most use cases. (Covered in depth in Part 10.)

Server-side key types:

  • http.ServeMux — route multiplexer
  • http.Server — configurable HTTP server
  • http.Handler — interface: ServeHTTP(ResponseWriter, *Request)
  • http.HandlerFunc — adapts a function to http.Handler
  • http.ResponseWriter — write the HTTP response
  • http.Request — the incoming HTTP request

Client-side key types:

  • http.Client — configurable HTTP client with timeout and transport
  • http.Transport — controls connection pooling, TLS, proxies
  • http.Response — the HTTP response (always defer resp.Body.Close())
// Middleware pattern: wrap any Handler.
func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        next.ServeHTTP(w, r)
        slog.Info("request",
            "method",   r.Method,
            "path",     r.URL.Path,
            "duration", time.Since(start),
        )
    })
}

// Apply middleware.
mux := http.NewServeMux()
mux.HandleFunc("/", myHandler)
http.ListenAndServe(":8080", loggingMiddleware(mux))

// HTTP client with timeout and connection pool configuration.
client := &http.Client{
    Timeout: 10 * time.Second,
    Transport: &http.Transport{
        MaxIdleConns:        100,
        MaxIdleConnsPerHost: 10,
        IdleConnTimeout:     90 * time.Second,
    },
}
resp, err := client.Get("https://api.example.com/data")
if err != nil { /* handle */ }
defer resp.Body.Close()

net/http/httptest — Testing HTTP handlers without a real server.

import "net/http/httptest"

func TestMyHandler(t *testing.T) {
    req := httptest.NewRequest("GET", "/users/42", nil)
    w   := httptest.NewRecorder()

    myHandler(w, req)

    resp := w.Result()
    body, _ := io.ReadAll(resp.Body)
    if resp.StatusCode != 200 {
        t.Errorf("got status %d, want 200", resp.StatusCode)
    }
    // Assert on body content.
}

Package net/url

Parse, build, and manipulate URLs.

import "net/url"

// Parse a URL into its components.
u, _ := url.Parse("https://api.example.com/v2/users?limit=10&offset=20#results")
fmt.Println(u.Scheme)   // "https"
fmt.Println(u.Host)     // "api.example.com"
fmt.Println(u.Path)     // "/v2/users"
fmt.Println(u.Fragment) // "results"

// Query parameters as a map.
params := u.Query()
fmt.Println(params.Get("limit"))  // "10"
fmt.Println(params.Get("offset")) // "20"

// Build a URL programmatically.
base, _ := url.Parse("https://api.example.com")
base.Path = "/v2/search"
q := base.Query()
q.Set("q",    "golang")
q.Set("sort", "relevance")
q.Add("tag",  "backend")
q.Add("tag",  "performance")
base.RawQuery = q.Encode()
fmt.Println(base.String())
// https://api.example.com/v2/search?q=golang&sort=relevance&tag=backend&tag=performance

// URL encoding.
encoded := url.QueryEscape("hello world & more")
fmt.Println(encoded) // "hello+world+%26+more"
decoded, _ := url.QueryUnescape(encoded)

Package os

Operating system interface: files, environment, processes, signals, standard I/O.

Key functions:

import "os"

// Files.
f, err := os.Open("file.txt")        // Open for reading only
f, err  = os.Create("file.txt")      // Create or truncate
f, err  = os.OpenFile("file.txt", os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
os.Remove("file.txt")                // Delete
os.Rename("old.txt", "new.txt")      // Rename / move
data, _ := os.ReadFile("config.json") // Read entire file
os.WriteFile("out.txt", data, 0644)   // Write entire file

// Directories.
os.Mkdir("mydir", 0755)              // Create one directory
os.MkdirAll("a/b/c", 0755)           // Create all missing directories
entries, _ := os.ReadDir(".")        // List directory entries
for _, e := range entries {
    fmt.Println(e.Name(), e.IsDir())
}
os.RemoveAll("tmpdir")               // Remove directory and all contents

// File information.
info, _ := os.Stat("file.txt")
fmt.Println(info.Name(), info.Size(), info.Mode(), info.ModTime())
_, err = os.Stat("missing.txt")
os.IsNotExist(err) // true

// Environment variables.
os.Setenv("APP_ENV", "production")
val := os.Getenv("APP_ENV")         // "production"
val  = os.Getenv("MISSING")         // "" (empty, no error)
val, ok := os.LookupEnv("APP_ENV")  // ok=true if set

// Process.
os.Exit(1)             // Exit immediately with code 1 (no defers run)
os.Getpid()            // Current process ID
os.Getppid()           // Parent process ID
os.Hostname()          // Machine hostname
args := os.Args        // Command-line arguments (os.Args[0] is the program name)

// Temporary files and directories.
f, _ := os.CreateTemp("", "myapp-*.tmp")  // Create temp file
dir, _ := os.MkdirTemp("", "myapp-*")     // Create temp directory
defer os.Remove(f.Name())
defer os.RemoveAll(dir)

os/exec — Run external programs.

import "os/exec"

// Simple command.
out, err := exec.Command("git", "rev-parse", "HEAD").Output()
fmt.Println(string(out)) // current git commit hash

// Command with stdin/stdout/stderr.
cmd := exec.Command("grep", "-r", "TODO", ".")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.Run()

// Run with context (automatically killed if context is cancelled).
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
output, err := exec.CommandContext(ctx, "make", "build").Output()

os/signal — OS signal handling.

import "os/signal"
import "syscall"

quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGTERM, syscall.SIGINT)
<-quit // Block until signal received
// Perform graceful shutdown.

Package path and path/filepath

path — URL path manipulation (forward slashes, no OS specifics).

import "path"

path.Join("a", "b", "c")     // "a/b/c"
path.Dir("/a/b/c.txt")        // "/a/b"
path.Base("/a/b/c.txt")       // "c.txt"
path.Ext("/a/b/c.txt")        // ".txt"
path.Clean("a/./b/../c")      // "a/c"
path.Match("*.go", "main.go") // true, nil

path/filepath — OS file system paths (uses OS separator: / on Linux/macOS, \ on Windows).

import (
    "io/fs"
    "path/filepath"
)

filepath.Join("src", "main", "app.go")     // "src/main/app.go" on Linux
filepath.Dir("/home/user/file.txt")         // "/home/user"
filepath.Base("/home/user/file.txt")        // "file.txt"
filepath.Ext("/home/user/file.txt")         // ".txt"
filepath.Abs("relative/path")              // Absolute path
filepath.Rel("/home/user", "/home/user/docs/file.txt") // "docs/file.txt"

// Walk: visit every file and directory in a tree.
filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
    if err != nil { return err }
    if !d.IsDir() && filepath.Ext(path) == ".go" {
        fmt.Println(path)
    }
    return nil
})

// Glob: find files matching a pattern. Note: Glob does NOT support ** recursion;
// use WalkDir (above) to search a whole tree.
matches, _ := filepath.Glob("src/*/*.go")

Package reflect

Inspect and manipulate types and values at runtime. Powerful but should be used sparingly — it bypasses type safety and is slow.

import "reflect"

// Inspect any value's type and kind.
x := 42
t := reflect.TypeOf(x)
fmt.Println(t.Name(), t.Kind()) // "int", "int"

s := struct{ Name string; Age int }{"Alice", 30}
st := reflect.TypeOf(s)
for i := 0; i < st.NumField(); i++ {
    field := st.Field(i)
    fmt.Printf("%s: %v (tag: %q)\n",
        field.Name, reflect.ValueOf(s).Field(i), field.Tag)
}

// Dynamically call a method by name.
v := reflect.ValueOf(&myObj)
method := v.MethodByName("Process")
results := method.Call([]reflect.Value{reflect.ValueOf("input")})
fmt.Println(results[0].Interface())

// Reflect is used internally by encoding/json, database/sql, and similar packages
// to handle arbitrary types. In your own code, prefer type switches and interfaces.
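The examples above cover the "inspect" half of reflect; to actually modify a value (the "manipulate" half) you must start from a pointer and call Elem(), otherwise the Value is not settable. A minimal sketch (setTo42 is an illustrative helper, not a real API):

```go
package main

import (
	"fmt"
	"reflect"
)

// setTo42 writes through a pointer using reflect; illustrative only.
func setTo42(p *int) {
	v := reflect.ValueOf(p).Elem() // Elem() follows the pointer to the int.
	if v.CanSet() {                // Settable only because we started from a pointer;
		v.SetInt(42)               // reflect.ValueOf(*p) would give CanSet() == false.
	}
}

func main() {
	x := 10
	setTo42(&x)
	fmt.Println(x) // 42
}
```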

Package regexp

Regular expressions. Go uses RE2 syntax — no lookaheads or backreferences, but guaranteed linear time matching (no catastrophic backtracking).

import "regexp"

// Compile once, use many times. MustCompile panics if the pattern is invalid.
// Compile at package level so it compiles once at startup.
var (
    emailRe  = regexp.MustCompile(`^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$`)
    uuidRe   = regexp.MustCompile(`[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}`)
    ipv4Re   = regexp.MustCompile(`\b(\d{1,3}\.){3}\d{1,3}\b`)
)

// Match.
fmt.Println(emailRe.MatchString("alice@example.com")) // true
fmt.Println(emailRe.MatchString("not-an-email"))      // false

// Find first match.
log := "Error at 2026-02-18: connection failed"
date := regexp.MustCompile(`\d{4}-\d{2}-\d{2}`).FindString(log)
fmt.Println(date) // "2026-02-18"

// Find all matches.
text := "IPs: 192.168.1.1 and 10.0.0.1"
all := ipv4Re.FindAllString(text, -1) // -1 means find all
fmt.Println(all) // ["192.168.1.1", "10.0.0.1"]

// Capture groups.
re := regexp.MustCompile(`(\w+)\s+(\w+)`)
matches := re.FindStringSubmatch("John Doe")
fmt.Println(matches[0]) // "John Doe" (full match)
fmt.Println(matches[1]) // "John" (group 1)
fmt.Println(matches[2]) // "Doe"  (group 2)

// Named groups.
re2 := regexp.MustCompile(`(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})`)
match := re2.FindStringSubmatch("2026-02-18")
names := re2.SubexpNames()
for i, name := range names {
    if name != "" { fmt.Printf("%s: %s\n", name, match[i]) }
}

// Replace.
result := regexp.MustCompile(`\bpassword\b`).ReplaceAllString(text, "***")

// Split.
parts := regexp.MustCompile(`[\s,;]+`).Split("a, b;  c d", -1)

Package runtime

Direct interface to Go’s runtime system. Goroutine information, GC control, memory statistics, and stack traces.

import "runtime"

// Platform information.
fmt.Println(runtime.GOOS)   // "linux", "darwin", "windows"
fmt.Println(runtime.GOARCH) // "amd64", "arm64"
fmt.Println(runtime.NumCPU()) // Number of logical CPUs

// Goroutine management.
runtime.GOMAXPROCS(8) // Max number of OS threads executing Go code simultaneously
n := runtime.NumGoroutine() // Current goroutine count

// Memory statistics.
var ms runtime.MemStats
runtime.ReadMemStats(&ms)
fmt.Printf("Alloc: %v MiB\n",     ms.Alloc/1024/1024)
fmt.Printf("TotalAlloc: %v MiB\n", ms.TotalAlloc/1024/1024)
fmt.Printf("Sys: %v MiB\n",        ms.Sys/1024/1024)
fmt.Printf("NumGC: %v\n",          ms.NumGC)
fmt.Printf("HeapObjects: %v\n",    ms.HeapObjects)

// Force a GC cycle (rarely needed; trust the runtime).
runtime.GC()

// Get the call stack.
buf := make([]byte, 4096)
n2 := runtime.Stack(buf, false) // false = only current goroutine
fmt.Printf("Stack:\n%s\n", buf[:n2])

// Yield the processor to other goroutines.
runtime.Gosched()

// Get caller information.
pc, file, line, ok := runtime.Caller(0) // 0 = current function
if ok {
    fn := runtime.FuncForPC(pc)
    fmt.Printf("Called from %s() in %s:%d\n", fn.Name(), file, line)
}

Package sort

Sorting and searching for slices and user-defined collections.

import "sort"

// Sort built-in slices.
nums := []int{5, 2, 8, 1, 9, 3}
sort.Ints(nums)
fmt.Println(nums) // [1 2 3 5 8 9]

strs := []string{"banana", "apple", "cherry"}
sort.Strings(strs)
fmt.Println(strs) // [apple banana cherry]

floats := []float64{3.14, 1.41, 2.71}
sort.Float64s(floats)

// Sort a slice of structs by a field.
type Product struct {
    Name  string
    Price float64
}
products := []Product{
    {"TV",    999.99},
    {"Phone", 699.99},
    {"Tablet", 399.99},
}

// Sort by price ascending.
sort.Slice(products, func(i, j int) bool {
    return products[i].Price < products[j].Price
})

// Stable sort preserves original order for equal elements.
sort.SliceStable(products, func(i, j int) bool {
    return products[i].Name < products[j].Name
})

// Check if sorted.
sort.IntsAreSorted(nums) // true

// Binary search in a sorted slice.
idx := sort.SearchInts(nums, 5) // index where 5 is (or would be)
fmt.Println(idx) // 3

// Generic sort (Go 1.21+): the slices package is even cleaner.
import (
    "cmp"
    "slices"
)
slices.Sort(nums)
slices.SortFunc(products, func(a, b Product) int {
    return cmp.Compare(a.Price, b.Price)
})
idx2, found := slices.BinarySearch(nums, 5)

Package strconv

Convert between strings and basic data types. Faster and safer than fmt.Sscanf.

import "strconv"

// String to number.
n, err := strconv.Atoi("42")          // "42" → int
n64, err := strconv.ParseInt("42", 10, 64) // base 10, 64-bit
f, err := strconv.ParseFloat("3.14", 64)
b, err := strconv.ParseBool("true")   // "true", "1", "t", "TRUE" → true

// Number to string.
s := strconv.Itoa(42)                  // int → "42"
s  = strconv.FormatInt(42, 2)          // int → binary: "101010"
s  = strconv.FormatInt(255, 16)        // int → hex: "ff"
s  = strconv.FormatFloat(3.14, 'f', 2, 64) // "3.14"
s  = strconv.FormatBool(true)          // "true"

// Append to a buffer (avoids allocation vs. Sprintf).
buf := []byte("value: ")
buf = strconv.AppendInt(buf, 42, 10)   // "value: 42"

// Quote and unquote strings.
quoted := strconv.Quote("hello\nworld") // `"hello\nworld"` (escaped)
unquoted, _ := strconv.Unquote(quoted)

// Check if a string is a valid number.
_, err = strconv.ParseInt("not-a-number", 10, 64) // err != nil

Package strings

String manipulation. One of the most-used packages in all Go code.

import "strings"

// Predicates.
strings.Contains("seafood", "foo")       // true
strings.HasPrefix("seafood", "sea")      // true
strings.HasSuffix("seafood", "food")     // true
strings.EqualFold("Go", "go")            // true (case-insensitive compare)
strings.ContainsAny("hello", "aeiou")    // true (any char from second string)

// Searching.
strings.Index("chicken", "ken")          // 4 (-1 if not found)
strings.LastIndex("cabbage", "a")        // 5
strings.Count("cheese", "e")             // 3

// Transformation.
strings.ToUpper("hello")                 // "HELLO"
strings.ToLower("HELLO")                 // "hello"
strings.Title("hello world")             // "Hello World" (deprecated since Go 1.18; use golang.org/x/text/cases)
strings.TrimSpace("  hello  ")           // "hello"
strings.Trim("...hello...", ".")         // "hello"
strings.TrimLeft("...hello...", ".")     // "hello..."
strings.TrimRight("...hello...", ".")    // "...hello"
strings.TrimPrefix("hello-world", "hello-") // "world"
strings.TrimSuffix("hello.go", ".go")    // "hello"
strings.Replace("oink oink", "oink", "moo", 1)  // "moo oink"
strings.ReplaceAll("oink oink", "oink", "moo")   // "moo moo"

// Splitting and joining.
strings.Split("a,b,c", ",")             // ["a", "b", "c"]
strings.SplitN("a,b,c", ",", 2)         // ["a", "b,c"]
strings.Fields("  foo bar  baz  ")       // ["foo", "bar", "baz"]
strings.Join([]string{"a","b","c"}, "-") // "a-b-c"
strings.Repeat("ab", 3)                 // "ababab"

// Builder: efficient string construction (amortizes allocations instead of copying on every concatenation).
var sb strings.Builder
for i := 0; i < 5; i++ {
    fmt.Fprintf(&sb, "item-%d, ", i)
}
fmt.Println(sb.String())

// Reader: strings as io.Reader (no copy).
r := strings.NewReader("hello from a string")
data, _ := io.ReadAll(r)

Package sync

Synchronization primitives. The low-level tools for safe concurrent programming.

sync.Mutex     — mutual exclusion lock
sync.RWMutex   — multiple readers, single writer
sync.WaitGroup — wait for a collection of goroutines to finish
sync.Once      — execute a function exactly once
sync.Map       — concurrent-safe map (specialized use cases)
sync.Pool      — pool of temporary objects (reduce GC pressure)
sync.Cond      — condition variable (goroutines wait for a condition)

import "sync"

// sync.Once: safe lazy initialization.
var (
    instance *Service
    once     sync.Once
)

func GetService() *Service {
    once.Do(func() {
        instance = &Service{} // Called exactly once, even from multiple goroutines.
    })
    return instance
}

// sync.Pool: reuse expensive-to-allocate objects.
// Note: putting a bare []byte into a Pool boxes the slice header into an
// interface on every Put; high-performance code often stores *[]byte instead.
// Kept simple here for clarity.
var bufPool = sync.Pool{
    New: func() interface{} {
        return make([]byte, 4096) // New buffer when the pool is empty.
    },
}

func handleRequest(data []byte) {
    buf := bufPool.Get().([]byte) // Get from pool (or call New).
    defer bufPool.Put(buf)        // Return to pool when done.
    copy(buf, data)
    // Process buf...
}
// bufPool dramatically reduces allocations (and GC pressure) in high-throughput code.

// sync.Cond: a goroutine waits until a condition is true.
type Queue struct {
    mu    sync.Mutex
    cond  *sync.Cond
    items []int
}

func NewQueue() *Queue {
    q := &Queue{}
    q.cond = sync.NewCond(&q.mu)
    return q
}

func (q *Queue) Enqueue(item int) {
    q.mu.Lock()
    q.items = append(q.items, item)
    q.cond.Signal() // Wake one waiting goroutine.
    q.mu.Unlock()
}

func (q *Queue) Dequeue() int {
    q.mu.Lock()
    defer q.mu.Unlock()
    for len(q.items) == 0 {
        q.cond.Wait() // Release lock, sleep, re-acquire lock on wake.
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item
}
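The examples above demonstrate Once, Pool, and Cond; the three primitives used most often day to day are Mutex, RWMutex, and WaitGroup. A minimal sketch combining them (SafeCounter is a hypothetical type, not from the text above):

```go
package main

import (
	"fmt"
	"sync"
)

// SafeCounter is a hypothetical example: a map guarded by an RWMutex.
type SafeCounter struct {
	mu sync.RWMutex
	m  map[string]int
}

func (c *SafeCounter) Inc(key string) {
	c.mu.Lock() // Writers take the exclusive lock.
	c.m[key]++
	c.mu.Unlock()
}

func (c *SafeCounter) Get(key string) int {
	c.mu.RLock() // Multiple readers may hold the lock concurrently.
	defer c.mu.RUnlock()
	return c.m[key]
}

// run increments the counter from 100 goroutines and returns the total.
func run() int {
	c := SafeCounter{m: make(map[string]int)}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1) // Add before starting the goroutine, not inside it.
		go func() {
			defer wg.Done()
			c.Inc("hits")
		}()
	}
	wg.Wait() // Block until all 100 goroutines have called Done.
	return c.Get("hits")
}

func main() {
	fmt.Println(run()) // 100
}
```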

Package sync/atomic

Lock-free atomic operations that typically compile to a single CPU instruction. Faster than a mutex for simple counters and flags.

Covered in depth in Part 9. Key types in Go 1.19+: atomic.Int64, atomic.Bool, atomic.Pointer[T], atomic.Value.
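As a minimal sketch of the typed API ahead of Part 9 (the counter names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// run increments an atomic counter from many goroutines and returns the total.
func run() int64 {
	var hits atomic.Int64 // Zero value is ready to use; no initialization needed.
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			hits.Add(1) // Lock-free increment; no mutex required.
		}()
	}
	wg.Wait()
	return hits.Load()
}

func main() {
	var ready atomic.Bool
	ready.Store(true)
	fmt.Println(run(), ready.Load()) // 1000 true
}
```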


Package testing

The entire test framework. No third-party library needed for writing and running tests.

import "testing"

// Unit test.
func TestAdd(t *testing.T) {
    got  := Add(2, 3)
    want := 5
    if got != want {
        t.Errorf("Add(2,3) = %d, want %d", got, want)
    }
}

// t.Helper() marks a function as a test helper.
// When t.Fatal/Error is called inside it, the reported line number
// points to the caller, not the helper.
func assertEqual(t *testing.T, got, want int) {
    t.Helper()
    if got != want { t.Errorf("got %d, want %d", got, want) }
}

// Subtests with t.Run. Each runs independently.
func TestMultiply(t *testing.T) {
    cases := []struct{ a, b, want int }{
        {2, 3, 6}, {0, 5, 0}, {-1, 4, -4},
    }
    for _, tc := range cases {
        tc := tc // Capture range variable (no longer needed as of Go 1.22).
        t.Run(fmt.Sprintf("%dx%d", tc.a, tc.b), func(t *testing.T) {
            t.Parallel() // Run subtests in parallel.
            assertEqual(t, Multiply(tc.a, tc.b), tc.want)
        })
    }
}

// Benchmark.
func BenchmarkAdd(b *testing.B) {
    for i := 0; i < b.N; i++ {
        Add(2, 3)
    }
}

// TestMain: setup and teardown for the entire test binary.
func TestMain(m *testing.M) {
    // Setup: start test database, etc.
    code := m.Run() // Run all tests.
    // Teardown: close connections, etc.
    os.Exit(code)
}

// testing.TB is an interface satisfied by *testing.T, *testing.B, and *testing.F.
// Use it in helper functions that should work with any test type.
func setupDB(t testing.TB) *sql.DB {
    t.Helper()
    // ...
}

Package text/template

Text templating. Like html/template but without automatic HTML escaping. Use for generating configuration files, code, emails, reports — any text output that is not HTML.

import "text/template"

const configTmpl = `
server:
  host: {{.Host}}
  port: {{.Port}}
  workers: {{.Workers}}

database:
  {{- range .DBs}}
  - name: {{.Name}}
    host: {{.Host}}
  {{- end}}
`

type Config struct {
    Host    string
    Port    int
    Workers int
    DBs     []struct{ Name, Host string }
}

t := template.Must(template.New("config").Parse(configTmpl))
t.Execute(os.Stdout, Config{
    Host:    "0.0.0.0",
    Port:    8080,
    Workers: 4,
    DBs: []struct{ Name, Host string }{
        {"primary", "db1.internal"},
        {"replica", "db2.internal"},
    },
})

Template functions via FuncMap:

funcMap := template.FuncMap{
    "upper": strings.ToUpper,
    "join":  strings.Join,
    "add":   func(a, b int) int { return a + b },
}
t := template.Must(template.New("").Funcs(funcMap).Parse(`{{upper .Name}}`))

Package time

Time, duration, and timezone handling. One of the most consistently well-designed packages in Go.

import "time"

// Current time.
now := time.Now()
utc := now.UTC()
local := now.Local()

// Creating specific times.
t := time.Date(2026, time.February, 18, 10, 30, 0, 0, time.UTC)

// Formatting. Go uses a reference time instead of strftime codes.
// The reference time is: Mon Jan 2 15:04:05 MST 2006
fmt.Println(t.Format("2006-01-02"))            // "2026-02-18"
fmt.Println(t.Format("2006-01-02 15:04:05"))   // "2026-02-18 10:30:00"
fmt.Println(t.Format(time.RFC3339))            // "2026-02-18T10:30:00Z"
fmt.Println(t.Format(time.RFC1123))            // "Wed, 18 Feb 2026 10:30:00 UTC"

// Parsing.
t2, _ := time.Parse("2006-01-02", "2026-02-18")
t3, _ := time.Parse(time.RFC3339, "2026-02-18T10:30:00Z")

// Duration arithmetic.
d := 5 * time.Hour + 30 * time.Minute
fmt.Println(d.Hours())   // 5.5
fmt.Println(d.Minutes()) // 330

future := now.Add(24 * time.Hour)      // tomorrow
past   := now.Add(-7 * 24 * time.Hour) // 7 days ago
diff   := future.Sub(now)              // Duration between two times

// Comparison.
now.Before(future) // true
now.After(past)    // true
now.Equal(now)     // true

// Sleep, ticker, timer.
time.Sleep(100 * time.Millisecond)

ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
go func() {
    for t := range ticker.C {
        fmt.Println("Tick:", t.Format("15:04:05"))
    }
}()

timer := time.NewTimer(5 * time.Second)
<-timer.C // Block until timer fires

// Before Go 1.23, time.After leaked its timer until it fired if the channel
// was never read; since Go 1.23 unreferenced timers are garbage collected.
// NewTimer still gives explicit control via Stop.
select {
case <-time.After(1 * time.Second):
    fmt.Println("Timed out")
case result := <-workChan:
    fmt.Println("Got result:", result)
}

// Measure elapsed time.
start := time.Now()
doWork()
fmt.Println("Elapsed:", time.Since(start))

// Timezone.
loc, _ := time.LoadLocation("America/New_York")
nyTime := now.In(loc)
fmt.Println(nyTime.Format("15:04:05 MST"))

Package unicode

Unicode code point (rune) classification and conversion.

import "unicode"

unicode.IsLetter('A')    // true
unicode.IsDigit('5')     // true
unicode.IsSpace('\t')    // true
unicode.IsUpper('A')     // true
unicode.IsLower('a')     // true
unicode.IsPunct('.')     // true
unicode.ToUpper('a')     // 'A'
unicode.ToLower('A')     // 'a'

// Work with Unicode strings using range (gives runes, not bytes).
s := "Héllo Wörld"
for i, r := range s {
    fmt.Printf("index %d, rune %c (%d)\n", i, r, r)
}

// len(s) gives byte count. Use utf8.RuneCountInString for character count.
import "unicode/utf8"
fmt.Println(len(s))                          // byte count (may be > char count for non-ASCII)
fmt.Println(utf8.RuneCountInString(s))       // character count
fmt.Println(utf8.ValidString(s))             // true if valid UTF-8

Package unsafe

Direct memory access. Bypasses Go’s type system. Use with extreme care.

import "unsafe"

// Size of a type in bytes.
fmt.Println(unsafe.Sizeof(int64(0)))  // 8
fmt.Println(unsafe.Sizeof(bool(false))) // 1

// Offset of a field within a struct (for binary protocol encoding).
type Header struct {
    Version uint8
    Flags   uint16
    Length  uint32
}
fmt.Println(unsafe.Offsetof(Header{}.Flags))  // 2 (uint16 aligns to 2; 1 padding byte after Version)
fmt.Println(unsafe.Offsetof(Header{}.Length)) // 4

// unsafe.Pointer: convert between pointer types.
// Rarely needed. Use only for CGo, calling C libraries, or zero-copy conversions.
x := uint64(0xDEADBEEF)
// View the same memory as [8]byte without copying.
b := (*[8]byte)(unsafe.Pointer(&x))
fmt.Printf("% x\n", *b) // ef be ad de 00 00 00 00 (little-endian)

unsafe should be treated as a last resort. The garbage collector assumes all pointer operations follow Go’s rules. Breaking those rules via unsafe can corrupt memory in ways that are impossible to debug.


Standard Library Packages at a Glance

Here is every remaining package in a quick-reference format:

bufio          — Buffered I/O for Readers and Writers
builtin        — Built-in Go functions (len, cap, make, new, append, close, delete, panic, recover)
cmp            — Generic comparison helpers (Go 1.21+)
embed          — Embed files into Go binaries at compile time with //go:embed
expvar         — Exported runtime variables (for /debug/vars endpoint)
image          — 2D image handling
image/color    — Color models (RGBA, NRGBA, Gray, YCbCr, etc.)
image/draw     — Image composition
image/jpeg     — JPEG encode/decode
image/png      — PNG encode/decode
image/gif      — GIF encode/decode
index/suffixarray — In-memory suffix array for fast string searching
iter           — Iterator types for range-over functions (Go 1.23+)
maps           — Generic map utilities: Clone, Copy, Delete, Keys, Values (Go 1.21+)
math/cmplx     — Complex number functions
net/mail       — Parse email addresses and messages
net/rpc        — Remote procedure calls (frozen; new code typically uses gRPC or HTTP)
net/smtp       — SMTP email client
net/textproto  — Text-based network protocols (HTTP, SMTP headers)
plugin         — Load shared libraries as Go plugins at runtime
slices         — Generic slice utilities: Sort, Contains, Index, Reverse (Go 1.21+)
syscall        — Raw system call interface (OS-specific, prefer os package)
testing/fstest — In-memory filesystem for testing fs.FS code
testing/iotest — io.Reader/Writer implementations that exercise error paths
testing/quick  — Property-based testing with random inputs
text/scanner   — General-purpose scanner for text parsing
text/tabwriter — Text alignment using elastic tabstops
unicode/utf16  — UTF-16 encoding and decoding
unicode/utf8   — UTF-8 encoding, decoding, and validation
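One entry from the list above earns a concrete sketch, since it appears constantly in real code: bufio.Scanner is the idiomatic way to read input line by line (the input string and helper name here are illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// readLines scans a string line by line using bufio.Scanner.
// The same pattern works on any io.Reader (a file, a network connection).
func readLines(s string) []string {
	sc := bufio.NewScanner(strings.NewReader(s))
	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text()) // Text() returns the line without its newline.
	}
	// sc.Err() is nil on clean EOF; real code should check it after the loop.
	return lines
}

func main() {
	fmt.Println(readLines("alpha\nbeta\ngamma\n")) // [alpha beta gamma]
}
```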

The embed Package: Compile-Time File Inclusion

embed deserves special attention because it is used constantly in production.

import _ "embed"

// Embed a single file as a string.
//go:embed config/default.yaml
var defaultConfig string

// Embed a single file as bytes.
//go:embed certs/root-ca.pem
var rootCA []byte

// Embed an entire directory as fs.FS.
//go:embed static
var staticFiles embed.FS

// Use embedded files.
func main() {
    // Serve embedded static files over HTTP.
    http.Handle("/static/", http.FileServer(http.FS(staticFiles)))

    // Read the config from the embedded string.
    fmt.Println(defaultConfig)

    // Read individual files from the embedded FS.
    data, _ := staticFiles.ReadFile("static/index.html")
    fmt.Println(string(data))
}

The binary produced by go build contains all embedded files baked in. Deploy one binary. No external config files needed. No static asset directories to manage.


The slices and maps Packages (Go 1.21+)

Go 1.21 added two utility packages that fill long-standing gaps.

slices:

import "slices"

nums := []int{5, 2, 8, 1, 9}

slices.Sort(nums)                         // [1 2 5 8 9]
slices.Contains(nums, 5)                  // true
idx, found := slices.BinarySearch(nums, 5) // 2, true
slices.Reverse(nums)                      // [9 8 5 2 1]
slices.Max(nums)                          // 9
slices.Min(nums)                          // 1
clone := slices.Clone(nums)               // independent copy
idx2 := slices.Index(nums, 8)             // 1 (first occurrence)
slices.Compact([]int{1,1,2,2,3})          // [1 2 3] (remove consecutive dupes)

maps:

import "maps"

m := map[string]int{"a": 1, "b": 2, "c": 3}

keys   := slices.Sorted(maps.Keys(m))    // ["a", "b", "c"] (Keys/Values return iterators as of Go 1.23)
values := slices.Collect(maps.Values(m)) // [1 2 3] in some order
clone  := maps.Clone(m)                  // independent copy
maps.Copy(dst, src)                      // copy all entries from src into dst
maps.DeleteFunc(m, func(k string, v int) bool {
    return v < 2 // delete entries where value < 2
})
maps.Equal(m1, m2)                       // true if same key-value pairs

Quick Reference: The Essential Commands

# Development
go run main.go                  # Build and run
go build -o app .               # Build binary
go build -race .                # Build with race detector
go vet ./...                    # Static analysis
gofmt -l .                      # List unformatted files (gofmt takes paths, not ./... patterns)
gofmt -w .                      # Format all files

# Testing
go test ./...                   # Run all tests
go test -v ./...                # Verbose output
go test -race ./...             # With race detector
go test -cover ./...            # Coverage percentage
go test -coverprofile=c.out ./... && go tool cover -html=c.out  # Coverage HTML
go test -bench=. -benchmem ./... # Run benchmarks with memory stats
go test -run TestName ./...     # Run specific tests

# Profiling
go tool pprof profile.out       # Analyze CPU/memory profile
go tool trace trace.out         # Analyze execution trace
GODEBUG=gctrace=1 ./app         # GC trace output

# Build information
go build -gcflags="-m" ./...    # Show inlining and escape decisions
go build -ldflags="-X main.version=$(git rev-parse HEAD)" . # Embed version
GOOS=linux GOARCH=amd64 go build .  # Cross compile

# Modules
go mod init module/path         # Initialize module
go mod tidy                     # Remove unused, add missing dependencies
go mod vendor                   # Copy dependencies to vendor/
go mod graph                    # Print module dependency graph
go list -m all                  # List all dependencies

# Documentation
go doc fmt.Println              # Show doc for specific symbol
go doc -all fmt                 # Show all docs for package
godoc -http=:6060               # Local docs server (needs golang.org/x/tools/cmd/godoc)

Tags

#go #golang #internals #compiler #goroutines #concurrency #memory #garbage-collector #scheduler #gmp #performance #backend #systems #runtime #channels #profiling #tools #senior #architecture #cpu