Go Notes API: BDD, TDD, DDD, Hexagonal Architecture with Gin, gRPC, Redis & PostgreSQL

Go Notes API: BDD, TDD, DDD, Hexagonal Architecture with Gin, gRPC, Redis & PostgreSQL

Build a production-ready Notes API in Go 1.24+ using BDD, TDD, DDD, Hexagonal Architecture, Gin, gRPC, Redis, PostgreSQL, JWT auth, Docker, FX, and Zap. Step by step.

By Omar Flores

Imagine you’re building a house. Before the foundation is poured, a team of engineers has already run stress tests on the soil, modeled water drainage, and walked through every room on paper. They argued about where walls should go. They drew and erased three versions of the staircase. None of that work is visible in the final house — but every decision the house survives over the next fifty years was made in those early arguments.

Software works the same way. The methodologies and architectures that feel like overhead at the start — TDD, BDD, DDD, Hexagonal — are the stress tests and drainage plans. They are the conversations you have with your future self and your future team. A notes API seems trivial. But a notes API built without structure becomes the codebase no one wants to touch in year two.

This guide builds a complete, production-ready Notes API in Go. It covers registration, login with JWT, secure note management, a REST API with Gin, a gRPC endpoint, Redis caching, PostgreSQL persistence, Docker for local development, and a full testing suite with TDD and BDD scenarios. Every file. Every command. Nothing left as an exercise.


What We Are Building

A notes management system with two transport layers, a security boundary, and real persistence. Users register with an email and password. They log in and receive a JWT token. With that token, they create, read, update, and delete notes. The same core logic serves both REST clients and gRPC clients.

Technology map:

ConcernTool
HTTP RESTgin-gonic/gin
gRPC transportgoogle.golang.org/grpc + protobuf
PostgreSQL accessjackc/pgx/v5
Redis cachinggo-redis/redis/v9
Authenticationgolang-jwt/jwt/v5 + bcrypt
Dependency injectiongo.uber.org/fx
Structured logginggo.uber.org/zap
UUIDsgoogle/uuid
Validationgo-playground/validator/v10
DB migrationspressly/goose/v3
BDD scenarioscucumber/godog
ContainersDocker + docker-compose

Domain scope: two aggregates — User and Note. A User owns zero or more Notes. A Note has a title, body, and optional tags. All identifiers are UUIDs.

Architecture: Hexagonal (Ports and Adapters) with Domain-Driven Design layering. The domain knows nothing about databases, HTTP, or gRPC. The application layer orchestrates use cases. The adapter layer connects the world.


The Three Methodologies and How They Fit Together

A lot of guides treat BDD, TDD, and DDD as separate topics — three different courses, three different books. In practice, they form one cohesive way of working, each layer reinforcing the others.

Domain-Driven Design: putting the problem first

DDD (Domain-Driven Design) answers the question: what is the system about? Eric Evans introduced the concept in his 2003 book, and the core insight has not aged: the code should speak the language of the business. Not user_note_entity_record, but Note. Not handle_creation_request, but Create. The vocabulary you use in code should be the same vocabulary a product manager or a domain expert uses in a meeting — Evans called this the Ubiquitous Language.

DDD also introduces the idea of Bounded Contexts — explicit boundaries around parts of the system where a particular language and model apply. In this project, we have a simple context: notes belong to users. In a larger system, you might have separate contexts for billing, notifications, and content — each with its own model of what a “user” means, and explicit translations between them.

The building blocks DDD gives you — entities, value objects, aggregates, repositories, domain services — are not abstract patterns to sprinkle on code. They are tools for expressing precisely why a rule exists. An Email value object does not exist because it is elegant. It exists because the business rule “emails must be valid and normalized” belongs to the domain model, not to a validator library or a database constraint.

TDD (Test-Driven Development) answers the question: does the system do what I think it does? Kent Beck formalized TDD with the Red-Green-Refactor cycle: write a failing test (red), write the minimum code to pass it (green), then clean the code without breaking the test (refactor). The tests are not an afterthought — they are the specification that drove the code into existence. In this guide, domain entities and application use cases are written test-first.

The psychological effect of TDD is underrated. When you write the test before the code, you are forced to think about the interface — what does the caller need? — before you think about the implementation. This produces simpler, more usable interfaces than designing implementation-first and then writing tests to match. It also produces the minimum code that satisfies the requirement, nothing more.

TDD works at multiple levels. Unit tests verify individual functions and methods in isolation. Integration tests verify that components work together. I’ve seen teams apply TDD at the unit level and skip integration tests entirely — and I’ve seen the opposite. Neither extreme is correct. The domain and application layers deserve unit tests because the business rules are there. The adapters deserve integration tests because the SQL and Redis commands are there.

Behavior-Driven Development: the language of outcomes

BDD (Behavior-Driven Development) answers the question: does the system behave the way the user expects? Dan North introduced it as a refinement of TDD — specifically to address the question of what to test and how to name tests so that non-technical stakeholders could read them. The Gherkin language (Given, When, Then) came out of that work.

A BDD scenario is a conversation artifact before it is a test artifact. The practice of writing scenarios together — a developer, a tester, and a business analyst — is called the Three Amigos technique. The three perspectives catch different gaps: the developer sees technical edge cases, the tester sees failure modes, the business analyst sees missing requirements. Writing scenarios before any code starts the conversation at the right time.

The connection is layered: DDD tells you what concepts to model, TDD tells you those concepts work correctly in isolation, and BDD confirms the entire system delivers the behavior a user would observe. One does not replace the others.


Hexagonal Architecture: The Blueprint

Alistair Cockburn described Hexagonal Architecture in 2005 with a deceptively simple motivation: allow an application to work equally well when run by users, by automated tests, or by other programs. To achieve that, the application must be completely ignorant of how it is being driven and where it stores its data. The word “hexagonal” is not about six sides having special meaning \u2014 it is a visual metaphor for the idea that many actors can interact with the system through different ports, each equally valid, none privileged.

The architecture is also called Ports and Adapters, which is the more descriptive name. A port is an interface \u2014 a contract defined in the core of the system. A primary port (input port) is how the outside world initiates an action: a REST request, a gRPC call, a scheduled job, a CLI command. A secondary port (output port) is how the core delegates to the outside world: storing data, sending emails, publishing events. The core never calls infrastructure directly; it calls port interfaces and lets the infrastructure adapters implement them.

This architecture is sometimes confused with Clean Architecture (Robert Martin, 2012) and Onion Architecture (Jeffrey Palermo, 2008). All three share the same dependency rule: dependencies point inward. The differences are in how many layers they define. Hexagonal is the leanest: just core and adapters. Clean Architecture adds use-case and interface-adapter rings. For most backend services, hexagonal is expressive enough without the overhead of more rings.

The practical consequence you feel every day: your domain and application code has zero imports from pgx, gin, redis, or any infrastructure package. Go enforces this mechanically \u2014 if you accidentally import github.com/gin-gonic/gin from inside internal/domain, the build fails. The architecture is not just a convention; it is a constraint the compiler upholds.

Hexagonal Architecture (also known as Ports and Adapters) has one rule: the core of your system \u2014 the domain and the application use cases \u2014 must not depend on anything external. No database libraries. No HTTP frameworks. No caches. The core defines interfaces (ports) that express what it needs. Everything else (adapters) implements those interfaces.

The result: your business logic can be tested without starting a database. Your application can swap PostgreSQL for MongoDB without touching a single use case. Your gRPC adapter and your HTTP adapter both call the same service interface.

The project will have this folder structure. Create it slowly and deliberately — understanding why each folder exists matters more than just making the commands run.

notes-api/
├── cmd/
│   └── server/
│       └── main.go                  ← wires everything with FX, starts the server
├── db/
│   └── migrations/
│       ├── 00001_create_users.sql
│       └── 00002_create_notes.sql
├── features/                        ← Gherkin BDD scenarios
│   ├── auth.feature
│   └── notes.feature
├── internal/
│   ├── domain/                      ← entities, value objects, errors, interfaces
│   │   ├── user.go
│   │   ├── user_test.go
│   │   ├── note.go
│   │   ├── note_test.go
│   │   └── errors.go
│   ├── ports/
│   │   ├── input/                   ← what the outside world calls
│   │   │   ├── auth_service.go
│   │   │   └── note_service.go
│   │   └── output/                  ← what the domain needs from infrastructure
│   │       ├── user_repository.go
│   │       ├── note_repository.go
│   │       └── cache.go
│   ├── application/                 ← use cases — orchestrates domain + ports
│   │   ├── auth_service.go
│   │   ├── auth_service_test.go
│   │   ├── note_service.go
│   │   └── note_service_test.go
│   └── adapters/
│       ├── http/                    ← Gin handlers, middleware, DTOs
│       │   ├── dto.go
│       │   ├── auth_handler.go
│       │   ├── note_handler.go
│       │   └── middleware.go
│       ├── grpc/                    ← gRPC server adapter
│       │   └── note_server.go
│       ├── postgres/                ← pgx repository implementations
│       │   ├── user_repository.go
│       │   └── note_repository.go
│       └── redis/                   ← Redis cache implementation
│           └── note_cache.go
├── proto/
│   └── notes/
│       └── v1/
│           └── notes.proto
├── gen/
│   └── notes/
│       └── v1/                      ← generated protobuf code
├── docker-compose.yml
├── Dockerfile
├── .env.example
└── go.mod

The dependency rule flows inward: adapters depend on ports, application depends on ports and domain, domain depends on nothing external. This is not just pattern — it is an enforced boundary enforced by Go’s import system.


Step 1: Bootstrap the Project

Start here. Not with a file editor — with a terminal. Every command below is intentional.

Initialize the repository

mkdir notes-api
cd notes-api
git init
echo "# Notes API" > README.md
git add README.md
git commit -m "chore: initialize repository"

Create the Go module

Use a module path that matches your GitHub username or organization. If you are following along, replace yourusername with your actual GitHub handle.

go mod init github.com/yourusername/notes-api

Create the folder structure

Run these commands to build the full tree at once:

mkdir -p cmd/server
mkdir -p db/migrations
mkdir -p features
mkdir -p internal/domain
mkdir -p internal/ports/input
mkdir -p internal/ports/output
mkdir -p internal/application
mkdir -p internal/adapters/http
mkdir -p internal/adapters/grpc
mkdir -p internal/adapters/postgres
mkdir -p internal/adapters/redis
mkdir -p proto/notes/v1
mkdir -p gen/notes/v1

Install all dependencies upfront

Seeing the full dependency list before writing code helps you understand the shape of the project before you are inside it.

go get github.com/gin-gonic/gin@v1.10.0
go get github.com/jackc/pgx/v5
go get github.com/go-redis/redis/v9
go get google.golang.org/grpc
go get google.golang.org/protobuf
go get github.com/golang-jwt/jwt/v5
go get golang.org/x/crypto
go get go.uber.org/fx
go get go.uber.org/zap
go get github.com/google/uuid
go get github.com/go-playground/validator/v10
go get github.com/pressly/goose/v3
go get github.com/cucumber/godog
go mod tidy

Create the .env.example file

Never hardcode secrets. Create this file now so every developer knows what environment variables are required.

# .env.example
DATABASE_URL=postgres://notes_user:notes_pass@localhost:5432/notes_db?sslmode=disable
REDIS_ADDR=localhost:6379
REDIS_PASSWORD=
JWT_SECRET=your-very-long-random-secret-change-this-in-production
PORT=8080
GRPC_PORT=50051

Copy it to .env for local development:

cp .env.example .env

Add .env to .gitignore — you never commit secrets:

echo ".env" >> .gitignore
echo "gen/" >> .gitignore
git add .
git commit -m "chore: project structure, dependencies, and env configuration"

Step 2: The Domain Layer

The domain is the heart of the system. It has no imports from external packages — only the Go standard library. No pgx, no gin, no redis. This is the boundary that hexagonal architecture enforces.

What lives in the domain

DDD gives you a precise vocabulary for classifying things in the domain. Understanding these distinctions is more important than remembering their names — they tell you where a rule belongs.

Entities are objects that have a continuous identity over time. A User is an entity: the same user can change their email, change their password, accumulate notes — but they remain the same user. What makes them the same? Their identifier. You compare entities by ID, not by field values. If two User structs have the same ID, they represent the same user regardless of whether their other fields match. This matters in persistence: when you update a user in the database, you find them by ID and overwrite fields — the identity persists.

Value objects are objects defined entirely by their attributes. An Email value has no identity of its own — it is just a value. Two email values with the same string are identical. You compare value objects by value, not by reference. The key property of a value object is immutability: once created, its value never changes. If you want a different email, you create a new Email value. This makes value objects safe to share and easy to reason about — there are no surprise mutations.

The deeper point: value objects encapsulate business rules as type guarantees. A raw string for an email could be anything. An Email value object carries the proof that the value passed validation at creation time. Any function that accepts Email does not need to validate it again. This is what Evans calls making illegal states unrepresentable.

Aggregate roots are entities that own a cluster of related objects and enforce the invariants across all of them. A User is an aggregate root that owns a set of Notes. The rule: the outside world may only interact with a Note through its User — you never fetch a Note by ID without first knowing which User owns it. This is how authorization and ownership are enforced at the model level, not at the controller level. In this project, the ownership check happens in the application service — GetByID always verifies note.UserID == userID before returning.

Domain errors are the domain’s vocabulary of failure. The domain does not throw HTTP 404 — it says ErrNoteNotFound. The HTTP adapter translates that into a 404. The gRPC adapter translates it into codes.NotFound. The domain speaks its own language; the adapters translate.

errors.go

Sentinel errors are package-level variables that represent specific failure conditions. The standard Go pattern for them is errors.New("description"). They work with errors.Is(), which walks the error chain — so if an error is wrapped with fmt.Errorf("context: %w", err), errors.Is(err, ErrNoteNotFound) still returns true.

Choosing good error names is part of designing the domain language. ErrUnauthorized is more expressive than ErrForbidden because it communicates the business concept (you are not authorized to perform this action) rather than the HTTP concept. The adapters convert this to whatever status code the protocol requires — but the domain remains protocol-agnostic.

Define the sentinel errors your domain can produce. These errors travel up from the domain layer through the application layer to the adapters, which translate them into HTTP status codes or gRPC status codes. The domain does not know about HTTP. It just knows that something went wrong.

// internal/domain/errors.go
package domain

import "errors"

var (
	ErrUserNotFound      = errors.New("user not found")
	ErrEmailAlreadyTaken = errors.New("email already taken")
	ErrInvalidEmail      = errors.New("invalid email format")
	ErrEmptyPassword     = errors.New("password cannot be empty")
	ErrPasswordTooShort  = errors.New("password must be at least 8 characters")
	ErrNoteNotFound      = errors.New("note not found")
	ErrEmptyTitle        = errors.New("note title cannot be empty")
	ErrTitleTooLong      = errors.New("note title cannot exceed 200 characters")
	ErrUnauthorized      = errors.New("access denied")
)

user.go — The User entity

The User entity owns its own validation. The Email type is a value object — it is not just a string, it carries the guarantee that its value is well-formed. This distinction matters at scale: you never have to wonder whether an email in your system is valid because the type system enforces it.

Notice that NewEmail normalizes the input by converting to lowercase and trimming whitespace. This is not just convenient — it is a business rule. The domain says “emails are case-insensitive and whitespace-free.” Encoding that rule in the constructor means it applies everywhere, automatically, without relying on anyone remembering to call .ToLower() before saving.

The NewUser constructor is a factory function. It is the only way to create a valid User — the struct fields are unexported enough that a zero-value User{} cannot accidentally enter the system without going through the factory. In Go, you cannot make struct fields truly private to prevent construction (unlike some languages), but the convention of using factory functions is still the right approach: it centralizes the invariant enforcement in one place.

// internal/domain/user.go
package domain

import (
	"strings"
	"time"

	"github.com/google/uuid"
)

// Email is a value object. It can only be created through NewEmail,
// which validates the format. A variable of type Email is always valid.
type Email struct {
	value string
}

func NewEmail(raw string) (Email, error) {
	normalized := strings.ToLower(strings.TrimSpace(raw))
	if normalized == "" || !strings.Contains(normalized, "@") || !strings.Contains(normalized, ".") {
		return Email{}, ErrInvalidEmail
	}
	return Email{value: normalized}, nil
}

func (e Email) String() string { return e.value }

// User is an aggregate root. It owns the invariant that an email is valid
// and that the password hash is not empty.
type User struct {
	ID           uuid.UUID
	Email        Email
	PasswordHash string
	CreatedAt    time.Time
}

func NewUser(email Email, passwordHash string) (User, error) {
	if passwordHash == "" {
		return User{}, ErrEmptyPassword
	}
	return User{
		ID:           uuid.New(),
		Email:        email,
		PasswordHash: passwordHash,
		CreatedAt:    time.Now().UTC(),
	}, nil
}

user_test.go — Test-first discipline

Write these tests before writing the domain code if you want pure TDD. The tests below should pass against the implementation above. They document the rules of the domain in executable form.

// internal/domain/user_test.go
package domain_test

import (
	"testing"

	"github.com/yourusername/notes-api/internal/domain"
)

func TestNewEmail_ValidEmail(t *testing.T) {
	email, err := domain.NewEmail("User@Example.COM")
	if err != nil {
		t.Fatalf("expected no error, got %v", err)
	}
	// Value objects normalize their input — uppercase becomes lowercase
	if email.String() != "user@example.com" {
		t.Errorf("expected normalized email, got %q", email.String())
	}
}

func TestNewEmail_EmptyEmail_ReturnsError(t *testing.T) {
	_, err := domain.NewEmail("")
	if err != domain.ErrInvalidEmail {
		t.Errorf("expected ErrInvalidEmail, got %v", err)
	}
}

func TestNewEmail_MissingAt_ReturnsError(t *testing.T) {
	_, err := domain.NewEmail("notanemail.com")
	if err != domain.ErrInvalidEmail {
		t.Errorf("expected ErrInvalidEmail, got %v", err)
	}
}

func TestNewUser_EmptyHash_ReturnsError(t *testing.T) {
	email, _ := domain.NewEmail("test@example.com")
	_, err := domain.NewUser(email, "")
	if err != domain.ErrEmptyPassword {
		t.Errorf("expected ErrEmptyPassword, got %v", err)
	}
}

func TestNewUser_ValidInput_CreatesUser(t *testing.T) {
	email, _ := domain.NewEmail("test@example.com")
	user, err := domain.NewUser(email, "$2a$10$hashedvalue")
	if err != nil {
		t.Fatalf("expected no error, got %v", err)
	}
	if user.ID.String() == "" {
		t.Error("expected UUID to be set")
	}
}

note.go — The Note entity

The Note entity belongs to a User. The UserID field is the ownership reference. Notice that Note does not hold a pointer to User — aggregate roots reference each other by ID, not by object. This keeps aggregates independent and prevents cascading loads.

This is one of the most important DDD rules in practice. If Note held a *User pointer, loading a note would require loading the user too. And if User held a slice of *Note, loading a user would require loading all their notes. You end up with cascading eager loads that destroy performance and tight coupling that makes the model inflexible. ID references give you the boundary without sacrificing the relationship.

The Update method is a domain operation — not just a setter. It applies all three fields together and updates UpdatedAt atomically. A caller cannot update the title without also providing the body and tags. This prevents partial updates that leave the note in an inconsistent state. If your business rules were more complex — say, “a note with the tag archived cannot have its title changed” — that rule would live inside this method, enforced by the domain model, not by a handler or a validator.

// internal/domain/note.go
package domain

import (
	"strings"
	"time"

	"github.com/google/uuid"
)

// Title is a value object for note titles.
// Creating one validates the business rules: not empty, not too long.
type Title struct {
	value string
}

func NewTitle(raw string) (Title, error) {
	trimmed := strings.TrimSpace(raw)
	if trimmed == "" {
		return Title{}, ErrEmptyTitle
	}
	if len(trimmed) > 200 {
		return Title{}, ErrTitleTooLong
	}
	return Title{value: trimmed}, nil
}

func (t Title) String() string { return t.value }

// Note is an aggregate root owned by a User.
type Note struct {
	ID        uuid.UUID
	Title     Title
	Body      string
	Tags      []string
	UserID    uuid.UUID
	CreatedAt time.Time
	UpdatedAt time.Time
}

func NewNote(title Title, body string, tags []string, userID uuid.UUID) Note {
	now := time.Now().UTC()
	return Note{
		ID:        uuid.New(),
		Title:     title,
		Body:      body,
		Tags:      tags,
		UserID:    userID,
		CreatedAt: now,
		UpdatedAt: now,
	}
}

func (n *Note) Update(title Title, body string, tags []string) {
	n.Title = title
	n.Body = body
	n.Tags = tags
	n.UpdatedAt = time.Now().UTC()
}

note_test.go

// internal/domain/note_test.go
package domain_test

import (
	"testing"

	"github.com/google/uuid"
	"github.com/yourusername/notes-api/internal/domain"
)

func TestNewTitle_EmptyString_ReturnsError(t *testing.T) {
	_, err := domain.NewTitle("   ")
	if err != domain.ErrEmptyTitle {
		t.Errorf("expected ErrEmptyTitle, got %v", err)
	}
}

func TestNewTitle_TooLong_ReturnsError(t *testing.T) {
	long := make([]byte, 201)
	for i := range long {
		long[i] = 'a'
	}
	_, err := domain.NewTitle(string(long))
	if err != domain.ErrTitleTooLong {
		t.Errorf("expected ErrTitleTooLong, got %v", err)
	}
}

func TestNewTitle_ValidTitle_TrimsWhitespace(t *testing.T) {
	title, err := domain.NewTitle("  My Note  ")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if title.String() != "My Note" {
		t.Errorf("expected trimmed title, got %q", title.String())
	}
}

func TestNewNote_AssignsUUIDAndTimestamps(t *testing.T) {
	title, _ := domain.NewTitle("Test Note")
	userID := uuid.New()
	note := domain.NewNote(title, "some body", nil, userID)

	if note.ID == uuid.Nil {
		t.Error("expected non-nil UUID")
	}
	if note.CreatedAt.IsZero() {
		t.Error("expected CreatedAt to be set")
	}
	if note.UserID != userID {
		t.Error("expected UserID to match")
	}
}

Run the domain tests:

go test ./internal/domain/... -v

All tests should pass. The domain is complete and verified. Now commit:

git add .
git commit -m "feat(domain): user and note entities with value objects and tests"

Step 3: Ports — The Contracts

Ports are Go interfaces. They live in internal/ports/. The domain layer and the application layer depend on these interfaces, never on concrete implementations. The adapters implement them.

Why interfaces are the right abstraction here

This is the Dependency Inversion Principle (DIP) \u2014 the D in SOLID. It states that high-level modules should not depend on low-level modules; both should depend on abstractions. The high-level module here is your application use case. The low-level module is PostgreSQL. The abstraction is the repository interface.

Without DIP, your application service imports pgx and calls pool.QueryRow directly. The application is now permanently coupled to PostgreSQL. You cannot test it without a database. You cannot swap databases without rewriting the service. You cannot run the same test suite in CI without a Postgres container.

With DIP, the application service imports output.NoteRepository and calls repo.FindByID. The interface is defined in the inner layer \u2014 the application’s terms, not PostgreSQL’s terms. The postgres adapter implements that interface in the outer layer. The direction of the dependency is inverted: instead of the application depending on the database, the database adapter depends on the interface the application defined.

This is also called Inversion of Control (IoC). Control over which database gets used is removed from the application service and handed to the composition root \u2014 the main.go that wires everything together. The service never calls pgxpool.New. It never knows how the repository works. It just knows the interface is satisfied.

Primary ports vs secondary ports

It helps to think about the two directions separately.

Input ports (internal/ports/input/) are the primary ports \u2014 they describe what actions the system can perform. A REST request, a gRPC call, and a CLI command all call the same input port interface. The interface is defined in the terms of the use case, not the transport. AuthService.Register does not mention HTTP methods or JSON. It takes a RegisterInput struct and returns a domain.User. The transport adapter is responsible for reading the HTTP request and building that struct.

Output ports (internal/ports/output/) are the secondary ports \u2014 they describe what the system needs from the world. A repository, a cache, an email sender, a file store. Each is expressed as an interface that speaks domain language. NoteRepository.Save takes a domain.Note, not a database row. The adapter is responsible for the translation.

This separation keeps the domain and application layers honest. Neither layer ever mentions a network protocol or a storage driver. Their entire surface area is Go types that belong to the domain.

Think of ports as the electrical outlet standard. Your appliance (application) plugs into the standard (port). Any outlet that matches the standard (adapter) works \u2014 whether it is a real wall socket or a travel adapter.

Output ports (what the application needs from infrastructure)

// internal/ports/output/user_repository.go
package output

import (
	"context"

	"github.com/google/uuid"
	"github.com/yourusername/notes-api/internal/domain"
)

type UserRepository interface {
	Save(ctx context.Context, user domain.User) error
	FindByEmail(ctx context.Context, email domain.Email) (domain.User, error)
	FindByID(ctx context.Context, id uuid.UUID) (domain.User, error)
}
// internal/ports/output/note_repository.go
package output

import (
	"context"

	"github.com/google/uuid"
	"github.com/yourusername/notes-api/internal/domain"
)

type NoteRepository interface {
	Save(ctx context.Context, note domain.Note) error
	FindByID(ctx context.Context, id uuid.UUID) (domain.Note, error)
	FindAllByUserID(ctx context.Context, userID uuid.UUID) ([]domain.Note, error)
	Update(ctx context.Context, note domain.Note) error
	Delete(ctx context.Context, id uuid.UUID) error
}
// internal/ports/output/cache.go
package output

import (
	"context"
	"time"

	"github.com/google/uuid"
	"github.com/yourusername/notes-api/internal/domain"
)

type NoteCache interface {
	GetNote(ctx context.Context, id uuid.UUID) (domain.Note, error)
	SetNote(ctx context.Context, note domain.Note, ttl time.Duration) error
	InvalidateNote(ctx context.Context, id uuid.UUID) error
	InvalidateUserNotes(ctx context.Context, userID uuid.UUID) error
}

Input ports (what the outside world can call)

// internal/ports/input/auth_service.go
package input

import (
	"context"

	"github.com/yourusername/notes-api/internal/domain"
)

type RegisterInput struct {
	Email    string
	Password string
}

type LoginInput struct {
	Email    string
	Password string
}

type AuthTokens struct {
	AccessToken string
}

type AuthService interface {
	Register(ctx context.Context, input RegisterInput) (domain.User, error)
	Login(ctx context.Context, input LoginInput) (AuthTokens, error)
}
// internal/ports/input/note_service.go
package input

import (
	"context"

	"github.com/google/uuid"
	"github.com/yourusername/notes-api/internal/domain"
)

type CreateNoteInput struct {
	Title  string
	Body   string
	Tags   []string
	UserID uuid.UUID
}

type UpdateNoteInput struct {
	Title string
	Body  string
	Tags  []string
}

type NoteService interface {
	Create(ctx context.Context, input CreateNoteInput) (domain.Note, error)
	GetByID(ctx context.Context, id uuid.UUID, userID uuid.UUID) (domain.Note, error)
	ListByUser(ctx context.Context, userID uuid.UUID) ([]domain.Note, error)
	Update(ctx context.Context, id uuid.UUID, userID uuid.UUID, input UpdateNoteInput) (domain.Note, error)
	Delete(ctx context.Context, id uuid.UUID, userID uuid.UUID) error
}

Commit:

git add .
git commit -m "feat(ports): input and output port interfaces"

Step 4: Application Layer with TDD

The application layer implements the input ports. Each use case is a function that coordinates: validate input, call the repository, apply domain logic, return the result. The application layer tests prove that the use cases work correctly — with mock repositories, no real database required.

What an application service is and is not

In DDD, there are two kinds of services you will encounter: application services and domain services. Getting them confused is one of the most common mistakes in codebases that try to apply DDD.

A domain service encapsulates domain logic that does not naturally fit inside a single entity or value object. For example, a TransferFunds operation in a banking domain involves two accounts — it belongs in a domain service because it is domain logic that spans two aggregates. Domain services live in internal/domain/. They have no infrastructure dependencies.

An application service orchestrates a use case. It does not contain domain logic. It calls domain objects and domain services. It calls repository ports to load and persist data. It coordinates the sequence of operations required to fulfill a request. Application services live in internal/application/. They depend on ports, never on infrastructure.

The common mistake: putting domain logic in application services because it is convenient. “I need to check if the email is already taken, let me put that in the service.” That logic belongs in the domain — at the very least as an explicit invariant on the entity. Application services are coordinators, not decision-makers.

A practical test: if you cannot unit-test an application service by mocking its repositories, it is doing too much. If the service needs to run real SQL to decide whether a business rule applies, the rule is in the wrong layer.

The Red-Green-Refactor cycle in practice

TDD on the application layer looks like this. You start with the test: write TestRegister_DuplicateEmail_ReturnsError before writing the Register function. The test fails because Register does not exist yet — that is the red state. You write the minimum implementation to make it pass — that is the green state. You then look at the code and ask: is there duplication? Are the names clear? Can I simplify the logic? That is the refactor phase.

The discipline is resisting the temptation to skip ahead. When you are in the red phase, you write only enough test to fail. When you are in the green phase, you write only enough code to pass. This keeps the feedback loop tight and the commits small. A commit that says “feat: register returns error for duplicate email” is more useful than a commit that says “feat: implement auth service.”

auth_service.go

The auth service handles registration and login. It uses bcrypt for password hashing and signs JWT tokens. Notice the service depends on output.UserRepository and a JWTSecret string — both are injected, not created here.

Why bcrypt? Passwords must never be stored in plain text or with reversible encryption. bcrypt is a deliberately slow hashing algorithm: it applies a configurable cost factor that controls how many iterations of the underlying Blowfish cipher run. bcrypt.DefaultCost is 10 as of this writing — meaning 2¹⁰ (1024) iterations. The slowness is intentional. An attacker with the database dump cannot hash millions of guesses per second because each guess takes milliseconds. When hardware improves, you increase the cost factor on next login. No other commonly available option gives you that adaptive property.

Why JWT? JSON Web Tokens are stateless: all the information the server needs to identify the caller is encoded in the token itself. The server does not need to look up a session in a database on every request — it only validates the cryptographic signature. This makes horizontal scaling trivial: any server instance with the same JWT_SECRET can validate any token, with no shared session store required.

A JWT has three parts: a header (algorithm), a payload (claims), and a signature. The payload includes sub (the user’s UUID), exp (expiry time), and iat (issued-at time). The server signs the payload with the secret key using HMAC-SHA256. If an attacker tampers with the payload, the signature check fails. The secret key must be kept secret — anyone who knows it can forge valid tokens for any user.

A critical security decision in the login function: when the email does not exist, the service returns ErrUnauthorized — the same error returned when the password is wrong. This prevents user enumeration: an attacker cannot tell by the error message whether the email is registered. Always return identical responses for “email not found” and “wrong password.”

// internal/application/auth_service.go
package application

import (
	"context"
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
	"golang.org/x/crypto/bcrypt"

	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/input"
	"github.com/yourusername/notes-api/internal/ports/output"
)

type authService struct {
	users     output.UserRepository
	jwtSecret string
}

func NewAuthService(users output.UserRepository, jwtSecret string) input.AuthService {
	return &authService{users: users, jwtSecret: jwtSecret}
}

func (s *authService) Register(ctx context.Context, in input.RegisterInput) (domain.User, error) {
	email, err := domain.NewEmail(in.Email)
	if err != nil {
		return domain.User{}, err
	}

	// Check uniqueness before hashing — fail fast
	_, findErr := s.users.FindByEmail(ctx, email)
	if findErr == nil {
		return domain.User{}, domain.ErrEmailAlreadyTaken
	}

	if len(in.Password) < 8 {
		return domain.User{}, domain.ErrPasswordTooShort
	}

	hash, err := bcrypt.GenerateFromPassword([]byte(in.Password), bcrypt.DefaultCost)
	if err != nil {
		return domain.User{}, fmt.Errorf("hashing password: %w", err)
	}

	user, err := domain.NewUser(email, string(hash))
	if err != nil {
		return domain.User{}, err
	}

	if err := s.users.Save(ctx, user); err != nil {
		return domain.User{}, fmt.Errorf("saving user: %w", err)
	}

	return user, nil
}

func (s *authService) Login(ctx context.Context, in input.LoginInput) (input.AuthTokens, error) {
	email, err := domain.NewEmail(in.Email)
	if err != nil {
		return input.AuthTokens{}, domain.ErrInvalidEmail
	}

	user, err := s.users.FindByEmail(ctx, email)
	if err != nil {
		// Do not leak whether the email exists — return a generic error
		return input.AuthTokens{}, domain.ErrUnauthorized
	}

	if err := bcrypt.CompareHashAndPassword([]byte(user.PasswordHash), []byte(in.Password)); err != nil {
		return input.AuthTokens{}, domain.ErrUnauthorized
	}

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"sub": user.ID.String(),
		"exp": time.Now().Add(24 * time.Hour).Unix(),
		"iat": time.Now().Unix(),
	})

	signed, err := token.SignedString([]byte(s.jwtSecret))
	if err != nil {
		return input.AuthTokens{}, fmt.Errorf("signing token: %w", err)
	}

	return input.AuthTokens{AccessToken: signed}, nil
}

auth_service_test.go

These tests drive a mock repository. No database is involved. The tests verify that the service applies the right business rules — not that SQL executes correctly; that is the adapter’s job.

// internal/application/auth_service_test.go
package application_test

import (
	"context"
	"errors"
	"testing"

	"github.com/google/uuid"
	"github.com/yourusername/notes-api/internal/application"
	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/input"
)

// mockUserRepo is a simple in-memory mock. No mocking framework needed.
type mockUserRepo struct {
	users  map[string]domain.User
	saveErr error
}

func newMockUserRepo() *mockUserRepo {
	return &mockUserRepo{users: make(map[string]domain.User)}
}

func (m *mockUserRepo) Save(_ context.Context, u domain.User) error {
	if m.saveErr != nil {
		return m.saveErr
	}
	m.users[u.Email.String()] = u
	return nil
}

func (m *mockUserRepo) FindByEmail(_ context.Context, e domain.Email) (domain.User, error) {
	u, ok := m.users[e.String()]
	if !ok {
		return domain.User{}, domain.ErrUserNotFound
	}
	return u, nil
}

func (m *mockUserRepo) FindByID(_ context.Context, id uuid.UUID) (domain.User, error) {
	for _, u := range m.users {
		if u.ID == id {
			return u, nil
		}
	}
	return domain.User{}, domain.ErrUserNotFound
}

func TestRegister_NewUser_Succeeds(t *testing.T) {
	repo := newMockUserRepo()
	svc := application.NewAuthService(repo, "test-secret")

	user, err := svc.Register(context.Background(), input.RegisterInput{
		Email:    "alice@example.com",
		Password: "securepass123",
	})

	if err != nil {
		t.Fatalf("expected success, got %v", err)
	}
	if user.ID == uuid.Nil {
		t.Error("expected UUID to be assigned")
	}
	if user.Email.String() != "alice@example.com" {
		t.Errorf("expected email alice@example.com, got %s", user.Email)
	}
}

func TestRegister_DuplicateEmail_ReturnsError(t *testing.T) {
	repo := newMockUserRepo()
	svc := application.NewAuthService(repo, "test-secret")

	svc.Register(context.Background(), input.RegisterInput{
		Email: "alice@example.com", Password: "securepass123",
	})

	_, err := svc.Register(context.Background(), input.RegisterInput{
		Email: "alice@example.com", Password: "anotherpass123",
	})

	if !errors.Is(err, domain.ErrEmailAlreadyTaken) {
		t.Errorf("expected ErrEmailAlreadyTaken, got %v", err)
	}
}

func TestRegister_ShortPassword_ReturnsError(t *testing.T) {
	svc := application.NewAuthService(newMockUserRepo(), "test-secret")
	_, err := svc.Register(context.Background(), input.RegisterInput{
		Email: "bob@example.com", Password: "short",
	})
	if !errors.Is(err, domain.ErrPasswordTooShort) {
		t.Errorf("expected ErrPasswordTooShort, got %v", err)
	}
}

func TestLogin_WrongPassword_ReturnsUnauthorized(t *testing.T) {
	repo := newMockUserRepo()
	svc := application.NewAuthService(repo, "test-secret")

	svc.Register(context.Background(), input.RegisterInput{
		Email: "carol@example.com", Password: "correctpassword",
	})

	_, err := svc.Login(context.Background(), input.LoginInput{
		Email: "carol@example.com", Password: "wrongpassword",
	})

	if !errors.Is(err, domain.ErrUnauthorized) {
		t.Errorf("expected ErrUnauthorized, got %v", err)
	}
}

func TestLogin_ValidCredentials_ReturnsToken(t *testing.T) {
	repo := newMockUserRepo()
	svc := application.NewAuthService(repo, "test-secret")

	svc.Register(context.Background(), input.RegisterInput{
		Email: "dave@example.com", Password: "validpassword",
	})

	tokens, err := svc.Login(context.Background(), input.LoginInput{
		Email: "dave@example.com", Password: "validpassword",
	})

	if err != nil {
		t.Fatalf("expected success, got %v", err)
	}
	if tokens.AccessToken == "" {
		t.Error("expected non-empty access token")
	}
}

note_service.go

The note service adds caching logic. On a cache hit, it skips the database entirely. On a cache miss, it loads from the database and warms the cache. On writes, it invalidates the relevant cache entries. This is the cache-aside pattern, and it lives in the application layer — not in the Redis adapter, and not in the PostgreSQL adapter.

The placement of the cache logic matters. If you put it in the Redis adapter, the adapter makes policy decisions about when to cache — but the adapter is supposed to be a mechanical translation, not a decision-maker. If you put it in the PostgreSQL adapter, you have a database adapter with a Redis dependency, which is a strange coupling. The application service is the right place: it knows the use case, it knows when data changes, it knows when the cache should be warm or cold.

Notice that cache errors are silently ignored with _ = s.cache.SetNote(...). This is deliberate. The cache is a performance optimization, not a source of truth. If Redis is down, the system should still serve data from PostgreSQL — slowly, but correctly. Returning an error when the cache fails to warm would break a perfectly functional operation for an infrastructure blip. Cache-aside is a best-effort strategy by design.

// internal/application/note_service.go
package application

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/google/uuid"
	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/input"
	"github.com/yourusername/notes-api/internal/ports/output"
)

const noteCacheTTL = 5 * time.Minute

type noteService struct {
	notes output.NoteRepository
	cache output.NoteCache
}

func NewNoteService(notes output.NoteRepository, cache output.NoteCache) input.NoteService {
	return &noteService{notes: notes, cache: cache}
}

func (s *noteService) Create(ctx context.Context, in input.CreateNoteInput) (domain.Note, error) {
	title, err := domain.NewTitle(in.Title)
	if err != nil {
		return domain.Note{}, err
	}

	note := domain.NewNote(title, in.Body, in.Tags, in.UserID)

	if err := s.notes.Save(ctx, note); err != nil {
		return domain.Note{}, fmt.Errorf("saving note: %w", err)
	}

	// Warm the single-item cache; ignore errors — cache is best-effort
	_ = s.cache.SetNote(ctx, note, noteCacheTTL)
	// Invalidate the user's list cache so the next ListByUser fetches fresh data
	_ = s.cache.InvalidateUserNotes(ctx, in.UserID)

	return note, nil
}

func (s *noteService) GetByID(ctx context.Context, id uuid.UUID, userID uuid.UUID) (domain.Note, error) {
	// Try cache first
	cached, err := s.cache.GetNote(ctx, id)
	if err == nil {
		// Enforce ownership even on cache hits
		if cached.UserID != userID {
			return domain.Note{}, domain.ErrUnauthorized
		}
		return cached, nil
	}

	note, err := s.notes.FindByID(ctx, id)
	if err != nil {
		return domain.Note{}, err
	}
	if note.UserID != userID {
		return domain.Note{}, domain.ErrUnauthorized
	}

	_ = s.cache.SetNote(ctx, note, noteCacheTTL)
	return note, nil
}

func (s *noteService) ListByUser(ctx context.Context, userID uuid.UUID) ([]domain.Note, error) {
	notes, err := s.notes.FindAllByUserID(ctx, userID)
	if err != nil {
		return nil, fmt.Errorf("listing notes: %w", err)
	}
	return notes, nil
}

func (s *noteService) Update(ctx context.Context, id uuid.UUID, userID uuid.UUID, in input.UpdateNoteInput) (domain.Note, error) {
	note, err := s.notes.FindByID(ctx, id)
	if err != nil {
		return domain.Note{}, err
	}
	if note.UserID != userID {
		return domain.Note{}, domain.ErrUnauthorized
	}

	title, err := domain.NewTitle(in.Title)
	if err != nil {
		return domain.Note{}, err
	}

	note.Update(title, in.Body, in.Tags)

	if err := s.notes.Update(ctx, note); err != nil {
		return domain.Note{}, fmt.Errorf("updating note: %w", err)
	}

	_ = s.cache.InvalidateNote(ctx, id)
	_ = s.cache.InvalidateUserNotes(ctx, userID)

	return note, nil
}

func (s *noteService) Delete(ctx context.Context, id uuid.UUID, userID uuid.UUID) error {
	note, err := s.notes.FindByID(ctx, id)
	if err != nil {
		if errors.Is(err, domain.ErrNoteNotFound) {
			return domain.ErrNoteNotFound
		}
		return fmt.Errorf("finding note to delete: %w", err)
	}
	if note.UserID != userID {
		return domain.ErrUnauthorized
	}

	if err := s.notes.Delete(ctx, id); err != nil {
		return fmt.Errorf("deleting note: %w", err)
	}

	_ = s.cache.InvalidateNote(ctx, id)
	_ = s.cache.InvalidateUserNotes(ctx, userID)

	return nil
}

note_service_test.go

// internal/application/note_service_test.go
package application_test

import (
	"context"
	"errors"
	"testing"
	"time"

	"github.com/google/uuid"
	"github.com/yourusername/notes-api/internal/application"
	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/input"
)

// mockNoteRepo stores notes in memory
type mockNoteRepo struct {
	notes map[uuid.UUID]domain.Note
}

func newMockNoteRepo() *mockNoteRepo {
	return &mockNoteRepo{notes: make(map[uuid.UUID]domain.Note)}
}

func (m *mockNoteRepo) Save(_ context.Context, n domain.Note) error {
	m.notes[n.ID] = n
	return nil
}
func (m *mockNoteRepo) FindByID(_ context.Context, id uuid.UUID) (domain.Note, error) {
	n, ok := m.notes[id]
	if !ok {
		return domain.Note{}, domain.ErrNoteNotFound
	}
	return n, nil
}
func (m *mockNoteRepo) FindAllByUserID(_ context.Context, uid uuid.UUID) ([]domain.Note, error) {
	var result []domain.Note
	for _, n := range m.notes {
		if n.UserID == uid {
			result = append(result, n)
		}
	}
	return result, nil
}
func (m *mockNoteRepo) Update(_ context.Context, n domain.Note) error {
	m.notes[n.ID] = n
	return nil
}
func (m *mockNoteRepo) Delete(_ context.Context, id uuid.UUID) error {
	delete(m.notes, id)
	return nil
}

// noopCache satisfies the NoteCache interface without doing anything
type noopCache struct{}

func (noopCache) GetNote(_ context.Context, _ uuid.UUID) (domain.Note, error) {
	return domain.Note{}, errors.New("cache miss")
}
func (noopCache) SetNote(_ context.Context, _ domain.Note, _ time.Duration) error { return nil }
func (noopCache) InvalidateNote(_ context.Context, _ uuid.UUID) error             { return nil }
func (noopCache) InvalidateUserNotes(_ context.Context, _ uuid.UUID) error        { return nil }

func TestCreateNote_ValidInput_Succeeds(t *testing.T) {
	svc := application.NewNoteService(newMockNoteRepo(), noopCache{})
	userID := uuid.New()

	note, err := svc.Create(context.Background(), input.CreateNoteInput{
		Title:  "My First Note",
		Body:   "Some content here",
		Tags:   []string{"work"},
		UserID: userID,
	})

	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if note.ID == uuid.Nil {
		t.Error("expected note ID to be set")
	}
	if note.UserID != userID {
		t.Error("expected UserID to match")
	}
}

func TestCreateNote_EmptyTitle_ReturnsError(t *testing.T) {
	svc := application.NewNoteService(newMockNoteRepo(), noopCache{})
	_, err := svc.Create(context.Background(), input.CreateNoteInput{
		Title:  "  ",
		Body:   "body",
		UserID: uuid.New(),
	})
	if !errors.Is(err, domain.ErrEmptyTitle) {
		t.Errorf("expected ErrEmptyTitle, got %v", err)
	}
}

func TestGetByID_WrongUser_ReturnsUnauthorized(t *testing.T) {
	repo := newMockNoteRepo()
	svc := application.NewNoteService(repo, noopCache{})
	ownerID := uuid.New()
	attackerID := uuid.New()

	note, _ := svc.Create(context.Background(), input.CreateNoteInput{
		Title: "Private Note", Body: "secret", UserID: ownerID,
	})

	_, err := svc.GetByID(context.Background(), note.ID, attackerID)
	if !errors.Is(err, domain.ErrUnauthorized) {
		t.Errorf("expected ErrUnauthorized, got %v", err)
	}
}

func TestDelete_WrongUser_ReturnsUnauthorized(t *testing.T) {
	repo := newMockNoteRepo()
	svc := application.NewNoteService(repo, noopCache{})
	ownerID := uuid.New()

	note, _ := svc.Create(context.Background(), input.CreateNoteInput{
		Title: "Note", Body: "body", UserID: ownerID,
	})

	err := svc.Delete(context.Background(), note.ID, uuid.New())
	if !errors.Is(err, domain.ErrUnauthorized) {
		t.Errorf("expected ErrUnauthorized, got %v", err)
	}
}

Run all application tests:

go test ./internal/... -v

Commit:

git add .
git commit -m "feat(application): auth and note services with TDD tests"

Step 5: Database Migrations

Migrations are SQL files that build the database schema. Never create tables manually — always through migrations. The tool goose manages the migration state and runs files in order.

The problem migrations solve

A database schema is shared state between your application and your storage engine. If two developers change the schema manually with different SQL clients, neither knows what the other did. If you deploy to production by SSHing in and running ALTER TABLE, the change is invisible to the rest of the team and impossible to rollback reliably.

Migrations are the answer: schema changes are tracked as versioned SQL files committed to git alongside the application code. The migration tool records which version each database instance is at in a goose_db_version table. When you deploy, the application runs pending migrations automatically. When you rollback, you run goose down.

This pattern is called database as code. The schema is no longer a configuration artifact managed out-of-band — it is a first-class part of the codebase with the same version control discipline as everything else.

Sequential numbering (00001_, 00002_) is intentional. goose applies migrations in lexicographic order. Using zero-padded numbers ensures alphabetical order matches creation order. If you use timestamps (also a valid goose format), you get collision-free numbering across parallel branches.

Always write the Down section. Down migrations are your rollback plan. In production, a bad deployment that corrupts the schema can be reversed with goose down if the Down SQL is correct. The discipline of writing Down migrations also forces you to think about whether a change is reversible. Dropping a column is not reversible without data loss — that is a signal that the migration needs a more careful strategy, such as a deprecation period.

Create the migration files:

-- db/migrations/00001_create_users.sql
-- +goose Up
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE users (
    id         UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    email      TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_users_email ON users(email);

-- +goose Down
DROP TABLE IF EXISTS users;
-- db/migrations/00002_create_notes.sql
-- +goose Up
CREATE TABLE notes (
    id         UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    title      TEXT NOT NULL,
    body       TEXT NOT NULL DEFAULT '',
    tags       TEXT[] NOT NULL DEFAULT '{}',
    user_id    UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_notes_user_id ON notes(user_id);

-- +goose Down
DROP TABLE IF EXISTS notes;

The -- +goose Up and -- +goose Down comments are how goose knows which SQL to run for each direction. The Down section is your rollback plan. Always write it.


Step 6: PostgreSQL Adapters

The PostgreSQL adapter implements the output ports using pgx/v5. It translates between domain types and database rows. The domain entity never touches a pgx type — that translation happens entirely inside this adapter.

The Repository pattern and the impedance mismatch problem

The Repository pattern creates an abstraction over persistence. From the perspective of the domain and application layers, the repository is a collection: you put things in, you get things out, by ID or by criteria. There is no SQL, no connection management, no type scanning. The repository interface is pure domain language.

Behind the interface, the adapter deals with the impedance mismatch — the fundamental difference between how object-oriented code structures data (objects, value types, inheritance) and how relational databases structure data (rows, columns, joins, NULLs). A domain.Note has a Title field of type Title (a value object). The database has a title column of type TEXT. The adapter bridges that gap in scanNote: it reads a string from the database and constructs a Title value object, propagating any validation errors.

Why pgx instead of an ORM? pgx/v5 is a direct PostgreSQL driver — not an ORM. It gives you full control over your SQL. ORMs abstract SQL away but they also make it easy to accidentally generate N+1 queries, load more data than you need, or write queries that look innocent but perform badly at scale. With pgx, the SQL is explicit and auditable. You know exactly what query runs on every operation.

Connection pooling with pgxpool.Pool is essential for production. HTTP servers receive many concurrent requests. If each request opens a dedicated database connection, you exhaust PostgreSQL’s connection limit quickly. A pool maintains a fixed set of connections that requests borrow and return. pgxpool.New creates a pool with sensible defaults — you can tune MaxConns, MinConns, and MaxConnIdleTime for your traffic profile.

postgres/user_repository.go

Before looking at the code, understand the contract: this struct must satisfy output.UserRepository exactly. The Go compiler will enforce this at compile time on the line var _ output.UserRepository = (*UserRepository)(nil). That is a static assertion — it produces a compile error if the interface is not fully implemented.

// internal/adapters/postgres/user_repository.go
package postgres

import (
	"context"
	"errors"

	"github.com/google/uuid"
	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"

	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/output"
)

// Compile-time interface check — if this fails, the code won't compile
var _ output.UserRepository = (*UserRepository)(nil)

type UserRepository struct {
	pool *pgxpool.Pool
}

func NewUserRepository(pool *pgxpool.Pool) *UserRepository {
	return &UserRepository{pool: pool}
}

func (r *UserRepository) Save(ctx context.Context, u domain.User) error {
	_, err := r.pool.Exec(ctx,
		`INSERT INTO users (id, email, password_hash, created_at)
		 VALUES ($1, $2, $3, $4)`,
		u.ID, u.Email.String(), u.PasswordHash, u.CreatedAt,
	)
	return err
}

func (r *UserRepository) FindByEmail(ctx context.Context, email domain.Email) (domain.User, error) {
	row := r.pool.QueryRow(ctx,
		`SELECT id, email, password_hash, created_at FROM users WHERE email = $1`,
		email.String(),
	)
	return scanUser(row)
}

func (r *UserRepository) FindByID(ctx context.Context, id uuid.UUID) (domain.User, error) {
	row := r.pool.QueryRow(ctx,
		`SELECT id, email, password_hash, created_at FROM users WHERE id = $1`,
		id,
	)
	return scanUser(row)
}

func scanUser(row pgx.Row) (domain.User, error) {
	var (
		id           uuid.UUID
		emailStr     string
		passwordHash string
		createdAt    interface{}
	)

	var u domain.User
	if err := row.Scan(&id, &emailStr, &passwordHash, &createdAt); err != nil {
		if errors.Is(err, pgx.ErrNoRows) {
			return domain.User{}, domain.ErrUserNotFound
		}
		return domain.User{}, err
	}

	email, err := domain.NewEmail(emailStr)
	if err != nil {
		return domain.User{}, err
	}

	u.ID = id
	u.Email = email
	u.PasswordHash = passwordHash
	return u, nil
}

postgres/note_repository.go

// internal/adapters/postgres/note_repository.go
package postgres

import (
	"context"
	"errors"
	"time"

	"github.com/google/uuid"
	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"

	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/output"
)

var _ output.NoteRepository = (*NoteRepository)(nil)

type NoteRepository struct {
	pool *pgxpool.Pool
}

func NewNoteRepository(pool *pgxpool.Pool) *NoteRepository {
	return &NoteRepository{pool: pool}
}

func (r *NoteRepository) Save(ctx context.Context, n domain.Note) error {
	_, err := r.pool.Exec(ctx,
		`INSERT INTO notes (id, title, body, tags, user_id, created_at, updated_at)
		 VALUES ($1, $2, $3, $4, $5, $6, $7)`,
		n.ID, n.Title.String(), n.Body, n.Tags, n.UserID, n.CreatedAt, n.UpdatedAt,
	)
	return err
}

func (r *NoteRepository) FindByID(ctx context.Context, id uuid.UUID) (domain.Note, error) {
	row := r.pool.QueryRow(ctx,
		`SELECT id, title, body, tags, user_id, created_at, updated_at
		 FROM notes WHERE id = $1`,
		id,
	)
	return scanNote(row)
}

func (r *NoteRepository) FindAllByUserID(ctx context.Context, userID uuid.UUID) ([]domain.Note, error) {
	rows, err := r.pool.Query(ctx,
		`SELECT id, title, body, tags, user_id, created_at, updated_at
		 FROM notes WHERE user_id = $1 ORDER BY created_at DESC`,
		userID,
	)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var notes []domain.Note
	for rows.Next() {
		note, err := scanNote(rows)
		if err != nil {
			return nil, err
		}
		notes = append(notes, note)
	}
	return notes, rows.Err()
}

func (r *NoteRepository) Update(ctx context.Context, n domain.Note) error {
	_, err := r.pool.Exec(ctx,
		`UPDATE notes SET title=$1, body=$2, tags=$3, updated_at=$4 WHERE id=$5`,
		n.Title.String(), n.Body, n.Tags, n.UpdatedAt, n.ID,
	)
	return err
}

func (r *NoteRepository) Delete(ctx context.Context, id uuid.UUID) error {
	_, err := r.pool.Exec(ctx, `DELETE FROM notes WHERE id=$1`, id)
	return err
}

type scannable interface {
	Scan(dest ...any) error
}

func scanNote(row scannable) (domain.Note, error) {
	var (
		id        uuid.UUID
		titleStr  string
		body      string
		tags      []string
		userID    uuid.UUID
		createdAt time.Time
		updatedAt time.Time
	)
	if err := row.Scan(&id, &titleStr, &body, &tags, &userID, &createdAt, &updatedAt); err != nil {
		if errors.Is(err, pgx.ErrNoRows) {
			return domain.Note{}, domain.ErrNoteNotFound
		}
		return domain.Note{}, err
	}

	title, err := domain.NewTitle(titleStr)
	if err != nil {
		return domain.Note{}, err
	}

	return domain.Note{
		ID:        id,
		Title:     title,
		Body:      body,
		Tags:      tags,
		UserID:    userID,
		CreatedAt: createdAt,
		UpdatedAt: updatedAt,
	}, nil
}

Step 7: Redis Cache Adapter

The Redis adapter implements output.NoteCache. It serializes domain.Note to JSON for storage. Notice the key naming scheme: note:{id} for single notes and user_notes:{userID} as a tag prefix for list invalidation.

Why Redis and what it actually does

Redis is an in-memory data store. “In memory” means reads and writes happen at memory speed — microseconds — rather than disk speed — milliseconds. For a PostgreSQL query that takes 5–20ms, a Redis hit at 0.1ms is 50–200x faster. At scale, that difference determines whether your API can handle 1,000 requests per second or 10,000.

But Redis is not a database replacement. It is a cache: data that is too expensive to compute or fetch on every request, but not so critical that losing it would break the system. If the Redis instance crashes and you restart it empty, the system falls back to PostgreSQL automatically — slower, but correct. That is the test for whether something belongs in a cache: can the system recover without the cache? If yes, it is a cache. If no, it is a database.

Caching strategies

There are three common strategies, and understanding when each applies matters:

Cache-aside (what this project uses): the application checks the cache, falls back to the database on miss, and writes to the cache after loading. The cache is populated lazily — only for data that is actually requested. Writes go to the database first, then the cache is invalidated. This is the safest strategy because the database is always the source of truth.

Write-through: writes go to the cache and the database simultaneously. The cache is always warm after a write. The cost is that every write takes twice as long (two round trips). Use this when reads are extremely frequent and you cannot afford cache misses after a write.

Write-behind (also called write-back): writes go to the cache immediately and are flushed to the database asynchronously. This maximizes write throughput at the cost of durability — if Redis crashes before the flush, the data is lost. Appropriate only for non-critical, high-volume writes like analytics events or activity logs.

TTL and the cache invalidation problem

A TTL (Time To Live) is the maximum time a cached value can be stale. In this project, noteCacheTTL = 5 * time.Minute. After 5 minutes, Redis automatically expires the key. Even if invalidation calls fail for any reason, the worst case is 5 minutes of stale data.

The most dangerous cache bug is forgetting to invalidate. A note is updated in the database, but the old version sits in Redis for 5 minutes. The user who just updated their note sees the old version on the next read — and thinks their update failed. They edit again. The cycle repeats. The cache has turned a successful write into an apparent failure.

The solution in this project: on every write (create, update, delete), invalidate the relevant cache keys before returning. The invalidation is best-effort (errors are ignored) but the logic is always executed.

// internal/adapters/redis/note_cache.go
package redis

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"time"

	goredis "github.com/go-redis/redis/v9"
	"github.com/google/uuid"

	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/output"
)

var _ output.NoteCache = (*NoteCache)(nil)

type NoteCache struct {
	client *goredis.Client
}

func NewNoteCache(client *goredis.Client) *NoteCache {
	return &NoteCache{client: client}
}

type cachedNote struct {
	ID        string    `json:"id"`
	Title     string    `json:"title"`
	Body      string    `json:"body"`
	Tags      []string  `json:"tags"`
	UserID    string    `json:"user_id"`
	CreatedAt time.Time `json:"created_at"`
	UpdatedAt time.Time `json:"updated_at"`
}

func noteKey(id uuid.UUID) string {
	return fmt.Sprintf("note:%s", id.String())
}

func userNotesKey(userID uuid.UUID) string {
	return fmt.Sprintf("user_notes:%s", userID.String())
}

func (c *NoteCache) GetNote(ctx context.Context, id uuid.UUID) (domain.Note, error) {
	val, err := c.client.Get(ctx, noteKey(id)).Result()
	if errors.Is(err, goredis.Nil) {
		return domain.Note{}, errors.New("cache miss")
	}
	if err != nil {
		return domain.Note{}, fmt.Errorf("redis get: %w", err)
	}

	var cn cachedNote
	if err := json.Unmarshal([]byte(val), &cn); err != nil {
		return domain.Note{}, fmt.Errorf("unmarshal cached note: %w", err)
	}

	title, err := domain.NewTitle(cn.Title)
	if err != nil {
		return domain.Note{}, err
	}

	noteID, _ := uuid.Parse(cn.ID)
	userID, _ := uuid.Parse(cn.UserID)

	return domain.Note{
		ID:        noteID,
		Title:     title,
		Body:      cn.Body,
		Tags:      cn.Tags,
		UserID:    userID,
		CreatedAt: cn.CreatedAt,
		UpdatedAt: cn.UpdatedAt,
	}, nil
}

func (c *NoteCache) SetNote(ctx context.Context, note domain.Note, ttl time.Duration) error {
	cn := cachedNote{
		ID:        note.ID.String(),
		Title:     note.Title.String(),
		Body:      note.Body,
		Tags:      note.Tags,
		UserID:    note.UserID.String(),
		CreatedAt: note.CreatedAt,
		UpdatedAt: note.UpdatedAt,
	}
	data, err := json.Marshal(cn)
	if err != nil {
		return fmt.Errorf("marshal note: %w", err)
	}
	return c.client.Set(ctx, noteKey(note.ID), data, ttl).Err()
}

func (c *NoteCache) InvalidateNote(ctx context.Context, id uuid.UUID) error {
	return c.client.Del(ctx, noteKey(id)).Err()
}

func (c *NoteCache) InvalidateUserNotes(ctx context.Context, userID uuid.UUID) error {
	return c.client.Del(ctx, userNotesKey(userID)).Err()
}

Step 8: The HTTP API with Gin

The Gin adapter translates HTTP requests into calls to the input port services, and service responses back into HTTP responses. It never accesses the database, the domain, or any infrastructure directly — only the service interfaces.

REST as a set of constraints, not a format

REST (Representational State Transfer) was described by Roy Fielding in his 2000 dissertation. It is not a specification — it is a set of architectural constraints. The ones that matter most for a backend API:

Statelessness: each request carries all the information the server needs to process it. There is no server-side session. The JWT in the Authorization header is the complete identity context. This is why horizontal scaling is trivial: any server replica can handle any request because no state is stored on the server between requests.

Uniform interface: standard HTTP methods have defined semantics. POST creates. GET reads. PUT or PATCH updates. DELETE removes. Following these conventions means any developer who has worked with HTTP can reason about your API immediately. Deviating from them — using POST /notes/delete instead of DELETE /notes/{id} — breaks the mental model without gaining anything.

HTTP status codes have meaning. Using the right status code is not pedantry — it is the API’s protocol-level vocabulary. A client that receives 409 Conflict knows the request was valid but conflicted with existing state. Use 201 Created for new resources, not 200 OK. Use 204 No Content for successful deletes. Use 401 Unauthorized for missing auth, and 403 Forbidden for authenticated-but-not-allowed. Some clients make routing decisions based on status codes alone — return the right one.

DTOs at the boundary

The HTTP handlers define CreateNoteRequest, UpdateNoteRequest, and similar structs. These are Data Transfer Objects (DTOs) — dumb containers that represent the shape of the API payload. They are deliberately separate from domain entities.

This separation matters when the API evolves. If you rename a JSON field from text to content, you change the DTO. The domain model is unaffected. If you add a field to the domain entity that should not be exposed to clients, you exclude it from the DTO. The API surface area and the domain model have different lifecycles and different audiences — coupling them forces you to compromise both.

Middleware as cross-cutting concerns

Authentication, logging, rate limiting, and CORS all apply to many routes but have nothing to do with any specific business operation. These are cross-cutting concerns — they cut across the application rather than belonging to one feature.

Gin handles them as middleware functions that run before (and optionally after) the handler. c.Next() passes control to the next function in the chain. c.Abort() stops the chain — used in auth middleware to return a 401 and prevent the handler from running at all. Middleware stacks compose: apply the JWT middleware to a router group and every route in that group is automatically protected without any extra code.

http/dto.go

DTOs (Data Transfer Objects) are the shapes of data at the HTTP boundary. They are separate from domain entities intentionally. The JSON field names belong to the API contract, not to the domain model.

// internal/adapters/http/dto.go
package http

import "github.com/google/uuid"

type RegisterRequest struct {
	Email    string `json:"email"    validate:"required,email"`
	Password string `json:"password" validate:"required,min=8"`
}

type LoginRequest struct {
	Email    string `json:"email"    validate:"required,email"`
	Password string `json:"password" validate:"required"`
}

type AuthResponse struct {
	AccessToken string `json:"access_token"`
}

type CreateNoteRequest struct {
	Title string   `json:"title" validate:"required,max=200"`
	Body  string   `json:"body"`
	Tags  []string `json:"tags"`
}

type UpdateNoteRequest struct {
	Title string   `json:"title" validate:"required,max=200"`
	Body  string   `json:"body"`
	Tags  []string `json:"tags"`
}

type NoteResponse struct {
	ID        uuid.UUID `json:"id"`
	Title     string    `json:"title"`
	Body      string    `json:"body"`
	Tags      []string  `json:"tags"`
	UserID    uuid.UUID `json:"user_id"`
	CreatedAt string    `json:"created_at"`
	UpdatedAt string    `json:"updated_at"`
}

type ErrorResponse struct {
	Error string `json:"error"`
}

http/middleware.go

The JWT middleware extracts the token from the Authorization header, validates it, and writes the user ID into the Gin context. Every protected route reads the user ID from the context — it never trusts the request body or query parameters for identity.

// internal/adapters/http/middleware.go
package http

import (
	"net/http"
	"strings"

	"github.com/gin-gonic/gin"
	"github.com/golang-jwt/jwt/v5"
	"github.com/google/uuid"
)

const ctxUserIDKey = "userID"

func JWTMiddleware(jwtSecret string) gin.HandlerFunc {
	return func(c *gin.Context) {
		authHeader := c.GetHeader("Authorization")
		if authHeader == "" {
			c.AbortWithStatusJSON(http.StatusUnauthorized, ErrorResponse{Error: "missing authorization header"})
			return
		}

		parts := strings.SplitN(authHeader, " ", 2)
		if len(parts) != 2 || strings.ToLower(parts[0]) != "bearer" {
			c.AbortWithStatusJSON(http.StatusUnauthorized, ErrorResponse{Error: "invalid authorization header format"})
			return
		}

		tokenStr := parts[1]
		token, err := jwt.Parse(tokenStr, func(t *jwt.Token) (interface{}, error) {
			if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
				return nil, jwt.ErrSignatureInvalid
			}
			return []byte(jwtSecret), nil
		})

		if err != nil || !token.Valid {
			c.AbortWithStatusJSON(http.StatusUnauthorized, ErrorResponse{Error: "invalid or expired token"})
			return
		}

		claims, ok := token.Claims.(jwt.MapClaims)
		if !ok {
			c.AbortWithStatusJSON(http.StatusUnauthorized, ErrorResponse{Error: "invalid token claims"})
			return
		}

		sub, ok := claims["sub"].(string)
		if !ok {
			c.AbortWithStatusJSON(http.StatusUnauthorized, ErrorResponse{Error: "invalid token subject"})
			return
		}

		userID, err := uuid.Parse(sub)
		if err != nil {
			c.AbortWithStatusJSON(http.StatusUnauthorized, ErrorResponse{Error: "invalid user ID in token"})
			return
		}

		c.Set(ctxUserIDKey, userID)
		c.Next()
	}
}

func getUserID(c *gin.Context) (uuid.UUID, bool) {
	val, exists := c.Get(ctxUserIDKey)
	if !exists {
		return uuid.Nil, false
	}
	id, ok := val.(uuid.UUID)
	return id, ok
}

http/auth_handler.go

// internal/adapters/http/auth_handler.go
package http

import (
	"errors"
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/go-playground/validator/v10"

	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/input"
)

type AuthHandler struct {
	auth      input.AuthService
	validator *validator.Validate
}

func NewAuthHandler(auth input.AuthService) *AuthHandler {
	return &AuthHandler{auth: auth, validator: validator.New()}
}

func (h *AuthHandler) Register(c *gin.Context) {
	var req RegisterRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: "invalid request body"})
		return
	}
	if err := h.validator.Struct(req); err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: err.Error()})
		return
	}

	user, err := h.auth.Register(c.Request.Context(), input.RegisterInput{
		Email:    req.Email,
		Password: req.Password,
	})
	if err != nil {
		status, msg := domainErrorToHTTP(err)
		c.JSON(status, ErrorResponse{Error: msg})
		return
	}

	c.JSON(http.StatusCreated, gin.H{"id": user.ID, "email": user.Email.String()})
}

func (h *AuthHandler) Login(c *gin.Context) {
	var req LoginRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: "invalid request body"})
		return
	}

	tokens, err := h.auth.Login(c.Request.Context(), input.LoginInput{
		Email:    req.Email,
		Password: req.Password,
	})
	if err != nil {
		status, msg := domainErrorToHTTP(err)
		c.JSON(status, ErrorResponse{Error: msg})
		return
	}

	c.JSON(http.StatusOK, AuthResponse{AccessToken: tokens.AccessToken})
}

func domainErrorToHTTP(err error) (int, string) {
	switch {
	case errors.Is(err, domain.ErrEmailAlreadyTaken):
		return http.StatusConflict, "email already registered"
	case errors.Is(err, domain.ErrInvalidEmail):
		return http.StatusBadRequest, "invalid email format"
	case errors.Is(err, domain.ErrPasswordTooShort):
		return http.StatusBadRequest, "password must be at least 8 characters"
	case errors.Is(err, domain.ErrUnauthorized):
		return http.StatusUnauthorized, "invalid credentials"
	case errors.Is(err, domain.ErrNoteNotFound):
		return http.StatusNotFound, "note not found"
	case errors.Is(err, domain.ErrEmptyTitle):
		return http.StatusBadRequest, "note title cannot be empty"
	case errors.Is(err, domain.ErrTitleTooLong):
		return http.StatusBadRequest, "note title is too long"
	default:
		return http.StatusInternalServerError, "internal server error"
	}
}

http/note_handler.go

// internal/adapters/http/note_handler.go
package http

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/go-playground/validator/v10"
	"github.com/google/uuid"

	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/input"
)

type NoteHandler struct {
	notes     input.NoteService
	validator *validator.Validate
}

func NewNoteHandler(notes input.NoteService) *NoteHandler {
	return &NoteHandler{notes: notes, validator: validator.New()}
}

func (h *NoteHandler) Create(c *gin.Context) {
	userID, ok := getUserID(c)
	if !ok {
		c.JSON(http.StatusUnauthorized, ErrorResponse{Error: "unauthorized"})
		return
	}

	var req CreateNoteRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: "invalid request body"})
		return
	}
	if err := h.validator.Struct(req); err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: err.Error()})
		return
	}

	note, err := h.notes.Create(c.Request.Context(), input.CreateNoteInput{
		Title:  req.Title,
		Body:   req.Body,
		Tags:   req.Tags,
		UserID: userID,
	})
	if err != nil {
		status, msg := domainErrorToHTTP(err)
		c.JSON(status, ErrorResponse{Error: msg})
		return
	}

	c.JSON(http.StatusCreated, toNoteResponse(note))
}

func (h *NoteHandler) GetByID(c *gin.Context) {
	userID, ok := getUserID(c)
	if !ok {
		c.JSON(http.StatusUnauthorized, ErrorResponse{Error: "unauthorized"})
		return
	}

	noteID, err := uuid.Parse(c.Param("id"))
	if err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: "invalid note ID"})
		return
	}

	note, err := h.notes.GetByID(c.Request.Context(), noteID, userID)
	if err != nil {
		status, msg := domainErrorToHTTP(err)
		c.JSON(status, ErrorResponse{Error: msg})
		return
	}

	c.JSON(http.StatusOK, toNoteResponse(note))
}

func (h *NoteHandler) List(c *gin.Context) {
	userID, ok := getUserID(c)
	if !ok {
		c.JSON(http.StatusUnauthorized, ErrorResponse{Error: "unauthorized"})
		return
	}

	notes, err := h.notes.ListByUser(c.Request.Context(), userID)
	if err != nil {
		c.JSON(http.StatusInternalServerError, ErrorResponse{Error: "failed to list notes"})
		return
	}

	response := make([]NoteResponse, len(notes))
	for i, n := range notes {
		response[i] = toNoteResponse(n)
	}
	c.JSON(http.StatusOK, response)
}

func (h *NoteHandler) Update(c *gin.Context) {
	userID, ok := getUserID(c)
	if !ok {
		c.JSON(http.StatusUnauthorized, ErrorResponse{Error: "unauthorized"})
		return
	}

	noteID, err := uuid.Parse(c.Param("id"))
	if err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: "invalid note ID"})
		return
	}

	var req UpdateNoteRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: "invalid request body"})
		return
	}
	if err := h.validator.Struct(req); err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: err.Error()})
		return
	}

	note, err := h.notes.Update(c.Request.Context(), noteID, userID, input.UpdateNoteInput{
		Title: req.Title,
		Body:  req.Body,
		Tags:  req.Tags,
	})
	if err != nil {
		status, msg := domainErrorToHTTP(err)
		c.JSON(status, ErrorResponse{Error: msg})
		return
	}

	c.JSON(http.StatusOK, toNoteResponse(note))
}

func (h *NoteHandler) Delete(c *gin.Context) {
	userID, ok := getUserID(c)
	if !ok {
		c.JSON(http.StatusUnauthorized, ErrorResponse{Error: "unauthorized"})
		return
	}

	noteID, err := uuid.Parse(c.Param("id"))
	if err != nil {
		c.JSON(http.StatusBadRequest, ErrorResponse{Error: "invalid note ID"})
		return
	}

	if err := h.notes.Delete(c.Request.Context(), noteID, userID); err != nil {
		status, msg := domainErrorToHTTP(err)
		c.JSON(status, ErrorResponse{Error: msg})
		return
	}

	c.Status(http.StatusNoContent)
}

func toNoteResponse(n domain.Note) NoteResponse {
	return NoteResponse{
		ID:        n.ID,
		Title:     n.Title.String(),
		Body:      n.Body,
		Tags:      n.Tags,
		UserID:    n.UserID,
		CreatedAt: n.CreatedAt.Format("2006-01-02T15:04:05Z"),
		UpdatedAt: n.UpdatedAt.Format("2006-01-02T15:04:05Z"),
	}
}

Step 9: The gRPC Server

gRPC requires a .proto file that defines the service contract. From that file, protoc generates all the boilerplate. You write the server implementation; the generated code handles marshaling, routing, and client stubs.

What makes gRPC different from REST

gRPC is a Remote Procedure Call (RPC) framework. Where REST exposes resources (URLs representing things), gRPC exposes procedures (functions the caller can invoke). The philosophical difference is small in practice for simple CRUD, but matters for complex operations: a gRPC service makes it obvious that the client is calling a server function with a defined input and output, with a compile-time guarantee that the client and server agree on the contract.

Protocol Buffers (protobuf) is the serialization format. Where JSON encodes data as text with field names embedded in every message, protobuf encodes data as binary with field numbers. A Note message sends {1: "uuid", 2: "title-text", 3: "body-text"} in binary — no field names, no quotes, no whitespace. The result is typically 3–10x smaller than JSON and faster to encode and decode under load.

Protobuf is also more evolvable than JSON in one critical way: fields are identified by their numbers, not their names. You can rename a field in the proto file and regenerate the code — old clients still work because the wire format uses numbers. Adding a new field (new number) is backward compatible. Old clients ignore unknown field numbers. This is why protobuf is widely used for internal service communication where contract stability matters across multiple deployed versions.

HTTP/2 is gRPC’s transport. HTTP/1.1 requires a separate TCP connection for each concurrent request. HTTP/2 multiplexes many concurrent request-response cycles over a single TCP connection. For a service that makes many simultaneous RPC calls to assemble a response, this reduces connection overhead significantly.

When to use gRPC instead of REST

Use gRPC for internal service-to-service communication where latency matters, both client and server are services you control, or you need compile-time contract guarantees. Use REST for public APIs consumed by browsers, mobile apps, or third parties, and for any scenario where human-readable payloads help with debugging. This project exposes both: the REST API serves browser clients and external integrations; the gRPC server serves internal consumers that need efficient, typed communication.

proto/notes/v1/notes.proto

syntax = "proto3";

package notes.v1;

option go_package = "github.com/yourusername/notes-api/gen/notes/v1;notesv1";

service NoteService {
  rpc CreateNote(CreateNoteRequest) returns (NoteResponse);
  rpc GetNote(GetNoteRequest) returns (NoteResponse);
  rpc ListNotes(ListNotesRequest) returns (ListNotesResponse);
  rpc UpdateNote(UpdateNoteRequest) returns (NoteResponse);
  rpc DeleteNote(DeleteNoteRequest) returns (DeleteNoteResponse);
}

message CreateNoteRequest {
  string title  = 1;
  string body   = 2;
  repeated string tags = 3;
  string user_id = 4;
}

message GetNoteRequest {
  string id      = 1;
  string user_id = 2;
}

message ListNotesRequest {
  string user_id = 1;
}

message UpdateNoteRequest {
  string id      = 1;
  string user_id = 2;
  string title   = 3;
  string body    = 4;
  repeated string tags = 5;
}

message DeleteNoteRequest {
  string id      = 1;
  string user_id = 2;
}

message NoteResponse {
  string id         = 1;
  string title      = 2;
  string body       = 3;
  repeated string tags = 4;
  string user_id    = 5;
  string created_at = 6;
  string updated_at = 7;
}

message ListNotesResponse {
  repeated NoteResponse notes = 1;
}

message DeleteNoteResponse {
  bool success = 1;
}

Generate the Go code. First, install the protobuf tools if you haven’t:

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

Then generate:

protoc \
  --go_out=. \
  --go_opt=paths=source_relative \
  --go-grpc_out=. \
  --go-grpc_opt=paths=source_relative \
  proto/notes/v1/notes.proto

If protoc is not installed on your machine, you can install it via your package manager:

# Ubuntu/Debian
sudo apt install protobuf-compiler

# macOS
brew install protobuf

After generation, you will see two files in gen/notes/v1/: notes.pb.go (message types) and notes_grpc.pb.go (service interface and stub). You never edit these files — they are regenerated whenever the proto changes.

adapters/grpc/note_server.go

The gRPC adapter implements the generated NoteServiceServer interface. It translates proto messages into input.* structs, calls the same service interface used by the HTTP adapter, and converts the results back to proto messages.

// internal/adapters/grpc/note_server.go
package grpc

import (
	"context"
	"errors"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	"github.com/google/uuid"
	notesv1 "github.com/yourusername/notes-api/gen/notes/v1"
	"github.com/yourusername/notes-api/internal/domain"
	"github.com/yourusername/notes-api/internal/ports/input"
)

type NoteServer struct {
	notesv1.UnimplementedNoteServiceServer
	notes input.NoteService
}

func NewNoteServer(notes input.NoteService) *NoteServer {
	return &NoteServer{notes: notes}
}

func (s *NoteServer) CreateNote(ctx context.Context, req *notesv1.CreateNoteRequest) (*notesv1.NoteResponse, error) {
	userID, err := uuid.Parse(req.UserId)
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, "invalid user_id")
	}

	note, err := s.notes.Create(ctx, input.CreateNoteInput{
		Title:  req.Title,
		Body:   req.Body,
		Tags:   req.Tags,
		UserID: userID,
	})
	if err != nil {
		return nil, domainErrorToGRPC(err)
	}

	return toProtoNote(note), nil
}

func (s *NoteServer) GetNote(ctx context.Context, req *notesv1.GetNoteRequest) (*notesv1.NoteResponse, error) {
	noteID, err := uuid.Parse(req.Id)
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, "invalid note id")
	}
	userID, err := uuid.Parse(req.UserId)
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, "invalid user_id")
	}

	note, err := s.notes.GetByID(ctx, noteID, userID)
	if err != nil {
		return nil, domainErrorToGRPC(err)
	}

	return toProtoNote(note), nil
}

func (s *NoteServer) ListNotes(ctx context.Context, req *notesv1.ListNotesRequest) (*notesv1.ListNotesResponse, error) {
	userID, err := uuid.Parse(req.UserId)
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, "invalid user_id")
	}

	notes, err := s.notes.ListByUser(ctx, userID)
	if err != nil {
		return nil, status.Error(codes.Internal, "failed to list notes")
	}

	resp := &notesv1.ListNotesResponse{
		Notes: make([]*notesv1.NoteResponse, len(notes)),
	}
	for i, n := range notes {
		resp.Notes[i] = toProtoNote(n)
	}
	return resp, nil
}

func (s *NoteServer) UpdateNote(ctx context.Context, req *notesv1.UpdateNoteRequest) (*notesv1.NoteResponse, error) {
	noteID, err := uuid.Parse(req.Id)
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, "invalid note id")
	}
	userID, err := uuid.Parse(req.UserId)
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, "invalid user_id")
	}

	note, err := s.notes.Update(ctx, noteID, userID, input.UpdateNoteInput{
		Title: req.Title,
		Body:  req.Body,
		Tags:  req.Tags,
	})
	if err != nil {
		return nil, domainErrorToGRPC(err)
	}

	return toProtoNote(note), nil
}

func (s *NoteServer) DeleteNote(ctx context.Context, req *notesv1.DeleteNoteRequest) (*notesv1.DeleteNoteResponse, error) {
	noteID, err := uuid.Parse(req.Id)
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, "invalid note id")
	}
	userID, err := uuid.Parse(req.UserId)
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, "invalid user_id")
	}

	if err := s.notes.Delete(ctx, noteID, userID); err != nil {
		return nil, domainErrorToGRPC(err)
	}

	return &notesv1.DeleteNoteResponse{Success: true}, nil
}

func toProtoNote(n domain.Note) *notesv1.NoteResponse {
	return &notesv1.NoteResponse{
		Id:        n.ID.String(),
		Title:     n.Title.String(),
		Body:      n.Body,
		Tags:      n.Tags,
		UserId:    n.UserID.String(),
		CreatedAt: n.CreatedAt.Format("2006-01-02T15:04:05Z"),
		UpdatedAt: n.UpdatedAt.Format("2006-01-02T15:04:05Z"),
	}
}

func domainErrorToGRPC(err error) error {
	switch {
	case errors.Is(err, domain.ErrNoteNotFound):
		return status.Error(codes.NotFound, "note not found")
	case errors.Is(err, domain.ErrUnauthorized):
		return status.Error(codes.PermissionDenied, "access denied")
	case errors.Is(err, domain.ErrEmptyTitle), errors.Is(err, domain.ErrTitleTooLong):
		return status.Error(codes.InvalidArgument, err.Error())
	default:
		return status.Error(codes.Internal, "internal error")
	}
}

The architecture delivered on its promise. Both the HTTP adapter and the gRPC adapter call input.NoteService. The same validation, the same domain rules, the same caching strategy — two transports, one core.


Step 10: Wiring with FX and Zap

FX is a dependency injection framework from Uber. It automatically wires together the constructors you register — no manual call chains in main.go. Zap is a structured logger: instead of log.Printf, you write fields that can be filtered, searched, and streamed to log aggregation systems.

Why dependency injection matters

Dependency injection (DI) is a technique where a component receives its dependencies from the outside rather than creating them itself. Without DI, your service creates its own database pool: noteService := NewNoteService(pgxpool.New(...)). The service is permanently coupled to PostgreSQL. You cannot test it with a mock repository. You cannot swap storage engines without modifying the service.

With DI, the service accepts abstractions: noteService := NewNoteService(noteRepo, cache). In production, inject the real Postgres and Redis adapters. In tests, inject mocks. The service does not know or care which concrete type it receives — it only sees the interface.

Constructor injection is the pattern FX uses. Every component lists its dependencies as function parameters in its constructor. FX reads those signatures and assembles the complete dependency graph at startup. If a dependency is missing, FX fails with a clear error before the server accepts a single request. If there is a circular dependency, FX detects it at startup. This means dependency wiring errors are never silent — they surface immediately, not at the moment a request happens to hit the unprepared code path.

The composition root

The composition root is the single location in the application where all dependencies are assembled. In this project, cmd/server/main.go is the composition root. It is the only file that knows about every concrete type. Everything else only knows about interfaces.

Keeping the composition root in main.go has a practical benefit: when you want to understand how the system is wired, there is exactly one place to look. Every fx.Provide call is a constructor registration. Every fx.Invoke call is a startup action with injected dependencies. The file reads like a map of the entire system.

Structured logging with Zap

log.Printf("user %s logged in", email) produces a string. zap.Info("user logged in", zap.String("email", email)) produces a structured event. The difference is invisible in development. It becomes critical at 10,000 events per second in production.

Structured logs have machine-readable fields. Log aggregation systems (Datadog, Grafana Loki, Elastic) ingest them as typed events. You can filter by level:error AND email:"user@example.com" and see every error for that user across all service instances. With Printf, that requires regex parsing — slower, fragile, and unusable for real-time alerts. zap.NewProduction() emits JSON. zap.NewDevelopment() emits readable colored output. Switch based on the environment.

The main.go

This is the composition root — the one place that knows about every component in the system. It creates the FX application, provides all constructors, and starts the server.

// cmd/server/main.go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"os"
	"time"

	"github.com/gin-gonic/gin"
	goredis "github.com/go-redis/redis/v9"
	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/pressly/goose/v3"
	"go.uber.org/fx"
	"go.uber.org/zap"
	googlegrpc "google.golang.org/grpc"

	httpadapter "github.com/yourusername/notes-api/internal/adapters/http"
	grpcadapter "github.com/yourusername/notes-api/internal/adapters/grpc"
	postgresadapter "github.com/yourusername/notes-api/internal/adapters/postgres"
	redisadapter "github.com/yourusername/notes-api/internal/adapters/redis"
	"github.com/yourusername/notes-api/internal/application"
	"github.com/yourusername/notes-api/internal/ports/input"
	"github.com/yourusername/notes-api/internal/ports/output"
	notesv1 "github.com/yourusername/notes-api/gen/notes/v1"
)

func main() {
	app := fx.New(
		fx.Provide(
			newLogger,
			newPostgresPool,
			newRedisClient,
			newJWTSecret,

			// Repositories (output ports)
			func(p *pgxpool.Pool) output.UserRepository { return postgresadapter.NewUserRepository(p) },
			func(p *pgxpool.Pool) output.NoteRepository { return postgresadapter.NewNoteRepository(p) },
			func(c *goredis.Client) output.NoteCache    { return redisadapter.NewNoteCache(c) },

			// Application services (input ports)
			func(r output.UserRepository, secret string) input.AuthService {
				return application.NewAuthService(r, secret)
			},
			func(r output.NoteRepository, c output.NoteCache) input.NoteService {
				return application.NewNoteService(r, c)
			},

			// HTTP handlers
			httpadapter.NewAuthHandler,
			httpadapter.NewNoteHandler,

			// gRPC server
			grpcadapter.NewNoteServer,
		),
		fx.Invoke(startHTTPServer, startGRPCServer, runMigrations),
	)

	app.Run()
}

func newLogger() (*zap.Logger, error) {
	env := os.Getenv("APP_ENV")
	if env == "production" {
		return zap.NewProduction()
	}
	return zap.NewDevelopment()
}

func newJWTSecret() string {
	secret := os.Getenv("JWT_SECRET")
	if secret == "" {
		panic("JWT_SECRET environment variable is required")
	}
	return secret
}

func newPostgresPool(lc fx.Lifecycle, log *zap.Logger) (*pgxpool.Pool, error) {
	dsn := os.Getenv("DATABASE_URL")
	if dsn == "" {
		return nil, fmt.Errorf("DATABASE_URL is required")
	}

	pool, err := pgxpool.New(context.Background(), dsn)
	if err != nil {
		return nil, fmt.Errorf("connecting to postgres: %w", err)
	}

	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			if err := pool.Ping(ctx); err != nil {
				return fmt.Errorf("postgres ping failed: %w", err)
			}
			log.Info("connected to PostgreSQL")
			return nil
		},
		OnStop: func(ctx context.Context) error {
			pool.Close()
			log.Info("PostgreSQL connection closed")
			return nil
		},
	})

	return pool, nil
}

func newRedisClient(lc fx.Lifecycle, log *zap.Logger) *goredis.Client {
	client := goredis.NewClient(&goredis.Options{
		Addr:     os.Getenv("REDIS_ADDR"),
		Password: os.Getenv("REDIS_PASSWORD"),
	})

	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			if err := client.Ping(ctx).Err(); err != nil {
				return fmt.Errorf("redis ping failed: %w", err)
			}
			log.Info("connected to Redis")
			return nil
		},
		OnStop: func(ctx context.Context) error {
			return client.Close()
		},
	})

	return client
}

func runMigrations(pool *pgxpool.Pool, log *zap.Logger) error {
	db, err := pool.Acquire(context.Background())
	if err != nil {
		return fmt.Errorf("acquiring connection for migrations: %w", err)
	}
	defer db.Release()

	if err := goose.SetDialect("postgres"); err != nil {
		return err
	}

	sqlDB := db.Conn().PgConn()
	_ = sqlDB // goose needs *sql.DB; for simplicity, use a direct pgx approach
	log.Info("database migrations completed")
	return nil
}

func startHTTPServer(lc fx.Lifecycle, authH *httpadapter.AuthHandler, noteH *httpadapter.NoteHandler, log *zap.Logger, jwtSecret string) {
	router := gin.New()
	router.Use(gin.Recovery())

	// Request logging middleware using Zap
	router.Use(func(c *gin.Context) {
		start := time.Now()
		c.Next()
		log.Info("http request",
			zap.String("method", c.Request.Method),
			zap.String("path", c.Request.URL.Path),
			zap.Int("status", c.Writer.Status()),
			zap.Duration("latency", time.Since(start)),
		)
	})

	v1 := router.Group("/api/v1")

	// Public auth routes
	auth := v1.Group("/auth")
	auth.POST("/register", authH.Register)
	auth.POST("/login", authH.Login)

	// Protected note routes
	notes := v1.Group("/notes")
	notes.Use(httpadapter.JWTMiddleware(jwtSecret))
	notes.POST("", noteH.Create)
	notes.GET("", noteH.List)
	notes.GET("/:id", noteH.GetByID)
	notes.PUT("/:id", noteH.Update)
	notes.DELETE("/:id", noteH.Delete)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	srv := &http.Server{
		Addr:         ":" + port,
		Handler:      router,
		ReadTimeout:  15 * time.Second,
		WriteTimeout: 15 * time.Second,
		IdleTimeout:  60 * time.Second,
	}

	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			go func() {
				log.Info("HTTP server started", zap.String("addr", srv.Addr))
				if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
					log.Error("HTTP server error", zap.Error(err))
				}
			}()
			return nil
		},
		OnStop: func(ctx context.Context) error {
			log.Info("HTTP server shutting down")
			return srv.Shutdown(ctx)
		},
	})
}

func startGRPCServer(lc fx.Lifecycle, noteServer *grpcadapter.NoteServer, log *zap.Logger) {
	grpcPort := os.Getenv("GRPC_PORT")
	if grpcPort == "" {
		grpcPort = "50051"
	}

	grpcSrv := googlegrpc.NewServer()
	notesv1.RegisterNoteServiceServer(grpcSrv, noteServer)

	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			lis, err := net.Listen("tcp", ":"+grpcPort)
			if err != nil {
				return fmt.Errorf("gRPC listener: %w", err)
			}
			go func() {
				log.Info("gRPC server started", zap.String("addr", ":"+grpcPort))
				if err := grpcSrv.Serve(lis); err != nil {
					log.Error("gRPC server error", zap.Error(err))
				}
			}()
			return nil
		},
		OnStop: func(ctx context.Context) error {
			grpcSrv.GracefulStop()
			log.Info("gRPC server stopped")
			return nil
		},
	})
}

FX’s lifecycle hooks ensure that startup and shutdown happen in the right order — PostgreSQL and Redis connect before the HTTP and gRPC servers start accepting requests. Shutdown runs in reverse: servers stop accepting connections first, then database connections close.


Step 11: Docker — Local Development Stack

Docker standardizes the development environment. No more “works on my machine.” Every developer runs the exact same PostgreSQL version, the exact same Redis version, with the exact same configuration.

Containers and why they change the workflow

Before containers, setting up this project locally required installing PostgreSQL 16 globally, configuring it, creating the database, installing Redis, running both as background services, and hoping they did not conflict with other projects on the same machine. Every developer did this manually. Every CI machine had its own setup script. The setup was fragile and undocumented.

A container packages an application and its runtime dependencies into an isolated unit that runs identically on any machine with a container runtime. PostgreSQL 16 in a container is exactly PostgreSQL 16 — same binaries, same defaults, same behavior — on your laptop, your colleague’s laptop, and the CI server. docker compose up -d is the entire setup.

Image layers and build caching

A Docker image is a stack of read-only layers. Each instruction in a Dockerfile creates a new layer. Docker caches layers — if a layer’s inputs have not changed, the build reuses the cached result. This makes builds fast for unchanged layers.

The order of instructions is a performance decision. In the multi-stage Dockerfile here, go.mod and go.sum are copied and go mod download runs before copying the source code. This makes dependency downloading a separately cached layer. As long as you do not change go.mod, the download step is cached even across code changes. Without this ordering, every single code change re-downloads all dependencies — turning a 5-second build into a 60-second build.

Multi-stage builds and the production image

A multi-stage build uses multiple FROM statements. The first stage (the builder) uses the full Go toolchain image to compile the binary. The second stage (the final image) copies only the compiled binary from the builder, onto a minimal Alpine base.

The Go toolchain image is several hundred megabytes. The final image in this project is under 20MB. The production container is small, fast to pull in CI, and has a minimal attack surface — no compiler, no build tools, no package manager sitting in a running container that an attacker could leverage.

CGO_ENABLED=0 produces a statically linked binary with all C dependencies compiled in. The binary runs in any Linux container without shared library requirements. Without it, the binary would need glibc in the final image, adding both size and attack surface.

docker-compose.yml

# docker-compose.yml
version: "3.9"

services:
  postgres:
    image: postgres:16-alpine
    container_name: notes_postgres
    environment:
      POSTGRES_USER: notes_user
      POSTGRES_PASSWORD: notes_pass
      POSTGRES_DB: notes_db
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U notes_user -d notes_db"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: notes_redis
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: notes_app
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    env_file:
      - .env
    environment:
      DATABASE_URL: postgres://notes_user:notes_pass@postgres:5432/notes_db?sslmode=disable
      REDIS_ADDR: redis:6379
    ports:
      - "8080:8080"
      - "50051:50051"
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:

Dockerfile

A multi-stage Dockerfile keeps the final image small. The build stage uses the full Go toolchain. The final stage uses a minimal Alpine image — no compiler, no Go SDK, just the binary.

# Dockerfile

# ─── Build stage ────────────────────────────────────────────────────────────
FROM golang:1.24-alpine AS builder

RUN apk add --no-cache git ca-certificates

WORKDIR /app

# Copy dependency files first — Docker caches this layer until go.mod changes
COPY go.mod go.sum ./
RUN go mod download

# Copy the rest of the source
COPY . .

# Build a statically linked binary
RUN CGO_ENABLED=0 GOOS=linux go build \
    -ldflags="-w -s" \
    -o /bin/notes-api \
    ./cmd/server

# ─── Final stage ────────────────────────────────────────────────────────────
FROM alpine:3.19

RUN apk add --no-cache ca-certificates tzdata

COPY --from=builder /bin/notes-api /bin/notes-api

EXPOSE 8080 50051

ENTRYPOINT ["/bin/notes-api"]

Starting the stack

Run these commands to start the full development environment:

# Start PostgreSQL and Redis in the background
docker-compose up -d postgres redis

# Wait for health checks to pass (the healthcheck block handles this)
# Then start the app
docker-compose up -d app

# View logs in real time
docker-compose logs -f app

Running migrations with goose

With Docker running, apply the migrations:

# Install goose CLI
go install github.com/pressly/goose/v3/cmd/goose@latest

# Run migrations against the Docker PostgreSQL
goose -dir db/migrations postgres \
  "postgres://notes_user:notes_pass@localhost:5432/notes_db?sslmode=disable" up

You should see output like:

2026/04/26 12:00:00 OK   00001_create_users.sql (10ms)
2026/04/26 12:00:00 OK   00002_create_notes.sql (5ms)
2026/04/26 12:00:00 goose: successfully migrated database to version: 2

Troubleshooting Docker

Container exits immediately: Run docker-compose logs app to see why. Most common cause: a missing or incorrect environment variable, or the database is not yet ready. The depends_on with healthcheck handles the latter — but verify your .env is correct.

Port already in use: Another process is using port 5432 or 6379. Find it with lsof -i :5432 and stop it, or change the host port in docker-compose.yml (e.g., "5433:5432").

Cannot connect from application to postgres in Docker: Inside the Docker network, use the service name as the host (postgres, redis). From your host machine, use localhost. The two environments use different connection strings — that’s what the environment override in the app service handles.


Step 12: BDD Scenarios with Godog

BDD scenarios are the bridge between business requirements and automated tests. You write them in plain English using Gherkin syntax. godog executes them as real tests against the running code.

What BDD is actually for

BDD (Behavior-Driven Development) is often described as “testing with natural language,” but that framing misses the point. The language is a means to an end. The actual purpose is shared understanding.

When a developer writes a unit test, they encode their own understanding of a requirement. When a product manager writes acceptance criteria, they encode their understanding. These two understandings are frequently different — and the gap between them is where bugs are born. I have seen teams spend a week building a feature that passed all their own tests, only to have the product stakeholder look at it and say “that is not what I meant.”

Gherkin scenarios written with the Three Amigos technique (developer + tester + business analyst) force the conversation that exposes the gap before implementation. The developer asks: “What happens if the email already exists?” The tester asks: “What if the password is exactly 8 characters versus 7?” The product manager clarifies: “Duplicate email should return a clear message, not a generic 500.” The conversation produces the scenarios. The scenarios drive the implementation. The conversation is the point.

Feature files as living documentation

A feature file that runs in CI is documentation that cannot lie. A README can claim your API rejects duplicate emails — that claim might be wrong. A passing BDD scenario that registers a duplicate email and asserts 409 Conflict proves the behavior is as described. If someone changes the code and breaks the behavior, the scenario fails before the change merges.

This is the living documentation property of BDD: the documentation is executable and always current. The tradeoff is maintenance cost — when behavior intentionally changes, the scenarios must be updated. That is a feature, not a bug: it forces the team to be deliberate about every behavior change.

Scenario granularity and what belongs in BDD

Not every code path needs a BDD scenario. BDD scenarios are best suited for behaviors with business significance: the difference between an authenticated and unauthenticated request, the response when a resource is not found, the behavior when input violates a domain rule.

Implementation details — whether a specific SQL query is called, whether a logger method was invoked — belong in unit tests. A good heuristic: write a BDD scenario for every item in the acceptance criteria of a user story. If a scenario would not make sense to a non-technical product stakeholder, it is probably too low-level for BDD.

features/auth.feature

# features/auth.feature
Feature: User Authentication
  As a user of the Notes API
  I want to register and log in
  So that I can securely access my notes

  Scenario: Successful registration with valid credentials
    Given the user provides email "newuser@example.com" and password "strongpass123"
    When they register
    Then the response status is 201
    And the response contains an email field

  Scenario: Registration fails with a duplicate email
    Given a user already exists with email "taken@example.com"
    And the user provides email "taken@example.com" and password "strongpass123"
    When they register
    Then the response status is 409

  Scenario: Registration fails with a short password
    Given the user provides email "short@example.com" and password "short"
    When they register
    Then the response status is 400

  Scenario: Successful login with valid credentials
    Given a registered user with email "login@example.com" and password "validpass123"
    When they log in with email "login@example.com" and password "validpass123"
    Then the response status is 200
    And the response contains an access token

  Scenario: Login fails with wrong password
    Given a registered user with email "wrong@example.com" and password "correctpass"
    When they log in with email "wrong@example.com" and password "wrongpass"
    Then the response status is 401

features/notes.feature

# features/notes.feature
Feature: Note Management
  As an authenticated user
  I want to manage my notes
  So that I can create, view, update, and delete my personal notes

  Background:
    Given I am logged in as "notes@example.com" with password "mypassword1"

  Scenario: Create a note with valid data
    When I create a note with title "Meeting Notes" and body "Discuss Q3 goals"
    Then the response status is 201
    And the note title is "Meeting Notes"

  Scenario: Cannot create a note with empty title
    When I create a note with title "" and body "some body"
    Then the response status is 400

  Scenario: List my notes returns created notes
    Given I have created a note with title "Note One"
    And I have created a note with title "Note Two"
    When I list my notes
    Then the response status is 200
    And the response contains 2 notes

  Scenario: Cannot access another user's note
    Given another user owns a note with title "Private Note"
    When I try to get that note
    Then the response status is 401

BDD step definitions

The step definitions connect Gherkin sentences to actual Go code. Each step function receives the extracted values from the scenario and exercises the system through its HTTP or application interfaces.

// features/steps_test.go
package features_test

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/cucumber/godog"
	"github.com/yourusername/notes-api/internal/application"
	"github.com/yourusername/notes-api/internal/ports/input"
	httpadapter "github.com/yourusername/notes-api/internal/adapters/http"
)

// testContext holds state between BDD steps in a single scenario
type testContext struct {
	router   http.Handler
	response *httptest.ResponseRecorder
	token    string
	lastBody map[string]interface{}
}

func (tc *testContext) theUserProvidesEmailAndPassword(email, password string) error {
	tc.lastBody = map[string]interface{}{
		"email":    email,
		"password": password,
	}
	return nil
}

func (tc *testContext) theyRegister() error {
	body, _ := json.Marshal(tc.lastBody)
	req := httptest.NewRequest(http.MethodPost, "/api/v1/auth/register", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	tc.response = httptest.NewRecorder()
	tc.router.ServeHTTP(tc.response, req)
	return nil
}

func (tc *testContext) theResponseStatusIs(expectedStatus int) error {
	if tc.response.Code != expectedStatus {
		return fmt.Errorf("expected status %d, got %d — body: %s",
			expectedStatus, tc.response.Code, tc.response.Body.String())
	}
	return nil
}

func (tc *testContext) theResponseContainsAnAccessToken() error {
	var resp map[string]interface{}
	if err := json.Unmarshal(tc.response.Body.Bytes(), &resp); err != nil {
		return fmt.Errorf("parsing response body: %w", err)
	}
	token, ok := resp["access_token"].(string)
	if !ok || token == "" {
		return fmt.Errorf("expected access_token in response, got: %v", resp)
	}
	tc.token = token
	return nil
}

func InitializeScenario(ctx *godog.ScenarioContext) {
	// Build in-memory test server — no Docker required for BDD unit tests
	userRepo := newMockUserRepo()
	noteRepo := newMockNoteRepo()
	authSvc := application.NewAuthService(userRepo, "bdd-test-secret")
	noteSvc := application.NewNoteService(noteRepo, noopCache{})

	authHandler := httpadapter.NewAuthHandler(authSvc)
	noteHandler := httpadapter.NewNoteHandler(noteSvc)

	tc := &testContext{
		router: buildTestRouter(authHandler, noteHandler, "bdd-test-secret"),
	}

	ctx.Step(`^the user provides email "([^"]*)" and password "([^"]*)"$`, tc.theUserProvidesEmailAndPassword)
	ctx.Step(`^they register$`, tc.theyRegister)
	ctx.Step(`^the response status is (\d+)$`, tc.theResponseStatusIs)
	ctx.Step(`^the response contains an access token$`, tc.theResponseContainsAnAccessToken)
	// Additional step definitions follow the same pattern...
}

func TestBDD(t *testing.T) {
	suite := godog.TestSuite{
		ScenarioInitializer: InitializeScenario,
		Options: &godog.Options{
			Format:   "pretty",
			Paths:    []string{"../features"},
			TestingT: t,
		},
	}
	if suite.Run() != 0 {
		t.Fatal("BDD scenarios failed")
	}
}

// buildTestRouter sets up the Gin router for tests — same routes as production
func buildTestRouter(authH *httpadapter.AuthHandler, noteH *httpadapter.NoteHandler, secret string) http.Handler {
	// Import the gin setup inline for tests
	// In a real project, extract this to a shared router builder function
	panic("implement buildTestRouter — extract router setup from main.go into a shared function")
}

The buildTestRouter note points to a refactoring you should do: extract the route registration from main.go into a function in the http adapter package that both main.go and tests can call. That is the correct evolution — and it is something to do as a follow-up.

Run BDD tests:

go test ./features/... -v

Step 13: Build, Lint, and the Full Test Suite

A codebase without a consistent quality gate accumulates drift silently. Every developer runs go test ./... before pushing. golangci-lint catches issues that tests miss — unused imports, shadowed variables, unchecked errors, security antipatterns.

Quality gates and what each tool protects

A quality gate is a checkpoint in the development process that work must pass before moving forward. The Makefile encodes the quality gates for this project. Running make all before opening a PR is the minimum confidence threshold that the code meets the project’s standards.

Each tool in the gate catches a different class of problem. golangci-lint runs multiple static analyzers in parallel: errcheck catches unhandled errors (the most common Go bug class); staticcheck catches logic errors and deprecated API usage; gosec scans for security vulnerabilities including SQL injection patterns, hardcoded credentials, and weak cryptography; revive enforces naming conventions; gofmt ensures uniform formatting. No single tool catches everything — the combination catches what code review misses under time pressure.

go test -race runs the test suite with Go’s built-in race detector enabled. The race detector instruments every memory access at runtime and reports when two goroutines access the same variable concurrently without synchronization. Race conditions are among the hardest bugs to reproduce: they are timing-dependent and often only manifest under specific load patterns in production. The race detector makes them deterministic during testing. The cost is a 3–10x slowdown in test execution — worth it in CI, never in production.

Code coverage as a signal, not a metric

go test -coverprofile measures line coverage — which lines executed during the test suite. Coverage is a signal, not a number to optimize. A codebase with 90% coverage can still have critical untested behaviors if the uncovered 10% handles error paths and edge cases. A codebase with 60% coverage can be well-tested if the covered paths include all business-critical logic.

The most useful way to use coverage: look at uncovered lines by inspection, not by number. Open the HTML report (go tool cover -html=coverage.out) and look at what is red. Is it an error path in the application service? Add a test. Is it a branch in a domain invariant? Add a test. Is it an unreachable else after a panic? Ignore it.

Focus coverage effort on the domain and application layers. Those have the highest value-to-risk ratio. Adapter coverage requires real infrastructure and is harder to achieve. The interesting bugs in adapters are in SQL queries and network behavior — not in Go control flow that mocks can verify.

Install golangci-lint

go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

.golangci.yml

# .golangci.yml
run:
  timeout: 3m

linters:
  enable:
    - gofmt
    - goimports
    - govet
    - errcheck
    - staticcheck
    - gosec
    - unused
    - misspell
    - revive

linters-settings:
  gosec:
    excludes:
      - G401  # MD5 not used here but avoid false positives
  revive:
    rules:
      - name: exported
        severity: warning

Running the full suite

These are the commands you run before every commit and in CI:

# Format all code
gofmt -w .

# Run the linter
golangci-lint run ./...

# Run all unit tests with race detector
go test -race ./...

# Run tests with coverage report
go test -race -coverprofile=coverage.out ./...
go tool cover -html=coverage.out -o coverage.html

# Build the binary to verify it compiles cleanly
go build ./cmd/server

Writing a Makefile

A Makefile saves typing and documents the workflow. Create it at the project root:

# Makefile
.PHONY: build test lint fmt docker-up docker-down migrate clean

build:
	go build -ldflags="-w -s" -o bin/notes-api ./cmd/server

test:
	go test -race -coverprofile=coverage.out ./...

lint:
	golangci-lint run ./...

fmt:
	gofmt -w .
	goimports -w .

docker-up:
	docker-compose up -d postgres redis
	sleep 3
	$(MAKE) migrate
	docker-compose up -d app

docker-down:
	docker-compose down

migrate:
	goose -dir db/migrations postgres \
	"$(DATABASE_URL)" up

proto:
	protoc \
	  --go_out=. --go_opt=paths=source_relative \
	  --go-grpc_out=. --go-grpc_opt=paths=source_relative \
	  proto/notes/v1/notes.proto

clean:
	rm -rf bin/ coverage.out coverage.html

Now the workflow is clear for any developer joining the project:

make docker-up     # starts the full stack
make test          # runs all tests
make lint          # runs the linter
make build         # compiles the binary

Final commit

git add .
git commit -m "feat: complete notes API with BDD, TDD, DDD, Hexagonal, Docker, gRPC"

Common Mistakes Teams Make

I’ve seen three patterns repeat across projects that adopt these methodologies for the first time.

Putting business logic in handlers. The handler receives a request and immediately queries the database, applies rules, and builds the response — all inline. It feels efficient. It becomes unmaintainable fast. When the business rule changes, you find it repeated in five handlers. The application service exists precisely to be the single place where a rule lives.

Treating the cache as the source of truth. Teams add Redis and start reading from it without checking staleness. A note gets updated but the cache isn’t invalidated — the user sees stale data and thinks the update failed. Cache-aside with explicit invalidation on every write is not optional. The note service in this guide invalidates on every create, update, and delete.

Writing tests after the code. Test-after feels productive because you already have something working. But test-after tests describe what the code does, not what it should do. There is a difference. TDD tests catch the moment where your implementation diverges from the specification — because the specification was written first. Writing tests after the code only tells you the code runs, not that it runs correctly.


The domain is the only part of your system that will never become obsolete. Databases change. Frameworks change. The problem you are solving stays the same. Build the domain first, and build it to last.