Implementation Guide

Microservices Architecture Implementation

Design, build, containerise, and orchestrate a production microservices system — from domain-driven service boundaries to Kubernetes deployments with full observability.

60 min read · Expert · Updated 2025
Tags: Microservices, Docker, Kubernetes, API Gateway

1. Microservices Principles

Microservices decompose a monolith into independently deployable services, each owning a single business capability. The architecture enables teams to scale, deploy, and evolve services independently — but introduces distributed systems complexity.

Single Responsibility

Each service owns one bounded context — User Service, Order Service, Notification Service. If it does two things, split it.

Loose Coupling

Services communicate only via explicit APIs or events. No shared databases. Changes inside a service should not break consumers.

Independent Deployment

Each service has its own CI/CD pipeline and can be deployed without coordinating with other teams or services.

Fault Isolation

A failing service does not cascade to the whole system. Circuit breakers, timeouts, and fallbacks contain blast radius.

Don't Start with Microservices

For new products, start with a modular monolith and extract services when clear domain boundaries emerge and team scaling demands it. Premature decomposition creates network overhead without team-scaling benefits.

2. Service Design

Domain-Driven Design (DDD) provides the vocabulary for finding service boundaries. A Bounded Context is an explicit boundary within which a domain model applies. Each microservice maps to one bounded context.

Define explicit service contracts using OpenAPI (REST) or Protobuf (gRPC). Version your APIs — never introduce breaking changes without a new version.

openapi.yaml (User Service contract)
openapi: 3.1.0
info:
  title: User Service API
  version: 1.0.0
paths:
  /users/{id}:
    get:
      summary: Get user by ID
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string, format: uuid }
      responses:
        '200':
          description: User object
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:    { type: string }
                  name:  { type: string }
                  email: { type: string }
                  role:  { type: string, enum: [user, admin] }
        '404':
          description: User not found
Consumer-Driven Contract Testing

Use Pact to write consumer-driven contract tests. Each consumer defines what it expects from a provider API. The provider runs these contracts as tests in its CI pipeline — preventing breaking changes before deployment.

3. Containerising Services with Docker

Every service ships as an immutable Docker image. Multi-stage builds separate build-time dependencies from the runtime image, producing smaller, more secure artefacts.

services/order-service/Dockerfile
# Stage 1: Build
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o order-service ./cmd/server

# Stage 2: Minimal production image
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /app/order-service /order-service
EXPOSE 8080
ENTRYPOINT ["/order-service"]
docker-compose.yml (local dev)
version: '3.9'
services:
  user-service:
    build: ./services/user-service
    environment:
      DB_URL: postgres://postgres:secret@users-db:5432/users
      KAFKA_BROKERS: kafka:9092
    depends_on: [users-db, kafka]

  order-service:
    build: ./services/order-service
    environment:
      DB_URL: postgres://postgres:secret@orders-db:5432/orders
      USER_SERVICE_URL: http://user-service:8080
      KAFKA_BROKERS: kafka:9092
    depends_on: [orders-db, kafka, user-service]

  users-db:
    image: postgres:16-alpine
    environment: { POSTGRES_DB: users, POSTGRES_PASSWORD: secret }

  orders-db:
    image: postgres:16-alpine
    environment: { POSTGRES_DB: orders, POSTGRES_PASSWORD: secret }

  kafka:
    image: confluentinc/cp-kafka:7.6.0
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk  # any valid KRaft cluster ID

4. API Gateway Pattern

The API Gateway is the single entry point for all clients. It handles cross-cutting concerns: routing, SSL termination, rate limiting, authentication, and request/response transformation — so individual services don't have to.

kong/declarative.yml
_format_version: "3.0"
services:
  - name: user-service
    url: http://user-service:8080
    routes:
      - name: users-route
        paths: [/api/v1/users]
    plugins:
      - name: jwt
      - name: rate-limiting
        config:
          minute: 100
          policy: local

  - name: order-service
    url: http://order-service:8080
    routes:
      - name: orders-route
        paths: [/api/v1/orders]
    plugins:
      - name: jwt
      - name: request-transformer
        config:
          add:
            headers: ["X-Consumer-ID:$(consumer.id)"]
Gateway vs. Service Mesh
  • API Gateway: North–South traffic (external clients → internal services). Kong, NGINX, AWS API Gateway.
  • Service Mesh: East–West traffic (service-to-service). Istio, Linkerd. Handles mTLS, circuit breaking, retries transparently via sidecar proxies.
  • Production systems typically use both — gateway at the edge, mesh for internal communication.

5. Inter-service Communication

Services communicate synchronously (HTTP/gRPC request/response) or asynchronously (events and queues via a message broker). Use synchronous calls when the caller needs an immediate response; use async messaging for workflows that can tolerate eventual consistency.

order-service: async event publishing via Kafka (Go)
package events

import (
  "context"
  "encoding/json"

  "github.com/segmentio/kafka-go"
)

type OrderCreatedEvent struct {
  OrderID    string  `json:"order_id"`
  UserID     string  `json:"user_id"`
  TotalCents int64   `json:"total_cents"`
  Currency   string  `json:"currency"`
}

type Publisher struct {
  writer *kafka.Writer
}

func NewPublisher(brokers []string) *Publisher {
  return &Publisher{
    writer: &kafka.Writer{
      Addr:     kafka.TCP(brokers...),
      Balancer: &kafka.LeastBytes{},
    },
  }
}

func (p *Publisher) OrderCreated(evt OrderCreatedEvent) error {
  payload, err := json.Marshal(evt)
  if err != nil {
    return err
  }
  return p.writer.WriteMessages(context.Background(),
    kafka.Message{
      Topic: "order.created",
      Key:   []byte(evt.OrderID),
      Value: payload,
    },
  )
}
notification-service: Kafka consumer (Go)
func startConsumer(brokers []string) {
  r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:  brokers,
    GroupID:  "notification-service",
    Topic:    "order.created",
    MinBytes: 1,
    MaxBytes: 10e6,
  })
  defer r.Close()

  for {
    msg, err := r.ReadMessage(context.Background())
    if err != nil { log.Println("read error:", err); continue }

    var evt events.OrderCreatedEvent
    if err := json.Unmarshal(msg.Value, &evt); err != nil { log.Println("bad payload:", err); continue }

    sendOrderConfirmationEmail(evt.UserID, evt.OrderID)
  }
}

6. Data Management

The Database per Service pattern gives each service full ownership and autonomy over its data. No service reads another's database directly — all data access goes through the owning service's API or events.

  • CQRS (Command Query Responsibility Segregation): Separate write models (commands) from read models (queries). Write sides emit events; read sides maintain denormalised projections optimised for queries.
  • Event Sourcing: Instead of storing current state, store the full sequence of events. Replay events to reconstruct state. Provides a built-in audit log and enables temporal queries.
  • Saga Pattern: Coordinate multi-step distributed transactions via a sequence of local transactions and compensating events — avoiding two-phase commits across service boundaries.
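The Event Sourcing bullet can be made concrete with a short Go sketch (illustrative event types, not a production event store): current state is never stored directly, only derived by replaying the full history.

```go
package main

import "fmt"

// Events are the source of truth; OrderState is derived from them.
type Event struct {
	Type   string // "OrderCreated", "ItemAdded", "OrderCancelled"
	Amount int64  // cents, used by ItemAdded
}

type OrderState struct {
	TotalCents int64
	Cancelled  bool
}

// Replay folds the event history into the current state. The same
// replay also answers temporal queries: replay a prefix of the history
// to see the order as it was at that point in time.
func Replay(events []Event) OrderState {
	var s OrderState
	for _, e := range events {
		switch e.Type {
		case "ItemAdded":
			s.TotalCents += e.Amount
		case "OrderCancelled":
			s.Cancelled = true
		}
	}
	return s
}

func main() {
	history := []Event{
		{Type: "OrderCreated"},
		{Type: "ItemAdded", Amount: 1999},
		{Type: "ItemAdded", Amount: 500},
	}
	fmt.Println(Replay(history)) // {2499 false}
}
```

Real event stores add what the sketch omits: durable append-only storage, optimistic concurrency on the stream version, and snapshots so long histories need not be replayed from the start.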
Choreography-based Saga (TypeScript)
// Order Service emits → Payment Service listens → emits result
// No central orchestrator — services react to events

// Order Service
async function createOrder(data: CreateOrderDTO) {
  const order = await orderRepo.create({ ...data, status: 'PENDING_PAYMENT' });
  await eventBus.publish('order.created', { orderId: order.id, amount: order.total });
  return order;
}

// Payment Service
eventBus.subscribe('order.created', async ({ orderId, amount }) => {
  const result = await chargeCard(amount);
  if (result.success) {
    await eventBus.publish('payment.succeeded', { orderId });
  } else {
    await eventBus.publish('payment.failed', { orderId, reason: result.error });
  }
});

// Order Service — listens for payment result
eventBus.subscribe('payment.succeeded', async ({ orderId }) => {
  await orderRepo.update(orderId, { status: 'CONFIRMED' });
});
eventBus.subscribe('payment.failed', async ({ orderId }) => {
  await orderRepo.update(orderId, { status: 'CANCELLED' });
});

7. Kubernetes Orchestration

Kubernetes manages container lifecycle, scaling, service discovery, and self-healing. Each microservice gets a Deployment (manages pods), a Service (stable DNS), and optionally an Ingress (external HTTP routing).

k8s/user-service/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  strategy:
    type: RollingUpdate
    rollingUpdate: { maxUnavailable: 0, maxSurge: 1 }
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: myorg/user-service:2.1.0
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:  { name: user-service-config }
            - secretRef:     { name: user-service-secrets }
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits:   { cpu: 500m, memory: 256Mi }
          readinessProbe:
            httpGet: { path: /health, port: 8080 }
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet: { path: /health, port: 8080 }
            initialDelaySeconds: 30
            periodSeconds: 10
k8s/user-service/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: production
spec:
  selector:
    app: user-service
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP   # internal-only; exposed via Ingress
k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rps: "100"   # requests per second per client IP
spec:
  ingressClassName: nginx   # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
    - hosts: [api.myapp.com]
      secretName: api-tls
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /api/v1/users
            pathType: Prefix
            backend:
              service: { name: user-service, port: { number: 80 } }
          - path: /api/v1/orders
            pathType: Prefix
            backend:
              service: { name: order-service, port: { number: 80 } }

8. Observability

In a distributed system, bugs cross service boundaries. You need three pillars of observability: logs (what happened), metrics (system health), and traces (request flow across services).

Distributed tracing with OpenTelemetry (TypeScript)
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { SEMRESATTRS_SERVICE_NAME } from '@opentelemetry/semantic-conventions';

const sdk = new NodeSDK({
  resource: new Resource({
    [SEMRESATTRS_SERVICE_NAME]: 'order-service',
  }),
  traceExporter: new OTLPTraceExporter({
    url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT,
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
// HTTP servers and clients (Express, http, fetch/axios) are now auto-instrumented
// Trace IDs propagate via W3C Trace Context headers
k8s/prometheus-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: order-service-monitor
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames: [production]   # the Service lives outside the monitoring namespace
  selector:
    matchLabels:
      app: order-service
  endpoints:
    - port: metrics
      interval: 15s
      path: /metrics
Observability Stack Recommendations
  • Logs: Structured JSON logs → Fluentd/Fluent Bit → Elasticsearch → Kibana.
  • Metrics: Prometheus scrapes /metrics endpoints → Grafana dashboards.
  • Traces: OpenTelemetry SDK → Jaeger or Tempo → visualise request waterfalls.
  • Correlate all three by propagating a trace-id through logs, spans, and metrics labels.