
Geographic Distribution

Distributing systems across multiple datacenters provides resilience against datacenter-level failures (natural disasters, power grid outages, region-wide network partitions). But geographic distribution introduces latency, consistency challenges, and operational complexity. This section explores multi-datacenter patterns and the trade-offs involved.

# CAP Theorem in Practice

The CAP theorem states that a distributed system can provide at most two of three properties: Consistency, Availability, and Partition tolerance. In practice, network partitions do happen, so the real choice is between consistency and availability while a partition is in progress.

CAP Triangle:
                Consistency
         (all nodes see same data)
               /          \
              /            \
    Partition Tolerance --- Availability
    (network failures)      (system responds even during failures)

During network partition, choose one:
    CP: Consistent but not available (reject writes until partition heals)
    AP: Available but not consistent (allow writes, resolve conflicts later)

In practice: CAP is often oversimplified. Real systems offer tunable consistency (for example, choosing a consistency level per operation) and degrade gracefully rather than making a binary CP/AP choice.
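To make tunable consistency concrete, here is a minimal sketch in the spirit of Dynamo/Cassandra consistency levels; the in-memory replicas and their put() method are illustrative assumptions, not any real driver's API.

    from enum import Enum

    class Consistency(Enum):
        ONE = "one"        # ack from a single replica: fast, weakest guarantee
        QUORUM = "quorum"  # ack from a majority: survives minority failures
        ALL = "all"        # ack from every replica: strongest, least available

    class InMemoryReplica:
        # Stand-in for a replica in another datacenter (assumption for this sketch).
        def __init__(self):
            self.store = {}
        def put(self, key, value):
            self.store[key] = value
            return True  # pretend the network ACK always arrives

    def required_acks(level, n):
        if level is Consistency.ONE:
            return 1
        if level is Consistency.QUORUM:
            return n // 2 + 1
        return n  # Consistency.ALL

    def write(key, value, replicas, level):
        # Each operation picks its own consistency level instead of a global CP/AP stance.
        needed = required_acks(level, len(replicas))
        acks = sum(1 for r in replicas if r.put(key, value))
        return acks >= needed

    replicas = [InMemoryReplica() for _ in range(3)]
    write("cart:42", ["book"], replicas, Consistency.QUORUM)  # waits for 2 of 3 ACKs
    write("view-count", 7, replicas, Consistency.ONE)         # favors latency over consistency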

# Multi-Datacenter Patterns

Pattern 1: Primary-Secondary (Active-Passive)

One datacenter is primary (handles all writes), others are secondary (read-only, replicate from primary).

DC1 (us-east): [Primary DB] <- Writes go here
       |
       | Async replication
       v
DC2 (us-west): [Secondary DB] <- Read-only
       |
       | Async replication
       v
DC3 (eu-west): [Secondary DB] <- Read-only

Failover: If DC1 fails, promote DC2 to primary

Pros: Simple, strongly consistent (single writer), easy to reason about.

Cons: Failover is slow (manual or automated promotion), all writes go to one DC (latency for distant users), secondary DCs are "wasted" capacity for writes.

Use case: Traditional databases, compliance requirements for single source of truth, applications not designed for multi-master.
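A toy sketch of the primary-secondary routing and failover logic; in-memory dictionaries stand in for the databases, and the class and method names are assumptions for illustration.

    class DataCenter:
        def __init__(self, name):
            self.name = name
            self.db = {}  # stand-in for the DC's database

    class PrimarySecondary:
        def __init__(self, primary, secondaries):
            self.primary = primary
            self.secondaries = list(secondaries)

        def write(self, key, value):
            # All writes go to the single primary; replication happens out of band.
            self.primary.db[key] = value

        def read(self, key, nearest):
            # Reads are served from the caller's nearest DC, which may lag the primary.
            return nearest.db.get(key)

        def promote(self, new_primary):
            # Failover: promote a secondary; the old primary rejoins as a secondary if it recovers.
            self.secondaries.remove(new_primary)
            self.secondaries.append(self.primary)
            self.primary = new_primary

    us_east, us_west, eu_west = DataCenter("us-east"), DataCenter("us-west"), DataCenter("eu-west")
    cluster = PrimarySecondary(us_east, [us_west, eu_west])
    cluster.write("user:1", {"plan": "pro"})   # always lands in us-east
    cluster.promote(us_west)                   # us-east lost: us-west becomes primary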

Pattern 2: Multi-Master (Active-Active)

Multiple datacenters accept writes simultaneously. Replication is bidirectional.

DC1 (us-east): [Primary DB] <---> [Primary DB] DC2 (us-west)
                    |                    |
                    +-----> Conflict resolution <-----+
                            (last-write-wins, CRDT, etc.)

Both DCs accept writes, replicate changes to each other, and resolve conflicts when the same key is updated in both places.

Pros: Low write latency (users write to nearest DC), instant failover (other DCs already active), better resource utilization.

Cons: Complex conflict resolution, eventual consistency (replicas lag), risk of data conflicts (same key modified in both DCs).

Use case: Global applications (social media, e-commerce), NoSQL databases with multi-DC support (Cassandra, DynamoDB Global Tables), data models built on CRDTs (conflict-free replicated data types).
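One concrete conflict-resolution strategy is a last-write-wins register, sketched below; timestamps are passed in by the caller to make the clock dependency explicit, and the class is illustrative rather than any particular database's implementation.

    class LWWRegister:
        # Last-write-wins: on conflict, the write with the larger timestamp survives
        # and the other write is silently discarded (the main LWW caveat).
        def __init__(self):
            self.value = None
            self.timestamp = 0.0

        def set(self, value, timestamp):
            if timestamp >= self.timestamp:
                self.value, self.timestamp = value, timestamp

        def merge(self, other):
            # Called when the two DCs exchange replicated state.
            if other.timestamp > self.timestamp:
                self.value, self.timestamp = other.value, other.timestamp

    dc1, dc2 = LWWRegister(), LWWRegister()
    dc1.set("alice@new.example", timestamp=100.0)   # write in us-east
    dc2.set("alice@old.example", timestamp=99.5)    # concurrent write in us-west
    dc1.merge(dc2); dc2.merge(dc1)                  # both converge to the t=100.0 value

Note that LWW drops one of the concurrent writes and relies on reasonably synchronized clocks; CRDTs such as counters and sets instead define merges that keep all updates.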

Pattern 3: Sharded by Geography

Partition data by region. Each DC is authoritative for its region's data.

DC1 (us-east): Owns US user data (users.region = 'US')
DC2 (eu-west): Owns EU user data (users.region = 'EU')
DC3 (ap-south): Owns APAC user data (users.region = 'APAC')

US user write: Routed to DC1 (local, fast)
EU user write: Routed to DC2 (local, fast)
Cross-region query: Must query remote DC (slow)

Pros: No conflicts (each DC owns disjoint data), data residency compliance (EU data stays in EU), fast local writes.

Cons: Cross-region queries are slow, can't relocate users easily (data is "stuck" in one DC), uneven load if regions differ in size.

Use case: GDPR compliance (data locality), geo-distributed apps with regional users, content delivery networks.
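A sketch of the routing rule, assuming each request carries the user's region tag; the DC names and mapping are hypothetical.

    HOME_DC = {"US": "dc1-us-east", "EU": "dc2-eu-west", "APAC": "dc3-ap-south"}

    def route_write(user_region, local_dc):
        # Writes must land in the DC that owns the user's region.
        home = HOME_DC[user_region]
        if home == local_dc:
            return home, "local write"
        return home, "forwarded cross-region (extra RTT)"

    def route_read(user_region, local_dc):
        # Same rule for authoritative reads; cross-region users pay the latency.
        return route_write(user_region, local_dc)

    print(route_write("EU", local_dc="dc2-eu-west"))   # ('dc2-eu-west', 'local write')
    print(route_write("US", local_dc="dc2-eu-west"))   # forwarded to dc1-us-east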

# Replication Strategies

Synchronous Cross-DC Replication

Write to DC1:
    Write to local DB
        |
        v
    Replicate to DC2 (wait for ACK)
        |
        v
    DC2 confirms write
        |
        v
    Return success to client

Latency: RTT to remote DC (50-200ms typical)

Pros: Strong consistency (both DCs have data before commit), zero data loss on DC failure.

Cons: High latency (cross-DC RTT added to every write), availability risk (if DC2 unreachable, writes fail).

Example: Google Spanner (synchronous replication with Paxos across DCs).
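A simplified sketch of the synchronous write path; the RemoteDC class and its replicate() call are assumptions standing in for a real replication channel.

    class RemoteDC:
        # Stand-in for a replication client to another datacenter.
        def __init__(self, name, reachable=True):
            self.name, self.reachable, self.db = name, reachable, {}
        def replicate(self, key, value):
            if not self.reachable:
                return False          # partition or outage: no ACK
            self.db[key] = value
            return True               # in reality this ACK costs a cross-DC RTT

    def sync_write(key, value, local_db, remote_dcs):
        local_db[key] = value
        for dc in remote_dcs:
            # Block on every remote ACK before acknowledging the client.
            if not dc.replicate(key, value):
                raise RuntimeError(f"replication to {dc.name} failed; write rejected")
        return "committed"            # all DCs have the data: zero loss on DC failure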

Asynchronous Cross-DC Replication

Write to DC1:
    Write to local DB
        |
        v
    Return success to client (immediately)
        |
        v
    Replicate to DC2 (background, async)

Latency: Local write latency only (1-10ms)

Pros: Low latency (no cross-DC wait), availability (DC2 down doesn't block writes).

Cons: Eventual consistency (DC2 lags behind DC1), data loss risk (if DC1 crashes before replication).

Example: MySQL async replication, AWS RDS cross-region read replicas.
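A matching sketch of the asynchronous path, where the client is acknowledged before any remote DC has the data; the queue and worker are illustrative.

    import queue
    import threading

    replication_log = queue.Queue()

    def async_write(key, value, local_db):
        local_db[key] = value                  # 1. local write (1-10 ms)
        replication_log.put((key, value))      # 2. enqueue for background shipping
        return "acknowledged"                  # 3. ack now; remote DCs still lag

    def replication_worker(remote_db):
        # Drains the log in the background; everything still queued when the
        # primary DC dies is the data-loss window.
        while True:
            key, value = replication_log.get()
            remote_db[key] = value
            replication_log.task_done()

    remote = {}
    threading.Thread(target=replication_worker, args=(remote,), daemon=True).start()
    async_write("order:7", {"status": "paid"}, local_db={})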

Comparison Table

+--------------------+---------------+------------------+
| Characteristic     | Sync          | Async            |
+--------------------+---------------+------------------+
| Write Latency      | High          | Low              |
|                    | (50-200ms)    | (1-10ms)         |
+--------------------+---------------+------------------+
| Consistency        | Strong        | Eventual         |
|                    | (immediate)   | (seconds lag)    |
+--------------------+---------------+------------------+
| Data Loss Risk     | None          | Possible         |
|                    |               | (if DC crashes)  |
+--------------------+---------------+------------------+
| Availability       | Lower (needs  | Higher (DC       |
|                    | remote DC)    | independent)     |
+--------------------+---------------+------------------+
| Use Case           | Financial,    | Social media,    |
|                    | transactional | analytics, logs  |
+--------------------+---------------+------------------+

# Real-World Examples

Netflix Multi-Region

Netflix runs active-active across three AWS regions (us-east-1, us-west-2, eu-west-1). Each region can handle 100% of traffic. If one region fails, traffic shifts to others.

Strategy: Stateless services (easy to replicate), Cassandra multi-DC for stateful data (eventually consistent), route users to nearest healthy region (Route53 DNS).

Google Spanner

Spanner provides globally distributed SQL database with strong consistency. Uses Paxos for synchronous replication across datacenters and TrueTime API for distributed transactions.

Trade-off: Writes are slow (cross-DC Paxos quorum) but strongly consistent. Reads can be fast (local replica) with stale data, or slow (quorum read) with fresh data.

AWS S3 Cross-Region Replication

S3 offers async replication between regions. Objects written to us-east-1 replicate to eu-west-1 in seconds to minutes (typical).

Use case: Disaster recovery (backup in another region), compliance (data locality), low-latency access for global users.

# Consistency vs Latency Trade-Off

The fundamental trade-off: Strong consistency requires coordination (slow). Low latency requires accepting stale reads or eventual consistency.

Strong Consistency (Sync Replication):
    Every read sees latest write
    Latency: High (cross-DC coordination)
    Use when: Correctness critical (banking, inventory)

Eventual Consistency (Async Replication):
    Reads may see stale data temporarily
    Latency: Low (local reads/writes)
    Use when: Availability and speed matter more (social feed, recommendations)
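The same choice can be made per request on the read path. The sketch below picks between a fast local read that may be stale and a majority read that pays cross-DC latency; the versioned-dict replicas are an assumption for illustration.

    def read(key, local_replica, all_replicas, require_fresh=False):
        # Each replica is a dict of key -> (version, value); higher version = newer.
        if not require_fresh:
            return local_replica.get(key, (0, None))[1]     # fast, possibly stale
        majority = len(all_replicas) // 2 + 1
        candidates = [r.get(key, (0, None)) for r in all_replicas[:majority]]
        return max(candidates)[1]                           # newest value seen by a quorum

    dc_local = {"balance:alice": (3, 120)}
    dc_peers = [dc_local, {"balance:alice": (4, 95)}, {"balance:alice": (4, 95)}]
    read("balance:alice", dc_local, dc_peers)                      # 120 (stale, fast)
    read("balance:alice", dc_local, dc_peers, require_fresh=True)  # 95 (fresh, slower)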

# Key Takeaways

  • CAP theorem: choose consistency or availability during network partitions (partitions will happen)
  • Multi-DC patterns: primary-secondary (simple, slower failover), multi-master (complex, fast failover), sharded (no conflicts, regional)
  • Sync replication: strong consistency, high latency (50-200ms writes)
  • Async replication: eventual consistency, low latency, data loss risk
  • Netflix: active-active multi-region with Cassandra (eventual consistency)
  • Spanner: globally distributed SQL with strong consistency (synchronous Paxos)
  • Choose based on requirements: financial/transactional needs sync, social/content can use async