CAP Theorem and Practical Implications

8 min read · Updated 2026-04-25

Every technical decision involves trade-offs. The CAP theorem is the cleanest, best-known illustration of one of the most fundamental trade-offs in distributed systems.

The CAP Theorem

The CAP theorem (also known as Brewer’s theorem) states that a distributed data store can guarantee at most two of three properties:

Consistency (C)
All nodes see the same data at the same time. Linearizability — every read sees the most recent write.
Availability (A)
Every request gets a response — though not necessarily the most recent data.
Partition tolerance (P)
System keeps working correctly when network partitions split nodes.

What This Means in Practice

Network partitions are inevitable in distributed systems. So the practical question is: when there’s a partition, do you sacrifice consistency or availability?

CP — Strong consistency
Consistency under partition
During a partition, the system refuses requests it can't fulfill consistently. Some users see "service unavailable" — but no one sees stale or wrong data. Examples: banking, ledger systems, financial transactions.
AP — High availability
Availability under partition
During a partition, the system always responds. Reads might be stale; recent writes might not be visible to all users yet. Examples: shopping carts, social feeds, recommendation systems.
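The CP/AP split above can be made concrete with a toy model (not a real database client): two replicas of one value, with a simulated partition between them. In CP mode the write is refused; in AP mode it is accepted locally and the other replica serves stale data until the partition heals.

```python
# Toy model: two replicas of a single key, with a simulated network partition.

class Replica:
    def __init__(self):
        self.value = None
        self.version = 0

def write(primary, backup, value, partitioned, mode):
    """Write to the primary; replicate to the backup unless partitioned."""
    if mode == "CP" and partitioned:
        # CP: refuse rather than accept a write the backup cannot see.
        raise RuntimeError("service unavailable during partition")
    primary.version += 1
    primary.value = value
    if not partitioned:
        # AP under partition skips this step and reconciles later.
        backup.value, backup.version = primary.value, primary.version

a, b = Replica(), Replica()
write(a, b, "v1", partitioned=False, mode="AP")  # replicated normally
write(a, b, "v2", partitioned=True, mode="AP")   # b never hears about v2
print(a.value, b.value)                          # prints "v2 v1": b is stale
```

Swapping `mode="AP"` for `mode="CP"` in the second write raises the error instead: the user sees "service unavailable", but no reader ever sees the stale "v1" presented as current.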

Concrete examples

CP system
Bank transaction processing
A bank prefers being unavailable for a few minutes over showing wrong balances or letting two ATMs withdraw the same money. Better to be down than wrong.
AP system
Amazon shopping cart
A shopping cart that's slightly out of sync but always available beats one that's offline. Lost sales cost more than stale data. Conflicts get reconciled later.
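A hypothetical sketch of that later reconciliation, in the Dynamo-cart spirit: after the partition heals, merge the two divergent cart replicas. Taking the union keeps everything the user added on either side; the known downside is that an item deleted on one side can resurface.

```python
# Hypothetical merge for an AP shopping cart: item -> quantity maps from two
# replicas that diverged during a partition. Union-style merge, max quantity
# per item. A deletion on one side can be "undone" by the merge: the classic
# anomaly this strategy accepts in exchange for availability.

def merge_carts(replica_a: dict, replica_b: dict) -> dict:
    merged = dict(replica_a)
    for item, qty in replica_b.items():
        merged[item] = max(merged.get(item, 0), qty)
    return merged

side_a = {"book": 1, "pen": 2}      # writes accepted on one side
side_b = {"book": 1, "mug": 1}      # writes accepted on the other
print(merge_carts(side_a, side_b))  # {'book': 1, 'pen': 2, 'mug': 1}
```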

What “Consistency” Actually Means

CAP’s “consistency” is linearizability — a strong form. There’s a whole spectrum of consistency models:

Strict / linearizable
Every read sees the most recent committed write, as if the whole system were a single machine. Most expensive to provide.
Sequential consistency
All nodes observe operations in one agreed-upon total order, though that order need not match real time, and different nodes may lag at different points in it.
Eventual consistency
Without further updates, all replicas eventually converge. No bound on when. Cheapest, weakest.
Causal consistency
Causally related operations are seen in the right order; concurrent operations can be seen in any order.
Read-your-writes
You always see your own latest writes (even if other users see stale).
Monotonic reads
Once you read a value, you'll never see an earlier value (no time travel).
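Monotonic reads, the last model above, can be enforced entirely client-side. A minimal sketch (an assumption, not any database's actual API): the client remembers the highest version it has observed and refuses answers from replicas that are behind it.

```python
# Client-side monotonic reads: track the highest version seen and reject
# responses from lagging replicas rather than travel back in time.

class MonotonicClient:
    def __init__(self):
        self.last_seen = 0          # highest version this client has observed

    def read(self, replica_version, replica_value):
        if replica_version < self.last_seen:
            return None             # stale replica: caller should retry elsewhere
        self.last_seen = replica_version
        return replica_value

c = MonotonicClient()
print(c.read(3, "new"))   # prints "new": version 3 accepted
print(c.read(1, "old"))   # prints "None": going backwards is rejected
```

The same bookkeeping, keyed by the client's own writes instead of its reads, gives read-your-writes.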

Most production systems pick a model per operation, not for the whole system. Critical writes use strong consistency; reads of less-critical data accept eventual consistency.

PACELC: The More Useful Theorem

CAP describes the trade-off during partitions. PACELC describes what happens the rest of the time — which is most of the time.

PACELC. If Partition then Availability vs Consistency. Else, Latency vs Consistency.

During partition (PA / PC)
Same as CAP
PA: prioritize availability. PC: prioritize consistency.
Normal operation (EL / EC)
Latency vs Consistency
EL: prioritize low latency (don't wait for all replicas). EC: prioritize consistency (wait for replicas to acknowledge).
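The EL/EC choice is often quorum arithmetic. Assuming a Dynamo-style system with N replicas, W write acknowledgements, and R read acknowledgements: if R + W > N, every read quorum overlaps every write quorum, so reads see the latest write (EC); smaller W and R mean fewer replicas to wait on (EL) but give up that guarantee.

```python
# Quorum overlap rule: with N replicas, W write acks, and R read acks,
# R + W > N guarantees every read quorum intersects every write quorum.

def is_strongly_consistent(n: int, w: int, r: int) -> bool:
    return r + w > n

for w, r in [(1, 1), (2, 2), (3, 1)]:
    print(f"N=3 W={w} R={r}: strong={is_strongly_consistent(3, w, r)}")
# N=3 W=1 R=1: strong=False   fast writes and reads, eventually consistent
# N=3 W=2 R=2: strong=True    quorum reads and writes
# N=3 W=3 R=1: strong=True    slow writes, fast consistent reads
```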

In practice, you classify systems by both letters:

System    | CAP | PACELC | Notes
Cassandra | AP  | PA/EL  | Tunable, but defaults to availability and low latency
DynamoDB  | AP  | PA/EL  | Eventually consistent reads by default
MongoDB   | CP* | PC/EC  | * By default; can be tuned
HBase     | CP  | PC/EC  | Strong consistency at the cost of latency
Spanner   | CP  | PC/EC  | Uses TrueTime to make CP feasible globally

How Modern Systems Navigate This

The mature view: don’t pick one mode for the whole system. Pick per operation, per use case.

Tunable consistency
Cassandra lets you set a consistency level per query (ONE, QUORUM, ALL); DynamoDB offers strongly consistent reads per request; MongoDB exposes read and write concerns. Trade latency for stronger guarantees on the reads that matter.
Asynchronous writes
Writes return immediately; consistency reconciles in the background. Right for many user-facing operations.
Read-your-own-writes
A cookie or session token steers your reads to the leader for a few seconds after you write. Most users never notice the underlying eventual consistency.
Strong only when needed
Payment, auth, ledger: strong consistency. Everything else: eventual. Most systems are 80% AP, 20% CP.
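The read-your-own-writes routing described above can be sketched with made-up names (the pin duration and routing targets are assumptions): after a user writes, a session timestamp pins that user's reads to the leader for a short window, while everyone else reads from possibly-stale replicas.

```python
# Session-pinned reads: a user who just wrote is routed to the leader for a
# few seconds; all other reads take the cheap, possibly-stale replica path.

import time

PIN_SECONDS = 5.0   # assumed replication-lag budget

class Session:
    def __init__(self):
        self.last_write_at = float("-inf")

    def route_read(self):
        if time.monotonic() - self.last_write_at < PIN_SECONDS:
            return "leader"          # user just wrote: serve their own write
        return "replica"             # otherwise accept eventual consistency

s = Session()
print(s.route_read())                # prints "replica": no recent write
s.last_write_at = time.monotonic()   # user performs a write
print(s.route_read())                # prints "leader": pinned for a few seconds
```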

The Multi-Tenant SaaS Lens

For multi-tenant SaaS, CAP/PACELC choices ripple through the tenancy model itself. DB-per-tenant makes per-tenant strong consistency easy; shared-schema multi-tenancy makes the same guarantees harder to provide.

Recap