Does Google Spanner Defy the CAP Theorem?
- Balaaji Dhananjayan

- Oct 10
- 2 min read
First — What CAP Really Means
The CAP theorem (Consistency, Availability, Partition tolerance) states that a distributed database can guarantee at most two of the following three properties at the same time:
- C (Consistency): all clients see the same data at the same time.
- A (Availability): every request receives a non-error response, even if some nodes are down.
- P (Partition tolerance): the system continues to operate despite network partitions.
In practice, when a partition happens, you must choose between Consistency and Availability.
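To make that choice concrete, here is a toy sketch (a hypothetical two-replica key-value store, not any real system) of the decision a node faces when it cannot reach its peer: reject the write to stay consistent, or accept it and risk divergence.

```python
class Replica:
    """Toy replica of a hypothetical two-node key-value store (illustration only)."""

    def __init__(self, mode):
        self.mode = mode          # "CP" or "AP"
        self.data = {}
        self.peer_reachable = True

    def write(self, key, value):
        if self.peer_reachable:
            # Normal case: replicate synchronously, both nodes stay in sync.
            self.data[key] = value
            return "committed (replicated to peer)"
        if self.mode == "CP":
            # Consistency first: refuse the write rather than diverge.
            return "error: partition detected, write rejected"
        # Availability first: accept locally, replicas may now disagree.
        self.data[key] = value
        return "committed locally (may conflict after the partition heals)"


node = Replica(mode="CP")
node.peer_reachable = False       # simulate a network partition
print(node.write("x", 1))         # -> error: partition detected, write rejected
```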
What Cloud Spanner Actually Does
Google Cloud Spanner is a globally distributed, synchronously replicated, strongly consistent relational database.
It combines:
- Synchronous replication, with commit timestamps from TrueTime (Google’s globally synchronized clock).
- A Paxos-based consensus protocol for replicating writes.
- Read replicas for local, low-latency reads (a minimal client-side sketch follows this list).
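Here is what those guarantees look like from the client side, using the google-cloud-spanner Python library. The instance, database, and table names (`my-instance`, `my-database`, `Users`) are placeholders, and error handling is omitted.

```python
import datetime
from google.cloud import spanner

# Placeholder resource names -- replace with your own instance and database.
client = spanner.Client()
database = client.instance("my-instance").database("my-database")


def create_user(transaction):
    # Writes go through a Paxos-replicated, externally consistent transaction.
    transaction.execute_update(
        "INSERT INTO Users (UserId, Name) VALUES (@id, @name)",
        params={"id": 1, "name": "Alice"},
        param_types={
            "id": spanner.param_types.INT64,
            "name": spanner.param_types.STRING,
        },
    )


database.run_in_transaction(create_user)

# Strong read: returns the latest committed data, regardless of which replica serves it.
with database.snapshot() as snapshot:
    rows = list(snapshot.execute_sql("SELECT Name FROM Users WHERE UserId = 1"))

# Bounded-staleness read: lets a nearby read replica answer for lower latency,
# trading a little freshness for locality.
with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snapshot:
    stale_rows = list(snapshot.execute_sql("SELECT Name FROM Users"))
```

The default snapshot gives a strong (externally consistent) read; the exact_staleness option is the knob that lets a nearby read replica answer without a round trip to the leader, which is the low-latency path mentioned above.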
So — Does Spanner “Break” CAP?
No, it doesn’t break CAP; it reinterprets the trade-offs very cleverly.
Let’s map it:
| Property | Description | Spanner Behavior |
| --- | --- | --- |
| Consistency (C) | Strong consistency across regions. | ✅ Yes — guaranteed by TrueTime + Paxos. |
| Availability (A) | Every request receives a response. | ⚠️ Partial — available as long as a majority of replicas are reachable. |
| Partition tolerance (P) | Must survive network splits. | ✅ Yes — inherently designed for partition tolerance. |
But because:
- Google’s private fiber network makes partitions extremely rare, and
- TrueTime’s tight clock-uncertainty bound keeps the commit-wait for synchronous commits short,

Spanner behaves almost like CA in normal operation: strong consistency and very high availability.
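The mechanism behind that short commit path is commit-wait, described in the original Spanner paper: TrueTime exposes the current time as an interval [earliest, latest] guaranteed to contain the true time, and a leader only acknowledges a commit once its chosen timestamp is definitely in the past. Below is a simplified sketch of the idea, not Google's implementation, with a made-up uncertainty bound.

```python
import time
from dataclasses import dataclass


@dataclass
class TTInterval:
    earliest: float
    latest: float


# Hypothetical uncertainty bound (epsilon); Google keeps TrueTime's uncertainty in the
# low single-digit milliseconds, which is what keeps commit-wait cheap.
EPSILON = 0.004  # seconds


def tt_now() -> TTInterval:
    """Return an interval guaranteed (by assumption here) to contain the true time."""
    t = time.time()
    return TTInterval(earliest=t - EPSILON, latest=t + EPSILON)


def commit(apply_write):
    # Pick a commit timestamp no earlier than any instant that could be "now".
    s = tt_now().latest
    apply_write(s)
    # Commit-wait: don't acknowledge until s is definitely in the past everywhere,
    # so any later transaction is guaranteed to observe a strictly larger timestamp.
    while tt_now().earliest <= s:
        time.sleep(EPSILON / 2)
    return s


ts = commit(lambda s: print(f"write applied at timestamp {s:.6f}"))
print(f"acknowledged after commit-wait; timestamp {ts:.6f} is now in the past")
```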
Why People Say “It Contradicts CAP”
Spanner appears to offer all three — C, A, and P — because in practice:
- You almost never see partitions on Google’s internal network.
- You get near-100% uptime.
- You get global consistency.
But that doesn’t violate CAP — it simply operates in a design space where partitions are rare and short, so users rarely experience the CAP trade-off.
When a true partition occurs (e.g., quorum loss), Spanner prioritizes consistency over availability — write transactions will pause or fail until quorum is restored.
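A rough sketch of that quorum rule (purely illustrative, not Spanner's actual code): a write is acknowledged only if a majority of replicas in the Paxos group accept it, and when a partition isolates the leader from a majority, the write fails rather than risk inconsistency.

```python
def replicate_write(replicas, value):
    """Illustrative majority-quorum write: commit only if more than half the replicas ack."""
    acks = sum(1 for replica in replicas if replica.send(value))
    if acks * 2 > len(replicas):
        return f"committed ({acks}/{len(replicas)} acks)"
    # CP behavior: refuse to commit rather than return inconsistent results.
    raise RuntimeError(f"quorum lost ({acks}/{len(replicas)} acks): write unavailable")


class Replica:
    def __init__(self, reachable=True):
        self.reachable = reachable

    def send(self, value):
        return self.reachable   # acks only if the network lets the message through


group = [Replica(), Replica(), Replica(reachable=False)]   # 2 of 3 reachable
print(replicate_write(group, "x=1"))                        # committed (2/3 acks)

group = [Replica(), Replica(reachable=False), Replica(reachable=False)]
try:
    print(replicate_write(group, "x=2"))
except RuntimeError as err:
    print(err)                                              # quorum lost (1/3 acks)
```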
In Summary
| Feature | Spanner’s Real Behavior | CAP Classification |
| --- | --- | --- |
| Global Consistency | ✅ Yes (strong consistency using TrueTime + Paxos) | C |
| Partition Tolerance | ✅ Yes (can survive minority replica loss) | P |
| Availability | ⚠️ High but not absolute — requires quorum | Partial (within CP) |
In a nutshell, Cloud Spanner doesn’t contradict CAP: it’s a CP system with exceptionally high availability, engineered through Google’s network and TrueTime.