Introduction
When I started working in quantum error correction, I quickly realized that the term “fault-tolerance” wasn't a single concept but rather a collection of related but distinct definitions. In seminars, I'd hear someone mention that their quantum computer was “fault-tolerant,” and I'd ask myself: do they mean it can scale up? That errors are being suppressed? That they've crossed the threshold? All of the above?
The purpose of this blog post is to demystify these different contexts. I'll organize them by how they're typically used in our field, provide rigorous mathematical definitions where appropriate, and give examples from recent experimental demonstrations and theoretical work. Whether you're designing error-correcting codes, implementing protocols on hardware, or just trying to understand what we mean when we claim fault-tolerance, I hope you'll find this comprehensive guide useful.
Let me begin with the broadest definition and then narrow down to more specific contexts.
Definition 1: Fault-Tolerant Quantum Computing (FTQC) as Scalable Quantum Computing
This is probably the most overarching definition I encounter, and it's what most people mean when they talk about “fault-tolerant quantum computing” in a general sense.
Core Concept
Fault-tolerant quantum computing refers to a quantum computing system that can perform arbitrarily long and arbitrarily complex quantum computations to arbitrarily low error rates, despite the fact that individual physical qubits and quantum gates are inherently noisy and error-prone. In other words, it's a system that can scale.
Why This Matters
Here's the challenge: today's quantum computers, often called NISQ (Noisy Intermediate-Scale Quantum) devices, can handle only relatively shallow circuits before errors accumulate and destroy the computation. A fault-tolerant quantum computer, by contrast, should be able to run algorithms like Shor's factoring algorithm for cryptographically relevant numbers, or perform quantum chemistry simulations for drug discovery—computations that require millions or billions of operations.
Mathematical Framework
The theoretical foundation for this definition comes from the threshold theorem, which I'll discuss more formally later. But the core idea is elegant:

\[
p < p_{\text{threshold}} \;\Longrightarrow\; \text{logical error rates can be made arbitrarily small}
\]

Here, \(p\) is the physical error rate per gate or time step, and \(p_{\text{threshold}}\) is the critical threshold value that depends on the error-correcting code, the noise model, and the decoding algorithm being used.
Key Requirements for FTQC
From my perspective, building a truly fault-tolerant quantum computer requires several non-negotiable elements:
- Low physical error rates: Typically, we need \(p \ll 10^{-3}\) for leading codes like the surface code.
- Quantum error-correcting codes: We must encode logical information redundantly across many physical qubits.
- Fault-tolerant protocols: Our gates, measurements, and state preparation must be designed to prevent errors from cascading.
- Ability to scale code size: As we make the code larger (increase distance), logical error rates should decrease exponentially.
- Fresh qubit initialization: We need a source of initialized qubits throughout the computation.
- Benign error scaling: Physical error rates shouldn't increase as the computer grows.
Definition 2: Logical Error Rate Below Physical Error Rate
Now let me zoom in on a more specific definition that I encounter constantly in experimental papers and code design discussions.
The Breakeven Point
I define this as follows: A quantum error-correcting code is considered “fault-tolerant” when the logical error rate (the error rate of the encoded quantum information) is demonstrably lower than the physical error rate of the unprotected qubits.
Mathematically:

\[
p_L < p_{\text{phys}}
\]
Why the Breakeven Point Matters
The breakeven point is crucial because it marks the first moment at which quantum error correction actually helps us. Once \(p_L < p_{\text{phys}}\), the overhead of error correction is justified: we're genuinely making our quantum information more robust. Until then, error correction makes things worse, because the ancilla qubits and measurements introduce more errors than they correct.
This is one of the most important experimental milestones. When Google demonstrated this with their distance-7 surface code in 2024, the logical qubit's lifetime of 291 microseconds exceeded that of its best constituent physical qubit (85 microseconds) by a factor of \(2.4 \pm 0.3\).
The Break-Even Threshold Mathematically
I can formalize this more precisely. For a quantum memory experiment using a code of distance \(d\), the logical error probability per correction cycle can often be modeled as:

\[
p_L \approx c \left( \frac{p}{p_t} \right)^{\alpha}
\]
where \(c\) is a code-dependent constant, \(\alpha\) depends on the code geometry (typically \(\alpha \approx (d+1)/2\) for codes like the surface code), and \(p_t\) is the fault-tolerance threshold.
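To build intuition for this scaling, here is a small numerical sketch of the model; the constant \(c\), the threshold, and the physical error rate are illustrative assumptions, not measured values:

```python
# Sketch of the modeled logical error rate per cycle:
#   p_L ≈ c * (p / p_t)**((d + 1) / 2)
# All constants below are illustrative assumptions.

def logical_error_rate(p, p_t=1e-2, c=0.1, d=3):
    """Modeled logical error probability per correction cycle."""
    return c * (p / p_t) ** ((d + 1) / 2)

p = 1e-3  # assumed physical error rate, a factor of 10 below the assumed threshold
for d in (3, 5, 7):
    print(f"d={d}: p_L ~ {logical_error_rate(p, d=d):.1e}")
```

With these numbers, each distance step of 2 reduces \(p_L\) by a factor of \(p_t / p = 10\), which is exactly the suppression factor \(\Lambda\) discussed in Definition 3.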
Definition 3: Error Suppression Below Threshold
This is a definition I use when talking about experimental demonstrations and scaling behavior.
The Error Suppression Factor
When we operate below the fault-tolerance threshold, the beautiful thing that happens is exponential error suppression. The logical error rate decreases exponentially as we increase the code distance. I define this via the error suppression factor:

\[
\Lambda = \frac{p_L(d_1)}{p_L(d_2)}
\]
where \(d_2 > d_1\). Typically, when increasing distance by 2, we have \(\Lambda > 2\), meaning the error rate more than halves.
Why Below-Threshold is Special
Below threshold, errors are being corrected faster than they accumulate. The quantum error correction process is “winning the race” against noise. Above threshold, errors accumulate faster than we can correct them, and everything cascades into failure.
This is why the Google experiment was so significant. They showed that for a distance-7 surface code operating at physical error rates around \(10^{-3}\), they achieved exponential error suppression with a suppression factor of \(\Lambda = 2.14 \pm 0.02\) when increasing distance by two. This was the first demonstration that a superconducting qubit system could operate below threshold.
Mathematical Characterization
The error suppression regime can be characterized as follows. If the physical error rate is \(p\) and the threshold is \(p_t\), then when \(p \ll p_t\):

\[
p_L \approx A \left( \frac{p}{p_t} \right)^{(d+1)/2}
\]

or sometimes, depending on the code:

\[
p_L \approx B \, e^{-\beta d}
\]

for some constants \(A, B, \beta > 0\).
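In experiments, \(\beta\) (and hence \(\Lambda = e^{2\beta}\) per distance-2 step) is extracted by fitting measured logical error rates against distance on a log scale. A minimal sketch with made-up data points, not real experimental numbers:

```python
import math

# Assumed logical error rates per cycle at odd code distances (illustrative).
data = {3: 3.0e-2, 5: 1.4e-2, 7: 6.5e-3}

# Least-squares fit of log(p_L) = log(A) - beta * d.
ds = list(data)
ys = [math.log(p) for p in data.values()]
d_mean = sum(ds) / len(ds)
y_mean = sum(ys) / len(ys)
beta = -(
    sum((d - d_mean) * (y - y_mean) for d, y in zip(ds, ys))
    / sum((d - d_mean) ** 2 for d in ds)
)
Lambda = math.exp(2 * beta)  # error suppression per distance-2 increase

print(f"beta ~ {beta:.3f}, Lambda ~ {Lambda:.2f}")
```

With these toy numbers the fit gives \(\Lambda \approx 2.1\), similar in magnitude to the experimentally reported values discussed below.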
Definition 4: Fault-Tolerance Threshold
Now I want to discuss one of the most important concepts in our field: the threshold itself.
The Threshold Theorem
The threshold theorem (also called the quantum fault-tolerance theorem) states:
If the physical error rate per gate and per time step is below a critical threshold value \(p_t\), then arbitrarily long quantum computations can be performed with arbitrarily small logical error rates, with only a polylogarithmic overhead in the size of the computation.
This theorem was proven independently by Aharonov and Ben-Or; by Knill, Laflamme, and Zurek; and by Kitaev.
Threshold Values
The specific threshold value depends on:
- The quantum error-correcting code (surface code, LDPC codes, topological codes, etc.)
- The noise model (circuit-level depolarizing noise, specific error types, measurement errors, etc.)
- The decoding algorithm (minimum weight perfect matching, machine learning decoders, etc.)
- The syndrome measurement protocol (how we extract and use error information)
Here are some representative threshold values I've seen in recent literature:
| Code | Threshold | Reference |
|---|---|---|
| Concatenated Distance-3 Codes (Rigorous) | \(2.7 \times 10^{-5}\) | Aliferis, Gottesman, Preskill |
| Surface Code (Practical) | \(\sim 1\%\) | Recent experiments |
| LDPC Codes (Bravyi et al.) | \(0.7-0.8\%\) | Nature 2024 |
| 4D Topological Codes | \(\sim 1\%\) | Recent constructions |
| Floquet Codes | \(\sim 6.3\%\) | Quandela architecture |
Why Rigorous vs Practical Thresholds Differ
I always find it fascinating that the rigorous thresholds (like \(2.7 \times 10^{-5}\) for concatenated distance-3 codes) are much lower than what's achieved in practice (\(\sim 1\%\)). The rigorous values are proven under very conservative assumptions. In practice, we can do better because:
- We optimize our specific implementations.
- We use faster decoders.
- We have better understanding of correlated errors.
- Hardware-specific optimizations can reduce effective error rates.
Definition 5: Pseudothreshold vs Actual Threshold
Here's a distinction I think is often overlooked but crucial for interpreting experimental results.
What is a Pseudothreshold?
A pseudothreshold is a threshold value estimated based on a small number of recursion levels or on only a subset of the error sources. As we perform deeper analysis or more recursion levels, the pseudothreshold can change.
Why This Matters Experimentally
I've noticed that when researchers first demonstrate error suppression with a particular code, they sometimes report a pseudothreshold rather than the asymptotic threshold. The pseudothreshold can be significantly higher (by a factor of 2-4) than the true threshold.
This is important for setting expectations. A pseudothreshold of 1% might seem very encouraging, but if the actual threshold (determined after considering all error sources and performing many recursion levels) is only 0.5%, the path to scalability becomes steeper.
The relationship can be visualized as error performance curves: with each additional recursion level, the curve shifts, and the crossing point (threshold) moves. As we add more recursion levels (moving toward the truly asymptotic case), the pseudothreshold converges to the actual threshold.
Definition 6: Cascading Errors and Error Propagation
Let me now discuss a more mechanistic definition of fault-tolerance, which emphasizes how we prevent errors from taking over.
The Problem: Error Propagation
Without careful design, quantum gates can take a single error on one qubit and spread it to multiple qubits. For instance, consider a CNOT gate:
- An X error on the control qubit before the CNOT becomes an X error on both qubits after the gate.
- A Z error on the target qubit before the CNOT becomes a Z error on both qubits after the gate.
If this happens unchecked in an error-correcting circuit, what should be a single error becomes two errors, then four, and so on—exponential explosion.
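These propagation rules can be verified directly by conjugating Pauli operators through the CNOT matrix. A quick check with numpy (qubit 0 is the control; the matrix ordering is a convention of this sketch):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
# CNOT with qubit 0 as control and qubit 1 as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def propagate(error):
    """Conjugate a Pauli error through the CNOT: E -> CNOT @ E @ CNOT^dagger."""
    return CNOT @ error @ CNOT.conj().T

# X on the control spreads to X on both qubits...
assert np.allclose(propagate(np.kron(X, I2)), np.kron(X, X))
# ...and Z on the target spreads to Z on both qubits.
assert np.allclose(propagate(np.kron(I2, Z)), np.kron(Z, Z))
print("single-qubit X and Z errors become two-qubit errors after the CNOT")
```

Note that the reverse directions are harmless: a Z error on the control (or an X error on the target) commutes with the CNOT and stays a single-qubit error.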
The Solution: Fault-Tolerant Protocol Design
A fault-tolerant protocol is designed so that a single error in one physical component (one faulty gate, one measurement error, one qubit error) produces at most one error in each code block. Mathematically:

\[
\text{a single fault} \;\Longrightarrow\; \operatorname{wt}\!\big(E\big|_{\text{block}\,i}\big) \le 1 \quad \text{for every code block } i
\]

where \(\operatorname{wt}\) denotes the weight (number of affected physical qubits) of the resulting error \(E\) restricted to block \(i\).
This requirement is often expressed through the principle of transversality. A gate is transversal if it operates independently on each physical qubit of an encoded state, with no interactions between code blocks.
Syndrome Extraction as Fault-Tolerance
One of the most important fault-tolerant procedures is syndrome extraction—measuring the error syndrome without itself introducing dangerous errors that would propagate to the logical information.
Early work used Shor's syndrome extraction, which distributes measurements across multiple ancilla qubits to limit error propagation.
More recent approaches like Steane's encoded ancilla method use fully fault-tolerant logical qubits as ancillas, achieving better syndrome fidelity (97.8% in recent demonstrations).
Definition 7: Code Distance and Fault-Tolerance Capability
This definition connects fault-tolerance to a specific code parameter.
What is Code Distance?
The distance \(d\) of a quantum error-correcting code is the minimum number of physical qubits that would need to fail (undetected by the syndrome) to cause a logical error. It's the quantum analog of the classical Hamming distance.
For a code denoted \([[n,k,d]]\), we have:
- \(n\) = number of physical qubits
- \(k\) = number of logical qubits
- \(d\) = distance
A code of distance \(d\) can reliably correct up to \(\lfloor (d-1)/2 \rfloor\) arbitrary single-qubit errors.
Examples
- Steane code \([[7,1,3]]\): Encodes 1 logical qubit into 7 physical qubits; it can correct 1 arbitrary error and detect up to 2.
- Surface code, distance-5: Can correct 2 simultaneous errors.
- Surface code, distance-7: Can correct 3 simultaneous errors.
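These distance relationships are simple enough to encode directly (the helper names are mine, not a standard API):

```python
def correctable_errors(d):
    """Maximum number of arbitrary single-qubit errors a distance-d code corrects."""
    return (d - 1) // 2

def detectable_errors(d):
    """Maximum number of errors a distance-d code is guaranteed to detect."""
    return d - 1

# Matches the examples above: Steane [[7,1,3]], surface codes of distance 5 and 7.
assert correctable_errors(3) == 1 and detectable_errors(3) == 2
assert correctable_errors(5) == 2
assert correctable_errors(7) == 3
```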
Fault-Tolerance and Distance
A code is called “fault-tolerant” (in this specific sense) when its design ensures that single errors don't cascade, and higher distance provides exponentially better error suppression below threshold. So really, all well-designed codes are “fault-tolerant” in this sense, but only those with sufficient distance and operating below threshold provide useful fault-tolerance for large computations.
Definition 8: Concatenated Codes and Hierarchical Fault-Tolerance
This definition emphasizes the recursive structure of fault-tolerance.
What Are Concatenated Codes?
Concatenated codes work by recursively encoding logical qubits into further codes. Each level of concatenation squares the effective error rate relative to the threshold, so the protection improves doubly exponentially with the number of levels.
Error Rate Suppression with Concatenation
At level 1, we encode with a code having threshold \(p_t\). The effective error rate becomes:

\[
p^{(1)} = p_t \left( \frac{p}{p_t} \right)^{2}
\]

At level 2, we apply the same code to the encoded qubits:

\[
p^{(2)} = p_t \left( \frac{p}{p_t} \right)^{2^2}
\]

With \(L\) levels of concatenation:

\[
p^{(L)} = p_t \left( \frac{p}{p_t} \right)^{2^L}
\]
This is doubly exponential suppression!
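The doubly exponential recursion is easy to check numerically; the threshold and physical error rate below are assumed values:

```python
def concatenated_error_rate(p, p_t=1e-2, levels=1):
    """Effective error rate after `levels` rounds of concatenation:
    p^(L) = p_t * (p / p_t)**(2**L)  (assumes a distance-3 base code)."""
    return p_t * (p / p_t) ** (2 ** levels)

p = 1e-3  # assumed physical error rate, 10x below the assumed threshold
for L in range(1, 4):
    print(f"level {L}: effective error rate ~ {concatenated_error_rate(p, levels=L):.0e}")
```

With these toy numbers, three levels of concatenation already reach the \(10^{-10}\) regime.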
Trade-offs: Space Overhead vs Error Rate
The beautiful thing about concatenated codes is the flexibility they offer. With recent optimizations, concatenated codes achieve:
| Target Error Rate | Surface Code Qubits | Concatenated Code Qubits |
|---|---|---|
| \(10^{-10}\) | \(\sim 1,700\) | \(\sim 162\) |
| \(10^{-24}\) | \(\sim 10,200\) | \(\sim 373\) |
Qubit overhead comparison: Surface code vs optimized concatenated codes at physical error rate \(10^{-3}\). From Yoshida et al. 2025.
Definition 9: Fault-Tolerant Universal Gate Set
This definition focuses on the set of operations we can perform fault-tolerantly.
What Makes a Gate Set Universal?
A set of quantum gates is universal if any unitary transformation can be approximated arbitrarily well using gates from this set. A common universal set is \(\{H, \text{CNOT}, T\}\) or \(\{H, \text{CNOT}, S, T\}\).
The Fault-Tolerance Requirement
A gate set is fault-tolerant if each gate can be implemented on encoded qubits such that a single error in the gate implementation produces at most one error per code block. This prevents error cascade.
Clifford Gates vs Non-Clifford Gates
Here's where things get interesting. Many quantum codes allow Clifford gates (gates in the Clifford group, such as H, S, and CNOT) to be implemented transversally, meaning each physical qubit gets the same operation independently. Transversal gates are automatically fault-tolerant.
Non-Clifford gates (like T gates) typically cannot be implemented transversally. Instead, we must use magic state distillation—a procedure where we:
- Prepare specially encoded states (magic states) with high fidelity
- Use these states to perform non-Clifford gates via state injection
Recently, Quantinuum demonstrated for the first time a fully fault-tolerant universal gate set with repeatable error correction, achieving logical error rates for non-Clifford gates lower than their physical counterparts: \(2.3 \times 10^{-4}\) vs \(1 \times 10^{-3}\).
Definition 10: Qubit Overhead and Resource Requirements
This is a practical definition often used in engineering discussions.
The Overhead Problem
Achieving fault-tolerance requires massive overhead. Current estimates suggest we need 100 to 1,000 physical qubits to create a single logical qubit with sufficiently low error rate.
I define a system as approaching “practical fault-tolerance” when:

\[
\frac{n_{\text{physical}}}{n_{\text{logical}}} \lesssim 1000
\]

with a target logical error rate of \(10^{-10}\) per operation. This is sometimes called the “teraquop” regime.
Recent Progress
Recent innovations have dramatically improved this. Using concatenated codes and LDPC codes, we've achieved:
- Overhead reduced by 90% compared to surface codes (achieving \(10^{-10}\) error rates)
- Overhead reduced by 96% for \(10^{-24}\) error rates
- High-rate LDPC codes approaching the theoretical hashing bound
Definition 11: Topological Fault-Tolerance
This definition emphasizes the inherent robustness of certain code families.
Intrinsic Error Protection
Topological quantum codes (like the surface code and toric code) have an interesting property: they're intrinsically more robust to certain types of errors. They achieve this by encoding information in non-local, topological properties rather than in individual physical qubits.
For a topological code, a local error doesn't immediately threaten the logical information because logical operators are also non-local. This gives us a head start in the error-correcting race.
Fault-Tolerant By Design
I would argue that topological codes are inherently “fault-tolerant” in a certain sense—they require less overhead than their non-topological counterparts to achieve the same level of protection.
Recent developments include:
- 4D geometric topological codes with high pseudothresholds (\(\sim 1\%\))
- Non-Abelian anyons for universal computation (topological qubits)
- Single-shot error correction in high-dimensional topological codes
Definition 12: Measurement-Free Fault-Tolerance
This is an emerging definition reflecting recent theoretical and experimental progress.
The Problem with Measurements
Traditional fault-tolerant protocols require mid-circuit measurements and fast, real-time feedback. This creates several challenges:
- Measurement latency: Measurements and feedback loops take time, during which qubits continue to decohere.
- Heating: Measurement readout can heat qubits, especially in trapped-ion systems.
- Additional errors: Measurement-induced dephasing and measurement errors themselves.
Measurement-Free Approach
Recent work has developed protocols that achieve fault-tolerance without stopping for measurements. The key idea: instead of measuring a syndrome and applying corrections, we:
- Perform code-switching operations
- Coherently encode syndrome information into auxiliary qubits
- Apply corrections within the quantum circuit itself
- Reset auxiliary qubits without measurement
This approach is enabled by careful circuit design and concatenation of fault-tolerant codes.
Why It Matters
From my perspective, measurement-free fault-tolerance is crucial for platforms like trapped ions and neutral atoms, where measurement is slow or disruptive. Systems like superconducting qubits with fast readout may benefit less, but the principle is important for genuine scalability across platforms.
Definition 13: Code Switching and Modular Architecture
Here's another recent development in how we think about fault-tolerance.
What is Code Switching?
Code switching involves dynamically changing between different quantum error-correcting codes during computation. For instance, you might:
- Use a code optimized for memory (preserving information) during certain phases
- Switch to a different code optimized for gates during operation phases
- Use concatenation selectively based on current needs
Modular Fault-Tolerance
By switching codes, we achieve modular fault-tolerant architectures where:
- Different modules can use different codes
- We avoid the overhead of a single universal code that must handle all cases
- Each module is independently scalable
Quantinuum's recent universal gate set demonstration relied heavily on code switching to achieve efficient fault-tolerant non-Clifford gates.
Definition 14: Pseudothreshold as a Practical Measure
I want to return to pseudothresholds, but from a different angle—as a practical engineering metric.
Engineering vs Theoretical Thresholds
When designing quantum hardware, we care about the pseudothreshold because it tells us when we should expect our first experimental signatures of error suppression with our specific implementation and recursion level.
Using Pseudothresholds Practically
I use pseudothresholds to set hardware development goals:

\[
p_{\text{phys}} < p_{\text{pseudo}}^{(1)}
\]

where \(p_{\text{pseudo}}^{(1)}\) is the pseudothreshold at the first recursion level. This ensures that by the time we reach the first experimentally detectable recursion level, we're below the pseudothreshold and will see error suppression with code distance.
Definition 15: Scalability Threshold
Finally, let me mention a definition that's gaining importance as we think about industrial-scale quantum computing.
Industrial Fault-Tolerance
Scalability threshold refers to the error rate and hardware capabilities needed to scale from experimental demonstrations (tens of logical qubits) to practical applications (millions of logical qubits).
Requirements
For true scalability, I believe we need:
- Physical error rates at least 10\(\times\) below threshold (not just below threshold): \(p \lesssim 0.1 \times p_t\)
- Millions of qubits with uniform error rates
- Reliable qubit initialization and readout
- Fast, reliable inter-qubit connectivity
- Space overhead \(\lesssim 1000\) for practical applications
Most platforms are still working toward the first requirement; achieving all simultaneously is the grand challenge.
Comparison and Relationships Between Definitions
Let me step back and show how all these definitions relate to each other.
| Definition | Primary Focus | When Used |
|---|---|---|
| FTQC/Scalability | System-level capability | Overall architecture design |
| Logical \(p_L < p_{\text{phys}}\) | Single metric achievement | Experimental milestone |
| Error Suppression | Trend with distance | Performance characterization |
| Threshold Theorem | Theoretical foundation | Proof of principle |
| Pseudothreshold | Practical estimate | Hardware development |
| Error Propagation | Circuit design principle | Gate implementation |
| Code Distance | Error-correction capacity | Code design |
| Concatenation | Recursive protection | High-precision requirements |
| Universal Gate Set | Computational completeness | Algorithm capability |
| Qubit Overhead | Resource efficiency | Engineering optimization |
| Topological FT | Intrinsic robustness | Code family selection |
| Measurement-Free | Latency reduction | Platform-specific optimization |
| Code Switching | Flexibility | Modular systems |
| Scalability Threshold | Industrial deployment | Long-term roadmaps |
A Unified Framework
Now let me try to synthesize all these definitions into a unified framework for how I think about fault-tolerance.
The Hierarchy of Fault-Tolerance
I think of fault-tolerance as having multiple levels:
- Local fault-tolerance: Single errors don't cascade. (Definition 6)
- Code-level fault-tolerance: A single error per code block. (Definition 6)
- Error suppression: Logical errors decrease with code distance. (Definition 3)
- Below-threshold: Exponential error suppression below \(p_t\). (Definitions 3, 4)
- Break-even: Logical error rate beats physical error rate. (Definition 2)
- Practical scalability: Overhead reasonable for applications. (Definition 10)
- Industrial scalability: Millions of logical qubits, sufficiently long computation. (Definition 15)
Each level builds on the previous one. You can't have break-even without below-threshold operation. You can't scale without break-even.
Flowchart of Fault-Tolerance Development
- Design an error-correcting code
- Implement fault-tolerant circuits (prevent error propagation)
- Measure the logical error rate
  - If \(p_L > p_{\text{phys}}\): go back and optimize the circuits or the code
  - If \(p_L < p_{\text{phys}}\): proceed
- Increase the code distance and verify error suppression
  - If \(p_L\) decreases with distance: you're below threshold!
  - If not: improve physical error rates or the code design
- Demonstrate recursive codes (concatenation levels)
- Calculate the resource overhead for the target application
- Design a complete fault-tolerant universal gate set
- Test on scaled systems (more qubits, deeper circuits)
Recent Experimental Demonstrations
To ground these definitions in reality, let me briefly review some key recent experiments.
Google's Below-Threshold Surface Code (2024)
Google demonstrated a distance-7 surface code with logical error per cycle of \(0.143\% \pm 0.003\%\) and clear exponential error suppression with error suppression factor \(\Lambda = 2.14 \pm 0.02\) per two-level increase in distance.
This validates:
- Error suppression (Definition 3)
- Below-threshold operation (Definition 4)
Quantinuum's Universal Fault-Tolerant Gate Set (2025)
Quantinuum demonstrated the first universal, fully fault-tolerant quantum gate set with repeatable error correction, achieving logical error rates for non-Clifford gates of \(2.3 \times 10^{-4}\), well below the physical gate error of \(1 \times 10^{-3}\).
This validates:
- Break-even for non-Clifford gates (Definition 2)
- Universal gate set implementation (Definition 9)
- Code switching effectiveness (Definition 13)
Topological Qubit Demonstration (2024)
Quantinuum, Harvard, and Caltech demonstrated the first true topological qubit using a \(\mathbb{Z}_3\) toric code, manipulating non-Abelian anyons to protect quantum information.
This validates:
- Topological fault-tolerance principles (Definition 11)
- Intrinsic error protection via topology
Recent Concatenated Code Results (2025)
Yoshida et al. demonstrated that optimized concatenated codes achieve 90% overhead reduction compared to surface codes while maintaining the same error rates.
This validates:
- Concatenated code efficiency (Definition 8)
- Practical scalability improvements (Definition 10)
Open Challenges and Future Directions
I believe several important challenges remain:
Challenge 1: Closing the Error Rate Gap
We need to improve physical error rates further. For the surface code, practical thresholds are \(\sim 1\%\), but we typically operate at \(\sim 10^{-3}\). We need closer to \(\sim 10^{-4}\) or lower for practical utility.
Challenge 2: Low-Overhead Scalable Codes
While concatenated codes and LDPC codes show promise, we need codes that simultaneously achieve:
- High thresholds (\(>1\%\))
- Low space overhead (\(<100\) per logical qubit)
- Efficient gates (low circuit depth)
- Locality constraints matching real hardware
Challenge 3: Unified Framework Across Platforms
Different quantum platforms (superconducting, trapped ion, neutral atom, photonic) have different error profiles and constraints. We need fault-tolerant protocols optimized for each.
Recent progress:
- Measurement-free FT (Definition 12) helps trapped ions and atoms
- Hardware-specific optimizations for each platform
- Platform-agnostic theoretical frameworks
Challenge 4: Beyond Demonstrations
Moving from demonstrations on 100-qubit systems to industrial-scale systems with millions of logical qubits represents perhaps the greatest engineering challenge in quantum computing.
Conclusion
When I set out to understand the various uses of “fault-tolerance” in quantum computing, I discovered that rather than conflicting definitions, they form a comprehensive framework. Each definition captures an important aspect of the same goal: building quantum computers that work reliably at scale.
To summarize:
- Broadest: FTQC as system-level scalability (Definition 1)
- Most concrete: Logical error below physical error (Definition 2)
- Most important experimentally: Error suppression below threshold (Definition 3)
- Most foundational theoretically: Threshold theorem (Definition 4)
- Most practical: Pseudothreshold for hardware design (Definition 14)
The field has made remarkable progress. We've moved from theoretical proofs-of-concept to actual experimental demonstrations of below-threshold operation, universal gate sets, and topological qubits. Yet significant challenges remain.
In my view, the next decade will be crucial. We must:
- Improve physical error rates by another order of magnitude
- Develop highly efficient, scalable codes
- Demonstrate multi-level concatenation
- Build integrated systems with reliable qubit initialization and readout
- Scale from experiments to industrial systems
The diversity of definitions of “fault-tolerance” reflects the richness and complexity of the problem. Each perspective—scalability, error suppression, threshold, overhead, universality—provides essential insight. As I work on quantum error correction, I keep all these definitions in mind, because they all matter for the ultimate goal: fault-tolerant quantum computers that can solve real problems at scale.
References
- Aharonov, D., & Ben-Or, M. (2008). Fault-Tolerant Quantum Computation with Constant Error Rate. SIAM Journal on Computing, 38(4), 1207--1282.
- Knill, E., Laflamme, R., & Zurek, W. H. (1998). Resilient quantum computation. Science, 279(5349), 342--345.
- Kitaev, A. Y. (2003). Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1), 2--30.
- Google Quantum AI and Collaborators (2024). Quantum error correction below the surface code threshold. Nature.
- Quantinuum (2025). Quantinuum overcomes last major hurdle to deliver scalable universal fault-tolerant quantum computing. arXiv preprint.
- Bravyi, S., Cross, A. W., Gambetta, J. M., Maslov, D., Rall, P., & Yoder, T. J. (2024). High-threshold and low-overhead fault-tolerant quantum memory. Nature, 627, 778--782.
- IBM Quantum Learning. Controlling error propagation. IBM Quantum Cloud.
- Gottesman, D. (2009). An introduction to quantum error correction and fault-tolerant quantum computation. arXiv preprint arXiv:0904.2557.
- Shor, P. W. (1996). Fault-tolerant quantum computation. Proceedings of the 37th Annual Symposium on Foundations of Computer Science.
- Postler, L., et al. (2024). Demonstration of Fault-Tolerant Steane Quantum Error Correction. PRX Quantum, 5, 040326.
- Error Correction Zoo. https://errorcorrectionzoo.org.
- Yoshida, S., et al. (2025). Concatenate codes, save qubits. npj Quantum Information, 11, 88.
- PostQuantum (2025). Experimental Quantum Error Correction Below Threshold. Research Report.
- QBlox (2025). The quantum leap that needs error correction. Blog.
- Institute of Science Tokyo (2025). Scalable and efficient quantum error correction. Research Publication.
- Dennis, E., Kitaev, A., Landahl, A., & Preskill, J. (2002). Topological quantum memory. Journal of Mathematical Physics, 43(9), 4452--4505.
- Aasen, D., et al. (2025). A Topologically Fault-Tolerant Quantum Computer with Four Dimensional Geometric Codes. arXiv preprint.
- Müller, M., et al. (2025). Measurement-free, scalable, and fault-tolerant universal quantum computing. Science Advances.
- Svore, K. M., Cross, A. W., Chuang, I. L., & Aho, A. V. (2005). A flow-map model for analyzing pseudothresholds in fault-tolerant quantum computing. Quantum Information and Computation, 6(3), 193--212.