The Dirty Secret Nobody Talks About
Imagine being handed the keys to a quantum computing company. Your first instinct? Let's just build one MASSIVE quantum computer. Pack a million qubits into one box, refrigerate it to near absolute zero, and boom—quantum supremacy.
Sorry. That's not happening.
And it's not because we lack ambition or funding. It's because you've just bumped into a wall that physics and engineering throw up together, and it's a spectacular wall.
Part 1: Why We Can't Build a Single Giant Quantum Computer
Let me paint the picture of what goes wrong when you try to scale a monolithic (all-in-one) quantum computer to the size needed for real-world applications.
The Coherence Time Nightmare
First, there's the uncomfortable truth: qubits are incredibly fragile. Superconducting qubits, the kind IBM and Google build, hold their quantum state for somewhere between roughly one and a hundred microseconds. Your eye blinks in about 100 milliseconds. Even the best of these qubits is a thousand times more impatient than your blink; the worst are a hundred thousand times more impatient.
Meanwhile, you need to perform thousands upon thousands of quantum operations to encode error correction, stabilize logical qubits, and actually run useful algorithms. Each operation takes time, and every moment a qubit sits idle it leaks coherence to environmental noise: stray electromagnetic fields, thermal fluctuations, cosmic rays, even the heat from your own control electronics.
It's like trying to perform surgery on a patient who wakes up every microsecond. You'd better work fast.
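If you want to feel the squeeze, here's the back-of-the-envelope version (the coherence and gate times below are illustrative values from the ranges quoted later in this post):

```python
# Rough "operations budget": how many sequential gates fit inside one
# coherence window? Illustrative numbers only -- real devices vary widely.
t_coherence = 100e-6   # 100 microseconds of coherence (an optimistic transmon)
t_gate = 50e-9         # 50 nanoseconds per two-qubit gate

budget = t_coherence / t_gate
print(f"~{budget:,.0f} sequential gates before decoherence takes over")
# ~2,000 gates -- and a single error-corrected logical operation
# can eat a big chunk of that.
```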
The Control System Problem: Wiring Hell
Now imagine you've packed 1 million superconducting qubits into one device. Each qubit needs individual control lines. Each needs carefully calibrated microwave pulses delivered at precise timing. Each requires readout electronics. Some need to interact with neighbors, which means couplers, which need tuning.
At current densities, this is physically impossible. The wiring alone becomes a nightmare: you can't route a million control lines into a dilution refrigerator sitting a few hundredths of a degree above absolute zero. You'd have a bundle of cables as thick as a tree trunk. Crosstalk between signals becomes uncontrollable, and the heat load from controlling that many qubits breaks your refrigeration.
And here's the kicker: complexity doesn't grow linearly with qubit count. Interference effects that were negligible at 100 qubits become catastrophic at 1,000. IBM learned this the hard way with its 1,121-qubit Condor chip, which suffered severe crosstalk between components.
The Manufacturing Precision Problem
To reach the logical error rates needed for fault-tolerant algorithms (below roughly \(10^{-9}\)), you need to fabricate qubits with near-identical properties. But here's reality: no two superconducting qubits are exactly alike. They require individual tuning and calibration.
At small scales (a few hundred qubits), engineers manually tune each one. At a million qubits, this becomes impossible. You'd need to tune the first qubit, then the second, then... 999,998 more. At even a few seconds per qubit, you're looking at months of calibration work.
Worse: environmental drifts mean that once you calibrate the last qubit, the first one has drifted out of spec. It's an arms race you can't win with one monolithic device.
The Software Bottleneck
Quantum algorithms, as written on paper, assume perfect qubits. In practice they have to be rewritten to work with noisy, imperfect hardware. You need noise-aware algorithms or hybrid approaches that offload work to classical systems. This introduces latency, communication overhead, and synchronization nightmares.
At scale, coordinating a million qubits through a single classical control system becomes a bottleneck. Your quantum processor becomes limited not by quantum physics, but by how fast your classical computers can send commands.
The Fundamental Physics Problem
There's also an architectural limitation baked into the surface code—the most practical error correction code we have. Surface codes rely on local interactions—nearest-neighbor qubit couplings.
In a monolithic system, you arrange qubits on a 2D grid. But as the grid grows, the surface code distance (which determines error suppression) grows slowly. To encode many logical qubits in one monolithic device, you need an enormous physical grid. This amplifies all the previous problems: more qubits, more control lines, more crosstalk, harder to manufacture.
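To put rough numbers on that growth: a rotated surface code needs on the order of \(2d^2 - 1\) physical qubits per logical qubit at distance \(d\) (the exact constant depends on layout). A quick sketch:

```python
# Physical-qubit footprint of a monolithic surface-code device.
# Uses the common rotated-surface-code estimate of ~2*d^2 - 1 physical
# qubits (data + measurement) per logical qubit; constants vary by layout.
def physical_qubits(logical_qubits: int, distance: int) -> int:
    return logical_qubits * (2 * distance**2 - 1)

for d in (8, 16, 32):
    print(f"d={d:2d}: 100 logical qubits -> {physical_qubits(100, d):,} physical")
# d= 8: 100 logical qubits -> 12,700 physical
# d=16: 100 logical qubits -> 51,100 physical
# d=32: 100 logical qubits -> 204,700 physical
```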
Part 2: How Modularity Is the Natural Path Forward
Instead of fighting these physical laws, what if you embraced them? What if you said: "Okay, physics. You're telling me I can't build one giant quantum computer. Fine. I'll build many small quantum computers that talk to each other."
This is distributed quantum computing. And it's not a compromise—it's a liberation.
The Network Paradigm Shift
Here's the beautiful insight: quantum computers don't need to be physically close to be useful together. Thanks to quantum entanglement and quantum teleportation, you can link distant quantum processors via quantum networks.
This is exactly how the classical internet works. We don't have one gigantic supercomputer in one building serving the whole world. We have millions of computers distributed globally, connected via fiber optic networks. Why should quantum be different?
A modular quantum computer consists of the following pieces (sketched in code right after this list):
- Multiple independent quantum modules—each a small quantum processor
- Quantum communication links between modules using entangled photons
- Classical control systems coordinating the modules
- Error correction spread across the entire distributed system
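Here's a minimal sketch of those pieces as a data model. The names and fields are illustrative, not any vendor's API:

```python
# Illustrative data model of a distributed quantum computer.
# Names and fields are hypothetical -- a sketch, not a real framework.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    data_qubits: int            # qubits storing logical information
    comm_qubits: int            # qubits reserved for inter-module links

@dataclass
class QuantumLink:
    a: str                      # module names at each end
    b: str
    bell_pair_fidelity: float   # quality of the shared entanglement
    success_probability: float  # chance each generation attempt succeeds

@dataclass
class DistributedQC:
    modules: list[Module] = field(default_factory=list)
    links: list[QuantumLink] = field(default_factory=list)

    def total_data_qubits(self) -> int:
        return sum(m.data_qubits for m in self.modules)

# Four small modules linked in a ring.
machine = DistributedQC(
    modules=[Module(f"m{i}", data_qubits=1, comm_qubits=1) for i in range(4)],
    links=[QuantumLink(f"m{i}", f"m{(i + 1) % 4}", 0.99, 0.02) for i in range(4)],
)
print(machine.total_data_qubits())   # 4
```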
Why This Solves the Problems
Coherence Time: Each module is smaller, so qubits spend less time idle waiting for distant operations. Errors from decoherence are more localized.
Control System Simplicity: Instead of a million control lines converging on one device, you have fewer control lines per module plus simple classical communication between modules. This is orchestration instead of monolithic control.
Manufacturing: You build identical, small modules repeatedly. Quality control is easier. If a module has defects, you replace one module, not retool an entire million-qubit device.
Error Correction Efficiency: By distributing error correction across modules, you can isolate errors locally, preventing them from spreading globally.
Scalability: Want more computational power? Add more modules. Want to upgrade? Swap out old modules for new ones. It's like building with LEGO instead of carving from marble.
The Real-World Validation
This isn't theoretical. Researchers at Oxford have already linked trapped-ion modules using photonic quantum networks and run Grover's search algorithm across them. This proves distributed quantum computing works in practice, not just theory.
IBM's roadmap for 2029 includes their Quantum Starling system: 200 logical qubits built from multiple modular components, each handling different functions (synthesis, refinement, processing). Not one giant chip. Many coordinated pieces.
This isn't the future. It's being built right now.
Part 3: How Far Can We Take Modularity? The Granularity Question
So modularity works. But here's an important question: How modular should we go? How many qubits per module? One? Ten? A thousand?
This is the granularity problem, and it's where things get deliciously complex.
The Spectrum of Modularity
Ultra-Fine Modularity: One Data Qubit Per Module
One configuration, the Weight-4 (WT4) architecture, takes this to the extreme. Each module houses:
- 1 communication qubit for linking to other modules via photons
- 1 data qubit for storing quantum information
- A few auxiliary qubits for error correction measurements
This requires generating 4-qubit GHZ states across four different modules to measure stabilizers.
Advantages: Minimal local crosstalk, each module is extremely simple, easy to manufacture and scale
Disadvantages: Requires very high-fidelity distributed GHZ state generation, more modules needed overall, more communication overhead between modules
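To see why a shared GHZ state is the right resource, here's a toy numpy check: the 4-qubit GHZ state is a +1 eigenstate of the joint \(X \otimes X \otimes X \otimes X\) operator, which is exactly the kind of weight-4 parity a surface-code stabilizer needs to extract across four separate modules. (A conceptual check, not the actual measurement circuit from the papers.)

```python
import numpy as np

# |GHZ_4> = (|0000> + |1111>) / sqrt(2)
ghz = np.zeros(16)
ghz[0b0000] = ghz[0b1111] = 1 / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

XXXX = kron_all([X, X, X, X])
ZZII = kron_all([Z, Z, np.eye(2), np.eye(2)])

# GHZ is a +1 eigenstate of XXXX and of every pairwise ZZ parity:
print(np.allclose(XXXX @ ghz, ghz))   # True
print(np.allclose(ZZII @ ghz, ghz))   # True
```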
Fine Modularity: Two Data Qubits Per Module
The Weight-3 architecture (WT3) uses a different approach. Each module has:
- 1 communication qubit
- 2 data qubits
- Auxiliary qubits
This requires only 3-qubit GHZ states, which are easier to generate than 4-qubit ones. But it roughly doubles the QEC cycle time (8 sub-rounds instead of 4), and because two data qubits now share a module, local operations can spread errors between them inside the module itself.
Advantages: Fewer qubits needed overall (halves module count for same distance), easier GHZ generation (3-qubit < 4-qubit fidelity requirements)
Disadvantages: Hook errors (local operations create weight-2 errors within module), longer QEC cycles (8 sub-rounds instead of 4), slightly worse performance (5% worse logical error rates)
Moderate Modularity: 10-30 Qubits Per Module
Trapped-ion systems using the QCCD architecture typically use small ion traps with 2-30 ions. Recent research suggests 2-qubit traps are surprisingly optimal because they minimize control complexity while maintaining low error rates. Larger traps (20-30 ions) were thought ideal, but actually create worse scaling due to gate fidelity degradation and communication overhead.
Coarse Modularity: 100-1,000 Qubits Per Module
Neutral atom systems naturally fall here. Arrays of 100-1,000 atoms can be manipulated as a unit using optical tweezers. These aren't typically described as modules yet, but represent a middle ground—large enough to run meaningful computation, yet still maintainable as independent units.
Loose Modularity: 1,000+ Qubits Per Module
At this scale, you're approaching monolithic again. Individual modules can run substantial algorithms independently, but link together for larger problems.
The Literature Landscape
| Platform | Granularity | Approach | Current Status |
|---|---|---|---|
| Superconducting (IBM, Google) | 100-400 qubits/chip | Modular chips linked | Prototyping 2025-2026 |
| Trapped Ions (IonQ) | 1 monolithic trap (100 ions) | Moving to QCCD modules (2 ions/trap) | Transitioning now |
| Trapped Ions (Quantinuum) | QCCD with small traps | Grid of traps (2-10 ions each) | Deploying 2026-2027 |
| Neutral Atoms | Reconfigurable arrays 100-1000 | Single-array modules | Operational now |
| Photonics | Cluster state modules | Multiple QPU cells networked | Prototyping 2024-2025 |
| Color Centers | 1 data qubit/module (WT4) or 2/module (WT3) | Fully distributed architecture | Simulation & early experiments |
What Research Shows
My recent papers rigorously compared WT4 vs. WT3. The verdict: both work, but with trade-offs:
- WT4 has 5% better logical error rates but requires more modules
- WT3 needs fewer modules but suffers from hook errors and longer cycles
- Choice depends on whether you're limited by module count or coherence time
Trapped-ion research suggests two-ion traps are optimal—fine enough to avoid monolithic problems, coarse enough to keep control simple.
Neutral atoms suggest 100-1,000 qubit arrays work well—the platform's native all-to-all connectivity makes larger modules practical.
No single answer exists. The right granularity depends on:
- Your hardware platform (superconducting vs. trapped ions vs. photonic vs. color centers)
- Your native connectivity (all-to-all vs. nearest-neighbor)
- Your error rates (better error rates tolerate coarser modules)
- Your communication fidelity (better links tolerate finer modules)
- Your manufacturing precision (easier manufacturing tolerates coarser initial modules)
Part 4: What Determines Success for Modular Systems?
You can have a beautiful modular architecture on paper. But what actually determines whether it works in practice?
The Error Threshold: The Magic Number
For error correction to suppress errors rather than amplify them, your physical error rate must be below a threshold. For surface codes, this is roughly 0.5% in monolithic systems.
Here's the critical finding from recent research: Distributed systems can achieve 0.3-0.4% thresholds with current hardware parameters, approaching the monolithic limit with modest improvements.
But this depends on several factors:
Factor 1: Entanglement Generation Scheme
My paper compares three approaches to generating GHZ states between modules:
Emission-Based (EM) Schemes
- Generate Bell pairs between modules, then fuse them into GHZ states
- Success probability: ~\(10^{-4}\) (one success per 10,000 attempts)
- Advantages: Well-understood, uses photon technologies
- Disadvantages: Slow (requires multiple fusion steps), noisy (each fusion adds errors)
- Threshold: ~0.13% (only achievable with future improvements)
Reflection (RFL) Scheme
- A photon acts as a flying qubit, scattering sequentially off each module
- Success probability: 1-4% (much higher!)
- Fidelity: High (99.9%)
- Advantages: Fast, high-fidelity, higher success rates
- Disadvantages: Requires optical circulators and delay lines
- Threshold: 0.32-0.35% (achievable with current hardware)
Carving (CAR) Scheme
- Uses spin-dependent reflection in cavities or waveguides
- Success probability: 1/16 to 1/32 (in the same ballpark as RFL, and far higher than EM)
- Advantages: Integrable photonics, path to on-chip implementation
- Disadvantages: Probabilistic, requires feedback
- Threshold: 0.01-0.40% (highly sensitive to hardware quality)
The Winner: Scattering-based schemes (RFL and CAR) dramatically outperform emission-based schemes because they're faster and have higher success rates, reducing decoherence during waiting times.
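A quick way to see why success probability dominates: the expected number of attempts per GHZ state is \(1/P_{\text{succ}}\), so the waiting time (and the decoherence accumulated while waiting) scales inversely with it. The sketch below uses the success probabilities quoted above; the per-attempt duration is an assumed, illustrative figure, not a number from the papers:

```python
# Expected entanglement-generation latency for the three GHZ schemes.
# The per-attempt duration is an assumed, illustrative figure.
t_attempt = 1e-6  # seconds per attempt (assumption, not from the papers)

schemes = {
    "emission (EM)":    1e-4,    # ~1 success per 10,000 attempts
    "reflection (RFL)": 0.02,    # 1-4% quoted above; midpoint-ish
    "carving (CAR)":    1 / 24,  # between 1/16 and 1/32
}

for name, p_succ in schemes.items():
    attempts = 1 / p_succ
    wait_us = attempts * t_attempt * 1e6
    print(f"{name:17s} ~{attempts:8.0f} attempts, ~{wait_us:8.0f} us per GHZ state")
# The EM scheme waits orders of magnitude longer than the scattering
# schemes, and that waiting is exactly where the extra decoherence comes from.
```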
Factor 2: Coherence Times
My paper identifies a critical boundary: \(T_{\text{coherence}} / T_{\text{operation}} < 10^4\) means no threshold exists.
This means that if a qubit can't stay coherent for at least about 10,000 operation times, error correction simply doesn't work. It's like trying to paint a canvas while it's dissolving.
Current hardware:
- Superconducting qubits: 1-100 microseconds coherence, 10-100 nanoseconds per operation (ratio \(10^4\) to \(10^6\))
- Trapped ions: 1-100 milliseconds coherence, 1-10 microseconds per operation (ratio \(10^5\) to \(10^7\))
- Color centers (SiV): 1-10 milliseconds coherence, 100 nanoseconds per operation (ratio \(10^7\) to \(10^8\))
This is why color centers like silicon vacancy in diamond are so promising for modular systems—they give you a much larger coherence window to work within.
Factor 3: GHZ State Fidelity and Success Probability
My paper identifies that you need:
- GHZ state fidelity ≥99% (ideally 99.5%)
- Success probability \(P_{\text{succ}} > 10^{-4}\) (at least 1 per 10,000 attempts)
Why? Lower fidelity means errors creep into every stabilizer measurement. Lower success rates mean longer waiting times, more decoherence, more errors.
Scattering-based schemes achieve this. Emission-based schemes struggle to hit these marks with current parameters.
Factor 4: Photon Efficiency
The effective photon detection efficiency \(\eta_{\text{ph}}\) affects everything. It determines:
- How many attempts needed to generate Bell pairs
- How long you wait before giving up (cut-off time)
- How much decoherence accumulates during attempts
Improving \(\eta_{\text{ph}}\) from 4.6% (near-term) to 44.7% (future parameters) transforms an unworkable system into a usable one.
This is why integrated photonics (on-chip waveguides and detectors) is such a big deal—it improves photon efficiency dramatically.
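A rough illustration of why \(\eta_{\text{ph}}\) matters so much: in a standard two-photon ("double-click") heralding protocol, the success probability per attempt scales roughly as \(\eta_{\text{ph}}^2/2\), so the number of attempts blows up quadratically as efficiency drops. The numbers below use that textbook scaling, not the exact model from the papers:

```python
# How photon efficiency translates into Bell-pair attempts, assuming a
# two-photon (double-click) heralding scheme with p_succ ~ eta^2 / 2.
for eta in (0.046, 0.447):
    p_succ = eta**2 / 2
    print(f"eta = {eta:5.1%}: p_succ = {p_succ:.2%}, "
          f"~{1 / p_succ:,.0f} attempts per Bell pair on average")
# eta =  4.6%: p_succ = 0.11%, ~945 attempts per Bell pair on average
# eta = 44.7%: p_succ = 9.99%, ~10 attempts per Bell pair on average
```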
Factor 5: Distributed Error Correction
Classic surface codes assume nearest-neighbor interactions. In distributed systems, stabilizer measurements happen across multiple time steps (sub-rounds) because modules aren't fully connected.
This creates temporal errors—mistakes can happen between sub-rounds.
Modern decoders like weighted-growth Union-Find handle this by building a 3D syndrome graph (2D space + 1D time) and detecting errors across both dimensions simultaneously.
Weight-3 architecture suffers more here because its longer QEC cycles (8 sub-rounds vs. 4 for WT4) give more opportunities for temporal errors.
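To make the 3D syndrome graph concrete, here's a toy version of the clustering step: treat each detection event as a node at coordinates \((x, y, t)\) and merge events that are adjacent in space or time with union-find, so an error chain that straddles two sub-rounds still lands in one cluster. This is only the skeleton (no weighted growth, no boundary matching):

```python
# Toy union-find clustering of syndrome events on a 3D (x, y, t) graph.
# Skeleton only: real decoders add weighted growth, boundaries, and matching.
from itertools import combinations

parent = {}

def find(v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]   # path compression
        v = parent[v]
    return v

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# Detection events: (x, y, t). The last two differ only in time,
# e.g. an error that occurred between two sub-rounds.
events = [(0, 0, 0), (1, 0, 0), (4, 4, 1), (4, 4, 2)]
for e in events:
    parent[e] = e

# Merge events that are nearest neighbours in space *or* time.
for a, b in combinations(events, 2):
    if sum(abs(x - y) for x, y in zip(a, b)) == 1:
        union(a, b)

clusters = {}
for e in events:
    clusters.setdefault(find(e), []).append(e)
print(list(clusters.values()))
# [[(0, 0, 0), (1, 0, 0)], [(4, 4, 1), (4, 4, 2)]]
```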
The Practical Threshold: Breaking Even
Here's the question nobody asks: What code distance do I actually need before error correction starts paying off?
Research finds:
- Near-term parameters: Smallest break-even is d=8 (requires ~10 million physical qubits for 1 logical qubit with overhead). This is achievable but not yet practical for computation.
- Future parameters: d=6 gives break-even. This is getting into range where you could run meaningful algorithms.
Compare to monolithic: the same code distance is needed. Going modular doesn't demand a bigger code, just different engineering.
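The break-even logic itself is easy to sketch with the standard suppression heuristic \(p_L \approx A\,(p/p_{\text{th}})^{(d+1)/2}\): find the smallest distance where the logical error rate drops below the physical one. The constants below are illustrative placeholders, not the fitted values from the papers:

```python
# Smallest "break-even" code distance under the standard suppression
# heuristic p_L ~ A * (p / p_th)^((d+1)/2). A, p, p_th are placeholders.
def logical_error_rate(p, p_th, d, A=0.1):
    return A * (p / p_th) ** ((d + 1) / 2)

def break_even_distance(p, p_th, max_d=25):
    for d in range(3, max_d + 1):
        if logical_error_rate(p, p_th, d) < p:   # logical beats physical
            return d
    return None

print(break_even_distance(p=2e-3, p_th=4e-3))  # noisier hardware  -> 11
print(break_even_distance(p=1e-3, p_th=4e-3))  # improved hardware -> 6
```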
Part 5: The Industrial Landscape—Who's Building Modular Quantum Computers?
This isn't theoretical anymore. The quantum computing industry has bet its entire future on modularity.
The Big Tech Giants
IBM
Strategy: Modular superconducting chips linked together
Timeline:
- 2025: Loon (architectural elements)
- 2026: Kookaburra (first fault-tolerant module, 100 physical qubits)
- 2029: Starling (200 logical qubits, 100 million quantum operations)
Secret Sauce: qLDPC codes instead of surface codes—90% reduction in qubit overhead. But qLDPC requires longer-range connections between qubits, which modularity enables.
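To get a feel for where that 90% comes from: IBM's bivariate-bicycle "gross" code packs 12 logical qubits into 144 data qubits (plus roughly as many check qubits), while a surface code at comparable distance needs its own ~\(2d^2\) patch for every logical qubit. A rough, layout-agnostic comparison:

```python
# Rough qubit-overhead comparison: surface code vs. IBM's [[144,12,12]]
# bivariate-bicycle ("gross") qLDPC code. Counts are approximate.
logical_needed = 12
distance = 12

surface = logical_needed * (2 * distance**2)   # ~2*d^2 physical per logical
qldpc = 144 + 144                              # 144 data + ~144 check qubits

print(f"surface code: ~{surface:,} physical qubits")   # ~3,456
print(f"gross code:   ~{qldpc:,} physical qubits")     # 288
print(f"reduction:    ~{1 - qldpc / surface:.0%}")     # ~92%
```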
Why it matters: IBM explicitly states scaling may require complete rethinking of quantum architecture. Modularity is that rethinking.
Google
Strategy: Modular superconducting chips, though less transparent about its architecture
Timeline: Targeting fault-tolerant by 2030
Challenge: Component costs, manufacturing precision
Approach: Less detail than IBM, but hints at modular chiplets linked together
Trapped Ion Leaders
IonQ
Strategy: Transitioning from monolithic to QCCD (Quantum Charge-Coupled Device) modular architecture
Innovation: Partnering with imec to develop chip-scale photonic integrated circuits (PICs) for laser delivery and photon collection
Why it matters: Current systems rely on bulky free-space optics. Integrated photonics dramatically reduces size and cost and enables better scaling, turning trapped ions from hard-to-scale into a practical modular platform.
Quantinuum
Strategy: QCCD with grid of small traps
Timeline: 192 qubits by 2027 vs. 100 today
Approach: Hybrid between monolithic and distributed—multiple traps in same device, but decoupled for modularity
Neutral Atom Platforms
QuEra, Pasqal, Atom Computing
Strategy: Reconfigurable neutral atom arrays with optical tweezers
Why modularity matters: Arrays naturally separate into independent cells that can function as modules. Optical tweezers rearrange qubits as needed.
Advantage: Room-temperature operation, high scalability (100-1,000 qubits feasible)
Trajectory: Already at 100-1,000 qubits, aiming for 10,000-100,000
Note: Neutral atoms don't explicitly call it "modular," but their architecture IS distributed—multiple qubits work together seamlessly without requiring inter-module links initially.
Photonic Quantum Computing
Xanadu (Canada)
Strategy: Modular cluster states with photonic QPU arrays
Demonstration: Aurora system (2024) with multiple QPU cells networked via fiber buffers
Advantage: Photonics naturally scales by adding more integrated circuit chips
Approach: Photon source → refinery (Bell pair generation) → QPU array (cluster state formation and measurement)
Color Center Networks
QuTech (Delft), Harvard, Caltech, Mainz
Strategy: Distributed color center nodes linked via quantum networks
Technology: Silicon vacancy (SiV) or other group-IV centers in diamond, networked via photonics
Recent Win: Researchers have entangled nanophotonic quantum memory nodes through roughly 40 km of deployed telecom fiber in the Boston area, with further network deployments underway
Why it matters: Proves you can connect distant quantum processors via telecom-band fiber—the backbone of future quantum internet.
Cloud Quantum Platforms
AWS Braket, Microsoft Azure Quantum
Strategy: Agnostic approach—offer access to multiple hardware providers' systems
Modularity Role: As different modalities mature (superconducting, trapped ions, photonic, neutral atoms), cloud platforms will orchestrate jobs across heterogeneous systems.
Example: You could send part of a calculation to an IonQ trapped-ion system, part to a neutral-atom system, coordinate through the cloud.
The Unspoken Race
Here's what's fascinating: companies aren't competing on who has the most qubits anymore. Google went from 53 qubits (2019) to 105 (2025). That's barely 2× in 6 years.
Meanwhile, neutral-atom arrays are already in the thousands of qubits and pushing toward 10,000.
The difference? Architecture. Neutral atoms scale naturally because the platform enables modularity. Superconducting qubits hit a wall around 400-500 qubits in a single monolithic device.
So the race shifted. Now it's:
- Who can build the best modular architecture?
- Who can generate highest-fidelity entanglement between modules?
- Who can deploy distributed error correction?
IBM, IonQ, Quantinuum, Xanadu, QuEra—they're all playing this game now.
Part 6: The Spicy Future—Where This Goes
Let's zoom out for a moment.
Right now, we're at the network topology transition. We've proven that:
- Monolithic quantum computers hit hard physical limits around 400-500 qubits
- Modular systems don't hit these limits
- Thresholds for distributed systems are achievable with current/near-future hardware
- Industry is all-in on modularity
But here's what's coming that should make you excited (or scared, depending on your perspective):
The Quantum Internet
Distributed quantum computers aren't just about scaling computation. They're about building a quantum internet—a network where any quantum processor can entangle with any other, securely and reliably.
Once that exists, you don't ship data between quantum computers. You teleport quantum states. Computation becomes distributed not just within a company, but across geography. A bank in London could securely share quantum computation with a hospital in Tokyo without trusting either network.
This is harder than building one big quantum computer. But it's also orders of magnitude more powerful.
Heterogeneous Quantum Systems
The future probably isn't one type of quantum computer winning outright. It's different hardware types excelling at different problems.
A modular architecture could link:
- Superconducting qubits (for fast gates)
- Trapped ions (for long coherence times)
- Photonic systems (for long-distance links)
- Neutral atoms (for high qubit counts)
All orchestrated together via quantum networks. Best-of-breed for each function.
This is analogous to modern classical HPC centers—you have CPUs, GPUs, TPUs, all linked together. Same thing coming for quantum.
The Cost Curve
Right now, quantum computers are absurdly expensive. But modularity changes this.
It's cheaper to build 100 small identical systems than 1 giant monolithic system (due to manufacturing economies of scale). It's cheaper to upgrade (swap out old modules, not retool everything). It's easier to debug (if one module fails, fix that one).
Within 5 years, the cost-per-qubit for modular systems could be 10× lower than monolithic approaches.
When Modularity Wins For Sure: 2027-2030
Based on current industry roadmaps:
- 2026: First true fault-tolerant modules from IBM or from trapped-ion players like Quantinuum and IonQ
- 2027-2028: Multi-module systems running interesting problems
- 2029-2030: First systems with 50+ logical qubits through modularity
At that point, it's over. Monolithic is dead.
The Takeaway: Why You Should Care
If you're in quantum computing, you need to understand modularity not as a future vision, but as the only viable path forward. This isn't opinion—it's physics and engineering.
If you're an investor, modularity is the filter: teams betting on monolithic systems are betting against physics. Teams betting on modularity are betting with physics.
If you're a researcher, modularity opens new questions: How do we optimize distributed error correction? How do we generate high-fidelity entanglement at scale? How do we orchestrate quantum networks? These are hard problems, but they're solvable problems.
And if you're just fascinated by quantum computing: this is the transformation happening right now. The move from one giant quantum computer to a quantum internet is as fundamental as the move from mainframes to personal computers to the cloud.
The difference? We're doing it for quantum, and the quantum version is even more disruptive.
Further Reading
The concepts and benchmark data presented here are drawn from recent research comparing modular quantum architectures across multiple platforms. For detailed hardware specifications, threshold analysis, and entanglement generation scheme comparisons, refer to:
- Singh, S., Kashiwagi, R., Tanji, K., Roga, W., Bhatti, D., Takeoka, M., & Elkouss, D. (2026). Fault-tolerant modular quantum computing with surface codes using single-shot emission-based hardware. arXiv preprint 2601.07241v1. Comprehensive analysis of emission-based GHZ generation schemes, thresholds for distributed surface codes, and hardware feasibility for single-shot protocols.
- Singh, S., Gu, F., de Bone, S., et al. (2025). Modular Architectures and Entanglement Schemes for Error-Corrected Distributed Quantum Computation. arXiv preprint 2408.02837v2. Detailed comparisons of WT4 vs. WT3 architectures, GHZ generation schemes (emission, reflection, carving), and threshold analysis across platforms.
- IBM Quantum. (2025). IBM's Modular, Scalable Full-Stack Quantum Roadmap. Technical roadmap detailing Loon, Kookaburra, and Starling modular quantum systems with qLDPC code integration and fault-tolerance milestones.
- IonQ & imec Partnership. (2024). Photonic Integrated Circuits for Trapped-Ion Quantum Computing. Research on chip-scale PICs for laser delivery and photon collection in modular ion trap architectures.
- QuTech, Harvard, Caltech, Mainz Collaboration. (2025). Entanglement of Nanophotonic Quantum Memory Nodes in a Telecom Network. Nature. Demonstration of distributed color center nodes linked via 40 km fiber networks for practical quantum internet applications.
- Quantinuum. (2024). QCCD Architecture: Scaling Trapped-Ion Quantum Computers. Technical overview of Quantum Charge-Coupled Device modular trap arrays and 192-qubit scaling roadmap.
- Xanadu. (2025). Scaling and Networking a Modular Photonic Quantum Computer. Nature. Photonic modular architecture with Aurora system demonstration and cluster state networking protocols.
- QuEra & Pasqal. (2026). Neutral-Atom Quantum Computing for HPC Centers. Reconfigurable array modularity with 100-1,000+ qubit platforms and optical tweezer orchestration.