Concise Constructions of Modular Architectures in Quantum Computing

From Monolithic Chips to Distributed Quantum Networks: Where Does the Boundary Lie? Inspired by Chandra, Kaur & Seshadreesan (arXiv:2511.13657).


From tightly coupled monolithic chips to loosely networked distributed modules — the modularity spectrum.

Introduction: Why I Cannot Stop Thinking About Modularity

If you have been anywhere near the quantum computing world lately, you have probably heard the word "modular" thrown around more than "entanglement" at a physics cocktail party. And for good reason. As someone working in quantum error correction, I find myself constantly bumping into the same uncomfortable truth: we simply cannot build a useful fault-tolerant quantum computer on a single chip. Not today, not with foreseeable technology.

The numbers are sobering. To run practical quantum algorithms, such as breaking RSA-2048 or simulating complex molecules, we need millions of physical qubits [1]. Current state-of-the-art processors top out around 100 to 150 qubits on a single chip. Google grew from 53 to 105 qubits over six years. IBM's best sits around 120 to 156 qubits. Even aggressive roadmaps only promise a few thousand qubits per chip within this decade.

So what do we do? We go modular. We build smaller, high-quality quantum modules, and we connect them together, much like snapping LEGO bricks to build a castle. This sounds deceptively simple, but the devil is in the details, and those details are what I want to explore in this blog post.

I was recently reading a paper by Chandra, Kaur, and Seshadreesan [2] that lays out a beautiful taxonomy of three architectural types for fault-tolerant distributed quantum computing. Their analysis of entanglement resource overheads across these types got me thinking: what exactly are the categories of modularity? How do we construct modular systems? And where is the boundary between "modular" and "monolithic"? Is it even a sharp boundary, or is it more of a spectrum?

Let me take you through my thinking.

The Monolithic Dream and Its Limits

Before we talk about modularity, let me briefly mourn the monolithic dream. In an ideal world, we would fabricate one giant chip with millions of perfectly identical qubits, all connected with pristine nearest-neighbor couplings, and call it a day. This is the "monolithic" approach, and it has served classical computing extraordinarily well for decades.

But quantum systems are not classical transistors. Here is why the monolithic approach breaks down:

  • Fabrication yield: As chip area increases, the probability of a fatal defect grows. For superconducting qubits, yield drops roughly exponentially with chip area. A chip with 1000 qubits might have acceptable yield; a chip with 100,000 qubits almost certainly will not.
  • Cryogenic constraints: Superconducting qubits operate at millikelvin temperatures inside dilution refrigerators, which have limited cooling power and physical volume. You cannot just keep making the chip bigger inside the same fridge.
  • Control complexity: Every qubit needs control lines, readout lines, and calibration. Wiring density scales terribly with qubit count on a single substrate.
  • Frequency crowding: On a large chip, qubit frequencies start to collide, leading to unwanted cross-talk and reduced fidelity.

The classical computing world faced analogous problems and solved them with chiplet architectures and multi-chip modules (MCMs). Interestingly, the quantum case has a silver lining that the classical case lacks: inter-chip quantum links can achieve fidelities much closer to intra-chip operations than their classical counterparts, because qubit footprints are large and communication is inherently nearest-neighbor anyway. This makes the modular penalty less severe in quantum systems than you might naively expect.

The Three Types: A Taxonomy from Chandra et al.

The paper that inspired this blog post [2] introduces a clean three-way classification of distributed quantum computing architectures. I find this taxonomy incredibly useful, so let me walk you through it with some intuition.

Type I: Small Modules, GHZ-Mediated Measurements

Imagine you have a bunch of tiny quantum nodes, each containing just a handful of qubits: one data qubit participating in the error-correcting code, plus a few memory and communication qubits. These nodes are connected optically, and they use multi-party GHZ (Greenberger–Horne–Zeilinger) states as nonlocal ancillae to perform stabilizer measurements.

Think of it this way: in a standard surface code, you measure a weight-4 stabilizer by using a local ancilla qubit that interacts with four neighboring data qubits. But if those four data qubits live on four different nodes, you cannot do this locally. Instead, you prepare a 4-qubit GHZ state shared across the four nodes, and each node interacts its share of the GHZ state with its local data qubit. The combined measurement outcomes reveal the stabilizer eigenvalue.
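The classical post-processing here is just parity bookkeeping: after each node entangles its GHZ share with its local data qubit and measures it, the product of the four ±1 outcomes reveals the stabilizer eigenvalue. A minimal sketch of that combining step (plain Python, illustrative only):

```python
def stabilizer_eigenvalue(local_outcomes):
    """Combine per-node measurement outcomes (each +1 or -1) from a
    GHZ-mediated stabilizer measurement into the stabilizer eigenvalue."""
    assert all(o in (+1, -1) for o in local_outcomes)
    result = 1
    for outcome in local_outcomes:
        result *= outcome
    return result

# Four nodes each report the outcome on their share of the 4-qubit GHZ ancilla:
print(stabilizer_eigenvalue([+1, -1, -1, +1]))  # even parity: stabilizer satisfied
print(stabilizer_eigenvalue([+1, -1, +1, +1]))  # odd parity: syndrome flagged
```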

This approach is particularly natural for platforms like nitrogen-vacancy (NV) centers in diamond, where the electron spin serves as a communication qubit with excellent optical interface properties, and nuclear spins act as local memory [3, 4].

The catch? Generating high-fidelity GHZ states is probabilistic and expensive. The paper analyzes four protocols:

Table 1. GHZ generation protocols and their Bell pair costs.
| Protocol | Bell pairs per GHZ (\(n\)) | Distillation? | Expected attempts \(R(n)\) |
|---|---|---|---|
| Plain | 3 | No | \(\frac{6}{p_\text{link} \cdot p_\text{parity}}\) |
| Basic | 8 | Yes | \(\frac{16}{p_\text{link} \cdot p_\text{distill} \cdot p_\text{parity}}\) |
| Medium | 16 | Yes | \(\frac{32}{p_\text{link} \cdot p_\text{distill} \cdot p_\text{parity}}\) |
| Refined | 40 | Yes | \(\frac{80}{p_\text{link} \cdot p_\text{distill} \cdot p_\text{parity}}\) |

The expected number of entanglement link generation attempts per GHZ state follows a simple but revealing formula:

\[ R(n) = \frac{2n}{p_\text{link} \cdot p_\text{distill} \cdot p_\text{parity}} \]

where \(p_\text{link}\) is the per-attempt entanglement success probability, \(p_\text{distill}\) is the distillation success probability (set to 1 for Plain), and \(p_\text{parity}\) is the parity-projection acceptance probability. The parity acceptance under symmetric depolarizing noise with parameter \(p\) is:

\[ p_\text{parity} = \frac{1}{2}\left[1 + \left(1 - \frac{4}{3}p\right)^8\right] \]

For the toric code with distance \(d\), you need \(2d^2\) GHZ states per syndrome round (one per stabilizer), giving a total entanglement cost per round of:

\[ N_\text{round}(d) = \frac{4n \, d^2}{p_\text{link} \cdot p_\text{distill} \cdot p_\text{parity}} \]

That \(d^2\) scaling means doubling the code distance costs you roughly four times the entanglement resources. Ouch.
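The formulas above turn directly into a back-of-the-envelope cost calculator. A sketch in Python; the noise parameter \(p = 0.01\) and link success probability \(p_\text{link} = 0.5\) are illustrative assumptions, not values from the paper:

```python
def parity_acceptance(p: float) -> float:
    """p_parity under symmetric depolarizing noise with parameter p
    (the formula quoted above)."""
    return 0.5 * (1.0 + (1.0 - 4.0 * p / 3.0) ** 8)

def attempts_per_ghz(n: int, p_link: float, p_distill: float, p_parity: float) -> float:
    """R(n): expected entanglement-generation attempts per GHZ state."""
    return 2.0 * n / (p_link * p_distill * p_parity)

def attempts_per_round(d: int, n: int, p_link: float, p_distill: float, p_parity: float) -> float:
    """N_round(d): 2*d^2 GHZ states per toric-code syndrome round."""
    return 2 * d ** 2 * attempts_per_ghz(n, p_link, p_distill, p_parity)

# Illustrative numbers (assumed, not from the paper):
pp = parity_acceptance(p=0.01)
print(f"Plain protocol (n=3): {attempts_per_ghz(3, 0.5, 1.0, pp):.1f} attempts per GHZ")
print(f"Per round at d=5:     {attempts_per_round(5, 3, 0.5, 1.0, pp):.0f} attempts")
```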

Type II: Boundary-Connected Code Patches

Now imagine something different. Instead of distributing individual qubits across tiny nodes, you have larger modules, each hosting a sizable chunk of a surface code. Two modules are "stitched" together along a shared boundary, creating a larger logical qubit.

The key insight here is that the boundary between modules is one-dimensional, while the code itself is two-dimensional. This dimensional mismatch is actually a feature: it means the boundary (where noisy inter-module links operate) is a lower-dimensional subset of the full code, so the code can tolerate higher noise rates on boundary qubits than in the bulk [6].

For a distance-\(d\) planar surface code, each boundary has \(d\) data qubits and \(d-1\) syndrome qubits, so \(2d-1\) Bell pairs are needed per syndrome round. The average number of generation attempts scales as:

\[ N_\text{boundary}(d) = \frac{2d - 1}{p_\text{link}} \]

This is linear in \(d\), not quadratic. That is a massive advantage over Type I. No entanglement distillation is strictly required either, since the boundary noise tolerance is inherently higher. This makes Type II architectures very attractive for quantum memory applications and for platforms like superconducting qubits, where chips can be physically abutted with short-range interconnects.

Type III: Full Code Blocks per Module

In the most autonomous configuration, each module runs an entire logical code block. Computation between modules happens through nonlocal logical operations: transversal CNOT gates, distributed lattice surgery, or logical state teleportation.

For instance, to teleport a logical qubit encoded in a distance-\(d\) surface code from one module to another, you need to establish \(d^2\) physical Bell pairs (one per physical data qubit) and perform a nonlocal transversal CNOT [8]. The expected number of generation attempts is:

\[ N_\text{teleport}(d) = \frac{d^2}{p_\text{link}} \]

Alternatively, lattice surgery between remote code blocks requires \(O(d)\) Bell pairs per merge round, repeated over \(d\) rounds, giving an \(O(d^2)\) total cost, similar to teleportation [9, 10].

Type III architectures are the most flexible for actual computation, since each module is a self-contained logical processor. But they pay for that flexibility with quadratic Bell pair overhead per logical operation.
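To see the three scalings side by side, here is a small sketch implementing the per-round (Types I and II) and per-CNOT (Type III) attempt counts from the formulas above; all probability values are illustrative assumptions:

```python
def type1_per_round(d: int, n: int, p_link: float, p_distill: float, p_parity: float) -> float:
    """Type I: GHZ-mediated syndrome extraction, O(d^2) attempts per round."""
    return 4 * n * d ** 2 / (p_link * p_distill * p_parity)

def type2_per_round(d: int, p_link: float) -> float:
    """Type II: boundary stitching, O(d) attempts per round."""
    return (2 * d - 1) / p_link

def type3_per_cnot(d: int, p_link: float) -> float:
    """Type III: transversal CNOT / logical teleportation, O(d^2) attempts."""
    return d ** 2 / p_link

# Assumed parameters: p_link = 0.5, p_distill = 1.0, p_parity = 0.95
for d in (5, 11, 21):
    print(d,
          type1_per_round(d, n=3, p_link=0.5, p_distill=1.0, p_parity=0.95),
          type2_per_round(d, p_link=0.5),
          type3_per_cnot(d, p_link=0.5))
```

Even at modest distances, the gap between the linear Type II cost and the quadratic Type I/III costs is stark.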

Connecting to the Broader Modular Architecture Literature

The Chandra et al. taxonomy is wonderfully clean, but it sits within a much richer landscape of modular architecture research. Let me connect some dots.

The Singh et al. Framework

A closely related and highly detailed study by Singh, Gu, de Bone, Villaseñor, Elkouss, and Borregaard [7] was published in npj Quantum Information in 2025. They investigated the distributed surface code (specifically the toric code) across modular architectures with one or two data qubits per module, using hardware-tailored noise models for solid-state quantum hardware like NV centers and silicon vacancy centers.

Their key finding is that the choice of entanglement generation scheme (emission-based vs. scattering-based) matters enormously, sometimes even more than the architectural layout itself. For some schemes, fault-tolerance thresholds approach those of non-distributed implementations (around 0.4%), which is remarkable and encouraging.

The Nickerson-Fitzsimons-Benjamin Vision

Going back to 2014, Nickerson, Fitzsimons, and Benjamin [5] showed that a modular quantum computer built from cells of just 5 to 50 qubits, connected by very lossy and noisy photonic links (98% photon loss!), could still achieve kilohertz-scale computation speeds using loss-tolerant entanglement purification and surface code protocols. Their network noise threshold of 13.3% was surprisingly high.

This was a landmark result because it demonstrated that modularity does not require perfect interconnects. You can tolerate terrible links if your purification protocols and error correction are good enough. This philosophy underpins much of the Type I architecture thinking.

Industry Momentum

The modular paradigm has become an industry consensus. IBM's quantum roadmap explicitly features modular multi-chip processors: Flamingo (2024) with quantum communication links, Kookaburra (2026) as the first modular fault-tolerant processor, and the Quantum Starling system targeted for 2029 with 200 logical qubits [11]. They are also transitioning from surface codes to quantum LDPC codes, which can reduce physical qubit overhead by up to 90%.

MIT's quantum-system-on-chip (QSoC) integrates thousands of diamond color center qubits on a CMOS platform [12]. Universal Quantum is pursuing electric-field interconnects for trapped ions, claiming connection speeds up to 10,000 times faster than photonic approaches [13].

Categories of Modularity: A Spectrum, Not a Binary

Here is where I want to push the conversation further. People often talk about "modular vs. monolithic" as if it were a binary choice, like a light switch. But I think it is much more like a dimmer. Let me propose a modularity spectrum with four distinct levels.

The Modularity Spectrum

Table 2. The modularity spectrum for quantum computing architectures.
| Level | Description | Interconnect | Link fidelity | Example |
|---|---|---|---|---|
| L0: Monolithic | All qubits on one chip | On-chip wiring | Highest | Google Sycamore, IBM Eagle |
| L1: Chiplet/MCM | Multiple chips in one package | Short-range couplers, bump bonds | High | IBM Heron multi-chip, QuantWare VIO |
| L2: Modular | Separate modules, entanglement-linked | Photonic, microwave, electric field | Moderate | NV center networks, trapped-ion modules |
| L3: Distributed | Remote processors, network-connected | Fiber-optic quantum links | Lower | Quantum internet nodes |

I like to visualize this as a literal spectrum running from fully monolithic on the left to fully modular (distributed) on the right. The degree of "networkness" increases as you move right: on the left, no networking is needed because everything is on one chip; on the right, every operation that crosses a module boundary requires entanglement generation, classical communication, and possibly purification.

MONOLITHIC ←——————————————————————————————————→ FULLY MODULAR
L0 · L1 (Chiplet) · L2 (Modular) · L3 (Distributed)

→ Increasing degree of "networkness" (entanglement overhead, latency, classical communication) →

Let me describe each level in more detail.

Level 0: Monolithic

Everything on one substrate. This is where we started and where the smallest current processors still live. The advantages are maximal connectivity, lowest latency, and highest fidelity for nearest-neighbor gates. The disadvantage is that it simply does not scale beyond a few hundred qubits with current fabrication technology.

Level 1: Chiplet / Multi-Chip Module

Multiple smaller chips are placed in close physical proximity, often within the same cryostat or vacuum chamber, and connected via short-range couplers. For superconducting qubits, this might mean bump-bond interconnects or flip-chip architectures. For trapped ions, it could mean ion shuttling between adjacent chip segments using electric fields.

The key characteristic of L1 is that inter-chip link quality is close to (though slightly below) intra-chip quality. IBM's multi-chip Heron processors and Universal Quantum's electric-field interconnects fall into this category. The Type II architecture from Chandra et al. maps naturally onto L1, because boundary-connected surface code patches only need \(O(d)\) Bell pairs per round and can tolerate somewhat noisier boundary links.

Level 2: Modular with Entanglement Links

Here, separate quantum processing modules are connected through entanglement generated via photonic or other interfaces. The modules might be in the same lab but are physically distinct systems, each with their own control hardware. The inter-module connection is probabilistic: you attempt to create entanglement and sometimes you succeed, sometimes you do not.

Both Type I and Type III architectures from Chandra et al. operate at this level. NV center networks, where each node has one data qubit plus communication qubits, are the canonical Type I example. Trapped-ion modules connected by photonic links represent either Type I or Type III depending on module size.

Level 3: Fully Distributed

The most extreme form of modularity: quantum processors at geographically separated locations, connected through a quantum network. This is the quantum internet vision. Here, link latencies are significant (speed-of-light delays), entanglement rates are low, and noise is high. But the potential for distributed quantum computing exists.

At L3, fault tolerance requires the most robust protocols. The entanglement overhead is maximal, and careful co-design of error correction codes, network topology, and entanglement generation is essential.

Industrial Players on the Spectrum: The Data

To make the spectrum concrete, I compiled data on major quantum computing companies and their module sizes. The key number here is the local module qubit count: how many physical qubits live on a single module before you need inter-module connections. This number, combined with the quantum error correction overhead, determines where each player sits on the modularity spectrum.

For quantum error correction context: a single logical qubit encoded in a distance-\(d\) surface code requires \(d^2\) physical data qubits plus \((d^2 - 1)\) syndrome ancilla qubits, so roughly \(2d^2 - 1\) physical qubits total. For \(d = 7\), that is about 97 physical qubits per logical qubit. For \(d = 13\) (which is more realistic for fault tolerance), it is about 337 physical qubits per logical qubit.
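These overhead numbers follow directly from the \(2d^2 - 1\) formula; a quick sketch reproducing them, along with the per-module logical-qubit counts used in Table 3 below:

```python
def physical_per_logical(d: int) -> int:
    """Physical qubits per distance-d surface-code logical qubit:
    d^2 data qubits + (d^2 - 1) syndrome ancillas."""
    return 2 * d ** 2 - 1

def logical_qubits_per_module(module_qubits: int, d: int) -> int:
    """How many logical qubits fit on a single module (floor division)."""
    return module_qubits // physical_per_logical(d)

print(physical_per_logical(7))              # 97
print(physical_per_logical(13))             # 337
print(logical_qubits_per_module(3000, 7))   # 30 -- a QuEra-scale module
print(logical_qubits_per_module(120, 7))    # 1  -- an IBM-scale chip
```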

Table 3. Industrial players: module sizes, qubit counts, modularity levels, and QEC context.
| Company | Qubit type | Module qubits (current) | Module qubits (roadmap) | Level | Interconnect | Logical qubits/module (\(d\!=\!7\)) |
|---|---|---|---|---|---|---|
| Google | SC | 105 | ~1000 | L0 | On-chip | 1 |
| IBM | SC | 120 | 120/chip; 9 chips coupled | L0 → L1 | Coupler links | 1 per chip |
| Rigetti | SC | 9/chiplet | 100+ (chiplet-tiled) | L1 | Chiplet bonds | <1 per chiplet |
| QuantWare | SC | chiplets | 10,000 (VIO-40K) | L1 | 3D chiplet | ~103 |
| QuEra | Neutral atom | 3,000 | 10,000+ | L0 | Optical tweezers | ~30 |
| Atom Computing | Neutral atom | 1,225 | 10,000+ | L0 | Optical tweezers | ~12 |
| Quantinuum | Trapped ion | 56–98 | Multi-module photonic | L0 → L2 | QCCD + photonic | 1 |
| IonQ | Trapped ion | ~100 | 10,000/chip; multi-chip | L0 → L2 | Photonic (Lightsynq) | 1 |
| Universal Quantum | Trapped ion | few 1,000 | millions (multi-module) | L1 → L2 | Electric-field links | ~10–30 |
| PsiQuantum | Photonic | chip-scale | millions (multi-rack) | L2 → L3 | Fiber optic | N/A (FBQC) |

A few observations jump out from this table:

  • Neutral atom platforms have the largest monolithic modules. QuEra has demonstrated a 3,000-qubit array operating continuously for over two hours [18], and Atom Computing crossed the 1,225-qubit mark in 2023 [19]. These platforms can fit roughly 12 to 30 logical qubits (at \(d = 7\)) on a single module without any networking, placing them firmly in the monolithic zone.
  • Superconducting platforms are the most eager to go modular. With module sizes stuck around 100 to 120 qubits (just enough for one logical qubit at \(d = 7\)), companies like IBM, Rigetti, and QuantWare are aggressively pursuing chiplet and multi-chip architectures. QuantWare's VIO-40K is the boldest claim, targeting 10,000 qubits through 3D chiplet stacking [20].
  • Trapped-ion systems sit at a crossroads. Quantinuum's Helios (98 qubits) and IonQ's Tempo (~100 qubits) are currently monolithic, but both companies have explicit plans to link multiple modules via photonic interconnects. Universal Quantum takes a different path with electric-field links, achieving connection rates of 2,424 shuttles per second and fidelity exceeding 99.999993% [13].
  • Photonic platforms are born modular. PsiQuantum and Xanadu design their systems from the ground up as networked, rack-scale architectures connected by optical fiber. For these platforms, modularity is not a concession to scaling limits; it is the fundamental architecture.

How to Construct Modular Architectures

So, practically speaking, how do you build one of these things? Let me outline the construction methodology for each level.

Constructing L1: Chiplet Architectures

The recipe for L1 is closest to classical chip packaging:

  1. Fabricate individual chiplets with well-characterized qubits. Each chiplet might contain 9 to 500 qubits with all necessary control wiring.
  2. Test and select chiplets that meet fidelity and frequency specifications. This is a huge advantage: defective chiplets are discarded, dramatically improving effective yield.
  3. Assemble selected chiplets onto a common substrate using bump bonds, flip-chip techniques, or direct abutment.
  4. Calibrate boundary couplers to ensure inter-chip two-qubit gates meet the required fidelity threshold.

For the surface code, this means boundary stabilizers (which straddle two chiplets) use teleported CNOT gates mediated by Bell pairs across the coupler. As Ramette et al. showed [6], these boundary stabilizers can tolerate significantly higher noise than bulk stabilizers because the boundary is codimension-1.

Constructing L2: Entanglement-Linked Modules

Building an L2 system requires more sophisticated quantum networking:

  1. Build individual quantum processing modules, each a small self-contained quantum computer with data qubits, memory qubits, and communication qubits.
  2. Establish entanglement links between modules using photonic interfaces. For NV centers, this involves emitting photons from the electron spin, interfering them on a beam splitter, and heralding successful Bell pair creation.
  3. Purify and distill entangled states to increase fidelity. This step consumes multiple raw Bell pairs to produce one high-fidelity pair, using protocols like the EPL (Extreme Photon Loss) scheme [16].
  4. Fuse Bell pairs into GHZ states (for Type I) or directly use Bell pairs for teleported CNOTs (for Type II/III).
  5. Run the error correction cycle, using the prepared entangled resources to perform nonlocal stabilizer measurements.

The entire pipeline from entanglement generation to stabilizer measurement is repeated every syndrome extraction round, and it must complete before qubit memories decohere. This imposes strict timing constraints that couple the entanglement generation rate to the achievable code distance.
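A crude way to see how this timing constraint couples entanglement rate to code distance: estimate the Type I round time from the attempt-count formula earlier in the post, and find the largest distance whose round fits inside the memory coherence time. Every parameter here (attempt rate, parallelism, coherence time) is an assumed illustrative value:

```python
def round_time_s(d, n, attempt_rate_hz, p_link, p_distill, p_parity, parallel_links=1):
    """Wall-clock estimate for one Type I syndrome round: total expected
    attempts (4*n*d^2 / probabilities) divided by the aggregate attempt rate.
    Parallelizing attempts over `parallel_links` links is an optimistic assumption."""
    attempts = 4 * n * d ** 2 / (p_link * p_distill * p_parity)
    return attempts / (attempt_rate_hz * parallel_links)

def max_distance(t_coherence_s, n, attempt_rate_hz, p_link, p_distill, p_parity, parallel_links=1):
    """Largest odd d whose syndrome round completes within the coherence time."""
    if round_time_s(3, n, attempt_rate_hz, p_link, p_distill, p_parity, parallel_links) > t_coherence_s:
        return 0  # even d = 3 does not fit
    d = 3
    while round_time_s(d + 2, n, attempt_rate_hz, p_link, p_distill,
                       p_parity, parallel_links) <= t_coherence_s:
        d += 2
    return d

# Assumed: 1 kHz attempt rate, ideal success probabilities, 1 s memory lifetime.
print(max_distance(1.0, n=3, attempt_rate_hz=1000.0,
                   p_link=1.0, p_distill=1.0, p_parity=1.0))
```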

Constructing L3: Distributed Quantum Networks

At L3, the construction challenge becomes a network engineering problem:

  1. Deploy quantum processors at separate locations, each running local error correction.
  2. Connect via quantum repeaters or direct fiber links for entanglement distribution.
  3. Buffer entangled states in quantum memories while waiting for classical communication (which travels at the speed of light and introduces latency).
  4. Perform distributed logical operations (lattice surgery, teleportation) across the network using the accumulated entanglement resources.

The resource cost here is dominated by the \(d^2/p_\text{link}\) scaling for logical teleportation, combined with the need for multiple rounds of syndrome extraction that each require fresh entangled pairs.

Where is the Boundary? Defining the Modular Threshold

Now for the question that keeps me up at night: where exactly does a system transition from "monolithic" to "modular"?

I would argue there is no sharp boundary. Instead, the transition happens gradually as module size shrinks relative to the total system size. But we can identify a few useful criteria.

The Yield Criterion

A system should go modular when monolithic fabrication yield drops below an economically viable threshold. If \(Y(A)\) is the yield as a function of chip area \(A\), and \(A_\text{target}\) is the area needed for the desired qubit count, then modularity becomes preferable when:

\[ Y(A_\text{target}) < Y(A_\text{module})^{N_\text{modules}} \]

where \(N_\text{modules}\) chips of area \(A_\text{module}\) compose the full system. Since yield typically decays exponentially with area, this crossover happens surprisingly early.
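One way to make the yield criterion concrete: take a simple Poisson yield model \(Y(A) = e^{-DA}\) and account for the practical advantage of chiplets, namely that defective dies are tested and discarded before assembly (the known-good-die selection described in the L1 recipe above). The sketch below compares expected chips fabricated per working system; the areas and defect density are illustrative assumptions:

```python
import math

def chip_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: probability a chip of this area is defect-free."""
    return math.exp(-defects_per_cm2 * area_cm2)

def chips_per_working_system(total_area_cm2: float, n_modules: int,
                             defects_per_cm2: float) -> float:
    """Expected chips fabricated per working system when defective
    chiplets are tested and discarded (known-good-die selection)."""
    module_area = total_area_cm2 / n_modules
    return n_modules / chip_yield(module_area, defects_per_cm2)

# 100 cm^2 of total qubit real estate, D = 0.05 defects/cm^2 (both assumed):
print(f"monolithic:  {1 / chip_yield(100, 0.05):.0f} chips per working system")
print(f"25 chiplets: {chips_per_working_system(100, 25, 0.05):.1f} chips per working system")
```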

The Fidelity Criterion

Modularity is viable when inter-module link fidelity exceeds the fault-tolerance threshold for boundary operations. For the surface code with boundary-connected patches (Type II), this threshold is remarkably generous. Ramette et al. showed that boundary noise can be significantly higher than bulk noise without destroying the code's error correction capability [6]. Nu Quantum estimates that 99.5% entanglement fidelity on interconnects, combined with 99.99% local gate fidelity, is sufficient for distributed fault tolerance [14].

The Cryogenic Criterion

For superconducting platforms, there is a hard physical limit: the cooling power of the dilution refrigerator. Once the heat load from control wiring exceeds the fridge's capacity, you must split the system across multiple cryostats, and you have crossed into at least L2 modularity whether you like it or not.

Putting It Together: A Decision Framework

Table 4. Decision criteria for choosing a modularity level.
| Criterion | Favors monolithic | Favors chiplet (L1) | Favors modular (L2+) |
|---|---|---|---|
| Qubit count needed | < 500 | 500 to 10,000 | > 10,000 |
| Fabrication yield | High at target size | Moderate | Low at target size |
| Inter-chip link fidelity | N/A | > 99.5% | > 99% with purification |
| Cryogenic budget | Sufficient | Tight | Exceeded |
| Reconfigurability need | Low | Moderate | High |

The Entanglement Cost Landscape

One of the most illuminating aspects of the Chandra et al. paper is how clearly it shows the entanglement cost landscape across the three architecture types. Let me summarize the scaling behavior:

Table 5. Entanglement resource scaling across architecture types for distance-\(d\) surface/toric codes.
| Architecture | Bell pairs per round | Scaling | Primary use |
|---|---|---|---|
| Type I (GHZ-mediated) | \(\frac{2n \cdot d^2}{p_\text{link} \, p_\text{distill} \, p_\text{parity}}\) | \(O(d^2)\) | Quantum memory |
| Type II (boundary patches) | \(\frac{2d-1}{p_\text{link}}\) | \(O(d)\) | Quantum memory |
| Type III (logical ops) | \(\frac{d^2}{p_\text{link}}\) per CNOT | \(O(d^2)\) | Computation |

The standout here is Type II with its linear scaling. For quantum memory tasks, where you just need to store a logical qubit reliably and perform syndrome extraction, Type II is dramatically more efficient than Type I. The price you pay is that Type II requires larger modules (each hosting a significant code patch), while Type I works with minimal per-node resources.

For computation (not just memory), Type III is the natural choice, but its \(O(d^2)\) cost per logical gate means that the entanglement generation rate directly limits the logical clock speed. If your modules attempt entanglement generation at rate \(\lambda\) with per-attempt success probability \(p_\text{link}\), then the time per logical CNOT is approximately \(d^2 / (\lambda \cdot p_\text{link})\), which can become painfully slow at large distances.
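Plugging in numbers makes the clock-speed penalty vivid; the attempt rate and success probability below are illustrative assumptions:

```python
def logical_cnot_time_s(d: int, attempt_rate_hz: float, p_link: float) -> float:
    """Approximate time for one Type III inter-module logical CNOT:
    d^2 Bell pairs, each taking ~1/p_link attempts at the given rate."""
    return d ** 2 / (attempt_rate_hz * p_link)

# Assumed: 10 kHz attempt rate, 50% per-attempt success probability.
for d in (7, 13, 21):
    print(d, logical_cnot_time_s(d, attempt_rate_hz=1e4, p_link=0.5))
```

At these assumed rates, a distance-21 logical CNOT already takes close to a tenth of a second, which is glacial by quantum gate standards.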

What I Think the Future Looks Like

After digesting all of this, here is my personal take on where things are headed.

In the near term (2025 to 2028), I expect the L1 chiplet approach to dominate for superconducting platforms. IBM's roadmap with Kookaburra and Cockatoo processors explicitly targets this regime.

For NV centers and other solid-state spin platforms, the L2 modular approach is the natural path, as explored extensively by the QuTech and Delft groups. The Singh et al. results showing near-monolithic thresholds for carefully designed entanglement schemes are very encouraging.

For trapped ions, there is an interesting competition between photonic interconnects (L2) and electric-field shuttling (L1). The latter, pioneered by Universal Quantum, offers much faster connection speeds but requires precise chip alignment [13]. The former is more flexible but slower.

In the longer term, as we push toward millions of logical qubits for industrial applications, I believe we will inevitably move toward L2 and even L3 architectures. The transition to quantum LDPC codes, which IBM is already pursuing, will help by dramatically reducing the number of physical qubits per logical qubit and potentially changing the entanglement overhead calculations.

The real engineering challenge will be co-designing the error correction code, the modular architecture, the entanglement generation protocol, and the classical control system as a unified stack. The days of treating these as independent research problems are numbered.

Conclusion: Modularity Is Not a Choice, It Is an Inevitability

If there is one message I want you to take away from this blog, it is this: modular quantum computing is not an alternative to monolithic quantum computing. It is what monolithic quantum computing will become as it grows up.

The question is not whether to go modular, but how. The Type I, II, and III taxonomy from Chandra et al. gives us a language for discussing this. The modularity spectrum from L0 to L3 gives us a framework for mapping real hardware onto that language. And the entanglement cost analysis gives us the quantitative tools to evaluate the tradeoffs.

As quantum error correction scientists, our job is to design codes, decoders, and protocols that work gracefully across module boundaries, tolerating the inevitable noise penalty of inter-module links while maintaining fault tolerance. It is a beautiful optimization problem, and I am excited to see where it leads.

Until next time, happy entangling!

References

  1. C. Gidney, "How to factor 2048 bit RSA integers with less than a million noisy qubits," arXiv:2505.15917, 2025.
  2. N. K. Chandra, E. Kaur, and K. P. Seshadreesan, "Architectural approaches to fault-tolerant distributed quantum computing and their entanglement overheads," arXiv:2511.13657, 2025.
  3. S. de Bone, P. Möller, C. E. Bradley, T. H. Taminiau, and D. Elkouss, "Thresholds for the distributed surface code in the presence of memory decoherence," AVS Quantum Science, vol. 6, p. 033801, 2024.
  4. N. H. Nickerson, Y. Li, and S. C. Benjamin, "Topological quantum computing with a very noisy network and local error rates approaching one percent," Nature Communications, vol. 4, p. 1756, 2013.
  5. N. H. Nickerson, J. F. Fitzsimons, and S. C. Benjamin, "Freely scalable quantum technologies using cells of 5-to-50 qubits with very lossy and noisy photonic links," Phys. Rev. X, vol. 4, p. 041041, 2014.
  6. J. Ramette, J. Sinclair, N. P. Breuckmann, and V. Vuletić, "Fault-tolerant connection of error-corrected qubits with noisy links," npj Quantum Information, vol. 10, p. 58, 2024.
  7. S. Singh, F. Gu, S. de Bone, E. Villaseñor, D. Elkouss, and J. Borregaard, "Modular architectures and entanglement schemes for error-corrected distributed quantum computation," npj Quantum Information, 2025. arXiv:2408.02837.
  8. J. Stack, M. Wang, and F. Mueller, "Assessing teleportation of logical qubits in a distributed quantum architecture under error correction," arXiv:2504.05611, 2025.
  9. D. Horsman, A. G. Fowler, S. Devitt, and R. Van Meter, "Surface code quantum computing by lattice surgery," New J. Phys., vol. 14, p. 123011, 2012.
  10. C. Guinn et al., "Co-designed superconducting architecture for lattice surgery of surface codes with quantum interface routing card," arXiv:2312.01246, 2023.
  11. IBM, "Engineering fault tolerance: IBM's modular, scalable full-stack quantum roadmap," 2025.
  12. MIT Lincoln Laboratory, "Modular, scalable hardware architecture for a quantum computer," MIT News, May 2024.
  13. University of Sussex / Universal Quantum, "Electric field link modularity: UQ Connect achieving world record connection rate (2424/s) and fidelity (>99.999993%)," 2025.
  14. E. Sutcliffe, B. Jonnadula, C. L. Gall, A. E. Moylett, and C. M. Westoby, "Distributed quantum error correction based on hyperbolic Floquet codes," arXiv:2501.14029, 2025.
  15. C. Monroe et al., "Large scale modular quantum computer architecture with atomic memory and photonic interconnects," Phys. Rev. A, vol. 89, p. 022317, 2014.
  16. E. T. Campbell and S. C. Benjamin, "Measurement-based entanglement under conditions of extreme photon loss," Phys. Rev. Lett., vol. 101, p. 130502, 2008.
  17. D. Barral et al., "Review of distributed quantum computing: From single QPU to high performance quantum computing," Computer Science Review, vol. 57, p. 100747, 2025.
  18. QuEra Computing, "Continuous operation with a 3,000-qubit array and scalable error correction with up to 96 logical qubits," Nature, 2025.
  19. Atom Computing, "1,225-site atomic array with 1,180 qubits in next-generation quantum computing platform," Press release, October 2023.
  20. QuantWare, "VIO-40K: 3D scaling architecture enabling 10,000-qubit QPUs," Press release, December 2025.
  21. M. Caleffi et al., "Distributed quantum computing: A survey," Computer Networks, vol. 254, p. 110672, 2024.
  22. H. T. Larasati and B.-S. Choi, "Towards fault-tolerant distributed quantum computation (FT-DQC): Taxonomy, recent progress, and challenges," ICT Express, vol. 11, pp. 417–435, 2025.