…ig. D) Standalone plotting script placed alongside the simulation data in qualtran/surface_code/flasq/data/. Reads cultivation_simulation_summary.csv and generates the paper's 'cultivation_expected_cost' figure at 400x400 grid resolution.
…atplotlib objects
…nd display string assignment
…ore for negative test
…te_spans signature to Sequence and adding type ignore in test
…unpacking and type annotation for volume_limited_depth
…ple.py (pylint W1309)
… in tests and scripts (pylint W0718)
The execute-notebooks.py CI step discovers all .ipynb files via git ls-files and executes them directly through nbconvert, bypassing pytest fixtures. Without this env var, FLASQ notebooks run in full mode and can exceed the 15-minute job timeout. The notebooks_test.py pytest tests are kept for local fast-mode validation (used by nightly full-suite runs).
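The override described here is a simple environment-variable gate. A minimal sketch of how such a gate might be read — the helper name and parsing rules below are illustrative, not the actual FLASQ implementation:

```python
import os

def fast_mode_enabled(default: bool = False) -> bool:
    """Return True when the CI override requests notebook fast mode.

    FLASQ_FAST_MODE_OVERRIDE is the variable set in the CI step; the
    parsing rules here are an assumption for illustration only.
    """
    raw = os.environ.get("FLASQ_FAST_MODE_OVERRIDE")
    if raw is None:
        return default  # no override: keep the notebook's own default
    return raw.strip().lower() in ("1", "true", "yes")
```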
Code Review
This pull request integrates the FLASQ cost model into Qualtran, providing a comprehensive suite of tools for fault-tolerant quantum resource estimation, including gate volume counting, distance-dependent span costs, and measurement depth analysis. It also introduces an optimization sweep engine and several example applications such as Ising models and Hamming Weight Phasing. Feedback was provided regarding a potential runtime TypeError in the cost model when comparing symbolic integers, suggesting the need for a more robust symbolic check.
Address feedback from code review regarding potential TypeError when n_fluid_ancilla is symbolic. Added is_symbolic check to guard the comparison. Adversarial review: Checked that it preserves behavior for concrete values and falls through to division for symbolic ones. No issues found.
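The guard pattern this commit describes can be sketched as follows. This is a self-contained illustration — `Sym`, `is_symbolic`, and `rows_needed` are stand-ins, not the actual FLASQ code, which uses sympy and Qualtran's symbolic helpers:

```python
from typing import Union

class Sym:
    """Toy stand-in for a sympy symbol; ordering against it is undefined."""
    def __init__(self, name: str):
        self.name = name
    def __truediv__(self, other):
        return Sym(f"({self.name} / {other})")

def is_symbolic(*vals) -> bool:
    # Stand-in for Qualtran's symbolic check.
    return any(isinstance(v, Sym) for v in vals)

def rows_needed(n_fluid_ancilla: Union[int, Sym], row_width: int):
    # Guard the concrete-only comparison: without the is_symbolic check,
    # `n_fluid_ancilla < row_width` raises TypeError for symbolic input.
    if not is_symbolic(n_fluid_ancilla) and n_fluid_ancilla < row_width:
        return 1
    # Falls through to division for symbolic values.
    return n_fluid_ancilla / row_width
```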
```python
from ._success_prob import SuccessProb
from ._qubit_counts import QubitCount
from ._bloq_counts import BloqCount, QECGatesCost, GateCounts
from .classify_bloqs import bloq_is_t_like
```
We can't have just this one function re-exported. Can you please update usages of this function to use its full, already-public path, `qualtran.resource_counting.classify_bloqs.bloq_is_t_like`?
```yaml
python dev_tools/execute-notebooks.py --n-workers=8
env:
  NUMBA_NUM_THREADS: 4
  FLASQ_FAST_MODE_OVERRIDE: 'True'
```
Can we document this somewhere? How slow are the notebooks in non-fast mode? When we build the docs, do we need the full notebook versions? Is there any merit to making fast mode the default and explaining in the notebooks what to set to get the full version?
Add FLASQ cost model for fault-tolerant resource estimation
I know this is a monster-sized PR and I'm sorry.
Summary
FLASQ (Fault-tolerant Lattice Surgery with Ancilla Qubits) is a spacetime volume cost model that estimates fault-tolerant quantum computing resources by accounting for qubit routing, lattice surgery, and T-state cultivation costs. This PR integrates the FLASQ model into Qualtran as a new module under `qualtran/surface_code/flasq/`, implementing three new `CostKey` subclasses that plug into Qualtran's existing resource counting framework.

Module Architecture
```mermaid
graph TD
    subgraph L0["Layer 0: Foundations"]
        SY[symbols.py — sympy constants]
        UT[utils.py — symbolic substitution]
        CA[cultivation_analysis.py — CSV data fitting]
        NQ[naive_grid_qubit_manager.py — grid allocator]
    end
    subgraph L1["Layer 1: CostKey Implementations"]
        VC[volume_counting.py — FLASQGateTotals → FLASQGateCounts]
        SC[span_counting.py — TotalSpanCost → GateSpan]
        MD[measurement_depth.py — TotalMeasurementDepth → MeasurementDepth]
    end
    subgraph L2["Layer 2: Core Model"]
        FM[flasq_model.py — FLASQCostModel + FLASQSummary]
    end
    subgraph L3["Layer 3: Analysis"]
        CI[cirq_interop.py — Cirq circuit conversion]
        EM[error_mitigation.py — PEC overhead + failure probs]
    end
    subgraph L4["Layer 4: Optimization Pipeline"]
        OC[optimization/configs.py — sweep config dataclasses]
        OA[optimization/analysis.py — circuit analysis orchestrator]
        OS[optimization/sweep.py — parameter sweep engine]
        OP[optimization/postprocessing.py — sweep results → DataFrames]
    end
    subgraph L5["Layer 5: Examples + Notebooks"]
        IS[examples/ising.py — Trotterized Ising circuits]
        HW[examples/hwp.py — Hamming Weight Phasing]
        GF[examples/gf2_multiplier.py — GF2 multiplier circuits]
        AD[examples/adder_example.py — Add bloq demo]
        PL[examples/plotting.py — heatmaps + visualization]
        NB1[ising_notebook.ipynb]
        NB2[hwp_notebook.ipynb]
        NB3[gf2_multiplier_example_notebook.ipynb]
    end

    %% Layer 0 → Layer 1
    UT --> VC
    UT --> SC
    UT --> MD

    %% Layer 1 → Layer 2
    VC --> FM
    SC --> FM
    MD --> FM
    SY --> FM
    UT --> FM

    %% Layer 2 → Layer 3
    SC --> CI
    FM --> EM

    %% Layers → Layer 4
    CA --> OC
    CA --> OA
    CI --> OA
    FM --> OA
    OC --> OA
    OA --> OS
    FM --> OS
    OC --> OS
    EM --> OP
    OC --> OP
    OS --> OP

    %% Layer 5 connections
    NQ --> HW
    CI --> AD
    IS --> NB1
    HW --> NB2
    GF --> NB3
    OS --> NB1
    OS --> NB2
    OP --> NB1
    OP --> NB2
    PL --> NB1
```

Changes to Existing Qualtran Code
This PR touches only 3 files outside the new `flasq/` directory:

- `.gitignore` — `*.pdf`
- `pyproject.toml` — `frozendict`, `pandas>=2.0`, `seaborn`, `joblib`, `tqdm`
- `qualtran/resource_counting/__init__.py` — `from .classify_bloqs import bloq_is_t_like`

All other ~21K lines are new, entirely under `qualtran/surface_code/flasq/`.

Suggested Review Order
Step 1: External changes (3 files, ~11 lines)
Review the 3 files listed above. The `bloq_is_t_like` export is the only one with semantic content — it makes an existing, already-public-API-quality function available at the package level.

Step 2: Public API —
`__init__.py` (~85 lines)

Shows everything the module exports. This is the contract with downstream users. Key exports: `FLASQCostModel`, `FLASQSummary`, `apply_flasq_cost_model()`, and the three `CostKey` subclasses (`FLASQGateTotals`, `TotalSpanCost`, `TotalMeasurementDepth`).

Step 3: Foundation modules —
`symbols.py`, `utils.py` (~210 lines total)

`symbols.py` defines 4 sympy placeholder symbols used throughout the model for deferred numerical resolution (rotation error, cultivation volume factor, reaction time, mixed fallback T-count). `utils.py` provides `substitute_until_fixed_point()` — the canonical way to resolve symbolic expressions — and a DataFrame conversion helper. These are leaf modules with no intra-FLASQ dependencies.

Step 4: CostKey implementations (~940 lines total)
The Qualtran integration points. Each implements `CostKey.compute()`:

- `volume_counting.py`: Walks bloq decomposition trees and tallies gates into `FLASQGateCounts` (T, Toffoli, CNOT, rotations, etc.). This is where the gate classification logic lives.
- `span_counting.py`: Computes Manhattan/Steiner-tree distance costs for multi-qubit gates. The `BloqWithSpanInfo` wrapper attaches span metadata to bloqs.
- `measurement_depth.py`: Upper-bounds sequential measurement chain length via longest-path in the circuit DAG. This determines whether the computation is volume-limited or reaction-limited.

These three are independent of each other and depend only on `utils.py`.

Step 5: Core model —
`flasq_model.py` (~630 lines)

The central module.
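As a rough mental model of what this module does — the names and arithmetic below are schematic stand-ins for illustration, not the real `FLASQCostModel` API — the core model folds the three cost-key outputs into one frozen summary:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SummaryStandIn:
    """Schematic stand-in for the frozen summary class (illustration only)."""
    spacetime_volume: float
    depth: int
    n_qubits: int

def combine_costs(gate_volume: float, span_cost: float,
                  measurement_depth: int, n_qubits: int) -> SummaryStandIn:
    # Schematic: the real model applies distance-dependent volume
    # parameters; here we simply add the two volume contributions.
    return SummaryStandIn(
        spacetime_volume=gate_volume + span_cost,
        depth=measurement_depth,
        n_qubits=n_qubits,
    )
```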
`FLASQCostModel` holds volume parameters (conservative and optimistic defaults from the paper). `FLASQSummary` is a frozen attrs class with all computed estimates (spacetime volume, depth, qubit counts, volume breakdown). `apply_flasq_cost_model()` combines the three CostKey outputs into a summary.

Step 6: Circuit conversion —
`cirq_interop.py` (~190 lines)

`convert_circuit_for_flasq_analysis()` converts Cirq circuits to Qualtran `CompositeBloq` while attaching `BloqWithSpanInfo` wrappers based on qubit positions. Uses the public `cirq_gate_to_bloq` API.

Step 7: Supporting modules (~530 lines total)
- `naive_grid_qubit_manager.py`: Zig-zag grid qubit allocator with allocation/deallocation tracking. Standalone — no FLASQ dependencies.
- `error_mitigation.py`: Computes PEC sampling overhead (Γ²), failure probabilities for Clifford and non-Clifford errors, and wall-clock time per noiseless sample. Depends on `FLASQSummary`.
- `cultivation_analysis.py`: Loads pre-computed cultivation simulation data from CSV and interpolates optimal T-state cultivation parameters. Standalone — no FLASQ dependencies.

Step 8: Optimization pipeline —
`optimization/` (~1,150 lines)

The parameter sweep machinery. Best reviewed in dependency order:

- `configs.py` — Data classes for sweep configuration (`CoreParametersConfig`, `ErrorBudget`)
- `analysis.py` — Orchestrates circuit building, gate counting, span/depth analysis
- `sweep.py` — Iterates over parameter space, produces `SweepResult` objects
- `postprocessing.py` — Converts sweep results to DataFrames, applies error budget filtering

Step 9: Examples —
`examples/` (~590 lines of Python + 3 notebooks)

Circuit builders for three applications from the paper:

- `ising.py`: 2nd/4th-order Trotterized Ising model on a grid
- `hwp.py`: Hamming Weight Phasing circuits
- `gf2_multiplier.py`: GF(2) quadratic and Karatsuba multipliers
- `adder_example.py`: Minimal FLASQ demo on an Add bloq
- `plotting.py`: Heatmap and visualization utilities

The Jupyter notebooks (`ising_notebook.ipynb`, `hwp_notebook.ipynb`, `gf2_multiplier_example_notebook.ipynb`) demonstrate end-to-end workflows.

Step 10: Tests (~6,600 lines)
Co-located `*_test.py` files. Notable:

- `golden_values_test.py`: End-to-end regression tests locking numerical outputs for all three example applications against known values
- `misc_bug_test.py`: Regression tests for specific upstream Qualtran bugs encountered during integration
- Every module has a co-located `*_test.py` with characterization and edge-case tests

Test Coverage
New Dependencies
- `frozendict` — `attrs` frozen classes and `lru_cache` compatibility
- `pandas`
- `seaborn`
- `joblib`
- `tqdm`
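The `frozendict` motivation can be illustrated with stdlib pieces: `functools.lru_cache` requires hashable arguments, so a plain-dict parameter breaks caching. A minimal stand-in — the real code uses the `frozendict` package, and `FrozenMap`/`total_gates` below are hypothetical names for illustration:

```python
from functools import lru_cache

class FrozenMap:
    """Minimal stdlib stand-in for frozendict: an immutable, hashable mapping."""
    def __init__(self, mapping):
        self._items = tuple(sorted(mapping.items()))
    def __getitem__(self, key):
        return dict(self._items)[key]
    def __hash__(self):
        return hash(self._items)
    def __eq__(self, other):
        return isinstance(other, FrozenMap) and self._items == other._items

@lru_cache(maxsize=None)
def total_gates(counts: FrozenMap) -> int:
    # A plain dict argument here would raise TypeError: unhashable type: 'dict'.
    return sum(v for _, v in counts._items)
```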