PyMatching: A Python/C++ library for decoding quantum error correcting codes with minimum-weight perfect matching.


PyMatching is a fast Python/C++ library for decoding quantum error correcting (QEC) codes using the Minimum Weight Perfect Matching (MWPM) decoder. Given the syndrome measurements from a quantum error correction circuit, the MWPM decoder finds the most probable set of errors, under the assumption that error mechanisms are independent, as well as graphlike (each error causes either one or two detection events). The MWPM decoder is the most popular decoder for decoding surface codes, and can also be used to decode various other code families, including subsystem codes, honeycomb codes and 2D hyperbolic codes.

Version 2 includes a new implementation of the blossom algorithm which is 100-1000x faster than previous versions of PyMatching. PyMatching can be configured using arbitrary weighted graphs, with or without a boundary, and can be combined with Craig Gidney's Stim library to simulate and decode error correction circuits in the presence of circuit-level noise. The sinter package combines Stim and PyMatching to perform fast, parallelised Monte Carlo sampling of quantum error correction circuits.
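
As an illustration (not part of the original text), here is a minimal sketch of driving PyMatching through sinter on a generated Stim circuit; the distance, rounds, noise strength and shot limits are arbitrary values chosen for brevity:

import sinter
import stim

# A minimal sketch (illustrative parameter values): estimate a logical error
# rate with sinter running the "pymatching" decoder. sinter uses
# multiprocessing, so collection is wrapped in a __main__ guard.
def main():
    task = sinter.Task(
        circuit=stim.Circuit.generated(
            "surface_code:rotated_memory_x",
            distance=3,
            rounds=3,
            after_clifford_depolarization=0.001,
        ),
    )
    stats = sinter.collect(
        num_workers=4,
        tasks=[task],
        decoders=["pymatching"],
        max_shots=100_000,
        max_errors=100,
    )
    for stat in stats:
        print(stat.errors / stat.shots)  # estimated logical error rate per shot

if __name__ == "__main__":
    main()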

Documentation for PyMatching can be found at: pymatching.readthedocs.io

Our paper gives more background on the MWPM decoder and our implementation (sparse blossom) released in PyMatching v2.

To see how stim, sinter and pymatching can be used to estimate the threshold of an error correcting code with circuit-level noise, try out the stim getting started notebook.

The new >100x faster implementation for Version 2

Version 2 features a new implementation of the blossom algorithm, which I wrote with Craig Gidney. Our new implementation, which we refer to as the sparse blossom algorithm, can be seen as a generalisation of the blossom algorithm to handle the decoding problem relevant to QEC. We solve the problem of finding minimum-weight paths between detection events in a detector graph directly, which avoids the need to use costly all-to-all Dijkstra searches to find a MWPM in a derived graph using the original blossom algorithm. The new version is also exact: unlike previous versions of PyMatching, no approximation is made. See our paper for more details.

Our new implementation is over 100x faster than previous versions of PyMatching, and is over 100,000x faster than NetworkX (benchmarked with surface code circuits). At 0.1% circuit-noise, PyMatching can decode both X and Z basis measurements of surface code circuits up to distance 17 in under 1 microsecond per round of syndrome extraction on a single core. Furthermore, the runtime is roughly linear in the number of nodes in the graph.
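
For a rough sense of decoding speed on your own machine, here is a timing sketch (this is not the benchmark methodology used in the paper; the circuit parameters and shot count are illustrative):

import time
import numpy as np
import stim
import pymatching

# A rough timing sketch: time decode_batch on sampled syndromes and report
# the average time per shot. Parameter values are illustrative.
circuit = stim.Circuit.generated("surface_code:rotated_memory_x",
                                 distance=17,
                                 rounds=17,
                                 after_clifford_depolarization=0.001)
model = circuit.detector_error_model(decompose_errors=True)
matching = pymatching.Matching.from_detector_error_model(model)
syndrome, _ = circuit.compile_detector_sampler().sample(
    shots=1000, separate_observables=True)
syndrome = syndrome.astype(np.uint8)

start = time.perf_counter()
matching.decode_batch(syndrome)
elapsed = time.perf_counter() - start
print(f"{elapsed / 1000 * 1e6:.1f} us per shot")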

The plot below compares the performance of PyMatching v2 with the previous version (v0.7) as well as with NetworkX for decoding surface code circuits with circuit-level depolarising noise. All decoders were run on a single core of an M1 processor, processing both the X and Z basis measurements. The equations T = N^x in the legend (and plotted as dashed lines) are obtained from a fit to the same dataset for distance > 10, where N is the number of detectors (nodes) per round, and T is the decoding time per round. See the benchmarks folder in the repository for the data and stim circuits, as well as additional benchmarks.

[Figure: decoding time per round, PyMatching v2 vs v0.7 vs NetworkX]

Sparse blossom is conceptually similar to the approach described in this paper by Austin Fowler, although our approach differs in many of the details (which will be explained in our upcoming paper). There are even more similarities with the very nice independent work by Yue Wu, who recently released the fusion-blossom library. One of the differences with our approach is that fusion-blossom grows the exploratory regions of alternating trees in a similar way to how clusters are grown in Union-Find, whereas our approach instead progresses along a timeline, and uses a global priority queue to grow alternating trees. Yue also has a paper coming soon, so stay tuned for that as well.

Installation

The latest version of PyMatching can be downloaded and installed from PyPI with the command:

pip install pymatching --upgrade

Usage

PyMatching can load matching graphs from a check matrix, a stim.DetectorErrorModel, a networkx.Graph, a retworkx.PyGraph, or by adding edges individually with pymatching.Matching.add_edge and pymatching.Matching.add_boundary_edge.
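
For example, here is a minimal sketch of building a two-detector graph edge by edge (the weights and fault_ids below are arbitrary, chosen only for illustration):

import numpy as np
import pymatching

# A minimal sketch: a 2-detector repetition-code graph built edge by edge.
matching = pymatching.Matching()
matching.add_boundary_edge(0, fault_ids={0}, weight=1.0)
matching.add_edge(0, 1, fault_ids={1}, weight=1.0)
matching.add_boundary_edge(1, fault_ids={2}, weight=1.0)
print(matching.decode(np.array([1, 0])))  # expected: [1 0 0] (detector 0 matched to the boundary)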

Decoding Stim circuits

PyMatching can be combined with Stim. Generally, the easiest and fastest way to do this is using sinter (use v1.10.0 or later), which uses PyMatching and Stim to run parallelised Monte Carlo simulations of quantum error correction circuits. However, in this section we will use Stim and PyMatching directly, to demonstrate how their Python APIs can be used. To install stim, run pip install stim --upgrade.

First, we generate a stim circuit. Here, we use a surface code circuit included with stim:

import numpy as np
import stim
import pymatching

circuit = stim.Circuit.generated("surface_code:rotated_memory_x",
                                 distance=5,
                                 rounds=5,
                                 after_clifford_depolarization=0.005)

Next, we use stim to generate a stim.DetectorErrorModel (DEM), which is effectively a Tanner graph describing the circuit-level noise model. By setting decompose_errors=True, stim decomposes all error mechanisms into edge-like error mechanisms (which cause either one or two detection events). This ensures that our DEM is graphlike, and can be loaded by pymatching:

model = circuit.detector_error_model(decompose_errors=True)
matching = pymatching.Matching.from_detector_error_model(model)
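
Continuing from the cell above, a quick (illustrative) way to inspect the size of the loaded matching graph:

# Illustrative check of the loaded graph (exact values depend on the circuit):
print(matching.num_detectors)  # number of nodes (detectors) in the matching graph
print(matching.num_edges)      # number of graphlike error mechanisms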

Next, we will sample 1000 shots from the circuit. Each shot (a row of the syndrome array) contains the full syndrome (detector measurements), as well as the logical observable measurements, from simulating the noisy circuit:

sampler = circuit.compile_detector_sampler()
syndrome, actual_observables = sampler.sample(shots=1000, separate_observables=True)

Now we can decode! We compare PyMatching's predictions of the logical observables with the actual observables sampled with stim, in order to count the number of mistakes and estimate the logical error rate:

num_errors = 0
for i in range(syndrome.shape[0]):
    predicted_observables = matching.decode(syndrome[i, :])
    num_errors += not np.array_equal(actual_observables[i, :], predicted_observables)

print(num_errors)  # prints 8

As of PyMatching v2.1.0, you can use matching.decode_batch to decode a batch of shots instead. Since matching.decode_batch iterates over the shots in C++, it's faster than iterating over calls to matching.decode in Python. The following cell is therefore a faster equivalent to the cell above:

predicted_observables = matching.decode_batch(syndrome)
num_errors = np.sum(np.any(predicted_observables != actual_observables, axis=1))

print(num_errors)  # prints 8

Loading from a parity check matrix

We can also load a pymatching.Matching object from a binary parity check matrix, another representation of a Tanner graph. Each row in the parity check matrix H corresponds to a parity check, and each column corresponds to an error mechanism. The element H[i,j] of H is 1 if parity check i is flipped by error mechanism j, and 0 otherwise. To be used by PyMatching, the error mechanisms in H must be graphlike. This means that each column must contain either one or two 1s (if a column has a single 1, it represents a half-edge connected to the boundary).

We can give each edge in the graph a weight, by providing PyMatching with a weights numpy array. Element weights[j] of the weights array sets the edge weight for the edge corresponding to column j of H. If the error mechanisms are treated as independent, then we typically want to set the weight of edge j to the log-likelihood ratio log((1-p_j)/p_j), where p_j is the error probability associated with edge j. With this setting, PyMatching will find the most probable set of error mechanisms, given the syndrome.
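
For instance, here is a short sketch of converting per-edge error probabilities into these log-likelihood weights (the probabilities below are hypothetical, chosen only to illustrate the formula):

import numpy as np

# Hypothetical per-edge error probabilities, for illustration only.
p = np.array([0.01, 0.05, 0.1])
weights = np.log((1 - p) / p)  # log-likelihood ratio log((1-p_j)/p_j)
print(weights)  # less likely edges receive larger weights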

With PyMatching configured using H and weights, decoding a binary syndrome vector syndrome (a numpy array of length H.shape[0]) corresponds to finding a set of errors defined in a binary predictions vector satisfying H@predictions % 2 == syndrome while minimising the total solution weight predictions@weights.

In quantum error correction, rather than predicting which exact set of error mechanisms occurred, we typically want to predict the outcome of logical observable measurements, which are the parities of error mechanisms. These can be represented by a binary matrix observables. Similar to the check matrix, observables[i,j] is 1 if logical observable i is flipped by error mechanism j. For example, suppose our syndrome syndrome was the result of a set of errors noise (a binary array of length H.shape[1]), such that syndrome = H@noise % 2. Our decoding is successful if observables@noise % 2 == observables@predictions % 2.

Putting this together, we can decode a distance 5 repetition code as follows:

import numpy as np
from scipy.sparse import csc_matrix
import pymatching

H = csc_matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]])
weights = np.array([4, 3, 2, 3, 4])  # Set arbitrary weights for illustration
matching = pymatching.Matching(H, weights=weights)
prediction = matching.decode(np.array([0, 1, 0, 1]))
print(prediction)  # prints: [0 0 1 1 0]

# Optionally, we can return the weight as well:
prediction, solution_weight = matching.decode(np.array([0, 1, 0, 1]), return_weight=True)
print(prediction)  # prints: [0 0 1 1 0]
print(solution_weight)  # prints: 5.0
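
As a quick sanity check (not part of the original example), we can confirm that the prediction reproduces the syndrome, satisfying the matching condition described above:

# Continues from the cell above: verify H @ prediction % 2 == syndrome.
syndrome = np.array([0, 1, 0, 1])
assert np.array_equal(H @ prediction % 2, syndrome)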

And in order to estimate the logical error rate for a physical error rate of 10%, we can sample as follows:

import numpy as np
from scipy.sparse import csc_matrix
import pymatching

H = csc_matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]])
observables = csc_matrix([[1, 0, 0, 0, 0]])
error_probability = 0.1
weights = np.ones(H.shape[1]) * np.log((1 - error_probability) / error_probability)
matching = pymatching.Matching.from_check_matrix(H, weights=weights)
num_shots = 1000
num_errors = 0
for i in range(num_shots):
    noise = (np.random.random(H.shape[1]) < error_probability).astype(np.uint8)
    syndrome = H @ noise % 2
    prediction = matching.decode(syndrome)
    predicted_observables = observables @ prediction % 2
    actual_observables = observables @ noise % 2
    num_errors += not np.array_equal(predicted_observables, actual_observables)

print(num_errors)  # prints 4

Note that we can also ask PyMatching to predict the logical observables directly, by supplying them to the faults_matrix argument when constructing the pymatching.Matching object. This allows the decoder to make some additional optimisations that speed up the decoding procedure a bit. The following example uses this approach, and is equivalent to the example above:

import numpy as np
from scipy.sparse import csc_matrix
import pymatching

H = csc_matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]])
observables = csc_matrix([[1, 0, 0, 0, 0]])
error_probability = 0.1
weights = np.ones(H.shape[1]) * np.log((1 - error_probability) / error_probability)
matching = pymatching.Matching.from_check_matrix(H, weights=weights, faults_matrix=observables)
num_shots = 1000
num_errors = 0
for i in range(num_shots):
    noise = (np.random.random(H.shape[1]) < error_probability).astype(np.uint8)
    syndrome = H @ noise % 2
    predicted_observables = matching.decode(syndrome)
    actual_observables = observables @ noise % 2
    num_errors += not np.array_equal(predicted_observables, actual_observables)

print(num_errors)  # prints 6

We'll make one more optimisation, which is to use matching.decode_batch to decode the batch of shots, rather than iterating over calls to matching.decode in Python:

import numpy as np
from scipy.sparse import csc_matrix
import pymatching

H = csc_matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1]])
observables = csc_matrix([[1, 0, 0, 0, 0]])
error_probability = 0.1
num_shots = 1000
weights = np.ones(H.shape[1]) * np.log((1 - error_probability) / error_probability)
matching = pymatching.Matching.from_check_matrix(H, weights=weights, faults_matrix=observables)
noise = (np.random.random((num_shots, H.shape[1])) < error_probability).astype(np.uint8)
shots = (noise @ H.T) % 2
actual_observables = (noise @ observables.T) % 2
predicted_observables = matching.decode_batch(shots)
num_errors = np.sum(np.any(predicted_observables != actual_observables, axis=1))
print(num_errors)  # prints 6

Instead of using a check matrix, the Matching object can also be constructed using the Matching.add_edge and Matching.add_boundary_edge methods, or by loading from a NetworkX or retworkx graph.
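
For example, here is a minimal sketch of loading from a networkx.Graph; the fault_ids and weight edge attributes and the is_boundary node attribute are the fields PyMatching reads, and the graph itself is an arbitrary three-node path chosen for illustration:

import networkx as nx
import pymatching

# A minimal sketch: a three-node path graph where node 2 is a boundary node.
g = nx.Graph()
g.add_edge(0, 1, fault_ids={0}, weight=1.0)
g.add_edge(1, 2, fault_ids={1}, weight=1.0)
g.nodes[2]["is_boundary"] = True
matching = pymatching.Matching(g)
print(matching.num_edges)  # prints 2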

For more details on how to use PyMatching, see the documentation.

Attribution

When using PyMatching please cite our paper on the sparse blossom algorithm (implemented in version 2):

@article{higgott2023sparse,
  title={Sparse Blossom: correcting a million errors per core second with minimum-weight matching},
  author={Higgott, Oscar and Gidney, Craig},
  journal={arXiv preprint arXiv:2303.15933},
  year={2023}
}

Note: the previous PyMatching paper describes the implementation in version 0.7 and earlier of PyMatching (not v2).

Acknowledgements

We are grateful to the Google Quantum AI team for supporting the development of PyMatching v2. Earlier versions of PyMatching were supported by Unitary Fund and EPSRC.
