HHL algorithm

From Wikipedia, the free encyclopedia
Quantum linear algebra algorithm offering exponential speedup under certain conditions

The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for obtaining certain information about the solution to a system of linear equations, introduced by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. Specifically, the algorithm estimates quadratic functions of the solution vector to a given system of linear equations.[1]

The algorithm is one of the main fundamental algorithms expected to provide a speedup over their classical counterparts, along with Shor's factoring algorithm and Grover's search algorithm. Assuming the linear system is sparse[2] and has a low condition number $\kappa$, and that the user is interested in the result of a scalar measurement on the solution vector rather than the entire vector itself, the algorithm has a runtime of $O(\log(N)\kappa^2)$, where $N$ is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in $O(N\kappa)$ (or $O(N\sqrt{\kappa})$ for positive semidefinite matrices).

An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by three independent publications.[3][4][5] The demonstrations consisted of solving simple systems of linear equations on specially designed quantum devices.[3][4][5] The first demonstration of a general-purpose version of the algorithm appeared in 2018.[6]

Overview


The HHL algorithm solves the following problem: given an $N \times N$ Hermitian matrix $A$ and a unit vector $\vec{b} \in \mathbb{R}^N$, prepare the quantum state $|x\rangle$ whose amplitudes equal the elements of the solution vector $\vec{x} \in \mathbb{R}^N$ of the linear system $A\vec{x} = \vec{b}$. In particular, the algorithm cannot efficiently output the solution $\vec{x}$ itself. However, one can still efficiently compute products of the form $\vec{x}^{\,T} M \vec{x}$ for some Hermitian matrix $M$.
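To make this input/output contract concrete, the following minimal classical sketch (a hypothetical 4×4 instance in NumPy, not part of the original paper) writes down the solution $\vec{x}$, which the algorithm does not output, and a scalar of the form $\vec{x}^{\,T} M \vec{x}$, which it is designed to estimate.

```python
import numpy as np

# Hypothetical small instance: a real symmetric (hence Hermitian) A,
# a unit vector b, and a Hermitian observable M.
rng = np.random.default_rng(0)
N = 4
A = rng.normal(size=(N, N))
A = (A + A.T) / 2                      # Hermitian system matrix
b = rng.normal(size=N)
b /= np.linalg.norm(b)                 # unit right-hand-side vector
M = np.diag(rng.normal(size=N))        # a Hermitian observable

x = np.linalg.solve(A, b)              # classical solution of A x = b
# HHL does not output x itself; it targets scalar quantities of this form:
print(x @ M @ x)
```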

The algorithm first prepares a quantum state corresponding to $\vec{b}$ of the form

$$|b\rangle = \sum_{i=1}^{N} b_i |i\rangle.$$

Next, Hamiltonian simulation is used to apply the unitary operator $e^{iAt}$ to $|b\rangle$ for a superposition of different times $t$. The algorithm uses quantum phase estimation to decompose $|b\rangle$ into the eigenbasis of $A$ and find the corresponding eigenvalues $\lambda_j$. The state of the system after this decomposition is approximately

$$\sum_{j=1}^{N} \beta_j |u_j\rangle |\lambda_j\rangle,$$

where $|u_j\rangle$ are the eigenvectors of $A$ and $|b\rangle = \sum_{j=1}^{N} \beta_j |u_j\rangle$.

We would then like to apply the linear map taking $|\lambda_j\rangle$ to $C\lambda_j^{-1}|\lambda_j\rangle$, where $C$ is a normalizing constant. This linear map is not unitary, so it must be implemented with a quantum measurement that has a nonzero probability of failure. After it succeeds, the $|\lambda_j\rangle$ register is uncomputed and we are left with a state proportional to

$$\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle = A^{-1}|b\rangle = |x\rangle,$$

where $|x\rangle$ is a quantum state corresponding to the desired solution vector $\vec{x}$.
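The chain of equalities above can be checked numerically for a small Hermitian matrix. The sketch below (assuming a hypothetical random symmetric $A$) expands $\vec{b}$ in the eigenbasis of $A$ and verifies that reweighting each coefficient by $\lambda_j^{-1}$ reproduces $A^{-1}\vec{b}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = rng.normal(size=(N, N))
A = (A + A.T) / 2                 # real symmetric, hence Hermitian
b = rng.normal(size=N)
b /= np.linalg.norm(b)

lam, U = np.linalg.eigh(A)        # eigenvalues lambda_j, eigenvectors u_j (columns of U)
beta = U.T @ b                    # expansion coefficients beta_j of b in the eigenbasis

x_spectral = U @ (beta / lam)     # sum_j beta_j * lambda_j^{-1} * u_j
x_direct = np.linalg.solve(A, b)  # A^{-1} b
assert np.allclose(x_spectral, x_direct)
```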

Retrieving all components of $\vec{x}$ would require the procedure be repeated at least $N$ times. However, sometimes the full vector is not needed and one only needs the expectation value of a linear operator $M$ acting on $\vec{x}$. By performing the quantum measurement corresponding to $M$, we obtain an estimate of the product $\vec{x}^{\,T} M \vec{x}$.

Explanation


Initialization and assumptions


Firstly, the algorithm requires that the matrix $A$ be Hermitian so that it can be converted into a unitary operator. In the case where $A$ is not Hermitian, one can define the Hermitian matrix

$$\mathbf{C} = \begin{bmatrix} 0 & A \\ A^\dagger & 0 \end{bmatrix}.$$

The algorithm can now be used to solve $\mathbf{C}\vec{y} = \begin{bmatrix} \vec{b} \\ 0 \end{bmatrix}$ and obtain $\vec{y} = \begin{bmatrix} 0 \\ \vec{x} \end{bmatrix}$.
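A minimal numerical check of this reduction, assuming a hypothetical generic (non-Hermitian, invertible) $A$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))   # generic non-Hermitian A
b = rng.normal(size=N) + 1j * rng.normal(size=N)

# Hermitian dilation C = [[0, A], [A^dagger, 0]]
C = np.block([[np.zeros((N, N)), A],
              [A.conj().T, np.zeros((N, N))]])
assert np.allclose(C, C.conj().T)                            # C is Hermitian

rhs = np.concatenate([b, np.zeros(N)])                       # right-hand side (b, 0)
y = np.linalg.solve(C, rhs)
x = np.linalg.solve(A, b)
assert np.allclose(y[:N], 0) and np.allclose(y[N:], x)       # y = (0, x)
```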

Secondly, the algorithm requires an efficient procedure to prepare $|b\rangle$, the quantum representation of $\vec{b}$. It is assumed either that $|b\rangle$ has already been prepared or that there exists some $B$ which takes some quantum state $|\mathrm{initial}\rangle$ to $|b\rangle$ efficiently. Any error in the preparation of the state $|b\rangle$ is ignored.

Finally, the algorithm assumes that the state $|\psi_0\rangle$ can be prepared efficiently, where

$$|\psi_0\rangle := \sqrt{\frac{2}{T}} \sum_{\tau=0}^{T-1} \sin\!\left(\frac{\pi\left(\tau + \tfrac{1}{2}\right)}{T}\right) |\tau\rangle$$

for some large $T$. The coefficients of $|\psi_0\rangle$ are chosen to minimize a certain quadratic loss function that captures the error induced in the $U_{\mathrm{invert}}$ subroutine described below.
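For concreteness, the amplitudes of $|\psi_0\rangle$ can be tabulated for a chosen register size; the following short sketch ($T = 64$ is an arbitrary illustrative choice) checks that the resulting state is normalized.

```python
import numpy as np

# Amplitudes of |psi_0> for a chosen register size T (illustrative value).
T = 64
tau = np.arange(T)
amps = np.sqrt(2.0 / T) * np.sin(np.pi * (tau + 0.5) / T)
print(np.sum(amps**2))   # ~1.0: the state is normalized
```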

Hamiltonian simulation


Hamiltonian simulation is used to transform the Hermitian matrix $A$ into a unitary operator, which can then be applied at will. This is possible if $A$ is $s$-sparse and efficiently row computable, meaning it has at most $s$ nonzero entries per row and, given a row index, these entries can be computed in time $O(s)$. Under these assumptions, quantum Hamiltonian simulation allows $e^{iAt}$ to be simulated in time $O(\log(N)\,s^2 t)$.
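The sketch below only illustrates the basic fact used here, namely that $e^{iAt}$ is unitary when $A$ is Hermitian; it uses dense matrix exponentiation (scipy.linalg.expm) on a hypothetical small $A$, not the sparse Hamiltonian-simulation techniques on which the stated runtime relies.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
N = 4
A = rng.normal(size=(N, N))
A = (A + A.T) / 2                                   # Hermitian A
t = 1.7                                             # arbitrary evolution time

U = expm(1j * A * t)                                # e^{iAt}
assert np.allclose(U @ U.conj().T, np.eye(N))       # unitary, since A is Hermitian

b = rng.normal(size=N)
b /= np.linalg.norm(b)
evolved = U @ b                                     # the operation applied to |b>
print(np.linalg.norm(evolved))                      # norm preserved (= 1)
```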

$U_{\mathrm{invert}}$ subroutine


The key subroutine of the algorithm, denoted $U_{\mathrm{invert}}$, is defined as follows and incorporates a phase estimation subroutine:

1. Prepare $|\psi_0\rangle^{C}$ on register $C$.

2. Apply the conditional Hamiltonian evolution $\sum_{\tau=0}^{T-1} |\tau\rangle\langle\tau|^{C} \otimes e^{iA\tau t_0/T}$.

3. Apply the Fourier transform to register $C$. Denote the resulting basis states by $|k\rangle$ for $k = 0, \ldots, T-1$. Define $\lambda_k := 2\pi k/t_0$.

4. Adjoin a three-dimensional register $S$ in the state

$$|h(\lambda_k)\rangle^{S} := \sqrt{1 - f(\lambda_k)^2 - g(\lambda_k)^2}\,|\mathrm{nothing}\rangle^{S} + f(\lambda_k)|\mathrm{well}\rangle^{S} + g(\lambda_k)|\mathrm{ill}\rangle^{S}.$$

5. Reverse steps 1–3, uncomputing any garbage produced along the way.

The phase estimation procedure in steps 1–3 allows the eigenvalues of $A$ to be estimated up to error $\varepsilon$.

The ancilla register in step 4 is necessary to construct a final state with inverted eigenvalues, corresponding to the diagonalized inverse of $A$. In this register, the functions $f$ and $g$ are called filter functions. The states 'nothing', 'well' and 'ill' are used to instruct the loop body on how to proceed: 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of $|b\rangle$ lies in the ill-conditioned subspace of $A$ and the algorithm will not be able to produce the desired inversion. Producing a state proportional to the inverse of $A$ requires 'well' to be measured, after which the overall state of the system collapses to the desired state by the extended Born rule.
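The effect of the conditional rotation followed by post-selection on 'well' can be mimicked classically. In the idealized, well-conditioned case (exact eigenvalues, no 'ill' component, filter function $f(\lambda) = C/\lambda$ with $g = 0$, which is an assumption of this sketch rather than the paper's full construction), each eigencomponent $\beta_j$ is multiplied by $C/\lambda_j$, the squared norm of the result is the success probability, and renormalizing yields a state proportional to $A^{-1}|b\rangle$.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4
A = rng.normal(size=(N, N))
A = (A + A.T) / 2
b = rng.normal(size=N)
b /= np.linalg.norm(b)

lam, U = np.linalg.eigh(A)
beta = U.T @ b

C = np.min(np.abs(lam))                # normalizing constant so that |C/lambda_j| <= 1
amp = C / lam                          # idealized rotation amplitudes f(lambda_j)
kept = beta * amp                      # component retained on measuring 'well'

p_success = np.sum(np.abs(kept) ** 2)  # probability of the 'well' outcome
post = U @ (kept / np.sqrt(p_success)) # post-selected, renormalized state

x = np.linalg.solve(A, b)
assert np.allclose(post, x / np.linalg.norm(x))   # proportional to A^{-1}|b>
print("success probability:", p_success)
```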

Main loop


The body of the algorithm follows the amplitude amplification procedure: starting from $U_{\mathrm{invert}} B |\mathrm{initial}\rangle$, the following operation is repeatedly applied:

$$U_{\mathrm{invert}}\, B\, R_{\mathrm{init}}\, B^{\dagger}\, U_{\mathrm{invert}}^{\dagger}\, R_{\mathrm{succ}},$$

where

$$R_{\mathrm{succ}} = I - 2|\mathrm{well}\rangle\langle\mathrm{well}|$$

and

$$R_{\mathrm{init}} = I - 2|\mathrm{initial}\rangle\langle\mathrm{initial}|.$$

After each repetition, $S$ is measured and will produce a value of 'nothing', 'well', or 'ill' as described above. This loop is repeated until $|\mathrm{well}\rangle$ is measured, which occurs with probability $p$. Rather than repeating $\frac{1}{p}$ times to minimize error, amplitude amplification is used to achieve the same error resilience using only $O\!\left(\frac{1}{\sqrt{p}}\right)$ repetitions.
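Roughly, if a single attempt succeeds with probability $p$, naive repetition needs on the order of $1/p$ trials, whereas amplitude amplification needs on the order of $1/\sqrt{p}$ Grover-like iterations. The comparison below is only a sketch of that scaling; the $\pi/4$ constant is the usual Grover factor, and the exact constants in HHL depend on the implementation.

```python
import math

def naive_repetitions(p: float) -> int:
    # Expected number of independent attempts until one success, ~1/p.
    return math.ceil(1.0 / p)

def amplified_repetitions(p: float) -> int:
    # Amplitude amplification rotates the state by ~2*asin(sqrt(p)) per iteration,
    # so ~pi / (4*asin(sqrt(p))) iterations suffice; for small p this is ~pi/(4*sqrt(p)).
    return math.ceil(math.pi / (4.0 * math.asin(math.sqrt(p))))

for p in (0.25, 0.01, 1e-4):
    print(p, naive_repetitions(p), amplified_repetitions(p))
```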

Scalar measurement


After successfully measuring 'well' on $S$, the system will be in a state proportional to

$$\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle = A^{-1}|b\rangle = |x\rangle.$$

We can then perform the quantum measurement corresponding to $M$ and obtain an estimate of $\vec{x}^{\,T} M \vec{x}$.
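Measuring $M$ on repeated copies of the (normalized) state $|x\rangle$ and averaging the outcomes yields the estimate. The following sketch simulates that sampling classically via Born-rule probabilities in the eigenbasis of $M$, using hypothetical small matrices.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 4
A = rng.normal(size=(N, N))
A = (A + A.T) / 2
b = rng.normal(size=N)
b /= np.linalg.norm(b)
M = np.diag(np.arange(1.0, N + 1))        # a simple Hermitian observable

x = np.linalg.solve(A, b)
x_state = x / np.linalg.norm(x)           # the normalized state |x>

mu, V = np.linalg.eigh(M)                 # measurement outcomes and eigenbasis of M
probs = np.abs(V.T @ x_state) ** 2        # Born-rule outcome probabilities
probs /= probs.sum()                      # guard against floating-point drift

samples = rng.choice(mu, size=20000, p=probs)
print("sampled estimate:", samples.mean())
print("exact <x|M|x>:   ", x_state @ M @ x_state)
```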

Run time analysis


Classical efficiency


The best classical algorithm which produces the actual solution vector $\vec{x}$ is Gaussian elimination, which runs in $O(N^3)$ time.

If $A$ is $s$-sparse and positive semi-definite, then the conjugate gradient method can be used to find the solution vector $\vec{x}$ in $O(Ns\kappa)$ time by minimizing the quadratic function $\lVert A\vec{x} - \vec{b}\rVert^2$.
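As an illustration of this classical baseline, SciPy's conjugate-gradient solver can be applied to a sparse positive-definite system; the tridiagonal matrix below is a hypothetical toy example with $s = 3$ nonzeros per row.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

N = 1000
# Sparse, symmetric positive-definite tridiagonal system (3 nonzeros per row).
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csr")
b = np.ones(N)

x, info = cg(A, b)                       # info == 0 indicates convergence
print(info, np.linalg.norm(A @ x - b))   # residual of the computed solution
```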

When only a summary statistic of the solution vector $\vec{x}$ is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of $\vec{x}^{\dagger} M \vec{x}$ in $O(N\sqrt{\kappa})$ time.

Quantum efficiency


The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be $O(\kappa^2 \log N/\varepsilon)$, where $\varepsilon > 0$ is the error parameter and $\kappa$ is the condition number of $A$. This was subsequently improved to $O(\kappa \log^3\kappa\, \log N/\varepsilon^3)$ by Andris Ambainis,[7] and a quantum algorithm with runtime polynomial in $\log(1/\varepsilon)$ was developed by Childs et al.[8] Since the HHL algorithm maintains its logarithmic scaling in $N$ only for sparse or low-rank matrices, Wossnig et al.[9] extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in $O(\sqrt{N}\log N\,\kappa^2)$ time, compared to the $O(N\log N\,\kappa^2)$ of the standard HHL algorithm.

Optimality


An important factor in the performance of the matrix inversion algorithm is the condition number $\kappa$, the ratio of $A$'s largest and smallest eigenvalues. As the condition number increases, it becomes harder to find the solution vector using gradient-descent methods such as the conjugate gradient method, since $A$ becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable. The algorithm assumes that all singular values of the matrix $A$ lie between $\frac{1}{\kappa}$ and 1, in which case the claimed run-time proportional to $\kappa^2$ is achieved. Therefore, the speedup over classical algorithms is greatest when $\kappa$ is $\mathrm{poly}(\log(N))$.[1]

If the run-time of the algorithm were made poly-logarithmic in $\kappa$, then problems solvable on $n$ qubits could be solved in poly($n$) time, causing the complexity class BQP to be equal to PSPACE.[1]

Error analysis


The Hamiltonian simulation, i.e., the simulation of $e^{iAt}$, is the dominant source of error. Assuming that $A$ is $s$-sparse, this can be done with an error bounded by a constant $\varepsilon$, which translates into the additive error achieved in the output state $|x\rangle$.

The phase estimation step errs by $O\!\left(\frac{1}{t_0}\right)$ in estimating $\lambda$, which translates into a relative error of $O\!\left(\frac{1}{\lambda t_0}\right)$ in $\lambda^{-1}$. If $\lambda \geq 1/\kappa$, taking $t_0 = O(\kappa/\varepsilon)$ induces a final error of $\varepsilon$. This requires that the overall run-time be increased proportionally to $O\!\left(\frac{1}{\varepsilon}\right)$ to minimize error.
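The propagation of the phase-estimation error into $\lambda^{-1}$ can be checked with numbers: perturbing the worst-case eigenvalue $\lambda = 1/\kappa$ by $\delta \approx 1/t_0$, with $t_0$ chosen proportional to $\kappa/\varepsilon$, changes $1/\lambda$ by a relative amount of roughly $\varepsilon$. A numerical sketch under these assumptions (the specific values of $\kappa$ and $\varepsilon$ are illustrative):

```python
kappa = 100.0          # condition number (illustrative)
eps = 0.01             # target error
t0 = kappa / eps       # evolution-time parameter chosen as O(kappa / eps)

lam = 1.0 / kappa      # worst case: the smallest allowed eigenvalue
delta = 1.0 / t0       # phase-estimation error in lambda, O(1/t0)

rel_err = abs(1.0 / (lam + delta) - 1.0 / lam) / (1.0 / lam)
print(rel_err)         # close to eps = 0.01
```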

Experimental realization


While there does not yet exist a quantum computer that can truly offer a speedup over a classical computer, implementing a "proof of concept" remains an important milestone in the development of a new quantum algorithm. Demonstrating the quantum algorithm for linear systems of equations remained a challenge for years after its proposal, until 2013, when it was demonstrated by Cai et al., Barz et al. and Pan et al. in parallel.

Cai et al.


Published in Physical Review Letters 110, 230501 (2013), Cai et al. reported an experimental demonstration of the simplest meaningful instance of this algorithm, namely solving $2\times 2$ linear equations for various input vectors. The quantum circuit was optimized and compiled into a linear optical network with four photonic quantum bits (qubits) and four controlled logic gates, which was used to coherently implement every subroutine of the algorithm. For various input vectors, the quantum computer gave solutions for the linear equations with reasonably high precision, with fidelities ranging from 0.825 to 0.993.[10]

Barz et al.


On February 5, 2013, Stefanie Barz and co-workers demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized, where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al. found that the fidelity of the obtained output state ranged from 64.7% to 98.1%, due to the influence of higher-order emissions from spontaneous parametric down-conversion.[4]

Pan et al.


On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance quantum information processor. The implementation was tested using simple linear systems of only 2 variables. Across three experiments they obtained the solution vector with over 96% fidelity.[5]

Wen et al.


Another experimental demonstration using NMR for solving an $8\times 8$ system was reported by Wen et al.[11] in 2018, using the algorithm developed by Subaşı et al.[12]

Proposed applications


Several concrete applications of the HHL algorithm have been proposed; the corresponding works analyze the algorithm's input assumptions and output guarantees for the particular problems.

Electromagnetic scattering


Clader et al. gave a version of the HHL algorithm which allows a preconditioner to be included, which can be used to improve the dependence on the condition number. The algorithm was applied to solving for the radar cross-section of a complex shape, which was one of the first examples of an application of the HHL algorithm to a concrete problem.[13]

Solving linear differential equations


Berry proposed an algorithm for solving linear, time-dependent initial value problems using the HHL algorithm.[14]

Solving nonlinear differential equations


Two groups proposed[15] efficient algorithms for numerically integrating dissipative nonlinear ordinary differential equations. Liu et al.[16] utilized Carleman linearization for second-order equations, and Lloyd et al.[17] used a mean-field linearization method inspired by the nonlinear Schrödinger equation for general-order nonlinearities. The resulting linear equations are solved using quantum algorithms for linear differential equations.

Finite element method


The finite element method approximates linear partial differential equations using large systems of linear equations. Montanaro and Pallister demonstrate that the HHL algorithm can achieve a polynomial quantum speedup for the resulting linear systems. Exponential speedups are not expected for problems in a fixed dimension, or for which the solution meets certain smoothness conditions, such as certain high-order problems in many-body dynamics, or some problems in computational finance.[18]

Least-squares fitting


Wiebe et al. gave a quantum algorithm to determine the quality of a least-squares fit. The optimal coefficients cannot be calculated directly from the output of the quantum algorithm, but the algorithm still outputs the optimal least-squares error.[19]

Machine learning

Main article: Quantum machine learning

Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimensions of the space. Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and thus are well-suited platforms for machine learning algorithms.[20]

The HHL algorithm has been applied to support vector machines. Rebentrost et al. show that a quantum support vector machine can be used for classification and achieve an exponential speedup over classical computers.[21]

In June 2018, Zhao et al. developed a quantum algorithm for Bayesian training of deep neural networks with an exponential speedup over classical training, due to the use of the HHL algorithm.[6] They also gave the first implementation of the HHL algorithm to be run on cloud-based quantum computers.[22]

Finance


Proposals for using HHL in finance include solving partial differential equations for the Black–Scholes equation and determining portfolio optimization via a Markowitz solution.[23]

Quantum chemistry


The linearized coupled cluster method in quantum chemistry can be recast as a system of linear equations. In 2023, Baskaran et al. proposed the use of the HHL algorithm to solve the resulting linear systems.[24] The number of state-register qubits in the quantum algorithm is the logarithm of the number of excitations, offering an exponential reduction in the number of required qubits compared to using the variational quantum eigensolver or quantum phase estimation.

Implementation difficulties


Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson[25] analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm.

  1. The input vector $|b\rangle$ has to be efficiently prepared as a quantum state. If the vector is not close to uniform, the state preparation is likely to be costly, and if it takes $O(n^c)$ steps the exponential advantage of HHL would vanish.
  2. The quantum phase estimation step calls for the generation of the unitary $e^{iAt}$ and its controlled application. The efficiency of this step depends on the matrix $A$ being sparse and well conditioned (low $\kappa$). Otherwise, the application of $e^{iAt}$ would grow as $O(n^c)$ and, once again, the algorithm's quantum advantage would vanish.
  3. Lastly, the vector $|x\rangle$ is not readily accessible. The HHL algorithm enables learning only a 'summary' of the vector, namely the result of measuring the expectation of an operator, $\langle x|M|x\rangle$. If actual values of $\vec{x}$ are needed, then HHL would need to be repeated $O(n)$ times, killing the exponential speed-up. However, three ways of avoiding reading out the actual values have been proposed: first, if only some properties of the solution are needed;[26] second, if the results are needed only to feed downstream matrix operations; third, if only a sample of the solution is needed.[27]


References

  1. Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2008). "Quantum algorithm for linear systems of equations". Physical Review Letters. 103 (15): 150502. arXiv:0811.3171. Bibcode:2009PhRvL.103o0502H. doi:10.1103/PhysRevLett.103.150502. PMID 19905613. S2CID 5187993.
  2. Johnston, Eric (2019-07-03). Programming Quantum Computers: Essential Algorithms and Code Samples. O'Reilly Media. p. 267. ISBN 9781492039655.
  3. Cai, X.-D.; Weedbrook, C.; Su, Z.-E.; Chen, M.-C.; Gu, Mile; Zhu, M.-J.; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei (2013). "Experimental Quantum Computing to Solve Systems of Linear Equations". Physical Review Letters. 110 (23): 230501. arXiv:1302.4310. Bibcode:2013PhRvL.110w0501C. doi:10.1103/PhysRevLett.110.230501. PMID 25167475. S2CID 20427454.
  4. Barz, Stefanie; Kassal, Ivan; Ringbauer, Martin; Lipp, Yannick Ole; Dakić, Borivoje; Aspuru-Guzik, Alán; Walther, Philip (2014). "A two-qubit photonic quantum processor and its application to solving systems of linear equations". Scientific Reports. 4: 6115. arXiv:1302.1210. Bibcode:2014NatSR...4.6115B. doi:10.1038/srep06115. ISSN 2045-2322. PMC 4137340. PMID 25135432.
  5. Pan, Jian; Cao, Yudong; Yao, Xiwei; Li, Zhaokai; Ju, Chenyong; Peng, Xinhua; Kais, Sabre; Du, Jiangfeng (2014). "Experimental realization of quantum algorithm for solving linear systems of equations". Physical Review A. 89 (2): 022313. arXiv:1302.1946. Bibcode:2014PhRvA..89b2313P. doi:10.1103/PhysRevA.89.022313. S2CID 14303240.
  6. Zhao, Zhikuan; Pozas-Kerstjens, Alejandro; Rebentrost, Patrick; Wittek, Peter (2019). "Bayesian Deep Learning on a Quantum Computer". Quantum Machine Intelligence. 1 (1–2): 41–51. arXiv:1806.11463. doi:10.1007/s42484-019-00004-7. S2CID 49554188.
  7. Ambainis, Andris (2010). "Variable time amplitude amplification and a faster quantum algorithm for solving systems of linear equations". arXiv:1010.4458 [quant-ph].
  8. Childs, Andrew M.; Kothari, Robin; Somma, Rolando D. (2017). "Quantum Algorithm for Systems of Linear Equations with Exponentially Improved Dependence on Precision". SIAM Journal on Computing. 46 (6): 1920–1950. arXiv:1511.02306. doi:10.1137/16m1087072. ISSN 0097-5397. S2CID 3834959.
  9. Wossnig, Leonard; Zhao, Zhikuan; Prakash, Anupam (2018). "A quantum linear system algorithm for dense matrices". Physical Review Letters. 120 (5): 050502. arXiv:1704.06174. Bibcode:2018PhRvL.120e0502W. doi:10.1103/PhysRevLett.120.050502. PMID 29481180. S2CID 3714239.
  10. Cai, X.-D.; Weedbrook, Christian; Su, Z.-E.; Chen, M.-C.; Gu, Mile; Zhu, M.-J.; Li, L.; Liu, N.-L.; Lu, Chao-Yang; Pan, Jian-Wei (2013). "Experimental Quantum Computing to Solve Systems of Linear Equations". Physical Review Letters. 110 (23): 230501. arXiv:1302.4310. Bibcode:2013PhRvL.110w0501C. doi:10.1103/PhysRevLett.110.230501. PMID 25167475. S2CID 20427454.
  11. Wen, Jingwei; Kong, Xiangyu; Wei, Shijie; Wang, Bixue; Xin, Tao; Long, Guilu (2019). "Experimental realization of quantum algorithms for a linear system inspired by adiabatic quantum computing". Physical Review A. 99: 012320.
  12. Subaşı, Yiğit; Somma, Rolando D.; Orsucci, Davide (2019-02-14). "Quantum Algorithms for Systems of Linear Equations Inspired by Adiabatic Quantum Computing". Physical Review Letters. 122 (6): 060504. arXiv:1805.10549. Bibcode:2019PhRvL.122f0504S. doi:10.1103/physrevlett.122.060504. ISSN 0031-9007. PMID 30822089. S2CID 73493666.
  13. Clader, B. D.; Jacobs, B. C.; Sprouse, C. R. (2013). "Preconditioned Quantum Linear System Algorithm". Physical Review Letters. 110 (25): 250504. arXiv:1301.2340. Bibcode:2013PhRvL.110y0504C. doi:10.1103/PhysRevLett.110.250504. PMID 23829722. S2CID 33391978.
  14. Berry, Dominic W. (2010). "High-order quantum algorithm for solving linear differential equations". Journal of Physics A: Mathematical and Theoretical. 47 (10): 105301. arXiv:1010.2745. Bibcode:2014JPhA...47j5301B. doi:10.1088/1751-8113/47/10/105301. S2CID 17623971.
  15. Levy, Max G. (January 5, 2021). "New Quantum Algorithms Finally Crack Nonlinear Equations". Quanta Magazine. Retrieved December 31, 2022.
  16. Liu, J. P.; Kolden, H. Ø.; Krovi, H. K.; Loureiro, N. F.; Trivisa, K.; Childs, A. M. (2021). "Efficient quantum algorithm for dissipative nonlinear differential equations". PNAS. 118 (35): e2026805118. arXiv:2011.03185. Bibcode:2021PNAS..11826805L. doi:10.1073/pnas.2026805118. PMC 8536387. PMID 34446548.
  17. Lloyd, S.; De Palma, G.; Gokler, C.; Kiani, B.; Liu, Z. W.; Marvian, M.; Tennie, F.; Palmer, T. (2020). "Quantum algorithm for nonlinear differential equations". arXiv:2011.06571 [quant-ph].
  18. Montanaro, Ashley; Pallister, Sam (2016). "Quantum Algorithms and the Finite Element Method". Physical Review A. 93 (3): 032324. arXiv:1512.05903. Bibcode:2016PhRvA..93c2324M. doi:10.1103/PhysRevA.93.032324. S2CID 44004935.
  19. Wiebe, Nathan; Braun, Daniel; Lloyd, Seth (2012). "Quantum Data Fitting". Physical Review Letters. 109 (5): 050505. arXiv:1204.5242. Bibcode:2012PhRvL.109e0505W. doi:10.1103/PhysRevLett.109.050505. PMID 23006156. S2CID 118439810.
  20. Lloyd, Seth; Mohseni, Masoud; Rebentrost, Patrick (2013). "Quantum algorithms for supervised and unsupervised machine learning". arXiv:1307.0411 [quant-ph].
  21. Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth (2013). "Quantum support vector machine for big feature and big data classification". arXiv:1307.0471v2 [quant-ph].
  22. "apozas/bayesian-dl-quantum". GitLab. Retrieved 30 October 2018.
  23. Jacquier, Antoine (2022-10-31). Quantum Machine Learning and Optimisation in Finance: On the Road to Quantum Advantage. Packt. p. 349. ISBN 9781801817875.
  24. Baskaran, N. (2023). "Adapting the Harrow-Hassidim-Lloyd algorithm to quantum many-body theory". Physical Review Research. 5 (4): 043113. Bibcode:2023PhRvR...5d3113B. doi:10.1103/PhysRevResearch.5.043113.
  25. Aaronson, Scott (2015). "Read the fine print". Nature Physics. 11 (4): 291–293. Bibcode:2015NatPh..11..291A. doi:10.1038/nphys3272. S2CID 122167250. Retrieved 2023-05-09.
  26. Schuld, Maria (2018). Supervised Learning with Quantum Computers. Springer Publishing. p. 218. ISBN 9783319964249.
  27. Schuld, Maria (2018). Supervised Learning with Quantum Computers. Springer Publishing. p. 219. ISBN 9783319964249.