In computer science, lattice problems are a class of optimization problems related to mathematical objects called lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems: lattice problems are an example of NP-hard problems which have been shown to be average-case hard, providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. For applications in such cryptosystems, lattices over vector spaces (often ℚⁿ) or free modules (often ℤⁿ) are generally considered.
For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N. The norm usually considered is the Euclidean norm L2. However, other norms (such as Lp) are also considered and show up in a variety of results.[1]
Throughout this article, let λ(L) denote the length of the shortest non-zero vector in the lattice L: that is,

λ(L) = min { ‖v‖N : v ∈ L, v ≠ 0 }.
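As a concrete illustration, λ(L) can be approximated for a small, well-conditioned lattice by brute-force enumeration of integer coefficient vectors. The coefficient bound below is a heuristic assumption (for badly skewed bases the true minimum may fall outside the box), and the function name is illustrative, not a standard API:

```python
import itertools
import math

def shortest_vector_length(basis, coeff_bound=5):
    """Brute-force lambda(L): enumerate x1*b1 + ... + xn*bn with
    |xi| <= coeff_bound and return the shortest non-zero Euclidean length."""
    n = len(basis)
    dim = len(basis[0])
    best = math.inf
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # the zero vector is always in L; skip it
        v = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)]
        best = min(best, math.sqrt(sum(x * x for x in v)))
    return best

# Lattice spanned by (1, 0) and (0.5, 0.5): the shortest non-zero vector
# is (0.5, 0.5) (or its negation), of length sqrt(0.5) ≈ 0.7071.
lam = shortest_vector_length([[1.0, 0.0], [0.5, 0.5]])
```

This exhaustive approach is exponential in the dimension, which is exactly why the algorithms discussed below exist.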
This is an illustration of the shortest vector problem (basis vectors in blue, shortest vector in red).
In the SVP, a basis of a vector space V and a norm N (often L2) are given for a lattice L, and one must find the shortest non-zero vector in L, as measured by N. In other words, the algorithm should output a non-zero vector v ∈ L such that ‖v‖N = λ(L). In the following, the size of the problem is specified by n, the dimension of the vector space V.
In the γ-approximation version SVPγ, one must find a non-zero lattice vector of length at most γ · λ(L) for given γ ≥ 1.
The exact version of the problem is only known to be NP-hard for randomized reductions.[2][3] By contrast, the corresponding problem with respect to the uniform norm is known to be NP-hard.[4]
To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring superexponential time (2^(ω(n))) and poly(n) memory, and algorithms requiring both exponential time and space (2^(Θ(n))) in the lattice dimension. The former class of algorithms most notably includes lattice enumeration[5][6][7] and random sampling reduction,[8][9] while the latter includes lattice sieving,[10][11][12] computing the Voronoi cell of the lattice,[13][14] and discrete Gaussian sampling.[15] An open problem is whether algorithms for solving exact SVP exist running in single exponential time (2^(O(n))) and requiring memory scaling polynomially in the lattice dimension.[16]
To solve the γ-approximation version SVPγ for γ > 1 for the Euclidean norm, the best known approaches are based on using lattice basis reduction. For large γ, the Lenstra–Lenstra–Lovász (LLL) algorithm can find a solution in time polynomial in the lattice dimension. For smaller values of γ, the Block Korkine–Zolotarev algorithm (BKZ)[17][18][19] is commonly used, where the input to the algorithm (the block size β) determines the time complexity and output quality: for large approximation factors γ, a small block size β suffices, and the algorithm terminates quickly. For small γ, larger β are needed to find sufficiently short lattice vectors, and the algorithm takes longer to find a solution. The BKZ algorithm internally uses an exact SVP algorithm as a subroutine (running in lattices of dimension at most β), and its overall complexity is closely related to the costs of these SVP calls in dimension β.
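To make the basis-reduction step concrete, here is a minimal textbook LLL sketch in pure Python with the common parameter δ = 0.75. It recomputes the Gram–Schmidt data from scratch after every update, which is simple but far slower than production implementations; treat it as an illustration of the algorithm's structure, not a reference implementation:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Return the Gram-Schmidt orthogonalization B* and the mu coefficients."""
    n = len(B)
    Bs, mu = [], [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = list(B[i])
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(basis, delta=0.75):
    """Textbook LLL reduction (slow but straightforward)."""
    B = [list(map(float, b)) for b in basis]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduction step
            q = round(mu[k][j])
            if q != 0:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                              # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]     # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Example input; the reduced basis consists of much shorter, nearly
# orthogonal vectors, with the first vector within the LLL guarantee
# of 2^((n-1)/2) times lambda(L).
reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

Libraries such as fpylll or NTL provide optimized, numerically careful versions of LLL and BKZ for real use.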
The problem GapSVPβ consists of distinguishing between the instances of SVP in which the length of the shortest vector is at most 1 or larger than β, where β can be a fixed function of the dimension of the lattice n. Given a basis B for the lattice, the algorithm must decide whether λ(L(B)) ≤ 1 or λ(L(B)) > β. Like other promise problems, the algorithm is allowed to err on all other cases.
Yet another version of the problem is GapSVPζ,γ for some functions ζ and γ. The input to the algorithm is a basis B and a number d. It is assured that all the vectors in the Gram–Schmidt orthogonalization of B are of length at least 1, that λ(L(B)) ≤ ζ(n), and that 1 ≤ d ≤ ζ(n)/γ(n), where n is the dimension. The algorithm must accept if λ(L(B)) ≤ d, and reject if λ(L(B)) > γ(n) · d. For large ζ (i.e., ζ(n) > 2^(n/2)), the problem is equivalent to GapSVPγ, because[20] a preprocessing done using the LLL algorithm makes the second condition (and hence, ζ) redundant.
This is an illustration of the closest vector problem (basis vectors in blue, external vector in green, closest vector in red).
In CVP, a basis of a vector space V and a metric M (often L2) are given for a lattice L, as well as a vector v in V but not necessarily in L. It is desired to find the vector in L closest to v (as measured by M). In the γ-approximation version CVPγ, one must find a lattice vector at distance at most γ · d(v, L), where d(v, L) denotes the distance from v to the lattice.
The closest vector problem is a generalization of the shortest vector problem. It is easy to show that given an oracle for CVPγ (defined below), one can solve SVPγ by making some queries to the oracle.[21] The naive method to find the shortest vector by calling the CVPγ oracle to find the closest vector to 0 does not work because 0 is itself a lattice vector and the algorithm could potentially output 0.
The reduction from SVPγ to CVPγ is as follows: Suppose that the input to the SVPγ problem is the basis B = [b1, b2, ..., bn] for the lattice L. Consider the basis Bi = [b1, ..., 2bi, ..., bn], in which bi is doubled, and let xi be the vector returned by CVPγ(Bi, bi). The claim is that the shortest vector in the set {xi − bi : 1 ≤ i ≤ n} is the shortest vector in the given lattice. Intuitively, a shortest vector of L must have an odd coefficient on some bi; adding bi to it yields a vector of L(Bi) at distance λ(L) from bi, so the oracle call for that index returns a sufficiently short candidate, while every xi − bi is a non-zero vector of L.
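This reduction can be sketched in runnable form, with a brute-force enumeration standing in for the CVPγ oracle. The function names and the coefficient bound are illustrative assumptions, not a standard API:

```python
import itertools
import math

def cvp_bruteforce(basis, target, bound=4):
    """Stand-in for an exact CVP oracle: enumerate integer combinations of
    the basis within a coefficient box and return the closest one."""
    best, best_v = math.inf, None
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=len(basis)):
        v = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(len(target))]
        d = sum((vi - ti) ** 2 for vi, ti in zip(v, target))
        if d < best:
            best, best_v = d, v
    return best_v

def svp_via_cvp(basis):
    """SVP from a CVP oracle: for each i, double b_i (removing odd multiples
    of b_i from the lattice), ask for the vector closest to b_i, subtract,
    and keep the shortest candidate over all i."""
    candidates = []
    for i in range(len(basis)):
        Bi = [list(b) for b in basis]
        Bi[i] = [2 * x for x in Bi[i]]
        x = cvp_bruteforce(Bi, basis[i])
        candidates.append([xi - bi for xi, bi in zip(x, basis[i])])
    return min(candidates, key=lambda v: sum(t * t for t in v))

# For the lattice spanned by (2, 0) and (1, 2), the shortest non-zero
# vector is (±2, 0), of squared length 4.
shortest = svp_via_cvp([[2, 0], [1, 2]])
```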
Goldreich et al. showed that any hardness of SVP implies the same hardness for CVP.[22] Using PCP tools, Arora et al. showed that CVP is hard to approximate within factor 2^(log^(1−ε) n) unless NP is contained in quasi-polynomial time.[23] Dinur et al. strengthened this by giving an NP-hardness result for the factor n^(c/log log n) for some constant c > 0.[24]
Algorithms for CVP, especially the Fincke and Pohst variant,[6] have been used for data detection in multiple-input multiple-output (MIMO) wireless communication systems (for coded and uncoded signals).[25][13] In this context it is called sphere decoding due to the radius used internal to many CVP solutions.[26]
It has been applied in the field of the integer ambiguity resolution of carrier-phase GNSS (GPS).[27] It is called the LAMBDA method in that field. In the same field, the general CVP problem is referred to as Integer Least Squares.
This problem is similar to the GapSVP problem. For GapCVPβ, the input consists of a lattice basis and a vector v, and the algorithm must answer whether one of the following holds:
there is a lattice vector such that the distance between it and v is at most 1, or
every lattice vector is at a distance greater than β away from v.
The remaining case, excluded by the promise, is that the closest lattice vector is at a distance between 1 and β, hence the name GapCVP.
The problem is trivially contained in NP for any approximation factor.
Schnorr, in 1987, showed that deterministic polynomial time algorithms can solve the problem for β = 2^(O(n (log log n)² / log n)).[28] Ajtai et al. showed that probabilistic algorithms can achieve a slightly better approximation factor of β = 2^(O(n log log n / log n)).[10]
In 1993, Banaszczyk showed that GapCVPn is in NP ∩ coNP.[29] In 2000, Goldreich and Goldwasser showed that β = √(n / log n) puts the problem in both NP and coAM.[30] In 2005, Aharonov and Regev showed that for some constant c, the problem with β = c√n is in NP ∩ coNP.[31]
For lower bounds, Dinur et al. showed in 1998 that the problem is NP-hard for β = n^(c/log log n).[32]
Given a lattice L of dimension n, the algorithm must output n linearly independent vectors v1, v2, ..., vn so that max ‖vi‖ ≤ max ‖bi‖, where the right-hand side considers all bases b1, ..., bn of the lattice.
In the γ-approximate version SIVPγ, given a lattice L with dimension n, one must find n linearly independent vectors of length max ‖vi‖ ≤ γ · λ_n(L), where λ_n(L) is the n-th successive minimum of L.
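For intuition, the first two successive minima of a small two-dimensional lattice can be found by the same kind of brute-force enumeration used for SVP; the coefficient bound and function name here are heuristic, illustrative assumptions:

```python
import itertools
import math

def successive_minima_2d(b1, b2, coeff_bound=6):
    """Brute-force lambda_1 and lambda_2 of a 2-D lattice:
    lambda_1 is the length of the shortest non-zero vector, and
    lambda_2 the length of the shortest vector linearly independent
    of a lambda_1 achiever."""
    vecs = []
    for a, b in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=2):
        if (a, b) == (0, 0):
            continue
        v = (a * b1[0] + b * b2[0], a * b1[1] + b * b2[1])
        vecs.append((v[0] ** 2 + v[1] ** 2, v))
    vecs.sort()
    l1_sq, v1 = vecs[0]
    for norm_sq, v in vecs:
        # 2-D cross product is non-zero iff v is independent of v1
        if v[0] * v1[1] != v[1] * v1[0]:
            return math.sqrt(l1_sq), math.sqrt(norm_sq)

# Lattice spanned by (2, 0) and (1, 2): lambda_1 = 2 (from (±2, 0)),
# lambda_2 = sqrt(5) (from e.g. (1, 2)).
l1, l2 = successive_minima_2d([2, 0], [1, 2])
```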
This problem is similar to CVP. Given a vector such that its distance from the lattice is at most λ(L)/2, the algorithm must output the closest lattice vector to it.
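When the target is promised to be very close to the lattice, a simple heuristic often already recovers the closest vector: Babai's rounding procedure, sketched below for dimension 2. This is an illustrative sketch (not, for instance, the LAMBDA method); it is only reliable for sufficiently close targets and reasonably reduced bases:

```python
def babai_rounding_2d(b1, b2, t):
    """Babai rounding in dimension 2: write t in basis coordinates
    (via Cramer's rule), round each coordinate to the nearest integer,
    and map back to a lattice vector."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    c1 = (t[0] * b2[1] - t[1] * b2[0]) / det   # coordinate of t along b1
    c2 = (b1[0] * t[1] - b1[1] * t[0]) / det   # coordinate of t along b2
    a1, a2 = round(c1), round(c2)
    return [a1 * b1[0] + a2 * b2[0], a1 * b1[1] + a2 * b2[1]]

# Target (3.1, 1.9) is close to the lattice point (3, 2) = (2,0) + (1,2).
v = babai_rounding_2d([2, 0], [1, 2], [3.1, 1.9])
```

For a badly skewed basis the rounding step can return a lattice point far from the closest one, which is one motivation for running lattice reduction first.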
Many problems become easier if the input basis consists of short vectors. An algorithm that solves the Shortest Basis Problem (SBP) must, given a lattice basis B, output an equivalent basis B′ such that the length of the longest vector in B′ is as short as possible.
The approximation version SBPγ consists of finding a basis whose longest vector is at most γ times longer than the longest vector in the shortest basis.
Average-case hardness of problems forms a basis for proofs-of-security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems to base cryptographic schemes on. Moreover, worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers.
The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) was an early efficient algorithm for this problem which could output an almost reduced lattice basis in polynomial time.[33] This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice; however, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai.[2]
In his seminal papers, Ajtai showed that the SVP problem was NP-hard and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems.[2][3] Building on these results, Ajtai and Dwork created a public-key cryptosystem whose security could be proven using only the worst-case hardness of a certain version of SVP,[34] thus making it the first result to have used worst-case hardness to create secure systems.[35]
^ Kannan, Ravi (1983). "Improved algorithms for integer programming and related lattice problems". Proceedings of the fifteenth annual ACM symposium on Theory of computing – STOC '83. New York, NY, USA: ACM. pp. 193–206. doi:10.1145/800061.808749. ISBN 978-0-89791-099-6. S2CID 18181112.
^ Micciancio, Daniele; Voulgaris, Panagiotis (2010). "A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations". Proceedings of the forty-second ACM symposium on Theory of computing – STOC '10. New York, NY, USA: ACM. pp. 351–358. CiteSeerX 10.1.1.705.3304. doi:10.1145/1806689.1806739. ISBN 978-1-4503-0050-6. S2CID 2449948.
^ Micciancio, Daniele; Goldwasser, Shafi (2002). Complexity of Lattice Problems. Springer.
^ Goldreich, O.; et al. (1999). "Approximating shortest lattice vectors is not harder than approximating closest lattice vectors". Inf. Process. Lett. 71 (2): 55–61. doi:10.1016/S0020-0190(99)00083-6.
^ Biglieri, E.; Calderbank, R.; Constantinides, Anthony G.; Goldsmith, A.; Paulraj, A.; Poor, H. V. (2007). MIMO Wireless Communications. Cambridge: Cambridge U. P.
^ Wang, Ping; Le-Ngoc, Tho (2011). "A List Sphere Decoding Algorithm with Improved Radius Setting Strategies". Wireless Personal Communications. 61 (1): 189–200. doi:10.1007/s11277-010-0018-4. S2CID 30919872.
^ Schnorr, C. P. "Factoring integers and computing discrete logarithms via diophantine approximation". Advances in Cryptology – Proceedings of Eurocrypt '91.
^ Cai, Jin-Yi (2000). "The Complexity of Some Lattice Problems". Algorithmic Number Theory. Lecture Notes in Computer Science. Vol. 1838. pp. 1–32. doi:10.1007/10722028_1. ISBN 978-3-540-67695-9.