Papers updated in last 365 days (3230 results)

2025/2103(PDF)
Last updated:  2026-02-20
Threshold Batched Identity-Based Encryption from Pairings in the Plain Model
Public-key cryptography
Junqing Gong, Brent Waters, Hoeteck Wee, and David J. Wu
In a batched identity-based encryption (IBE) scheme, ciphertexts are associated with a batch label $\mathsf{tg}^\ast$ and an identity $\mathsf{id}^\ast$ while secret keys are associated with a batch label $\mathsf{tg}$ and a set of identities $S$. Decryption is possible whenever $\mathsf{tg} = \mathsf{tg}^\ast$ and $\mathsf{id}^\ast \in S$. The primary efficiency property in a batched IBE scheme is that the size of the decryption key for a set $S$ should be independent of the size of $S$. Batched IBE schemes provide an elegant cryptographic mechanism to support encrypted memory pools in blockchain applications.

In this work, we introduce a new algebraic framework for building pairing-based batched IBE. Our framework gives the following:

First, we obtain a selectively-secure batched IBE scheme under a $q$-type assumption in the plain model. Both the ciphertext and the secret key consist of a constant number of group elements. This is the first pairing-based batched IBE scheme in the plain model. Previous pairing-based schemes relied on the generic group model and the random oracle model.

Next, we show how to extend our base scheme to a threshold batched IBE scheme with silent setup. In this setting, users independently choose their own public and private keys, and there is a non-interactive procedure to derive the master public key (for a threshold batched IBE scheme) for a group of users from their individual public keys. We obtain a statically-secure threshold batched IBE scheme with silent setup from a $q$-type assumption in the plain model. As before, ciphertexts and secret keys in this scheme contain a constant number of group elements. Previous pairing-based constructions of threshold batched IBE with silent setup relied on the generic group model, could only support a polynomial number of identities (where the size of the public parameters scaled linearly with this bound), and ciphertexts contained $O(\lambda / \log \lambda)$ group elements, where $\lambda$ is the security parameter.

Finally, we show that if we work in the generic group model, then we obtain a (threshold) batched IBE scheme with shorter ciphertexts (by 1 group element) than all previous pairing-based constructions (and without impacting the size of the secret key).

Our constructions rely on classic algebraic techniques underlying pairing-based IBE and do not rely on the signature-based witness encryption viewpoint taken in previous works.
2025/1884(PDF)
Last updated:  2026-02-19
PERSEUS – Probabilistic Evaluation of Random probing SEcurity Using efficient Sampling
Implementation
Sonia Belaïd and Gaëtan Cassiers
Cryptographic implementations are inherently vulnerable to side-channel attacks, which exploit physical leakages such as power consumption. Masking has become the most widely adopted countermeasure to mitigate these threats, as it randomizes intermediate values and makes the leakage less exploitable. Yet, a central challenge remains: how to rigorously assess the concrete security level of masked implementations.

To tackle this issue, the random probing model has emerged as a powerful abstraction. It formalizes leakage as random probes in the circuit and, importantly, the security in the noisy leakage model, which closely reflects the behavior of real embedded devices, reduces to security in the random probing model. Hence, proving security in the random probing model provides sound guarantees of practical resistance against side-channel attacks.

Yet, the current state of the art on random probing compilers and verifiers suffers from a clear limitation: scalable approaches yield prohibitively large and inefficient circuits, while tighter methods do not scale to practical circuit sizes. In this work, we bridge this gap by introducing a new methodology that directly estimates the random probing security of large circuits through Monte Carlo sampling, combined with a pruning strategy that drastically reduces the sampling space.

We implement our approach in a new tool, PERSEUS, which supports both gate and wire leakage models. Our experiments demonstrate that PERSEUS can efficiently evaluate masked implementations of AES-128 with $n=8$ shares, achieving security levels beyond 32 bits, thereby significantly advancing the state of the art in practical verification of side-channel countermeasures.
2025/2036(PDF)
Last updated:  2026-02-19
On new variants of funcCPA security and related CCA-secure constructions
Public-key cryptography
Caroline Fontaine, Marc Renard, Renaud Sirdey, and Oana Stan
FuncCPA is a recent security notion in which the CPA game is extended by a functional re-encryption oracle in order to model setups in which a server performing FHE computations is allowed to interactively delegate part of the computation back to the client. In this paper, we study funcCPA-style variants of several CCA security notions, including CCA1 and the more recent vCCA security. Contrary to the CPA case, where a strict separation holds between CPA and funcCPA, we show that these new variants are equivalent to their respective originating CCA security notions. Interestingly, funcCPA-style security notions also model setups where, rather than delegating part of the encrypted-domain computation all the way back to the client, the server has the ability to perform this delegation towards an honest or semi-honest client proxy it hosts, such as a secure enclave. We then provide a number of blueprints for achieving both FHE-like capabilities and advanced CCA security properties, which may then meaningfully be implemented by leveraging the combination of a partially homomorphic scheme and a semi-honest, non-colluding enclave hosted within the server performing the encrypted-domain calculations itself.
2026/192(PDF)
Last updated:  2026-02-19
Verification Theatre: False Assurance in Formally Verified Cryptographic Libraries
Attacks and cryptanalysis
Nadim Kobeissi
Every formally verified system embeds a verification boundary: the interface between code with machine-checked proofs and code that is trusted without them. We study what happens when this boundary is not communicated clearly. Through a case study of Cryspen's libcrux and hpke-rs cryptographic libraries, we present thirteen vulnerabilities that escaped formal verification. Nine reside in unverified code, including a cross-backend endianness bug that caused real decryption failures in Signal's post-quantum ratchet, a missing mandatory X25519 validation, nonce reuse via integer overflow, and two FIPS 204 specification violations in the ML-DSA verifier. Four reside in formally verified specification and proof code: in ML-KEM, a wrong decompression constant, a missing inverse NTT, and a false serialization proof; in ML-DSA, a wrong multiplication specification that renders axiomatized AVX2 proofs unsound.

From these findings, we develop a taxonomy of five verification boundary failure types, a lightweight auditing methodology for detecting them, and a comparative analysis with AWS's verified libcrypto. The same failure types arise in both projects, but their management (through systematic documentation, proof execution in CI, and clear scope communication) varies significantly. We call the gap between verification claims and verification reality "verification theatre", and propose concrete practices for closing it.
2025/1122(PDF)
Last updated:  2026-02-19
Mechanizing Nested Hybrid Arguments
Foundations
Markus Krabbe Larsen and Carsten Schürmann
Hybrid arguments are prevalent in cryptographic security proofs; however, existing mechanizations in proof assistants are not as general as they could be. In this paper, we prove a general theorem about the admissibility of hybrid arguments that encompasses the query-counting, multi-instance, and nested cases. We present three case studies about public-key encryption, which demonstrate the generality of the theorem. The results of this paper are achieved in the setting of state-separating proofs; the theory is integrated into, and the case studies are mechanized in, Nominal-SSProve.
2025/1514(PDF)
Last updated:  2026-02-19
Rigorous Methods for Computational Number Theory
Foundations
Koen de Boer, Alice Pellet-Mary, and Benjamin Wesolowski
We present the first algorithm for computing class groups and unit groups of arbitrary number fields that provably runs in probabilistic subexponential time, assuming the Extended Riemann Hypothesis (ERH). Previous subexponential algorithms were either restricted to imaginary quadratic fields, or relied on several heuristic assumptions that have long resisted rigorous analysis.

The heart of our method is a new general strategy to provably solve a recurring computational problem in number theory (assuming ERH): given an ideal class $[\mathfrak a]$ of a number field $K$, sample an ideal $\mathfrak b \in [\mathfrak a]$ belonging to a particular family of ideals (e.g., the family of smooth ideals, or near-prime ideals). More precisely, let $\mathcal{S}$ be an arbitrary family of ideals, and $\mathcal{S}_B$ the family of $B$-smooth ideals. We describe an efficient algorithm that samples ideals $\mathfrak b \in [\mathfrak a]$ such that $\mathfrak b \in \mathcal{S}\cdot\mathcal{S}_B$ with probability proportional to the density of $\mathcal{S}$ within the set of all ideals.

The case where $\mathcal{S}$ is the set of prime ideals yields the family $\mathcal{S}\cdot\mathcal{S}_B$ of near-prime ideals, of particular interest in that it constitutes a dense family of efficiently factorable ideals. The case of smooth ideals $\mathcal{S} = \mathcal{S}_B$ regularly comes up in index-calculus algorithms (notably to compute class groups and unit groups), where it has long constituted a theoretical obstacle overcome only by heuristic arguments.
2026/197(PDF)
Last updated:  2026-02-19
Efficient Evaluation of Multivariate Polynomials over Structured Subsets of $\mathbb F_q^n$
Foundations
Vaibhav Dixit, Santanu Sarkar, Fukang Liu, and Willi Meier
Efficient evaluation of a multivariate polynomial of degree $d$ over a finite space is a central primitive in algebraic cryptanalysis, particularly in exhaustive search attacks against multivariate public-key cryptosystems (MPKCs). For the Boolean space $\mathbb F_2^n$, Bouillaguet et al. introduced the fast exhaustive search (FES) algorithm at CHES 2010. This line of work was further developed by Dinur at EUROCRYPT 2021 and Bouillaguet at TOMS 2024. Extending beyond the Boolean setting, Furue and Takagi proposed an algorithm at PQCrypto 2023 that generalizes FES to the finite-field space $\mathbb F_q^n$, where $q$ is a prime number, achieving time complexity $\mathcal O\big(d\cdot q^n\big)$ with an initialization cost of $\binom{n+d}{d}^2$ and memory complexity $\mathcal{O}\big(\log(q\cdot n)\cdot n \cdot \binom{n+d}{d}\big)$. However, all these algorithms operate over the full space $\mathbb F_q^n$, which limits their applicability in many cryptanalytic scenarios where polynomial evaluation is required only over specific subsets of $\mathbb F_q^n$, such as those arising in the Syndrome Decoding Problem. Recently, Liu et al. proposed a memory-efficient algorithm for evaluating polynomials over the structured subset $P_{n_s}^{w_s} \times \cdots \times P_{n_1}^{w_1} \subseteq \mathbb F_2^n$, where $\sum_{i=1}^{s} n_i = n$ and $P_{n_i}^{w_i} \subseteq \mathbb F_2^{n_i}$ denotes the set of vectors of length $n_i$ with Hamming weight at most $w_i$. In this work, we extend the structured-subset evaluation paradigm from the Boolean setting to arbitrary finite fields $\mathbb F_q$. Building on the abstraction of evaluation rules and evaluation orders introduced by Liu et al., and combining it with higher-order derivative techniques over finite fields, we develop a unified theoretical framework for evaluating multivariate polynomials over the structured subset $S$ of $ \mathbb{F}_q^n$. We derive two methods for the initialization phase: a coefficient-based approach using the coefficients of the polynomial and a derivative-based approach exploiting its higher-order derivatives. The former achieves time complexity $\mathcal{O}\!\big(d \cdot \binom{2n+d}{d}\big)$ with memory requirement $\mathcal{O}\!\big(\log q \cdot \binom{n+d}{d} + \log q \cdot d^2\big)$, while the latter runs in time $\mathcal{O}\!\big(\binom{n+d}{d}^2\big)$ and requires $\mathcal{O}\!\big(\log q \cdot \binom{n+d}{d}\big)$ memory. Depending on the values of $n$ and $d$, the appropriate method is selected for initialization. After initialization, a degree-$d$ polynomial can be evaluated over a structured subset $S$ of $ \mathbb{F}_q^n$ with time complexity $\mathcal{O}\!\big(d \cdot |S|\big)$.
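To make the search space concrete, the following naive Python enumeration builds such a structured subset and evaluates a polynomial point by point (illustrative toy parameters; this is only the baseline that the paper's amortized $\mathcal{O}(d \cdot |S|)$ method improves upon, not the paper's algorithm):

```python
# Illustrative sketch (not the paper's algorithm): enumerate the structured
# subset S = P_{n_s}^{w_s} x ... x P_{n_1}^{w_1} of F_q^n, where P_n^w is the
# set of length-n vectors over F_q with at most w nonzero coordinates, then
# evaluate a polynomial naively over S.
from itertools import combinations, product

def bounded_weight_vectors(n, w, q):
    """Yield all vectors in F_q^n with at most w nonzero coordinates."""
    for k in range(w + 1):
        for positions in combinations(range(n), k):
            for values in product(range(1, q), repeat=k):
                v = [0] * n
                for pos, val in zip(positions, values):
                    v[pos] = val
                yield tuple(v)

def structured_subset(blocks, q):
    """Cartesian product of bounded-weight blocks; blocks = [(n_i, w_i), ...]."""
    pools = [list(bounded_weight_vectors(n_i, w_i, q)) for n_i, w_i in blocks]
    for parts in product(*pools):
        yield sum(parts, ())          # concatenate the block vectors

# Toy polynomial f(x0,x1,x2,x3) = x0*x1 + x2 + 2*x3 over F_3, two blocks (2,1).
q = 3
f = lambda x: (x[0] * x[1] + x[2] + 2 * x[3]) % q
S = list(structured_subset([(2, 1), (2, 1)], q))
print(len(S), "points in S vs", q**4, "in F_3^4")   # 25 vs 81
print(sum(1 for x in S if f(x) == 0), "zeros of f on S")
```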
2025/1941(PDF)
Last updated:  2026-02-19
Adaptively-Secure Three-Round Threshold Schnorr from DL
Cryptographic protocols
Guilhem Niot, Michael Reichle, and Kaoru Takemure
Threshold signatures are an important tool for trust distribution, and preserving the interface of standardized signatures, such as Schnorr signatures, is crucial for their adoption. In practice, latency dominates end-to-end signing time, so minimizing the number of interaction rounds is critical. Ideally, this is achieved under minimal assumptions and with adaptive security, where the adversary can corrupt signers on-the-fly during the protocol. While Schnorr signatures are proven secure under the Discrete Logarithm (DL) assumption in the random oracle model, the most round-efficient adaptively-secure threshold Schnorr protocols rely on stronger assumptions such as Decisional Diffie-Hellman (DDH), the Algebraic One-More Discrete Logarithm (AOMDL), or even the Low-Dimensional Vector Representation (LDVR) assumptions. The only adaptively-secure threshold Schnorr signature from the DL assumption requires five rounds, leaving a significant gap in our understanding of this fundamental primitive. Achieving low-round protocols with adaptive security from the DL assumption has remained an open question.

We resolve this question by presenting the first adaptively-secure threshold Schnorr scheme in three rounds (two online, one offline) in the random oracle model under the DL assumption. Our result demonstrates that achieving both low round complexity and adaptive security is possible while preserving the (so far) minimal assumptions for Schnorr signatures.

To achieve this, we introduce new techniques, including a novel use of an equivocal commitment scheme paired with a simulation-extractable NIZK, and a masking-based aggregated opening strategy for homomorphic commitments. Our work also makes several contributions that might be of independent interest, including a formalization of a strong adaptive security notion, a stronger commitment equivocation property, and an analysis of the simulation-extractability of the randomized Fischlin transformation.
2026/272(PDF)
Last updated:  2026-02-19
On the Complexity of Interactive Arguments
Foundations
Idan Baril and Iftach Haitner
We study the minimal hardness assumptions required for constructing interactive arguments for NP, focusing on succinct arguments—the total number of bits sent by the prover is smaller than the witness size—and on relaxed forms of zero knowledge, such as witness indistinguishability. Known constructions of succinct arguments rely on collision-resistant hash functions (Kilian, STOC 92), indistinguishability obfuscation (Sahai and Waters, STOC 14), the discrete logarithm assumption (Bootle et al., Eurocrypt 16), and lattice-based assumptions (Baum et al., Crypto 18), while known constructions of witness indistinguishability require one-way functions (OWFs) (Goldreich et al., FOCS 86, JACM 91). This suggests that succinct witness-indistinguishable interactive arguments may require the existence of OWFs.

Somewhat surprisingly, we prove that the existence of fully black-box reductions from OWFs to succinct witness-indistinguishable interactive arguments is unlikely: the existence of such a reduction would imply (unconditionally) the existence of OWFs. More generally, we consider fully black-box reductions from OWFs to succinct witness-indistinguishable interactive arguments combined with an additional hardness assumption $G$ (e.g., $\mathsf{NP} \not\subseteq \mathsf{P}/\mathsf{poly}$). Such reductions would show that any algorithm breaking the candidate one-way function can be transformed into an algorithm that either breaks the soundness of the interactive argument or violates the assumption $G$. We prove that the existence of such a reduction already implies a black-box reduction from OWFs to $G$ alone.
2024/1906(PDF)
Last updated:  2026-02-19
On Efficient Computations of $y^2=x^3+b/\mathbb{F}_p$ for Primes $p\equiv 1 \pmod 3$
Public-key cryptography
Guangwu Xu, Wei Yu, Ke Han, and Pengfei Lu
Solinas' window $\tau$NAF algorithm has been a very powerful scalar multiplication method designed for the class of Koblitz curves over binary fields. The idea is to use the Frobenius endomorphism $\tau$, which is a notable efficiently computable endomorphism (viewed as multiplication by a complex algebraic integer). The method consists of two major steps: the first is a reduction of a scalar $k$ to the form $k_1+k_2\tau$, with $k_1, k_2$ being about half the size of the group order; the second is to use the $\tau$-adic non-adjacent form of $k_1+k_2\tau$ to evaluate $kP$, where an efficient pre-computation is enabled based on a carefully chosen set of coefficients. It would be desirable to develop a similar method for a larger family of curves, especially for practical curves over prime fields. In this effort, the GLV method was proposed as a suboptimal solution whose essential part is to create a reduction with respect to some efficiently computable endomorphism, and then use a simultaneous multiplication method to evaluate $kP$. The GLV method has been the state of the art for scalar multiplication for the family of curves $E_b: y^2=x^3+b/\mathbb{F}_p$ for primes $p\equiv 1 \pmod 3$. This family of curves is of practical significance; e.g., some of its instances are used in blockchain platforms such as Bitcoin and Ethereum. The curves $E_b/\mathbb{F}_p$ have the ring $\mathbb{Z}[\omega]$ of Eisenstein integers as their ring of endomorphisms, and scalar multiplication by the unit group $U\subset\mathbb{Z}[\omega]$ is especially efficient, as $\omega$ corresponds to the map $(x,y)\mapsto (\beta x,y)$ (and ${\cal O}\mapsto {\cal O}$), where $\beta$ is a cubic root of unity in $\mathbb{F}_p$. In this paper, we establish a window $\tau$NAF algorithm for the class of curves $E_b/\mathbb{F}_p$, using the endomorphism $\tau = 1-\omega$. We first derive several efficient formulas for some special scalars. More precisely, for $P=(x,y)\in E_b(\mathbb{F}_p)$, we have
\[\tau P = \left(\frac{x^3+4b}{(1-\beta)^2x^2}, y\frac{x^3-8b}{(1-\beta)^3x^3}\right).\]
Converted to Jacobian projective coordinates, this operation of $\tau$ requires $6\mathbf{M}$ ($\mathbf{M}$ denotes the cost of a field multiplication), which is less expensive than a point doubling. The efficient formula for $\tau P$ can further be used to speed up the computation of $3P$: compared to the current record of $15\mathbf{M}$, our tripling formula costs just $10\mathbf{M}$. We also propose efficient computations of $(2-\omega)P$ and $(3-\omega)P$. As $N(\tau)=3$, the fast computation of $\tau$ and the nice properties of Eisenstein integers enable us to develop a window $\tau$NAF method for curves $E_b/\mathbb{F}_p$. This result is not only of theoretical interest but also of high performance, due to the facts that (1) the window $\tau$NAF can further be turned into a sparse form that uses only a tripling operation (which costs $10\mathbf{M}$) and an augmented-$\tau$ operation (which costs $5\mathbf{M}$); and (2) the pre-computation is carefully designed by choosing a set of coefficients that is $U$-invariant; such a symmetry simplifies the pre-computation in that only about one-sixth of the coefficients need to be processed. With respect to the number of field multiplications used, our optimized algorithm achieves improvements of more than $16.7\%$, $17.6\%$, and $18\%$ over the current state of the art, for group orders of $256$, $384$, and $512$ bits, respectively.
This paper also explores a regular window $\tau$NAF as a countermeasure against side-channel attacks. This regularized version significantly outperforms the regular GLV method, cutting costs by $17.7\%$, $19.5\%$, and $20.9\%$ for group orders of $256$, $384$, and $512$ bits, respectively.
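Since the abstract states the $\tau P$ formula in closed form, it can be checked numerically against the definition $\tau = 1 - \omega$ (i.e., $\tau P = P - \omega P$). A toy Python verification over a small prime, not the paper's Jacobian-coordinate implementation:

```python
# Toy numerical check of the tau-endomorphism formula quoted above, on
# E_b: y^2 = x^3 + b over F_p with p = 1 (mod 3). Small illustrative
# parameters only; the paper works on 256- to 512-bit groups.
p, b = 13, 7                                   # 13 = 1 (mod 3)

def inv(a):
    return pow(a, p - 2, p)                    # field inverse via Fermat

def add(P, Q):                                 # affine chord addition (x1 != x2)
    (x1, y1), (x2, y2) = P, Q
    lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

beta = next(z for z in range(2, p) if pow(z, 3, p) == 1)   # primitive cube root of 1
P = next((x, y) for x in range(1, p) for y in range(1, p)
         if (y * y - x**3 - b) % p == 0)                   # any point on E_b

omega_P = (beta * P[0] % p, P[1])              # omega: (x, y) -> (beta*x, y)
tau_P = add(P, (omega_P[0], -omega_P[1] % p))  # tau = 1 - omega, so tau*P = P - omega(P)

x, y = P
formula = ((x**3 + 4 * b) * inv((1 - beta)**2 * x**2) % p,
           y * (x**3 - 8 * b) * inv((1 - beta)**3 * x**3) % p)
assert tau_P == formula
print("closed-form tau P matches P - omega(P):", tau_P)
```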
2025/897(PDF)
Last updated:  2026-02-19
SQIsign2DPush: Faster Signature Scheme Using 2-Dimensional Isogenies
Public-key cryptography
Kohei Nakagawa and Hiroshi Onuki
Isogeny-based cryptography comprises cryptographic schemes whose security is based on the hardness of a mathematical problem called the isogeny problem, and it is attracting attention as one of the candidates for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization process for additional signatures. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates for NIST PQC standardization. Recently, many new signature schemes using high-dimensional isogenies have been proposed, such as SQIsignHD, SQIsign2D-West, SQIsign2D-East, and SQIPrime. Last year, SQIsign advanced to Round 2 of the NIST competition and was updated to version 2.0 (we call it SQIsign-v2.0), which is based on SQIsign2D-West. SQIsign-v2.0 achieves smaller signature sizes and faster verification. However, the signing costs are relatively high. In this paper, we propose a new signature scheme, SQIsign2DPush, which has a smaller signing cost than SQIsign-v2.0 while the signature size and the verification cost are almost the same.
2025/1688(PDF)
Last updated:  2026-02-19
SUMMER: Recursive Zero-Knowledge Proofs for Scalable RNN Training
Applications
Yuange Li and Xiong Fan
Zero-knowledge proofs of training (zkPoT) enable a prover to certify that a model was trained on a committed dataset under a prescribed algorithm without revealing the model or data. Proving recurrent neural network (RNN) training is challenging due to hidden-state recurrence and cross-step weight sharing, which require proofs to enforce recurrence, gradients, and nonlinear activations across time. We present SUMMER (SUMcheck and MERkle tree), a recursive zkPoT for scalable RNNs. SUMMER generates sumcheck-based proofs that backpropagation through time (BPTT) was computed correctly over a quantized finite field, while nonlinearities such as $\tanh$ and softmax are validated by lookup arguments. Per-step commitments and proofs are folded with Merkle trees, yielding a final commitment and a succinct proof whose size and verification time are independent of the number of iterations.

SUMMER offers (i) the first end-to-end zkPoT for RNN training, including forward, backward, and parameter updates; (ii) the first use of LogUp for nonlinear operations with a batched interface; and (iii) efficient recursive composition of lookup and sumcheck proofs. On a Mini-Char-RNN with 12M parameters, the prover runs in 70.1 seconds per iteration, $8.5\times$ faster and $11.6\times$ more memory-efficient than the IVC baseline, with 165-kilobyte proofs verified in 20 milliseconds.
2025/760(PDF)
Last updated:  2026-02-18
DGSP: An Efficient Scalable Fully Dynamic Group Signature Scheme Using $\rm{SPHINCS}^+$
Cryptographic protocols
Mojtaba Fadavi, Seyyed Arash Azimi, Sabyasachi Karati, and Samuel Jaques
A group signature scheme enables users of a group to anonymously sign messages on behalf of the group, while a designated authority can revoke anonymity when needed to ensure user accountability. Among group signature schemes, fully dynamic ones are particularly desirable, as they allow new users to join and misbehaving existing users to be revoked without requiring system-wide updates.

This paper introduces DGSP, a post-quantum fully dynamic group signature scheme that addresses key limitations of existing schemes like DGMT and SPHINX-in-the-Head (SITH). Leveraging the properties of ${\rm SPHINCS}^+$, DGSP achieves a superior balance between scalability and efficiency by (i) supporting up to $2^{60}$ users, (ii) requiring negligible setup time, and (iii) featuring efficient algorithms with short signatures of constant size. Notably, while DGMT is limited to $2^{15}$ users, DGSP extends this to $2^{60}$ while keeping signatures compact: only 3.03 to 4.93 times larger than those of DGMT, yet just 0.021 to 0.004 times the size of SITH signatures. This makes DGSP a compelling solution for applications requiring both large-scale user support and signature efficiency in the post-quantum setting. Moreover, DGSP strengthens managerial accountability compared to DGMT by (i) enabling users to verify their certificates generated by the manager and (ii) ensuring public verifiability of the manager's signature attribution. Although SITH claims to support $2^{60}$ users, our analysis reveals that its opening algorithm is highly inefficient, making it impractical to handle such a large number of users. Our security analysis shows that DGSP achieves unforgeability, anonymity, and traceability in the standard model. We also provide a complete implementation of DGSP. Our experimental study shows that DGSP is superior to existing schemes in terms of efficiency and scalability.
2026/292(PDF)
Last updated:  2026-02-18
Crossing with Confidence: Formal Analysis and Model Checking of Blockchain Bridges
Cryptographic protocols
Pyrros Chaidos, Pooya Farshim, Denis Firsov, Dimitar Jetchev, Aggelos Kiayias, Markulf Kohlweiss, and Anca Nitulescu
We develop formal code-based security definitions for blockchain bridges and apply them to several bridge architectures deployed in practice. We derive, on the one hand, traditional pen-and-paper proofs and, on the other, automated guarantees against bounded counterexamples. The latter is achieved via bounded model checking of our formally specified properties, implemented in Quint, a specification language and model checker closely related to TLA+.

Our definitions are expressed in a precise, code-based variant of the Universal Composition (UC) framework. This enables the modular use of hybrid functionalities, even for property-based definitions, and is essential for bounded model checking, where underlying primitives must be idealized.

Accordingly, we idealize and model-check all building blocks used in our protocols. Notably, we formulate a novel UC ideal functionality for Advanced Threshold Signatures (ATS) and model-check it for attacks to ensure its robustness.
2025/1364(PDF)
Last updated:  2026-02-18
A Framework for Witness Encryption from Linearly Verifiable SNARKs and Applications
Public-key cryptography
Sanjam Garg, Mohammad Hajiabadi, Dimitris Kolonelos, Abhiram Kothapalli, and Guru-Vamsi Policharla
Witness Encryption (WE) is a powerful cryptographic primitive, enabling applications that would otherwise appear infeasible. While general-purpose WE requires strong cryptographic assumptions and is highly inefficient, recent works have demonstrated that it is possible to design special-purpose WE schemes for targeted applications that can be built from weaker assumptions and can also be concretely efficient. Despite the plethora of constructions in the literature that (implicitly) use witness encryption schemes, there has been no systematic study of special-purpose witness encryption schemes.

In this work, we make progress towards this goal by designing a modular and extensible framework, which allows us to better understand existing schemes and further enables us to construct new witness encryption schemes. The framework is designed around simple but powerful building blocks that we refer to as "gadgets". Gadgets can be thought of as witness encryption schemes for small targeted relations (induced by linearly verifiable arguments), but they can be composed with each other to build larger, more expressive relations that are useful in applications. To highlight the power of our framework, we methodically recover past results, improve upon them, and even provide new feasibility results.

The first application of our framework is a Registered Attribute-Based Encryption scheme [Hohenberger et al. (Eurocrypt 23)] with a linear-sized common reference string (CRS). Numerous Registered Attribute-Based Encryption (R-ABE) constructions have been introduced, though a black-box R-ABE construction with a CRS linear in the number of users has been a persistent open problem, the state of the art concretely being $N^{1.58}$ (Garg et al. [GLWW, CRYPTO 24]). Empowered by our Witness Encryption framework, we provide the first construction of black-box R-ABE with a linear-sized CRS. Our construction is based on a novel realization of encryption for DNF formulas that leverages encryption for set membership.

Our second application is a feasibility result for Registered Threshold Encryption (RTE) with succinct ciphertexts. RTE (Branco et al. [ASIACRYPT 2024]) is an analogue of the recently introduced Silent Threshold Encryption (Garg et al. [GKPW, CRYPTO 24]) in the registered setting. We revisit Registered Threshold Encryption and provide an efficient construction, with constant-sized encryption key and ciphertexts, that makes use of our WE framework.
2025/1547(PDF)
Last updated:  2026-02-18
Silent Threshold Cryptography from Pairings: Expressive Policies in the Plain Model
Public-key cryptography
Brent Waters and David J. Wu
Threshold cryptography is a standard technique for distributing trust by splitting cryptographic keys into multiple shares held by different parties. The classic model of threshold cryptography assumes either that a trusted dealer distributes the shares to the different parties (and in doing so, also knows the overall secret) or that the users participate in an interactive distributed key-generation protocol to derive their individual shares. In recent years, several works have proposed a new model where users can independently choose a public key, and there is a public and deterministic function that derives the joint public key associated with a group of users from their individual keys. Schemes with this silent (i.e., non-interactive) setup procedure allow us to take advantage of the utility of threshold cryptography without needing to rely on a trusted dealer or an expensive interactive setup phase.

Existing works have primarily focused on threshold policies. This includes notions like threshold signatures (resp., encryption) with silent setup (where only quorums with at least $T$ users can sign (resp., decrypt) a message) and distributed broadcast encryption (a special case of threshold encryption where the threshold is 1). Currently, constructions that support general threshold policies either rely on strong tools such as indistinguishability obfuscation and witness encryption, or analyze security in idealized models like the generic bilinear group model. The use of idealized models is due to the reliance on techniques for constructing succinct non-interactive arguments of knowledge (SNARKs).

In this work, we introduce a new pairing-based approach for constructing threshold signatures and encryption schemes with silent setup. On the one hand, our techniques directly allow us to support expressive policies like monotone Boolean formulas in addition to thresholds. On the other hand, we only rely on basic algebraic tools (i.e., a simple cross-term cancellation strategy), which yields constructions with shorter signatures and ciphertexts compared to previous pairing-based constructions. As an added bonus, we can also prove (static) security under $q$-type assumptions in the plain model. Concretely, the signature size in our distributed threshold signature scheme is 3 group elements and the ciphertext size in our distributed threshold encryption scheme is 4 group elements (together with a short tag).
2026/122(PDF)
Last updated:  2026-02-18
The Motte-and-Bailey Framework for Leakage-Resilient Accordion Modes: Featuring Qaitbay and Alicante
Secret-key cryptography
Mario Marhuenda Beltrán and Mustafa Khairallah
Accordion modes have experienced a surge in popularity, partially motivated by the recent NIST Accordion modes project. None of the existing practical constructions is leakage-resilient by default. In this work, we design a leakage-resilient Accordion mode. We start by presenting a generic analysis of the Encode-then-Encipher (EtE) framework in the leakage-resilient setting, assuming the enciphering is a leakage-resilient STPRP (STPRPl2). We show that the resulting security, while strong, suffers from some limitations. Next, we introduce Motte-and-Bailey, a general framework for building leakage-resilient accordion modes, in the spirit of the PIV construction. Motte-and-Bailey, or MaB for short, is a leveled construction, requiring light assumptions on most of its components to guarantee good STPRPl2, CIML2, and CCAmL2 security. In particular, we require two fully protected calls to a TBC, a collision-resistant hash function (with unbounded or light leakage), and an ideal leakage-resilient PRG, secure against single-trace attacks. Additionally, we present particular instantiations, Qaitbay and Alicante. In Qaitbay, the PRG and the hash function are replaced by the Sponge function, while an independent TBC is used for the leak-free calls. Alicante makes use of an ideal cipher, and uses the MDPH hash function and the 2PRG construction, while the leak-free calls are implemented using independent calls to the ideal cipher. We also give three flavours of how to instantiate the TBC inside Qaitbay. Last but not least, we show how to strengthen MaB, Qaitbay, and Alicante to also achieve CCAmL2.
2026/090(PDF)
Last updated:  2026-02-18
On the Impossibility of Round-Optimal Pairing-Free Blind Signatures in the ROM
Foundations
Marian Dietz, Julia Kastner, and Stefano Tessaro
Blind signatures play a central role in cryptographic protocols for privacy-preserving authentication and have attracted substantial attention in both theory and practice. A major line of research, dating back to the 1990s, has focused on constructing blind signatures from pairing-free groups. However, all known constructions in this setting require at least three moves of interaction between the signer and the user. These schemes treat the underlying group as a black box and rely on the random oracle in their security proofs. While computationally efficient, they suffer from the drawback that the signer must maintain state during a signing session. In contrast, round-optimal solutions are known under other assumptions and structures (e.g., RSA, lattices, and pairings), or via generic transformations such as Fischlin's method (CRYPTO '06), which employ non-black-box techniques.

This paper investigates whether the three-round barrier for pairing-free groups is inherent. We provide the first negative evidence by proving that, in a model combining the Random Oracle Model (ROM) with Maurer's Generic Group Model, no blind signature scheme can be secure if it signs sufficiently long messages while making at most a logarithmic number of random oracle queries. Our lower-bound techniques are novel in that they address the interaction of both models (generic groups and random oracles) simultaneously.
2026/265(PDF)
Last updated:  2026-02-18
Catalytic Tree Evaluation From Matching Vectors
Foundations
Alexandra Henzinger, Edward Pyne, and Seyoon Ragavan
We give new algorithms for tree evaluation (S. Cook et al., TOCT 2012) in the catalytic-computing model (Buhrman et al., STOC 2014). Two existing approaches aim to solve tree evaluation in low space: on the one hand, J. Cook and Mertz (STOC 2024) give an algorithm for TreeEval running in super-logarithmic space $O(\log n\log\log n)$ and super-polynomial time $n^{O(\log\log n)}$. On the other hand, a simple reduction from TreeEval to circuit evaluation, combined with the result of Buhrman et al. (STOC 2014), gives a catalytic algorithm for TreeEval running in logarithmic $O(\log n)$ free space and polynomial time, but with polynomial catalytic space.

We show that the latter result can be improved. We give a catalytic algorithm for TreeEval with logarithmic $O(\log n)$ free space, polynomial runtime, and subpolynomial $2^{\log^\epsilon n}$ catalytic space (for any $\epsilon > 0$). Our result opens a new line of attack on putting TreeEval in logspace, and immediately implies an improved simulation of time by catalytic space, by the reduction of Williams (STOC 2025). Our catalytic TreeEval algorithm is inspired by a connection to matching-vector families and private information retrieval, and improved constructions of (uniform) matching-vector families would imply improvements to our algorithm.
2026/269(PDF)
Last updated:  2026-02-18
Exact Error Analysis for Blind Rotation in Fully Homomorphic Encryption
Public-key cryptography
Sin Kim, Seunghwan Lee, Dohyuk Kim, and Dong-Joon Shin
Blind rotation is the computational core of GINX, AP, and AP+ bootstrappings, yet its error behavior has not been precisely characterized. Prior analyses rely on heuristic independence assumptions that fail to capture the distinct error accumulation patterns of different algorithmic variants. We prove that no additional assumptions are needed: the (M)LWE assumption guaranteeing ciphertext indistinguishability also implies the independence properties required for an exact second-moment characterization of blind rotation error. We derive the first closed-form formulas covering all four combinations of decomposition type (full vs. approximated) and algorithmic variant (vanilla vs. unrolled). Our analysis reveals three structural phenomena previously overlooked: (i) a message-dependent residual term that halves the effective decomposition variance, (ii) a doubling coefficient inherent to unrolled implementations, and (iii) bounded cross-term correlations whose contribution remains constant in the security parameter. Our formulas match experimental measurements within 1%, whereas prior estimators deviate by up to 50% for unrolled blind rotation. Our results show that the (M)LWE assumption, already required for security, is sufficient to determine the exact error behavior of GINX, AP, and AP+ bootstrapping.
2026/308(PDF)
Last updated:  2026-02-18
Anamorphic E-Voting: Coercion-Resistant Through Fake and Real Votes
Cryptographic protocols
Antonis Michalas
The paper addresses the challenging and timely issue of vote buying in electronic voting. Electronic voting is a well-established process in modern democracies. However, problems such as vote buying continue to pose a significant challenge. We aim to address this issue by leveraging the novel concept of Anamorphic Encryption to enable voters to cast their original vote together with a hidden "fake" one. In this way, voters can pursue their true preferences unobstructed while providing a compliance proof to potential vote buyers. The security of our construction is proved through a formal security analysis that considers powerful adversaries who aim at breaching user privacy and influencing votes. We believe that this innovative e-voting approach can enhance the overall security of e-voting systems and pave the way toward avoiding electoral manipulation.
2026/307(PDF)
Last updated:  2026-02-18
Composition Theorems for Zero-Knowledge IOPs
Foundations
Himanshu Vashishth and Mor Weiss
Interactive Oracle Proofs (IOPs) enable a probabilistic verifier interacting with a prover to verify NP statements while reading only a few bits from the prover messages. Zero-Knowledge IOPs (ZK-IOPs) have the additional guarantee that a query-bounded (possibly malicious) verifier learns nothing about the NP witness.

We initiate a systematic study of ZK preservation under IOP composition, and prove general composition theorems for ZK-IOPs in the 2- and multi-IOP settings. Our main result shows that ZK is preserved in the setting of perfect, black-box, straight-line ZK (the standard setting for ZK-IOPs), if the outer IOP has an additional mild property that is satisfied by existing ZK-IOPs. Contrary to common belief, this does not follow from composition theorems for multiparty protocols (Kushilevitz, Lindell and Rabin, STOC '06).

Our composition theorems show that ZK-IOPs can be modularly designed by composing sub-protocols, and ZK of the composed system follows seamlessly from the ZK guarantees of its building blocks. Using our composition theorems, we easily derive both new and known results on ZK-IOPs in various settings, including ZK preservation under parallel/sequential composition, ZK of IOPs for sumcheck and codeswitching, ZK of IOPs for NP using arithmetization and sumcheck, and ZK preservation under IOP proof composition (reproving a result of Bootle, Chiesa and Liu, EC '22).
2026/263(PDF)
Last updated:  2026-02-18
Compact and Statistical NIZK Proofs of Knowledge for Disjunctions from $\Sigma$-Protocols
Cryptographic protocols
Gennaro Avitabile, Luisa Siniscalchi, and Ivan Visconti
The classical results of Cramer et al. [Crypto '94] showed how to compose $n$ $\Sigma$-protocols with statistical HVZK to obtain an efficient proof of knowledge of their disjunction maintaining statistical HVZK without adding hardness assumptions. The Fiat-Shamir (FS) transform applied to their construction produces a statistical NIZK proof of knowledge in the random oracle (RO) model, but, unfortunately, the proof size in their case is linear in $n$.

Recently, there has been increasing interest in solving the major open problem of obtaining statistical NIZK proofs of knowledge for disjunctions starting from $\Sigma$-protocols, with improved communication and minimal hardness assumptions. The current best results are due to 1) Goel et al. [Eurocrypt '22], which unfortunately requires Dlog-based assumptions, and 2) Boudgoust and Simkin [TCC '24], which unfortunately obtains computational ZK only.

In this work, we solve the above open problem, showing, for a large class of $\Sigma$-protocols, how to obtain a non-interactive compact statistical NIZK proof of knowledge without adding hardness assumptions, therefore relying only on random oracles.

More precisely, the communication complexity of our construction is $O(\lambda^2\log{n})+\mathsf{CC}(\Sigma)$, where $\lambda$ is the security parameter and $\mathsf{CC}(\Sigma)$ is the communication of a single run of the underlying $\Sigma$-protocol, and this is obtained via calls to a random oracle and to a prover executing a stand-alone instance of the $\Sigma$-protocol.
2026/306(PDF)
Last updated:  2026-02-18
Skipping Class: Algebraic Attacks exploiting weak matrices and operation modes of Poseidon2(b)
Attacks and cryptanalysis
Simon-Philipp Merz and Àlex Rodríguez García
We present new algebraic attacks on Poseidon2 and Poseidon2b. We exploit the specific structure of the matrices that define the linear layers in the hash function, which allows us to improve round-skipping for the constrained-input constrained-output (CICO) problem. The security of many circuit-friendly hash functions has been measured by their resistance against attacks on the CICO problem. However, we show how to boost our round-skipping attack when directly modelling algebraic preimage attacks on Poseidon2(b) in compression and sponge mode. To the best of our knowledge, our attack provides the first examples where finding preimages is easier than solving the corresponding CICO problem in Poseidon2(b). Furthermore, we describe the first algebraic collision attack that outperforms its algebraic preimage counterpart. We improve over state-of-the-art algebraic attacks for a range of parameters; e.g., for one recommended $128$-bit parameter set we improve over previous state-of-the-art algebraic collision attacks by a factor of $2^{106}$. However, due to the algebraic security margin, this does not mean the primitive falls short of its claimed security level. Finally, we discuss how our attacks can be mitigated without affecting the efficiency of Poseidon2(b).
2026/236(PDF)
Last updated:  2026-02-18
Sharing a Secret Anamorphically: Secret Shares Dressed Up as Signatures
Public-key cryptography
Gennaro Avitabile, Vincenzo Botta, and Daniele Friolo
Anamorphic Encryption (Persiano, Phan and Yung, Eurocrypt '22) allows private communication in a challenging setting where encryption is severely controlled by a central authority (henceforth the dictator) who can demand the users to surrender their secret keys. Anamorphic Signatures (AS) (Kutylowski, Persiano, Phan, Yung, Zawada, Crypto '23) face the even more restrictive world where only authentication is allowed but users still want to send secret messages despite the repressive control of the dictator holding their signing key. Several constructions have been proposed so far, but they all come with some limitations.

We propose a new flexible setting where digital signatures are used to covertly embed secret shares which $t$ out of $N$ designated receivers can combine to recover a covert secret. We formalize this new primitive, called Anamorphic Secret Sharing Signatures (ASSS), together with new robustness, forward secrecy, and private unforgeability notions, and we give a construction for Schnorr-like signatures overcoming the limitations of previous constructions. ASSS additionally imply (regular single-receiver) AS, but two signatures (instead of one) are needed to recover the covert message.
2025/2195(PDF)
Last updated:  2026-02-18
Refined Modelling of the Primal Attack, and Variants Against Module-Learning With Errors
Attacks and cryptanalysis
Paola de Perthuis and Filip Trenkić
The primal attack reduces Learning with Errors (LWE) to the unique Shortest Vector Problem (uSVP), and then applies lattice reduction such as BKZ to solve the latter. Estimating the cost of the attack is required to evaluate the security of constructions based on LWE. Existing fine-grained estimators for the cost of the primal attack, due to Dachman-Soled, Ducas, Gong, and Rossi (CRYPTO 2020) and Postlethwaite and Virdia (PKC 2021), differ from experimental data as they implicitly assume the unique shortest vector is resampled several times during the attack, changing its length. Furthermore, these estimators consider only the first two moments of the LWE secret and error, and therefore do not differentiate between distinct centred distributions with equal variances. We remedy both issues by initially fixing the short vector's length, and later integrating over its distribution. We provide extensive experimental evidence that our estimators are more accurate and faithfully capture the behaviour of different LWE distributions. In the case of Module-LWE, lattice reduction utilising the module structure could lead to cheaper attacks. We build upon the analysis of module lattice reduction by Ducas, Engelberts, and Perthuis (Asiacrypt 2025), providing a simulator for Module-BKZ generalising the BKZ simulator of Chen and Nguyen (Asiacrypt 2011). We design estimators for a module variant of the primal attack, supporting our analysis with experimental evidence. Asymptotically, we show the module primal attack over a degree-$d$ number field $K$ has a reduced cost, resulting in a subexponential gain, whenever the discriminant $\Delta_K$ satisfies $\vert \Delta_K \vert < d^d$, one such case being non-power-of-two cyclotomics.
2026/305(PDF)
Last updated:  2026-02-18
Quantum Truncated Differential Attacks using Convolutions
Attacks and cryptanalysis
Aurel Pichollet-Mugnier and André Schrottenloher
This paper focuses on quantum key-recovery attacks on block ciphers. Previous works on quantum differential and truncated differential attacks like [Kaplan et al., ToSC 2016] have shown that classical algorithms for key-recovery, typically based on generating differential pairs and sieving them, can be accelerated by up to a quadratic speedup using variants of quantum search, quantum amplitude amplification, and quantum collision-finding.

In this paper, we introduce a new quantum truncated differential key-recovery attack, which leverages the quantum convolution algorithm introduced in [Schrottenloher, CRYPTO 2022] and previously used in linear cryptanalysis. We adapt this algorithm to the case of differential cryptanalysis, by rewriting the probability of a differential of an $n$-bit cipher as a convolution of functions with $2n$-bit input. We then construct a quantum state whose amplitudes encode the probability of the differential for different key guesses, and use this as the starting point of a quantum search. In some cases (although not on practical ciphers so far), the speedup is better than quadratic compared to classical attacks. We also extend the framework to related-key differential attacks.

We give applications to a 9-round attack on QARMAv2-64 adapted from [Ahmadian et al., DCC 2024] and a 12-round related-key attack on AES-256 from [Boura et al., CRYPTO 2023], which show improvement over classical attacks and over Kaplan et al.'s strategy when taking into account the amount of memory and the type of quantum memory used (as our attack requires only quantum-accessible classical memory).
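The "differential probability as a convolution on $2n$-bit inputs" rewriting has a simple classical analogue that can be checked directly: for an $n$-bit S-box $S$, let $h$ be the indicator of its graph on $2n$-bit inputs; then the differential distribution table (DDT) entry for $(\delta, \Delta)$ equals the XOR self-convolution $(h \star h)(\delta, \Delta)$, computable with Walsh-Hadamard transforms. A toy Python check of this identity (illustrative only; the paper evaluates such convolutions inside a quantum algorithm):

```python
# Classical toy check: with h[(x << n) | S(x)] = 1 and 0 elsewhere,
#   DDT[delta][Delta] = #{x : S(x) ^ S(x ^ delta) = Delta}
#                     = (h * h)((delta << n) | Delta),
# where * is convolution over the XOR group on 2n-bit strings.
import random

n = 4
S = random.Random(0).sample(range(1 << n), 1 << n)   # a random 4-bit S-box

ddt = [[0] * (1 << n) for _ in range(1 << n)]        # direct DDT count
for x in range(1 << n):
    for d in range(1 << n):
        ddt[d][S[x] ^ S[x ^ d]] += 1

h = [0] * (1 << (2 * n))                             # indicator of the graph of S
for x in range(1 << n):
    h[(x << n) | S[x]] = 1

def wht(v):
    """Unnormalized Walsh-Hadamard transform over the XOR group."""
    v = list(v)
    step = 1
    while step < len(v):
        for i in range(0, len(v), 2 * step):
            for j in range(i, i + step):
                v[j], v[j + step] = v[j] + v[j + step], v[j] - v[j + step]
        step *= 2
    return v

H = wht(h)
conv = [s // len(h) for s in wht([a * a for a in H])]  # h*h = WHT(WHT(h)^2)/2^(2n)
assert all(conv[(d << n) | D] == ddt[d][D]
           for d in range(1 << n) for D in range(1 << n))
print("DDT equals the XOR self-convolution of h: verified")
```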
2026/304(PDF)
Last updated:  2026-02-18
Linear-Communication ACSS with Guaranteed Termination and Lower Amortized Bound
Uncategorized
Chengyi Qin, Mingqiang Wang, and Haiyang Xue
An Asynchronous Complete Secret Sharing (ACSS) protocol enables a dealer to distribute $N$ Shamir shares such that all parties eventually receive their shares over an asynchronous network. It serves as a fundamental building block for asynchronous Secure Multiparty Computation (MPC) and Byzantine Agreement.

In this work, we focus on statistically secure ACSS with optimal resilience ($t < n/3$). Recently, Ji, Li, and Song [CRYPTO'24] proposed the first ACSS protocol with amortized linear communication. However, their scheme lacks guaranteed termination, and they identified the construction of a linear-communication ACSS with guaranteed termination as an open problem. Furthermore, their protocol requires a large amortized bound of $N = \Omega(n^{11} \kappa)$, where $n$ is the number of parties and $\kappa$ is the size of the secret. We resolve the open problem and significantly reduce the amortized bound by presenting a linear-communication ACSS protocol with guaranteed termination and an improved bound of $N = \Omega(n^{4}+n\kappa)$. Our ACSS protocol can be directly applied to asynchronous MPC protocols, ensuring both guaranteed termination and improved communication per multiplication gate, as well as to asynchronous Byzantine Agreement.
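As background for the object the dealer distributes, here is a minimal $t$-out-of-$n$ Shamir sharing over a prime field, where any $t+1$ shares reconstruct the secret (toy parameters; the paper's contribution is the asynchronous distribution protocol around such sharings, not the sharing itself):

```python
# Background sketch: Shamir secret sharing with a degree-t polynomial.
import random

P = 2**61 - 1                       # a Mersenne prime field modulus

def share(secret, t, n, rng=random.Random(42)):
    """Degree-t polynomial with constant term `secret`; shares are f(1..n)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

t, n = 2, 7                          # optimal resilience setting: n = 3t + 1
shares = share(123456789, t, n)
assert reconstruct(shares[:t + 1]) == 123456789
assert reconstruct(shares[3:3 + t + 1]) == 123456789
print("secret reconstructed from any t+1 shares")
```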
2025/820(PDF)
Last updated:  2026-02-18
Less Than a Bit to Rule Them All – Key Recovery from Randomness Leakage in ML-DSA
Attacks and cryptanalysis
Simon Damm, Nicolai Kraus, Alexander May, Julian Nowakowski, and Jonas Thietke
The Fiat-Shamir transform is one of the most widely applied methods for secure signature construction. Fiat-Shamir starts with an interactive zero-knowledge identification protocol and transforms this via a hash function into a non-interactive signature. The protocol's zero-knowledge property ensures that a signature does not leak information on its secret key $\mathbf s$, which is achieved by blinding $\mathbf s$ via proper randomness $\mathbf y$. The most prominent Fiat-Shamir examples are EC-DSA signatures and the new post-quantum standard ML-DSA (aka Dilithium).

In practice, EC-DSA signatures have experienced fatal attacks via leakage of a few bits of the randomness $\mathbf y$ per signature. Similar attacks now emerge for lattice-based signatures, such as ML-DSA. We build on, improve, and generalize the pioneering leakage attack on ML-DSA by Liu, Zhou, Sun, Wang, Zhang, and Ming. Using a transformation to Integer LWE (ILWE), their attack can recover a 256-dimensional subkey of ML-DSA-44 from leakage in a single bit of $\mathbf{y}$ per signature, in any bit position $j \geq 6$. However, the number of required signatures grows exponentially as $4^j$.

In this work, we show that not all leaky signatures carry information about the secret subkey. We introduce the notion of informative signature relations. This notion allows us to define a preprocessing step, called filter-and-shift, that leads to ILWE instances that require a smaller sample amount. Unlike the standard ILWE transformation, filter-and-shift exploits the smallness of secret keys, and therefore might be of independent cryptanalytic interest. In comparison to Liu et al., for $j=6$ we require only a quarter of the signatures and reduce the exponential growth to $2^j$.

In addition, we show that the secret subkey can be recovered even with a leak bit corrupted by a large amount of noise, in theory up to the maximum of $50\%$. Experimentally, we still recover the secret with $43\%$ noise, where we need $170$ times as many signatures as in the noise-free setting.

The attack applies more generally to all Fiat-Shamir-type lattice-based signatures. For a signature scheme based on module LWE over an $\ell$-dimensional module, the attack uses a 1-bit leak per signature to efficiently recover a $\frac{1}{\ell}$-fraction of the secret key. In the ring LWE setting, which can be seen as module LWE with $\ell = 1$, the attack recovers the whole key.
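The ILWE step referenced above reduces key recovery to a statistical estimation problem: given samples $b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i$ with known integer vectors $\mathbf{a}_i$, a short secret is recovered by rounding a least-squares estimate once enough samples are available. A minimal numpy sketch with illustrative toy parameters (not the paper's attack code, which must first derive such samples from the signature leakage via filter-and-shift):

```python
# Minimal sketch of the ILWE estimation step: rounded least squares
# recovers a short integer secret from noisy inner-product samples.
import numpy as np

rng = np.random.default_rng(1)
dim, samples = 32, 4000
s = rng.integers(-2, 3, size=dim)              # short secret, entries in [-2, 2]
A = rng.integers(-8, 9, size=(samples, dim))   # known coefficient vectors
e = rng.integers(-4, 5, size=samples)          # bounded noise
b = A @ s + e

s_hat = np.rint(np.linalg.lstsq(A, b, rcond=None)[0]).astype(int)
assert np.array_equal(s_hat, s)
print("ILWE secret recovered by rounded least squares")
```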
2026/303(PDF)
Last updated:  2026-02-18
$\mathsf{TalonG}$: Bandwidth-Efficient Two-Round Threshold Signatures from Lattices
Public-key cryptography
Liming Gao, Guofeng Tang, Dingding Jia, Yijian Liu, Bingqian Liu, Xianhui Lu, Kunpeng Wang, and Yongjian Yin
Threshold signatures split the secret key among $n$ parties, where any subset of at least $t$ parties can collaboratively produce a valid signature. They have been widely deployed in applications such as blockchain systems. Lattice-based threshold signatures have also attracted considerable attention due to their post-quantum security guarantees. However, existing lattice-based constructions still face significant efficiency challenges, particularly when the number of parties becomes large. Recent lattice-based threshold signatures such as TRaccoon (Eurocrypt '24) and Ringtail (S&P '25) support large thresholds, but they either require three interaction rounds or incur heavy communication costs in the two-round setting, limiting their practicality.

In this paper, we present $\mathsf{TalonG}$, a novel two-round lattice-based threshold signature that overcomes these limitations via a new trapdoor semi-commitment technique. This variant of commitment relaxes the standard binding requirement to a weaker form, allowing an efficient instantiation from the NTRU assumption and enabling a compact two-round signing protocol with low communication. For $t=1024$ and 128-bit security, $\mathsf{TalonG}$ achieves significant improvements among existing lattice-based threshold signatures: its total communication per party and public key size are both minimal, at 26.9 KB and 2.0 KB, respectively. While the resulting signature size is larger (17.7 KB), it remains practical and highly competitive. $\mathsf{TalonG}$ is thus well-suited for real-world large-scale deployments where both round efficiency and communication load are critical.
2026/302(PDF)
Last updated:  2026-02-18
Non Interactive MPC, (Quantumly) Revisited
Foundations
Prabhanjan Ananth, Divyanshu Bhardwaj, and Aparna Gupte
Classical non-interactive secure computation, despite being extensively studied, suffers from an inherent barrier: adversaries can learn the entire residual function via resetting attacks. We investigate whether quantum resources can circumvent this barrier and restrict adversarial leakage. Our results are as follows:

1. $\textbf{Definitions}$: We introduce new security definitions for the one-message MPC and 2PC settings that restrict the amount of adversarial leakage compared to prior classical definitions.

2. $\textbf{MPC}$: There exist information-theoretically secure one-message multi-party computation protocols in the oracle model in both the quantum pre-processing and classical pre-processing settings.

3. $\textbf{2PC}$: There exist semi-honest secure one-message two-party computation protocols for (randomized) pseudorandom functionalities in the plain model based on LWE, and maliciously secure one-message two-party computation protocols for (randomized) constrained functionalities in the CRS model based on iO. Prior work by [Gupte, Liu, Raizes, Roberts, and Vaikuntanathan, STOC 2025] achieved semi-honest security based on iO.

Our results demonstrate the power of quantum information to circumvent barriers in classical secure computation.
2026/301(PDF)
Last updated:  2026-02-18
Blind Leakage: Rethinking Deep Learning-based Non-Profiled Side-Channel Analysis
Attacks and cryptanalysis
Jintong Yu
Show abstract
Attacks and cryptanalysis
Deep Learning-based Non-profiled Side-Channel Analysis (DL-NSCA) enables automated feature extraction and obviates the need for a profiling device. However, existing methods mostly rely on leakage from non-linear operations and require prior knowledge of the target algorithm and its plaintexts/ciphertexts, which restricts their applicability to black-box scenarios and proprietary implementations. Motivated by the ubiquity of plaintext-key XOR operations in cryptographic algorithms, this paper presents a new SCA perspective based on leakage from linear operations to enable cross-algorithm attacks. We first theoretically demonstrate that leakage from invertible linear operations, referred to as blind leakage, introduces two symmetric maxima in the correlations between key guesses and leakage (instead of one for non-blind leakage), rendering existing distinguishers ineffective. To address this issue, we then propose a distinguisher, VS-GBA, applicable to both blind and non-blind leakage, and improve its robustness using a Gaussian Borda voting scheme. Experimental results on a high-noise ARM Cortex-M4 device demonstrate that our method enables effective cross-algorithm attacks on masked AES, ASCON, and PRESENT, and outperforms non-blind leakage attacks. Furthermore, attacking XTS-AES demonstrates that blind leakage exploitation extends the boundaries of DL-NSCA to scenarios where plaintexts/ciphertexts are XORed with a tweak.
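For intuition on the "two symmetric maxima" phenomenon, the toy simulation below models noisy Hamming-weight leakage of a plaintext-key XOR (hypothetical parameters; this is not the paper's VS-GBA distinguisher): the absolute correlation peaks both at the correct key byte and at its bitwise complement.

```python
# Toy demonstration that, under Hamming-weight leakage of p XOR k, the
# correlation over key guesses g peaks positively at g = k and negatively
# at g = k ^ 0xFF, i.e., two symmetric maxima in absolute value.
import numpy as np

rng = np.random.default_rng(1)
k = 0x5A
p = rng.integers(0, 256, size=20000)                  # known plaintext bytes
hw = np.array([bin(x).count("1") for x in range(256)])
traces = hw[p ^ k] + rng.normal(0, 2.0, size=p.size)  # noisy HW leakage

corr = np.array([np.corrcoef(hw[p ^ g], traces)[0, 1] for g in range(256)])
top2 = sorted(int(g) for g in np.argsort(-np.abs(corr))[:2])
print("top-2 guesses:", top2, "expected:", sorted([k, k ^ 0xFF]))
```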
2026/300(PDF)
Last updated:  2026-02-18
Quantum One Time Programs: Less Assumptions, More Feasibility and One Message 2PC
Foundations
Prabhanjan Ananth and Divyanshu Bhardwaj
Show abstract
Foundations
Quantum one-time programs, introduced by Broadbent, Gutowski, and Stebila [CRYPTO 2013], are produced by a compiler that takes as input a program $f$ and converts it into a quantum state $\sigma_f$ such that, given $\sigma_f$, it is possible to obtain the output of $f$ on only a single input. Our goal is to investigate the class of functionalities for which we can design one-time programs in the plain model. We show the following: 1. We show that, assuming learning with errors, one-time programs for (randomized) pseudorandom functionalities exist in the plain model. In contrast, the result by Gupte, Liu, Raizes, Roberts, and Vaikuntanathan [STOC 2025] relied upon indistinguishability obfuscation. 2. We show that, assuming indistinguishability obfuscation, one-time programs for any class of randomized functionalities exist in the plain model, as long as the class of functionalities satisfies a natural security notion called constrained security. Additionally, we give constructions of one-time encryption and one-time message authentication codes based on learning with errors. Finally, we show that a novel variant of one-time programs implies one-message secure two-party computation protocols for non-trivial functionalities in the CRS model.
2026/299(PDF)
Last updated:  2026-02-18
Weak Zero-Knowledge and One-Way Functions
Foundations
Rohit Chatterjee, Yunqi Li, and Prashant Nalini Vasudevan
Show abstract
Foundations
We study the implications of the existence of weak Zero-Knowledge (ZK) protocols for worst-case hard languages. These are protocols that have completeness, soundness, and zero-knowledge errors (denoted $\epsilon_c$, $\epsilon_s$, and $\epsilon_z$, respectively) that might not be negligible. Under the assumption that there are worst-case hard languages in NP, we show the following: 1. If all languages in NP have NIZK proofs or arguments satisfying $\epsilon_c+\epsilon_s+\epsilon_z < 1$, then One-Way Functions (OWFs) exist. This covers all possible non-trivial values for these error rates. It additionally implies that if all languages in NP have such NIZK proofs and $\epsilon_c$ is negligible, then they also have NIZK proofs where all errors are negligible. Previously, these results were known under the more restrictive condition $\epsilon_c+\sqrt{\epsilon_s}+\epsilon_z < 1$ [Chakraborty et al., CRYPTO 2025]. 2. If all languages in NP have $k$-round public-coin ZK proofs or arguments satisfying $\epsilon_c+\epsilon_s+(2k-1)\epsilon_z < 1$, then OWFs exist. 3. If, for some constant $k$, all languages in NP have $k$-round public-coin ZK proofs or arguments satisfying $\epsilon_c+\epsilon_s+k\epsilon_z < 1$, then infinitely-often OWFs exist.
2026/298(PDF)
Last updated:  2026-02-18
Key Recovery Attacks on UOV Using p^l-truncated Polynomial Rings
Attacks and cryptanalysis
Hiroki Furue and Yasuhiko Ikematsu
Show abstract
Attacks and cryptanalysis
The unbalanced oil and vinegar signature scheme (UOV) was proposed by Kipnis et al. in 1999 as a multivariate-based scheme. UOV is regarded as one of the most promising candidates for post-quantum cryptography owing to its short signatures and fast performance. Recently, Ran proposed a new key recovery attack on UOV over a field of even characteristic, reducing the security of its proposed parameters. Furthermore, Jin et al. generalized Ran’s attack to schemes over a field of arbitrary characteristic by exploiting the structure of the symmetric algebra. In this work, we propose a new framework for recovering the secret subspace of UOV over a finite field $\mathbb{F}_{p^e}$ by generalizing these preceding results. First, we show that a key recovery against UOV can be successfully performed using the XL algorithm by exploiting the structure of the $p$-truncated polynomial ring $R^{(p)}=\mathbb{F}_{p^e}[x_1,\dots,x_n]/ \langle x_1^p,\dots,x_n^p\rangle$. This result simplifies the description of the attacks proposed by Jin et al. by formulating them in terms of the polynomial ring, independent of the structure of the symmetric algebra. Second, we generalize this result to polynomial rings of more general forms, namely, the $p^\ell$-truncated polynomial rings $R^{(p^\ell)}$ for any $1 \le \ell \le e$. This generalization is enabled by our description in terms of the polynomial ring, and taking a larger $\ell$ relaxes the constraints on the solving degree of the XL algorithm using $R^{(p^\ell)}$. Finally, we consider performing the reconciliation and intersection attacks using the $p^\ell$-truncated polynomial rings against UOV. In particular, we newly take into account the intersection attack using this framework, which has not been considered in previous analyses. Based on our complexity estimation, we confirm that the optimal complexity of the reconciliation attack using the proposed framework is consistent with that of the symmetric-algebra attack by Jin et al. We further show that the intersection attack using the proposed framework outperforms the reconciliation attack against the proposed parameters of UOV and reduces the security of multiple parameters compared to their claimed security levels. In addition, we show that our complexity estimation of the reconciliation attack using the proposed framework reduces the security of multiple parameters of SNOVA compared to previously known attacks.
2026/297(PDF)
Last updated:  2026-02-18
Scaling Sparse Matrix Computation for Secure Outsourced Computing
Applications
Wujie Xiong, Hao Zhou, Yutong Ye, Ruoming Jin, and Lei Xu
Show abstract
Applications
Sparse General Matrix-Matrix Multiplication (SpGEMM) is a fundamental but computationally intensive operation that underpins many scientific workloads, including numerous AI applications. With the increasing demands for data security, privacy-preserving computation techniques such as Fully Homomorphic Encryption (FHE) have gained significant attention for their ability to process sensitive data without decryption. Nonetheless, executing SpGEMM within the framework of FHE presents significant challenges. The most effective SpGEMM algorithms exploit matrix sparsity to minimize computational costs; however, FHE obscures both the data values and the sparsity structures. Prior FHE-based privacy-preserving computation frameworks either ignore the inherent sparsity of matrices and rely on dense General Matrix-Matrix Multiplication (GEMM), incurring substantial overhead from redundant homomorphic multiplications, or they attempt to exploit sparsity by encrypting only the non-zero values, which inadvertently exposes sensitive positional information. To address this gap and achieve a better balance between efficiency and privacy, we propose an efficient FHE-compatible SpGEMM framework that enables oblivious data and position processing under a Single Instruction Multiple Data (SIMD) scheme. Moreover, we extend our method to support an arbitrary number of sparse matrices (FHE-SpGEMCM). The efficiency analysis shows that our method achieves an average homomorphic computation cost of $(n_A n_B)^2 / (n^2 N)$, where $n_A$ and $n_B$ represent the number of nonzero elements in $A$ and $B$, respectively, $n$ is the shared inner dimension of the multiplication, and $N$ denotes the batch size used in FHE. Experimental results demonstrate that for square matrices of scale $2^9$, our scheme achieves an average speedup of $439.25\times$ and a $10.68\times$ reduction in memory consumption compared to state-of-the-art baselines that ignore sparsity. Furthermore, when the scale increases to $2^{13}$, our method yields up to a $1201.77\times$ speedup over baselines that only exploit the sparsity of a single matrix.
2026/296(PDF)
Last updated:  2026-02-18
Navigating the Deep: End-to-End Extraction on Deep Neural Networks
Attacks and cryptanalysis
Haolin Liu, Adrien Siproudhis, Samuel Experton, Peter Lorenz, Christina Boura, and Thomas Peyrin
Show abstract
Attacks and cryptanalysis
Neural network model extraction has recently emerged as an important security concern, as adversaries attempt to recover a network’s parameters via black-box queries. Carlini et al. proposed in CRYPTO’20 a model extraction approach inspired by differential cryptanalysis, consisting of two steps: signature extraction, which extracts the absolute values of network weights layer by layer, and sign extraction, which determines the signs of these signatures. However, in practice this signature-extraction method is limited to very shallow networks only, and the proposed sign-extraction method is exponential in time. Recently, Canales-Martínez et al. (Eurocrypt’24) proposed a polynomial-time sign-extraction method, but it assumes the corresponding signatures have already been successfully extracted and can fail on so-called low-confidence neurons. In this work, we first revisit and refine the signature extraction process by systematically identifying and addressing, for the first time, critical limitations of Carlini et al.'s signature-extraction method. These limitations include rank deficiency and noise propagation from deeper layers. To overcome these challenges, we propose efficient algorithmic solutions for each of the identified issues, greatly improving the capabilities of signature extraction. Our approach permits the extraction of much deeper networks than previously possible. In addition, we propose new methods to improve numerical precision in signature extraction, and enhance the sign extraction part by combining two polynomial methods to avoid exponential exhaustive search in the case of low-confidence neurons. This leads to the very first end-to-end model extraction method that runs in polynomial time. We validate our attack through extensive experiments on ReLU-based neural networks, demonstrating significant improvements in extraction depth. For instance, our attack consistently extracts at least eight layers of neural networks trained on either the MNIST or CIFAR-10 datasets, while previous works could barely extract the first three layers of networks of similar width. Our results represent a crucial step toward practical attacks on larger and more complex neural network architectures.
2025/1006(PDF)
Last updated:  2026-02-18
Permutation-Based Hash from Non-Idealized Assumptions: Adding Feed-Forward to Sponge
Secret-key cryptography
Chun Guo, Kai Hu, Shuntian Jiang, Yanhong Fan, Yong Fu, Bart Preneel, and Meiqin Wang
Show abstract
Secret-key cryptography
Avoiding the feed-forward seems to be a major goal of the sponge construction. We take a step back and investigate adding the feed-forward back to the sponge. The obtained sponge-with-feed-forward construction has a number of benefits: (1) In the random permutation model, its preimage and second-preimage security bounds are much better than those of the standard sponge with the same capacity, while its collision and indifferentiability security bounds are comparable; (2) Its collision and (second) preimage security can be reduced to well-defined properties of the underlying permutation, i.e., correlation intractability w.r.t. a certain family of evasive relations. We further incorporate several somewhat new ideas to form detailed hash and XOF constructions SpongeFwd: (1) The feed-forward is only applied to the capacity part, and the final output(s) is the capacity part (with the rate part discarded). In this way, when $c$ equals the primary hash output size $h$, a single permutation call suffices for squeezing. This also simplifies the underlying evasive relations for the reduction security proof. (2) We replace the hash IV with the first message block to achieve the best possible efficiency. (3) We employ a parallel structure to define an XOF variant. (4) We use HAIFA-style counter inputs to achieve both length-independent second-preimage security and domain separation for XOF. The better (second) preimage security enables constructing a 512-bit-output hash function from Keccak-p[800]: with 512-bit capacity, its collision and (second) preimage security bounds are the same as those of the standard SHA3-512, while its hardware area is reduced by roughly 38% (according to our preliminary estimation).
2022/1438(PDF)
Last updated:  2026-02-17
Plug-and-play sanitization for TFHE
Public-key cryptography
Florian Bourse and Malika Izabachène
Show abstract
Public-key cryptography
Fully homomorphic encryption allows the evaluation of arbitrary circuits over encrypted data while preserving the privacy of the data. However, without any additional properties, no guarantee is provided for the privacy of the circuits which are evaluated. A sanitization algorithm destroys all previous information about how a ciphertext was obtained, ensuring that the circuit which was evaluated remains secret. In this paper, we present two new techniques to randomize RLWE ciphertexts, and show how they can be used to achieve ciphertext sanitization for the TFHE scheme proposed by Chillotti et al. (Asiacrypt 2016), by modifying the bootstrapping procedure internally. Our first technique relies on a generalization of the strategy proposed by Bourse et al. (Crypto 2016) to the ring setting. Using a backward induction over the circuit size, we also improve on their proof technique to avoid randomization at each step of the computation, enabling faster randomization and smaller noise growth. While this first approach adapts well in theory, we show evidence that it fails to provide a practical solution and propose a second solution with more realistic parameters at the cost of using an additional public key. As an additional contribution, we improve on the prohibitive size of the public key by relaxing the circuit privacy property to its computational counterpart, and build an efficient public randomizer composed of an RLWE-based public-key encryption scheme with additional properties on the ciphertext distribution. We show that this randomizer can be used in the soak-and-spin paradigm of Ducas and Stehlé (Eurocrypt 2016) as well, and that it yields a significant improvement, mainly in the size of the public key. As a proof of concept, we provide a C implementation of our sanitization strategy, which shows that a sanitized LWE ciphertext can be obtained almost for free compared to a bootstrapped LWE ciphertext, assuming many discrete Gaussian samples at hand.
2026/295(PDF)
Last updated:  2026-02-17
From OT to OLE with Almost-Linear Communication
Cryptographic protocols
Geoffroy Couteau and Naman Kumar
Show abstract
Cryptographic protocols
Following the recent work of (Doerner et al., CCS 2025), we introduce an improved reduction of OLE to OT. We prove the following: there is a perfectly-secure OLE-to-OT reduction over any $n$-bit field $\mathbb{F}$ using almost-linear communication $n\cdot 2^{O(\log^{*} n)}$ and almost-constant rounds $O((\log^{*} n)^2)$. In the course of proving our result, we also introduce new perfectly-secure protocols for computing shares of the equality and greater-than predicate on $n$-bit strings in the OT-hybrid model using $O(n)$ communication and $\log^{*} n$ rounds. These results are of independent interest.
2024/1955(PDF)
Last updated:  2026-02-17
Gold OPRF: Post-Quantum Oblivious Power-Residue PRF
Cryptographic protocols
Yibin Yang, Fabrice Benhamouda, Shai Halevi, Hugo Krawczyk, and Tal Rabin
Show abstract
Cryptographic protocols
We propose plausible post-quantum (PQ) oblivious pseudorandom functions (OPRFs) based on the Power-Residue PRF (Damgård CRYPTO’88), a generalization of the Legendre PRF. For security parameter $\lambda$, we consider the PRF $\mathsf{Gold}_k(x)$ that maps an integer $x$ modulo a public prime $p = 2^\lambda\cdot g + 1$ to the element $(k + x)^g \bmod p$, where $g$ is public and $\log g \approx 2\lambda$.At the core of our constructions are efficient novel methods for evaluating $\mathsf{Gold}$ within two-party computation ($\mathsf{2PC}\text{-}\mathsf{Gold}$), achieving different security requirements. Here, the server $\mathcal{P}_s$ holds the PRF key $k$ whereas the client $\mathcal{P}_c$ holds the PRF input $x$, and they jointly evaluate $\mathsf{Gold}$ in 2PC. $\mathsf{2PC}\text{-}\mathsf{Gold}$ uses standard Vector Oblivious Linear Evaluation (VOLE) correlations and is information-theoretic and constant-round in the (V)OLE-hybrid model. We show:• For a semi-honest $\mathcal{P}_s$ and a malicious $\mathcal{P}_c$: a $\mathsf{2PC}\text{-}\mathsf{Gold}$ that just uses a single (V)OLE correlation, and has a communication complexity of $3$ field elements ($2$ field elements if we only require a uniformly sampled key) and a computational complexity of $\mathcal{O}(\lambda)$ field operations. We refer to this as half-malicious security.• For malicious $\mathcal{P}_s$ and $\mathcal{P}_c$: a $\mathsf{2PC}\text{-}\mathsf{Gold}$ that just uses $\frac{\lambda}{4} + \mathcal{O}(1)$ VOLE correlations, and has a communication complexity of $\frac{\lambda}{4} + \mathcal{O}(1)$ field elements and a computational complexity of $\mathcal{O}(\lambda)$ field operations.These constructions support additional features and extensions, e.g., batched evaluations with better amortized costs where $\mathcal{P}_c$ repeatedly evaluates the PRF under the same key.Furthermore, we extend $\mathsf{2PC}\text{-}\mathsf{Gold}$ to Verifiable OPRFs and use the methodology from Beullens et al. (ePrint’24) to obtain strong OPRF security in the universally composable setting.All the protocols are efficient in practice. We implemented $\mathsf{2PC}\text{-}\mathsf{Gold}$—with (PQ) VOLEs—and benchmarked them. For example, our half-malicious (resp. malicious) $n$-batched PQ OPRFs incur about $100$B (resp. $1.9$KB) of amortized communication for $\lambda = 128$.
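To make the Power-Residue PRF concrete, here is a toy, insecure instantiation of $\mathsf{Gold}$ with an illustratively small $\lambda$ (real parameters use $\lambda = 128$ and $\log g \approx 2\lambda$; the naive parameter search and primality test below are for demonstration only).

```python
# Toy Gold PRF: Gold_k(x) = (k + x)^g mod p with p = 2^lam * g + 1.
lam = 8  # illustratively small; the paper targets lam = 128

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

g = 1 << (2 * lam)                    # log g ~ 2*lam, as in the abstract
while not is_prime((g << lam) + 1):   # find a prime of shape 2^lam * g + 1
    g += 1
p = (g << lam) + 1

def gold(k: int, x: int) -> int:
    return pow((k + x) % p, g, p)

k = 12345
y = gold(k, 42)
assert pow(y, 1 << lam, p) == 1       # outputs lie in the order-2^lam subgroup
print(f"p = {p}, Gold_k(42) = {y}")
```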
2026/294(PDF)
Last updated:  2026-02-17
Post-Quantum Adaptor Signatures with Strong Security from Cryptographic Group Actions
Cryptographic protocols
Ryann Cartor, Nathan Daly, Giulia Gaggero, Jason T. LeGrow, Andrea Sanguineti, and Silvia Sconza
Show abstract
Cryptographic protocols
We present One Round "Cheating" Adaptor Signatures (OR-CAS): a novel and efficient construction of adaptor signature schemes from CSI-FiSh. Our protocol improves substantially on existing group-action-based schemes: unlike IAS (Tairi et al., FC 2021), our scheme does not require expensive non-interactive zero-knowledge proofs, and unlike adaptor MCSI-FiSh (Jana et al., CANS 2024), our construction does not require any modification to the underlying digital signature scheme. We prove the protocol’s security under the strong security notions of Dai et al. (Indocrypt 2022) and Gerhart et al. (Eurocrypt 2024).
2026/293(PDF)
Last updated:  2026-02-17
Quantum Oracle Distribution Switching and its Applications to Fully Anonymous Ring Signatures
Public-key cryptography
Marvin Beckmann and Christian Majenz
Show abstract
Public-key cryptography
Ring signatures are a powerful primitive that allows a member to sign on behalf of a group, without revealing their identity. Recently, ring signatures have received additional attention as an ingredient for post-quantum deniable authenticated key exchange, e.g., for a post-quantum version of the Signal protocol, employed by virtually all end-to-end-encrypted messenger services. While several ring signature constructions from post-quantum assumptions offer suitable security and efficiency for use in deniable key exchange, they are currently proven secure in the random oracle model (ROM) only, which is insufficient for post-quantum security. In this work, we provide four security reductions in the quantum-accessible random oracle model (QROM) for two generic ring signature constructions: two for the AOS framework and two for a construction paradigm based on ring trapdoors, whose generic backbone we formalize. The two security proofs for AOS ring signatures differ in their requirements on the underlying sigma protocol and their tightness. The two reductions for the ring-trapdoor-based ring signatures exhibit various differences in requirements and the security they provide. We employ the measure-and-reprogram technique, QROM straightline extraction tools based on the compressed oracle, history-free reductions and QROM reprogramming tools. To make use of Rényi divergence properties in the QROM, we study the behavior of quantum algorithms that interact with an oracle whose distribution is based on one of two different distributions over the set of outputs. We provide tight bounds for the statistical distance, show that the Rényi divergence cannot be used to replace the entire oracle, and provide a workaround.
2025/573(PDF)
Last updated:  2026-02-17
Forking Lemma in EasyCrypt
Foundations
Denis Firsov and Jakub Janků
Show abstract
Foundations
Formal methods are becoming an important tool for ensuring correctness and security of cryptographic constructions. However, the support for certain advanced proof techniques, namely rewinding, is scarce among existing verification frameworks, which hinders their application to complex schemes such as multi-party signatures and zero-knowledge proofs.We expand the support for rewinding in EasyCrypt by implementing a version of the general forking lemma by Bellare and Neven. We demonstrate its usability by proving EUF-CMA security of Schnorr signatures.
2026/291(PDF)
Last updated:  2026-02-17
Tight Reductions for SIS-with-Hints Assumptions with Applications to Anonymous Credentials
Public-key cryptography
Ngoc Khanh Nguyen and Jan Niklas Siemer
Show abstract
Public-key cryptography
In this work, we investigate the landscape of emerging lattice-based assumptions tailored for anonymous credentials, focusing on variants of the Short Integer Solution (SIS) problem augmented with auxiliary hints. We provide a tight reduction from the Generalised ISISf (GenISISf) (Dubois et al., PKC 2025) assumption to its interactive variant IntGenISISf, enabling the construction of proof-friendly signature schemes without incurring the significant efficiency loss observed in prior works. In particular, our results directly apply to the anonymous credential scheme proposed by Bootle et al. (CRYPTO 2023), and circumvent the 4× blow-up in the credential size due to their security loss. We also identify families of functions f for which GenISISf is as hard as SIS, leading to the first (strongly) unforgeable standard-model signature scheme from SIS without relying on chameleon hash functions. Moreover, we analyse the “one-more”-type lattice assumptions, showing in particular that Randomised One-More-ISIS (Baldimtsi et al., ASIACRYPT 2024) is at least as hard as standard One-More-ISIS (Agrawal et al., ACM CCS 2022). Further, we inspect different, yet equivalent, variations of Randomised One-More-ISIS which could be of independent interest. Finally, we compare the structural properties of GenISISf and One-More-ISIS, highlighting both shared techniques and fundamental differences. We believe our results contribute to a clearer understanding of the assumptions underpinning efficient, lattice-based anonymous credential systems.
2025/986(PDF)
Last updated:  2026-02-17
The Rényi Smoothing Parameter and Its Applications in Lattice-Based Cryptography
Foundations
Cong Ling, Laura Luzzi, and Hao Yan
Show abstract
Foundations
The smoothing parameter is a cornerstone concept in lattice-based cryptography. Traditionally defined using the \( L^{\infty} \) distance, this standard formulation can be overly stringent compared to the \( L^1 \) (or statistical) distance more commonly employed in cryptographic contexts. Recent work has proposed relaxed definitions based on Kullback-Leibler (KL) divergence and \( L^1 \) distance, thereby loosening the constraints required for the distance to vanish. However, the additive nature of the \( L^1 \) distance can be limiting for cryptographic applications where probability preservation is essential. In this paper, we introduce the Rényi smoothing parameter of a lattice, based on Rényi divergence, to address this limitation. The advantages of Rényi divergence in cryptographic settings are well known thanks to its multiplicative nature. The Rényi smoothing parameter provides a tunable framework that interpolates between the \( L^1 \) and \( L^{\infty} \) distances, offering enhanced flexibility. We present two complementary methods to study the averaging behavior of the Rényi flatness factor: one uses classical tools such as the Minkowski-Hlawka ensemble and Rogers’ formula for computing lattice function moments; the other employs Construction A lattices derived from random codes. Finally, we illustrate how this new perspective yields improvements in lattice-based cryptographic constructions.
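For reference, the "multiplicative nature" alluded to above is the standard probability-preservation property of the Rényi divergence, stated here in the form common in the lattice literature (background, not a result of this paper):

```latex
% Rényi divergence of order \alpha and its probability preservation:
\[
  R_\alpha(P \,\|\, Q)
  = \Bigg( \sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^\alpha}{Q(x)^{\alpha-1}} \Bigg)^{\frac{1}{\alpha-1}},
  \qquad \alpha \in (1, \infty),
\]
\[
  \text{and for any event } E \subseteq \mathrm{Supp}(P):\qquad
  Q(E) \;\ge\; \frac{P(E)^{\frac{\alpha}{\alpha-1}}}{R_\alpha(P \,\|\, Q)} .
\]
% A bounded divergence thus multiplies success probabilities, whereas an
% L^1 bound only shifts them additively.
```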
2026/290(PDF)
Last updated:  2026-02-17
Improved Cryptanalysis of HFERP
Attacks and cryptanalysis
Max Cartor, Ryann Cartor, Hiroki Furue, and Daniel Smith-Tone
Show abstract
Attacks and cryptanalysis
In this paper we introduce a new attack on the multivariate encryption scheme HFERP, a big field scheme including an extra variable set, additional equations of the UOV or Rainbow shape as well as additional random polynomials. Our attack brings several parameter sets well below their claimed security levels. The attack combines novel methods applicable to multivariate schemes with multiple equation types with insights from the Simple Attack that broke Rainbow in early 2022, though interestingly the technique is applied in an orthogonal way. In addition to this attack, we apply support minors techniques on a MinRank instance drawing coefficients from the big field, which was effective against other multivariate big field schemes. This work demonstrates that there exist previously unknown impacts of the above works well beyond the scope in which they were derived.
2025/913(PDF)
Last updated:  2026-02-17
A Little LESS Secure - Side-Channel Attacks Exploiting Randomness Leakage
Public-key cryptography
Dina Hesse, Elisabeth Krahmer, Yi-Fu Lai, and Jonas Meers
Show abstract
Public-key cryptography
Schnorr and (EC)DSA signatures famously become completely insecure once a few bits of the random nonce are revealed to an attacker. In this work, we explore whether the Fiat-Shamir based post-quantum signature scheme LESS is vulnerable to analogous attacks. In particular, we investigate the impact of partial leakage of the commitment randomness – a scenario that falls under the broader class of Hidden Number Problems – on the security of the secret key. We present an efficient attack on LESS that requires knowledge of a single bit of the randomness and fewer than 1200 signatures to fully recover the secret key. Our attack leverages the observation that knowledge of one bit is sufficient to distinguish secret key entries from random candidates. In addition, we describe a variant of this attack that requires one-bit leakage of multiple randomness values, but succeeds with only two signatures. To demonstrate the practicality of our attacks, we identify and exploit two different side-channels that are present in the reference implementation: one timing-based attack and one exploiting power side-channel leakage. Both show that the assumptions regarding the required single-bit leakage can be obtained in practice and that our attack poses a realistic threat to the current implementation of LESS. To our knowledge, these are the first practically verified side-channel attacks on LESS.
2026/289(PDF)
Last updated:  2026-02-17
Zero-Knowledge Proof-Carrying Data from Accumulation Schemes
Cryptographic protocols
Tianyu Zheng, Shang Gao, and Xun Liu
Show abstract
Cryptographic protocols
Proof-carrying data (PCD) is a powerful cryptographic primitive for ensuring computational integrity in distributed settings. State-of-the-art PCD constructions based on accumulation schemes achieve practical prover efficiency and support a wide range of applications. However, realizing zero-knowledge for accumulation-based PCD remains challenging, particularly for high-degree relations. Existing solutions often incur substantial overhead due to the need for zero-knowledge in both the underlying non-interactive arguments of knowledge (NARKs) and accumulation schemes. In this work, we present new theoretical and practical improvements for zero-knowledge PCD. First, we propose a novel construction that eliminates the need for zero-knowledge NARKs by separating the compliance predicate and accumulation verification, thereby reducing proof size and improving efficiency. Second, we design an efficient zero-knowledge accumulation scheme for special sound protocols, introducing techniques such as masking vectors and zero-knowledge sum-check protocols to ensure privacy with minimal overhead. The theoretical analysis demonstrates that our construction achieves logarithmic proof size and verification time for degree-$d$ NP relations, and outperforms existing solutions in both asymptotic and concrete complexity.
2023/1579(PDF)
Last updated:  2026-02-17
KiloNova: Preprocessing Folding-based SNARKs for Machine Executions
Cryptographic protocols
Tianyu Zheng, Shang Gao, Yu Guo, and Bin Xiao
Show abstract
Cryptographic protocols
Succinct non-interactive arguments of knowledge have enabled efficient verification of complex computations, but practical applications such as zero-knowledge virtual machines face scalability challenges due to rapidly growing circuit sizes. While folding-based incrementally verifiable computation and proof-carrying data frameworks offer improved performance, existing constructions struggle to efficiently support non-uniform circuits. In this work, we present KiloNova, a preprocessing recursive SNARK for universal machine execution. KiloNova introduces a novel holographic folding scheme that enables efficient folding across multiple high-degree relations with non-uniform indices, achieving both asymptotic and concrete efficiency. Building on this, we construct a multi-predicate PCD system in which the prover maintains constant-size intermediate instances, thereby significantly reducing recursion overhead and memory usage. Finally, we instantiate KiloNova as a preprocessing SNARK by compiling the PCD proof with corresponding polynomial commitment schemes. Theoretical analysis and experimental results show that KiloNova achieves state-of-the-art performance in multi-predicate settings and therefore provides a scalable, practical foundation for zero-knowledge proofs in real-world applications.
2026/288(PDF)
Last updated:  2026-02-17
Bypassing the Random-Probing Model in Masking Security Proofs
Implementation
Julien Béguinot, Gianluca Brian, and Loïc Masure
Show abstract
Implementation
Masking, i.e., computing over secret-shared data, is one of the main countermeasures against side-channel analysis, provably secure in the standard noisy-leakage model. However, all the state-of-the-art security proofs rely on a reduction to a more abstract model called random probing (Eurocrypt’14). As a result, the noise requirements of such proofs must scale with the field size of the circuit which, beyond not reflecting the real-world physics of target devices, is often prohibitive, especially in the post-quantum era. That is why it is critical to find alternative strategies to the reduction to the random-probing model. In this paper, we establish for the first time a masking security proof that bypasses this reduction. Contrary to the common belief that directly working in the noisy-leakage model is not convenient, we show how to reach this goal by leveraging an extension of the XOR lemma. We also show how to leverage the IOS framework (CHES’21) in order to derive composable security directly in the noisy-leakage model. As a result, our bound relies on relaxed noise requirements, characterized in a weaker metric than the state of the art so far (Crypto’24, ’19). Moreover, our new proof strategy allows us to derive a security bound for circuits masked with the first-order ISW compiler, for which the optimal noise requirement scales as \(\Theta\left(\left|\mathbb{F}\right|^{1/2}\right)\), whereas proofs using the random-probing model work in a regime of \(\mathcal{O}\left(\left|\mathbb{F}\right|\right)\). The latter contribution illustrates how some current design choices for masked implementations could be concretely affected: we exhibit two exemplary masked implementations such that the former is more secure in the (more abstract) random-probing model, whereas the latter is more secure in the (more realistic) noisy-leakage model.
2024/1850(PDF)
Last updated:  2026-02-17
Single-trace side-channel attacks on MAYO exploiting leaky modular multiplication
Attacks and cryptanalysis
Sönke Jendral and Elena Dubrova
Show abstract
Attacks and cryptanalysis
In response to the quantum threat, new post-quantum cryptographic algorithms will soon be deployed to replace existing public-key schemes. MAYO is a quantum-resistant digital signature scheme whose small keys and signatures make it suitable for widespread adoption, including on embedded platforms with limited security resources. This paper demonstrates two single-trace side-channel attacks on a MAYO implementation in ARM Cortex-M4 that recover a secret key with probabilities of 99.9% and 91.6%, respectively. Both attacks use deep learning-assisted power analysis exploiting information leakage during modular multiplication to reveal a vector in the oil space. This vector is then extended to a full secret key using algebraic techniques.
2026/287(PDF)
Last updated:  2026-02-17
Network-Agnostic Multidimensional Approximate Agreement with Optimal Resilience
Cryptographic protocols
Diana Ghinea, Darya Melnyk, and Tijana Milentijević
Show abstract
Cryptographic protocols
The Multidimensional Approximate Agreement problem ($D$-AA) considers a setting with $n$ parties with inputs in $\mathbb{R}^D$. Out of the $n$ parties, up to $t$ may be byzantine (malicious). The goal is for the honest parties to obtain $\varepsilon$-close outputs that lie in the honest inputs' convex hull. While tight bounds on the resilience of $D$-AA have been found for the purely synchronous and asynchronous models, this is still an open question for the network-agnostic model. Here, the type of network is not known a priori: it may be synchronous, and then the number of byzantine parties is up to $t_s$, or asynchronous, and then the number of byzantine parties is up to $t_a \leq t_s$. In this model, it is known that $n > (D + 1) \cdot t_s + t_a$ is sufficient for deterministic protocols [GLW, SPAA'23], tight for $D = 1$ [GLW, PODC'22], while $n > \max\{(D + 1) \cdot t_s, t_s + (D + 1) \cdot t_a\}$ is tight for randomized protocols concerned with exact agreement [CGWW, DISC'24].In this work, we establish that, for $D > 1$ the condition $n > \max\{(D + 1) \cdot t_s, t_s + (D + 1) \cdot t_a\}$ is tight for deterministic protocols as well. We identify that the gap in prior deterministic protocols is not geometric, but stems from an asymmetry in the communication primitive that produces parties' "views".The core technical contribution of our work is hence a strengthened network-agnostic Gather primitive that enforces a global structural property on the number of values received by honest parties, eliminating the problematic asymmetry - so that standard safe-area geometric convergence arguments apply under the optimal thresholds.
2026/286(PDF)
Last updated:  2026-02-17
Upper Bound on Information-Theoretic Security of Permutation-Based Pseudorandom Functions
Secret-key cryptography
Chun Guo, Jian Guo, Xinnian Li, and Wenjie Nan
Show abstract
Secret-key cryptography
We present the first general upper bound on permutation-based pseudorandom functions in the information-theoretic setting. We show that any non-compressing PRF, with input and output domain at least \([N]\), making \(t\) black-box calls to any \(t\) public permutations on \([N]\), can be distinguished from a random function over the output domain with at most \(\widetilde{O}\big(N^{t/(t+1)}\big)\) total queries to the PRF and the permutations. Our results suggest that the designs of Chen et al. (Crypto 2019) are optimal, among all possible constructions, in terms of information-theoretic security. In particular, we propose the generalized key alternating construction, which captures permutation-based PRFs. We then prove that, for any such construction, there exists an explicit distinguisher achieving the tradeoff \(Q_f Q_p^{t} = \widetilde{O}\big((2t^2)^{t+1} N^{t}\big)\) with constant advantage, where \(Q_f\) counts PRF queries and \(Q_p\) counts queries to each public permutation. We further extend our bound to blockcipher-based PRFs and to an adaptive setting in which each round may adaptively choose a permutation from a public family of permutations \(\mathcal P\). In this case, the general upper bound becomes \(\widetilde{O}\big(|\mathcal P|\,N^{t/(t+1)}\big)\).
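As a quick sanity check of how the stated tradeoff yields the headline bound, balance the two query budgets (notation as in the abstract; \(t\) is treated as a constant):

```latex
% Setting Q = Q_f = Q_p in the tradeoff recovers the stated upper bound:
\[
  Q_f\, Q_p^{\,t} \;=\; Q^{\,t+1} \;=\; \widetilde{O}\big(N^{t}\big)
  \quad\Longrightarrow\quad
  Q \;=\; \widetilde{O}\big(N^{t/(t+1)}\big).
\]
```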
2025/1705(PDF)
Last updated:  2026-02-17
Security Amplification of Threshold Signatures in the Standard Model
Foundations
Karen Azari, Cecilia Boschini, Kristina Hostáková, and Michael Reichle
Show abstract
Foundations
The current standardization calls for threshold signatures have highlighted the need for appropriate security notions providing security guarantees strong enough for broad application. To address this, Bellare et al. [Crypto'22] put forward a hierarchy of unforgeability notions for threshold signatures. Recently, Navot and Tessaro [Asiacrypt'24] introduced a new game-based definition of strong (one-more) unforgeability for threshold signatures, which however does not achieve Bellare's strongest level of security. Navot and Tessaro analyzed several existing schemes w.r.t. their strong unforgeability security notion, but all positive results rely on idealized models. This is in contrast to the weaker security notion of (standard) unforgeability, for which standard-model constructions exist. This leaves open a fundamental question: is getting strong unforgeability fundamentally harder than standard unforgeability for threshold signatures?In this paper we bridge this gap, by showing a generic construction lifting any unforgeable threshold signature scheme to strong unforgeability. The building blocks of our construction can be instantiated in the standard model under standard assumptions. The achieved notion of strong unforgeability extends the definition of Navot and Tessaro to achieve the strongest level of security according to the hierarchy of Bellare et al. (following a recent classification of security notions for (blind) threshold signatures by Lehmann, Nazarian, and Özbay [Eurocrypt'25]).The starting point for our transformation is an existing construction for single-user signatures from chameleon hash functions by Steinfeld, Pieprzyk and Wang [RSA'07]. We first simplify their construction by relying on a stronger security notion for chameleon hash functions. The bulk of our technical contribution is then to translate this framework into the threshold setting. Towards this goal, we introduce a game-based definition for threshold chameleon hash functions (TCHF) and provide a construction of TCHF that is secure under DLOG in the standard model. We believe that our new notion of TCHF might also be of independent interest.
2026/285(PDF)
Last updated:  2026-02-17
How (not) to Switch FHE Schemes: Framework and Attacks in the IND-CPA-D Model
Public-key cryptography
Giacomo Santato and Riccardo Zanotto
Show abstract
Public-key cryptography
In this paper, we study the IND-CPA-D security of FHE schemes when combined through scheme switching.We introduce a formal framework that captures security in this setting and provide a rigorous analysis of different constructions. Within this framework, we identify sufficient conditions for the switching algorithm under which the combined schemes achieve IND-CPA-D security, assuming the individual schemes are IND-CPA-D-secure.We then focus on the specific case of scheme switching from CKKS to TFHE. We analyze how existing switching algorithms designed for vanilla CKKS can be modified to preserve IND-CPA-D security.We demonstrate an attack against the IND-CPA-D security of the PEGASUS construction [LHHMQ21] and we implement a proof-of-concept of this attack against the CKKS-to-FHEW switching mechanism in the OpenFHE library.Finally, we present a generic transformation that converts a vanilla CKKS-based switch into an IND-CPA-D-secure one.
2025/1436(PDF)
Last updated:  2026-02-17
VOLE-in-the-Head Signatures Based on the Linear Code Equivalence Problem
Cryptographic protocols
Michele Battagliola, Laura Mattiuz, and Alessio Meneghetti
Show abstract
Cryptographic protocols
The Vector Oblivious Linear Evaluation in the Head (VOLEitH) paradigm has proven to be a versatile tool to design zero-knowledge proofs and signatures in post-quantum cryptography. In this paper, we propose three VOLE-friendly modellings for Proofs of Knowledge (PoK) of a solution of an instance of the Linear Code Equivalence Problem (LEP). For the first two schemes, we propose two new reductions from LEP to the Multivariate Quadratic (MQ) problem, that may be of independent interest for the cryptanalysis of LEP. Instead, the last model is obtained by generalizing a recent work by Bettaieb et al. to the context of monomial matrices instead of permutation matrices. While our proposed schemes exhibit larger signature sizes compared to LESS, they improve the computational efficiency, reducing the overall complexity from $O(n^3)$ to $O(n^2\log n)$ and $O(n^2\log^2 n)$, where $n$ is the length of the code.
2026/224(PDF)
Last updated:  2026-02-17
Usage of Mixed Integer Linear Programming in Cryptanalysis of Block Ciphers
Attacks and cryptanalysis
Halil İbrahim Kaplan
Show abstract
Attacks and cryptanalysis
This paper presents a comprehensive approach to the cryptanalysis of block ciphers using Mixed Integer Linear Programming (MILP). By formulating the cipher's components, including substitution boxes, linear layers, and key schedules, as systems of linear inequalities, MILP enables the automated discovery of optimal cryptanalytic characteristics. We demonstrate the MILP modelling through the analysis of the ITUbee cipher under three attack models: differential, linear, and related-key differential cryptanalysis. This work highlights the potential of MILP as a robust, generalizable tool for evaluating cipher security and guiding the design of resilient cryptographic primitives.
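As a concrete illustration of the modelling style described above, the sketch below builds a toy word-level MILP in the spirit of the standard Mouha et al. encoding (not the paper's ITUbee models; all parameters are illustrative) that minimizes the number of active S-boxes in a small Feistel structure.

```python
# Toy MILP for truncated differential trails in a 2-branch Feistel cipher:
# binary variables mark words with a non-zero difference, XORs get the
# standard word-level constraints, and the objective counts active S-boxes.
# Requires PuLP (pip install pulp).
import pulp

ROUNDS = 4
prob = pulp.LpProblem("min_active_sboxes", pulp.LpMinimize)

# L[r], R[r]: activity of the two branches entering round r.
L = [pulp.LpVariable(f"L{r}", cat="Binary") for r in range(ROUNDS + 1)]
R = [pulp.LpVariable(f"R{r}", cat="Binary") for r in range(ROUNDS + 1)]

# The round function (S-box layer) is applied to R[r], so the total
# activity of the R branch counts the active S-boxes.
prob += pulp.lpSum(R[r] for r in range(ROUNDS))

for r in range(ROUNDS):
    # Feistel: L[r+1] = R[r];  R[r+1] = L[r] XOR S(R[r]).
    prob += L[r + 1] == R[r]
    # Word-level XOR constraints (Mouha et al.): among the three words
    # (L[r], S(R[r]) which is active iff R[r] is, and R[r+1]), a pattern
    # with exactly one active word is impossible.
    d = pulp.LpVariable(f"d{r}", cat="Binary")
    prob += L[r] + R[r] + R[r + 1] >= 2 * d
    prob += d >= L[r]
    prob += d >= R[r]
    prob += d >= R[r + 1]

prob += L[0] + R[0] >= 1  # exclude the trivial all-zero trail
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("min active S-boxes:", int(pulp.value(prob.objective)))
```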
2025/1622(PDF)
Last updated:  2026-02-17
General Modularity Lemmata about Random Variable Commitment Schemes, and a Certified Laplace Mechanism
Foundations
Fredrik Meisingseth, Christian Rechberger, and Fabian Schmid
Show abstract
Foundations
At CRYPTO'24, Bell et al. propose a definition of random variable commitment schemes (RVCS) and show that it leads to a notion of certified differential privacy (DP). We prove that there exist RVCS's for all efficiently sampleable distributions, under the discrete logarithm assumption. This follows from three lemmata that additionally enable simple modular design of new RVCS's: First, we show that the properties of RVCS's are closed under polynomial sequential composition. Secondly, we show that homomorphically evaluating a function $f$ on the output of an RVCS for distribution $Z$ leads to an RVCS for distribution $f(Z)$. Thirdly, we show that applying a 'Commit-and-Prove'-style argument of knowledge for a function $f$ onto the output of an RVCS for distribution $Z$ results in an RVCS for distribution $f(Z)$. Further, we observe that Bell et al. use $\Sigma$-protocols and a knowledge soundness property seemingly significantly stronger than those typically used when studying $\Sigma$-protocols. To align the RVCS definition with the literature on $\Sigma$-protocols, we relax it slightly and show that known constructions fulfill this adapted definition. We propose another orthogonal relaxation of the RVCS definition which allows for sampling algorithms that, due to rejection sampling, abort with non-zero probability. We demonstrate the usefulness of the lemmata and definitional adaptations by constructing the first RVCS's for arbitrarily biased coins and a discrete Laplace distribution, leading to the first certified DP protocol for a discrete Laplace mechanism.
2026/284(PDF)
Last updated:  2026-02-17
Knowledge Soundness of Polynomial Commitments in the Algebraic Group Model Does Not Guarantee Extractability
Foundations
Petr Chmel, Pavel Hubáček, and Dominik Stejskal
Show abstract
Foundations
The Algebraic Group Model (AGM) has become a standard framework for analyzing the knowledge soundness of group-based polynomial commitment schemes. In this work, we formally establish inherent limitations of this methodology. We isolate a structural property satisfied by essentially all practical group-based polynomial commitments, which we term AGM-clarity. We prove that for AGM-clear schemes, evaluation binding implies knowledge soundness in the AGM. This collapse reveals that the AGM definition of knowledge soundness does not capture a distinct security property, but is merely a structural consequence of evaluation binding. To precisely characterize the guarantees on extractability provided by the AGM, we introduce Weak Interpolation Knowledge Soundness (WIKS) in the standard model, which is an extreme relaxation of extractability. We show that WIKS is implied by standard evaluation binding and prove that, for AGM-clear schemes, knowledge soundness in the AGM is equivalent to WIKS. Ultimately, our results demonstrate that proofs in the AGM for practical polynomial commitment schemes do not certify “knowledge” in the sense of immediate extractability, as they yield no guarantees beyond those already implied by standard-model evaluation binding.
2026/229(PDF)
Last updated:  2026-02-17
ANIMAGUS: A Provably Secure Accordion Mode of Operation
Secret-key cryptography
Gülnihal Öztürk, Onur Koçak, and Oğuz Yayla
Show abstract
Secret-key cryptography
Block ciphers are designed to operate on fixed-length blocks of bits. A block cipher mode of operation is used in order to encrypt variable-length input. These modes process multiple data blocks and ensure information security through the application of block cipher algorithms. While there are several NIST-approved block cipher modes, they exhibit certain inherent limitations and security vulnerabilities. Hence, the necessity for a novel mode has emerged. NIST aims to design an accordion mode, characterized as a tweakable variable-input-length strong pseudorandom permutation, offering enhanced security and performance attributes compared to existing modes. It most recently proposed developing accordion mode types from different variants of the HCTR2 algorithm. Thus, we propose a new design based on the HCTR2 structure for an accordion mode in this article. The proposed design accommodates variable-length inputs while ensuring strong pseudorandom permutation security. It is also compatible with any underlying block cipher and facilitates variable-length tweaks. This ultimately provides a more versatile framework compared to numerous existing schemes in the literature.
2025/1817(PDF)
Last updated:  2026-02-17
Improved Search-to-Decision Reduction for Random Local Functions
Foundations
Kel Zin Tan and Prashant Nalini Vasudevan
Show abstract
Foundations
A random local function defined by a $d$-ary predicate $P$ is one where each output bit is computed by applying $P$ to $d$ randomly chosen bits of its input. These represent natural distributions of instances for constraint satisfaction problems. They were put forward by Goldreich as candidates for low-complexity one-way functions, and have subsequently been widely studied also as potential pseudo-random generators. We present a new search-to-decision reduction for random local functions defined by any predicate of constant arity. Given any efficient algorithm that can distinguish, with advantage $\epsilon$, the output of a random local function with $m$ outputs and $n$ inputs from random, our reduction produces an efficient algorithm that can invert such functions with $\tilde{O}(m(n/\epsilon)^2)$ outputs, succeeding with probability $\Omega(\epsilon)$. This implies that if a family of local functions is one-way, then a related family with shorter output length is a family of pseudo-random generators. Prior to our work, all such reductions that were known required the predicate to have additional sensitivity properties, whereas our reduction works for any predicate. Our results also generalise to some super-constant values of the arity $d$, and to noisy predicates.
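To fix notation, here is a minimal sketch of sampling and evaluating such a random local function; the 5-ary predicate (XOR of three bits plus an AND of two) is a common example from the literature on Goldreich's function, and all sizes are illustrative.

```python
# A random local function f: {0,1}^n -> {0,1}^m where each output bit
# applies the fixed d-ary predicate P to d randomly chosen input bits.
import random

def P(bits):  # d = 5: x1 ^ x2 ^ x3 ^ (x4 & x5), a common example predicate
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

def sample_local_function(n, m, d, seed=0):
    """Return the m random d-tuples of input indices defining f."""
    rng = random.Random(seed)
    return [tuple(rng.randrange(n) for _ in range(d)) for _ in range(m)]

def evaluate(index_tuples, x):
    return [P([x[i] for i in tup]) for tup in index_tuples]

n, m, d = 128, 256, 5  # illustrative sizes only
f = sample_local_function(n, m, d)
x = [random.getrandbits(1) for _ in range(n)]
print(evaluate(f, x)[:16])  # prefix of the candidate one-way/PRG output
```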
2026/283(PDF)
Last updated:  2026-02-17
Malicious Security Comes Free in SPDZ
Cryptographic protocols
Junru Li and Yifan Song
Show abstract
Cryptographic protocols
In this work, we study the concrete efficiency of SPDZ-type MPC protocols in the dishonest majority setting with maximum corruption, where $t=n-1$ out of $n$ parties can be corrupted. In the semi-honest setting, the state-of-the-art SPDZ protocol achieves an amortized communication cost of $4n$ field elements per multiplication gate, assuming a pseudorandom correlation generator (PCG) that prepares random Beaver triples silently in the offline phase. However, achieving security against malicious adversaries typically incurs a substantial overhead. Existing works either require communication of at least $10n$ field elements per gate, or additionally require an additively homomorphic encryption scheme. In this work, we construct a maliciously secure SPDZ-type protocol with the same amortized communication complexity as the semi-honest variant, i.e., $4n$ field elements per multiplication gate, assuming a PCG for tensor-product correlations. We note that the tensor-product correlations come free as by-products from almost all existing constructions of PCGs for random Beaver triples. These by-products allow us to construct an efficient preprocessing phase and a recursive check protocol, which enables an online evaluation of the circuit without authenticating all the wire values. The correctness of the online computation can also be checked efficiently with a sublinear communication cost in the circuit size.
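As background, the sketch below shows the classical unauthenticated Beaver-triple multiplication that SPDZ-style protocols build on, for two semi-honest parties over a prime field. The paper's contributions, generating the correlations silently via a PCG and the authentication and check machinery for malicious security, are not shown here.

```python
# Classical Beaver multiplication: given a random triple c = a*b, parties
# holding additive shares of x and y obtain shares of x*y by opening only
# the masked values d = x - a and e = y - b.
import random

P = 2**61 - 1  # an illustrative prime field modulus

def share(v):
    r = random.randrange(P)
    return [r, (v - r) % P]

def reconstruct(shares):
    return sum(shares) % P

# Preprocessing (in the paper: produced silently by a PCG).
a, b = random.randrange(P), random.randrange(P)
a_sh, b_sh, c_sh = share(a), share(b), share(a * b % P)

x, y = 1234567, 7654321
x_sh, y_sh = share(x), share(y)

# Online phase: open d and e (they leak nothing, since a and b are random).
d = reconstruct([(x_sh[i] - a_sh[i]) % P for i in range(2)])
e = reconstruct([(y_sh[i] - b_sh[i]) % P for i in range(2)])

# Local share computation; the public term d*e is added by one party.
z_sh = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P for i in range(2)]
z_sh[0] = (z_sh[0] + d * e) % P

assert reconstruct(z_sh) == x * y % P
```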
2025/1571(PDF)
Last updated:  2026-02-17
Attribute-based Quantum Broadcast Encryption with Composite Policies via Symmetric Unitary t-Designs
Cryptographic protocols
Sayatan Ganguly and Shion Samadder Chaudhury
Show abstract
Cryptographic protocols
In an emerging era of quantum technologies, securing quantum communications is paramount. In this paper, we introduce two frameworks for attribute-based quantum broadcast encryption (AB-QBE), enabling fine-grained access control over sensitive quantum data. The first scheme, Multi-Policy Quantum Private Broadcast Encryption (MP-QPBE), leverages symmetric unitary \(t\)-designs to construct a protocol where decryption is possible upon satisfying a composite set of access policies simultaneously. This approach ensures that a user can only access the broadcasted quantum state if their attributes fulfill multiple predefined criteria. In our MP-QPBE, we first perform symmetrization of the initial quantum message for the encryption of the tensor product of distinct pure states, a scenario not directly addressed by previous quantum private broadcasting schemes. We demonstrate that this method is information-theoretically secure. The second scheme, Component-wise Independent Quantum Broadcast Encryption (CI-QBE), offers an alternative, lossless approach where each quantum message is encrypted independently using a unitary \(1\)-design. It provides greater flexibility and is applicable to arbitrary quantum states, including mixed states, without the information loss inherent in symmetrization. We provide a comprehensive security analysis for both constructions, proving their robustness against unauthorized users and adversaries with quantum side information. A comparative analysis highlights the trade-offs between the two schemes in terms of security guarantees, quantum resource requirements, and practical applicability, offering a nuanced perspective on designing secure multi-user quantum communication systems.
2026/282(PDF)
Last updated:  2026-02-17
Unforgeable Watermarks for Language Models via Robust Signatures
Foundations
Huijia Lin, Kameron Shahabi, and Min Jae Song
Show abstract
Foundations
Language models now routinely produce text that is difficult to distinguish from human writing, raising the need for robust tools to verify content provenance. Watermarking has emerged as a promising countermeasure, with existing work largely focused on model quality preservation and robust detection. However, current schemes provide limited protection against false attribution.We strengthen the notion of soundness by introducing two novel guarantees: unforgeability and recoverability. Unforgeability prevents adversaries from crafting false positives, texts that are far from any output from the watermarked model but are nonetheless flagged as watermarked. Recoverability provides an additional layer of protection: whenever a watermark is detected, the detector identifies the source text from which the flagged content was derived. Together, these properties strengthen content ownership by linking content exclusively to its generating model, enabling secure attribution and fine-grained traceability.We construct the first undetectable watermarking scheme that is robust, unforgeable, and recoverable with respect to substitutions (i.e., perturbations in Hamming metric). The key technical ingredient is a new cryptographic primitive called robust (or recoverable) digital signatures, which allow verification of messages that are close to signed ones, while preventing forgery of messages that are far from all previously signed messages. We show that any standard digital signature scheme can be boosted to a robust one using property-preserving hash functions (Boyle, LaVigne, and Vaikuntanathan, ITCS 2019).
2026/258(PDF)
Last updated:  2026-02-17
Lightning, Field-Agnostic Super-Efficient Polynomial Commitment Scheme
Cryptographic protocols
Wenjie Qu, Yanpei Guo, and Jiaheng Zhang
Show abstract
Cryptographic protocols
Polynomial commitment schemes (PCS) are a fundamental building block of modern zkSNARKs. In this paper, we propose Lightning, a new coding-based PCS that achieves state-of-the-art prover efficiency. Our main technical contribution is a new linear code family, the Lightning code, which can be instantiated from any base code with constant relative distance. Compared to the base code, Lightning code significantly reduces encoding cost by trading off relative distance. We integrate Lightning code into the standard coding-based PCS framework of Ligero and Brakedown. Experimental results show that Lightning PCS reduces prover commitment time by up to $2.7\times$ compared to the fastest prover configuration of Brakedown, at the cost of a $2.4\times$ increase in proof size. Overall, Lightning provides a practical mechanism for trading proof size for prover efficiency in coding-based PCS constructions.
2026/281(PDF)
Last updated:  2026-02-17
Do Androids Dream of a Dead Internet: Interactive Watermarks for Bot Detection
Cryptographic protocols
Brennon Brimhall, Harry Eldridge, Maurice Shih, Ian Miers, and Matthew Green
Show abstract
Cryptographic protocols
A number of recent works propose watermarking the outputs of large language models (LLMs) but fail to describe who is authorized to watermark the text or check for a watermark. To resolve these problems, we propose interactive watermarking schemes. Our technique leverages the fact that, for many of the cases in which detecting synthetic text is useful, the detector is able to control some part of the prompt that is passed to the LLM. In other words, we propose poisoning the prompt, through which the examining user establishes a steganographic channel with the LLM provider. This lets the user and the LLM provider agree on a shared key, which is then used to embed a symmetric watermark and lets the end-user examiner learn whether the entity they are conversing with is a bot. Because the steganographic prompt and the LLM response are indistinguishable from their natural distributions, this approach simultaneously sidesteps prior impossibility results from Zhang et al. [ICML'24] and resolves the authorization questions left unanswered by previous work. Our primary construction is based on elliptic curve Diffie-Hellman; we sketch a more sophisticated version using broadcast encryption. Our secondary construction uses a symmetric key protocol with a pre-shared key. To improve efficiency, we introduce steganographic synchronization codes. We experimentally validate our theoretical findings and outline directions for future work.
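The key-agreement step can be illustrated with a toy Diffie-Hellman exchange in Python. This is only a schematic of the shared-key derivation; the paper's primary construction uses elliptic-curve Diffie-Hellman embedded steganographically in the prompt, and every name and parameter below is ours, not the paper's.

    import hashlib, secrets

    # Toy Diffie-Hellman over the Mersenne prime 2^127 - 1 (illustrative only).
    p = 2**127 - 1
    g = 3

    a = secrets.randbelow(p - 2) + 1      # examiner's ephemeral secret
    b = secrets.randbelow(p - 2) + 1      # LLM provider's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)     # A would ride inside the poisoned prompt

    shared = pow(B, a, p)
    assert shared == pow(A, b, p)         # both sides derive the same key
    wm_key = hashlib.sha256(shared.to_bytes(16, "big")).digest()

    def watermark_bit(key: bytes, token_index: int) -> int:
        # PRF-style bit that would bias token sampling during generation.
        return hashlib.sha256(key + token_index.to_bytes(8, "big")).digest()[0] & 1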
2026/280(PDF)
Last updated:  2026-02-17
Reducing the Number of Qubits in Quantum Discrete Logarithms on Elliptic Curves
Attacks and cryptanalysis
Clémence Chevignard, Pierre-Alain Fouque, and André Schrottenloher
Show abstract
Attacks and cryptanalysis
Solving the Discrete Logarithm problem on the group of points of an elliptic curve is one of the major cryptographic applications of Shor's algorithm. However, current estimates for the number of qubits required remain relatively high, and notably, higher than the best recent estimates for factoring RSA moduli. For example, recent work by Gidney (arXiv 2025) estimates 2043 logical qubits for breaking 3072-bit RSA, while previous work by Häner et al. (PQCrypto 2020) estimates a requirement of 2124 logical qubits for solving discrete logarithm instances on 256-bit elliptic curves over prime fields. Indeed, for an $n$-bit elliptic curve, the most space-optimized implementation, by Proos and Zalka (Quant. Inf. Comput. 2003), uses $5n + o(n)$ qubits, since additional space is required to store the coordinates of points and compute the addition law. In this paper, we propose an alternative approach to the computation of point multiplication in Shor's algorithm (on input $k$, computing $k P$ where $P$ is a fixed point). Instead of computing the point multiplication explicitly, we use a Residue Number System to compute directly the projective coordinates of $k P$ with low space usage. Then, to avoid performing any modular inversion, we compress the result to a single bit using a Legendre symbol. This strategy allows us to obtain the most space-efficient polynomial-time algorithm for the ECDLP to date, with only $3.12n + o(n)$ qubits, at the expense of an increase in gate count, from $\mathcal{O}(n^3)$ to $\widetilde{\mathcal{O}}(n^3)$. For $n = 256$ we estimate that 1098 qubits would be necessary, with 22 independent runs, using $2^{38.10}$ Toffoli gates each. This represents a much higher gate count than the previous estimate by Häner et al. (roughly $2^{30}$), but half of the corresponding number of qubits (2124).
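One plausible reading of the inversion-free compression step, sketched in Python: the Legendre symbol of $X \cdot Z$ equals that of the affine coordinate $X/Z$ (the two differ by the square $Z^2$), so a single bit about the result can be extracted without any modular inversion. Parameters below are toy values, not the paper's.

    # Legendre symbol via Euler's criterion, and the inversion-free identity.
    p = 2**255 - 19  # any odd prime works for this illustration

    def legendre(a: int, p: int) -> int:
        # a^((p-1)/2) mod p is 1 for quadratic residues, p-1 for non-residues.
        t = pow(a % p, (p - 1) // 2, p)
        return 0 if t == 0 else (1 if t == 1 else -1)

    # A projective point (X : Y : Z) represents the affine point (X/Z, Y/Z).
    # Since Z^2 is a square, legendre(X * Z) == legendre(X / Z): one bit about
    # the affine x-coordinate, obtained without computing Z^{-1} coherently.
    X, Z = 12345, 67890
    assert legendre(X * Z, p) == legendre(X * pow(Z, p - 2, p), p)
    bit = 1 if legendre(X * Z, p) == 1 else 0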
2026/058(PDF)
Last updated:  2026-02-16
Zero Knowledge (About) Encryption: A Comparative Security Analysis of Three Cloud-based Password Managers
Attacks and cryptanalysis
Matteo Scarlata, Giovanni Torrisi, Matilda Backendal, and Kenneth G. Paterson
Show abstract
Attacks and cryptanalysis
Zero Knowledge Encryption is a term widely used by vendors of cloud-based password managers. Although it has no strict technical meaning, the term conveys the idea that the server, which stores encrypted password vaults on behalf of users, is unable to learn anything about the contents of those vaults. The security claims made by vendors imply that this should hold even if the server is fully malicious. This threat model is justified in practice by the high sensitivity of vault data, which makes password manager servers an attractive target for breaches (as evidenced by a history of attacks). We examine the extent to which security against a fully malicious server holds true for three leading vendors who make the Zero Knowledge Encryption claim: Bitwarden, LastPass and Dashlane. Collectively, they have more than 60 million users and 23% market share. We present 12 distinct attacks against Bitwarden, 7 against LastPass and 6 against Dashlane. The attacks range in severity, from integrity violations of targeted user vaults to the complete compromise of all the vaults associated with an organisation. The majority of the attacks allow recovery of passwords. We have disclosed our findings to the vendors and remediation is underway. Our attacks showcase the importance of considering the malicious server threat model for cloud-based password managers. Despite vendors’ attempts to achieve security in this setting, we uncover several common design anti-patterns and cryptographic misconceptions that resulted in vulnerabilities. We discuss possible mitigations and also reflect more broadly on what can be learned from our analysis by developers of end-to-end encrypted systems.
2026/279(PDF)
Last updated:  2026-02-16
On the Concrete Hardness Gap Between MLWE and LWE
Attacks and cryptanalysis
Tabitha Ogilvie
Show abstract
Attacks and cryptanalysis
Concrete security estimates for Module-LWE (MLWE) over an appropriate ring are often obtained by translating to an "equivalent" unstructured LWE instance, which implicitly treats algebraic structure as a pure efficiency gain with no impact on security. We show that this heuristic fails for realistic parameters. In common MLWE/RLWE instantiations, an attacker can exploit symmetries to obtain hybrid attacks that are strictly stronger than the best corresponding attack on LWE, translating to a concrete hardness gap between MLWE and LWE. Our starting point is the observation that many cryptographically relevant rings admit coefficient isometries: ring elements whose multiplication acts as a signed permutation on coefficient vectors and preserves the secret and error distributions of interest. Multiplying an MLWE instance by such an isometry creates many derived instances that share the same public matrix and are therefore compatible with the same expensive offline preprocessing in hybrid attacks. We formalise this mechanism and incorporate it into both primal and dual hybrid frameworks. We instantiate coefficient isometries for power-of-two cyclotomic rings, and quantify the resulting advantage in two regimes. For sparse-secret RLWE (popular in homomorphic encryption), isometry-enabled hybrids yield gaps of up to 15 bits over LWE-based estimates. For the standardised Kyber/ML-KEM parameters, we obtain a consistent 2--3 bit gap under standard cost models. Our results demonstrate that the widely assumed equivalence between LWE and MLWE in power-of-two cyclotomics does not hold, with real-world consequences for deployed schemes.
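A minimal Python sketch of a coefficient isometry in the power-of-two cyclotomic ring $R = \mathbb{Z}[X]/(X^n + 1)$: multiplication by $X$ is a negacyclic rotation, i.e., a signed permutation of the coefficient vector, so it preserves sparsity and small norms. Toy parameters; the hybrid-attack machinery is not modeled.

    # Multiplication by X in Z_q[X]/(X^n + 1) acts as a signed rotation.
    n, q = 8, 97

    def mul_by_x(coeffs):
        # X * (a_0 + a_1 X + ... + a_{n-1} X^{n-1}) = -a_{n-1} + a_0 X + ...
        return [(-coeffs[-1]) % q] + coeffs[:-1]

    a = [1, 0, 0, 5, 0, 0, 0, 2]          # sparse element of R_q
    b = mul_by_x(a)
    assert b == [(-2) % q, 1, 0, 0, 5, 0, 0, 0]

    # The multiset of centered absolute values is unchanged, so a sparse or
    # small-norm secret stays sparse/small after the isometry is applied.
    signed = lambda v: min(v, q - v)
    assert sorted(map(signed, a)) == sorted(map(signed, b))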
2026/278(PDF)
Last updated:  2026-02-16
Exploiting PDF Obfuscation in LLMs, arXiv, and More
Applications
Zhongtang Luo, Jianting Zhang, and Zheng Zhong
Show abstract
Applications
Many modern systems parse PDF files to extract semantic information, including multimodal large language models and academic submission platforms. We show that this practice is extremely vulnerable in real-world use cases. By exploiting standard-compliant features of the PDF page description language, an adversary can craft PDFs whose parsed content differs arbitrarily from what is visually rendered to human readers and whose metadata can be manipulated to mislead automated systems. We demonstrate two concrete vulnerabilities. First, we build adversarial PDFs using font-level glyph remapping that cause several widely deployed multimodal language models to extract incorrect text, while remaining visually indistinguishable from benign documents. Across six platforms, most systems that rely on PDF text extraction are vulnerable, whereas OCR-based pipelines are robust. Second, we analyze arXiv's TeX-detection mechanism and show that it relies on brittle metadata and font heuristics, which can be fully bypassed without changing the visual output. Our findings reveal a potential risk arising from discrepancies between automated PDF parsing and human-visible semantics. We argue that rendering-based interpretation, followed by computer vision, is a better approach for security-sensitive PDF interpretation.
2026/277(PDF)
Last updated:  2026-02-16
Collusion-Minimized TLS Attestation Protocol for Decentralized Applications
Cryptographic protocols
Uğur Şen, Murat Osmanoğlu, Oğuz Yayla, Ali Aydın Selçuk, and Ali Doğanaksoy
Show abstract
Cryptographic protocols
Transport Layer Security (TLS) attestation protocols are a key building block for decentralized applications that require authenticated off-chain data. However, existing Designed Commitment TLS (DCTLS) constructions rely on designated verifiers, which prevents public verifiability and enables prover–verifier collusion in on-chain settings. To address these limitations, we propose a collusion-minimized TLS attestation framework $\Pi_{\textsf{coll-min}}$ that enables jointly verifiable attestations with distributed verifiers. The framework combines two complementary components: a modified and exportable variant of DCTLS, denoted as dx-DCTLS, which enables third-party verification by replacing non-verifiable components with verifiable counterparts, and a decentralized validation layer based on distributed verifiable random functions (DVRF) and a threshold signature scheme (TSS). Together, these two components allow multiple verifiers to jointly validate TLS attestations while minimizing prover–verifier collusion. In this study, we formalize a threshold attestation unforgeability notion capturing adversarial behaviors in multi-verifier environments and prove security under standard assumptions. Specifically, by transitioning from independent multi-session validations, as commonly employed in decentralized oracle networks (DONs), to a unified and exportable attestation framework, we achieve a reduction in the prover complexity from $O(n)$ to $O(1)$. To evaluate practicality, we provide a prototype implementation of the DVRF–TSS component and a performance analysis of dx-DCTLS. The results show that the proposed framework remains efficient at high threshold sizes and introduces only modest additional overhead, demonstrating the feasibility of collusion-minimized and jointly verifiable TLS attestations for smart contract environments.
2026/276(PDF)
Last updated:  2026-02-16
On the conversion of module representations for higher dimensional supersingular isogenies
Foundations
Aurel Page, Damien Robert, and Julien Soumier
Show abstract
Foundations
We expand the well-developed toolbox relating quaternionic ideals and supersingular elliptic curves into its higher dimensional version, namely (Hermitian) modules and maximal supersingular principally polarised abelian varieties. One of our main results is an efficient algorithm to compute an unpolarised isomorphism $A \simeq E_0^g$ given the abstract module representation of $A$. This algorithm relies on a subroutine that solves the Principal Ideal Problem in matrix rings over quaternion orders, combined with a higher dimensional generalisation of the Clapotis algorithm. To illustrate the flexibility of our framework, we also use it to reduce the degree of the output of the KLPT$^2$ algorithm, from $O(p^{25})$ to $O(p^{15.5})$.
2026/275(PDF)
Last updated:  2026-02-16
PhantomCrypt: Second-Order Deniable Encryption with Post-Quantum Security
Cryptographic protocols
Shahzad Ahmad, Stefan Rass, and Zahra Seyedi
Show abstract
Cryptographic protocols
Traditional deniable encryption primarily focuses on denying the $content$ of secret communications, allowing plausible alternative plaintexts to be presented in the event of coercion. However, even the recognizable use of deniable encryption may already defeat its purpose, making any revealed plaintext suspicious to a coercer. Hence, for practical deniability, not only does the content need to be deniable, but also the entire use of deniable encryption must be considered. We call this $second~order~deniability$. This notion combines hiding the use of deniable encryption altogether, so that covert communication is indistinguishable from innocuous data, with enhanced content deniability, enabling multiple cryptographically plausible decryptions that are computationally indistinguishable from the true message. To show its practicality, we present PhantomCrypt, which combines conventional deniable encryption (DE) with steganographic methods to hide the use of DE itself, while retaining the ability to decrypt a ciphertext into several distinct plaintexts, even if not under pressure. We prove the security of PhantomCrypt using a formalization of second-order deniability in standard cryptographic terms.
2025/1674(PDF)
Last updated:  2026-02-16
Secure Rate-Distortion-Perception Trade-Off with Side Information
Foundations
Gustaf Åhlgren and Onur Günlü
Show abstract
Foundations
Secure rate-distortion-perception (RDP) trade-offs are relevant for applications such as semantic compression, where the perceptual quality needs to be maximized. We study a framework for secure RDP over an ideal public communication channel in the presence of an eavesdropper, where the legitimate parties also have access to side information correlated with the source. The exact rate region for the secure RDP trade-off is established when both the encoder and the decoder have access to the side information. We then characterize an inner bound when only the decoder has access to the side information and establish the exact region for a special case. Moreover, we provide an RDP example to illustrate remarkable gains in communication rate due to common randomness, which is not possible to obtain for rate-distortion trade-offs. Our results show that binning-based schemes can achieve high perceptual quality, low distortion, and strong secrecy simultaneously, establishing the information-theoretic limits for next-generation trustworthy semantic compression systems.
2025/1875(PDF)
Last updated:  2026-02-16
Generic-compatible distinguishers for linear regression based attacks
Attacks and cryptanalysis
Sana Boussam
Show abstract
Attacks and cryptanalysis
Non profiled attacks aim to recover secret information from a device without prior knowledge of its leakage model. However, most practical non profiled attacks, such as Differential Power Analysis, Correlation Power Analysis, and Linear Regression-based Attacks (LRA), still depend on a priori leakage assumptions. Designing a generic attack that does not rely on any such assumption therefore remains an open problem and has been actively investigated by the side-channel community for more than a decade. Although Whitnall et al. showed that LRA can be considered generic when all predictors are included, this is not feasible in practice due to inherent multicollinearity issues and the inadequacy of classical distinguishers, which lose discriminating power when targeting injective functions. In this work, we overcome these limitations and propose the first fully generic-compatible non profiled attack. We show that using a Walsh-Hadamard basis enables generic LRA by eliminating multicollinearity and allowing all predictors to be considered without loss of precision. We also introduce new generic-compatible distinguishers tailored to LRA and formally prove their soundness for both linear and non-linear cryptographic operations. Finally, we validate our approach experimentally using simulations and publicly available datasets.
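The multicollinearity point can be made concrete with a small Python check: the full Walsh-Hadamard basis $\chi_u(x) = (-1)^{\langle u, x \rangle}$ over $n$-bit inputs is orthogonal, so a regression over all $2^n$ predictors is perfectly conditioned. This only illustrates the basis property exploited above, not the attack or the new distinguishers.

    from itertools import product

    # Build all 2^n Walsh-Hadamard predictors chi_u(x) = (-1)^{<u,x>} and
    # verify their mutual orthogonality. Toy n.
    n = 4
    xs = list(product([0, 1], repeat=n))

    def chi(u, x):
        return -1 if sum(a & b for a, b in zip(u, x)) % 2 else 1

    H = [[chi(u, x) for x in xs] for u in product([0, 1], repeat=n)]

    # Distinct rows have inner product 0; each row has squared norm 2^n.
    # Orthogonality means all predictors can be used with no multicollinearity.
    for i, ri in enumerate(H):
        for j, rj in enumerate(H):
            ip = sum(a * b for a, b in zip(ri, rj))
            assert ip == (2**n if i == j else 0)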
2025/204(PDF)
Last updated:  2026-02-16
On the Composable Security of MDVS and MDRS-PKE Constructions
Uncategorized
Chen-Da Liu-Zhang, Christopher Portmann, and Guilherme Rito
Show abstract
Uncategorized
Off-The-Record (OTR) messaging applications are designed to provide so-called Deniable Authentication guarantees. These allow a sender Alice to designate a receiver Bob and sign a message m for him that only he can validate. While Bob is guaranteed Alice signed m, he cannot convince non-designated Judy that Alice signed m, even if he gives Judy his secret keys. This is because, as Judy knows, Bob could have forged that signature. In recent work, Liu-Zhang, Portmann and Rito construct the first fully Off-The-Record group messaging application (ePrint 2024/1593). Central to their construction are new idealized communication channels that provide rather strong Deniable Authentication guarantees. While these channels are introduced to capture the guarantees provided by Multi-Designated Verifier Signatures (MDVS) and Multi-Designated Receiver Signed Public Key Encryption (MDRS-PKE) schemes, no scheme is proven to construct them. Moreover, the only composable treatment of MDVS schemes does not guarantee deniability for messages that are read by honest parties (Maurer, Portmann and Rito, ASIACRYPT ’21). In contrast, however, the channels assumed for the construction of the OTR messenger provide this guarantee, meaning their OTR messenger has no known provably secure instantiations. We close this gap by providing a new (generic) composable treatment of MDVS and MDRS-PKE schemes. Interestingly, our treatment allowed us to identify a new property, Forgery Invalidity, without which we do not know how to prove the deniability of either MDVS or MDRS-PKE schemes when honest receivers read. We show that any MDVS and MDRS-PKE scheme providing this guarantee (plus other known ones) constructs the idealized channel semantics that are assumed by Liu-Zhang, Portmann and Rito’s OTR messenger construction. Then, we prove that Chakraborty et al.’s MDVS (EUROCRYPT ’23) has this property, and that Maurer et al.’s MDRS-PKE (EUROCRYPT ’22) preserves it from the underlying MDVS. Together, our results imply that Liu-Zhang, Portmann and Rito’s OTR messenger can be securely instantiated from Chakraborty et al.’s MDVS, or from Maurer et al.’s MDRS-PKE.
2026/274(PDF)
Last updated:  2026-02-16
From linear regression to generative model for explainable non profiled side-channel attacks
Attacks and cryptanalysis
Sana Boussam, Mathieu Carbone, Benoît Gérard, Guénaël Renault, and Gabriel Zaid
Show abstract
Attacks and cryptanalysis
Non profiled side-channel attacks aim to exploit leakage traces from a targeted embedded system to extract secret information, without a priori knowledge of the true leakage model of the device. To automate and simplify attacks, deep learning techniques have thus been introduced in the side-channel community. Most published works have mainly explored the use of discriminative models in the profiled or non profiled context. However, the lack of interpretability and explainability of these models constitutes a major limitation for security laboratories. Indeed, the lack of theoretical results regarding the choice of neural network architectures or the selection of appropriate loss functions poses significant challenges for evaluators seeking to construct a relevant and suitable model for the targeted leakage traces. To address the aforementioned limitations, we propose in this work a novel conditional generative model, specifically designed to carry out non profiled attacks, that is fully interpretable and explainable. To do so, we develop a model tailored to the non profiled context. To guarantee the interpretability and explainability of our model, we provide theoretical results to justify both its architecture and a new loss function for its optimization process. We further propose a key recovery strategy based on our model that requires no leakage model assumptions. As a consequence, our work thus represents the first interpretable and generic (i.e., requiring no a priori knowledge of the leakage model) non profiled deep learning-based side-channel attack. Moreover, to emphasise the benefits of our new model in comparison with conventional linear regression-based attacks (LRA), we also provide a theoretical comparative analysis of the deterministic part estimation for different Gaussian noise configurations. Finally, we experimentally validate and compare the attack performance of our model with LRA and state-of-the-art discriminative-models-based non profiled attacks using simulations and various publicly available datasets.
2026/273(PDF)
Last updated:  2026-02-16
Weighted Cryptography with Weight-Independent Complexity
Cryptographic protocols
Aarushi Goel, Swagata Sasmal, and Mingyuan Wang
Show abstract
Cryptographic protocols
Cryptographic primitives involving multiple participants, such as secure multiparty computation (MPC), threshold signatures, and threshold encryption, are typically designed under the assumption that at least a threshold number of participants remain honest and non-colluding. However, many real-world applications require more expressive access structures beyond simple thresholds. A prominent example is the weighted threshold access structure, where each party is assigned a weight and security holds as long as the total weight of corrupted parties does not exceed a specified threshold. Despite the practical relevance of such access structures, our understanding of efficient constructions supporting them remains limited. For instance, existing approaches for weighted MPC and weighted threshold encryption incur costs that scale with the total weight assigned to all parties, or rely on non-black-box use of cryptography. In this work, we present the first black-box constructions of the following weighted cryptosystems with weight-independent complexity in the trusted setup model: (i) a weighted MPC protocol with guaranteed output delivery, (ii) a semi-honest weighted threshold encryption scheme, and (iii) a semi-honest weighted threshold Schnorr signature scheme. At the heart of our constructions is a new succinct computational secret sharing scheme with linear homomorphism for weighted threshold access structures. We provide two concrete instantiations of this primitive, based on the Decisional Composite Residuosity (DCR) assumption and the Learning With Errors (LWE) assumption, respectively. Furthermore, our constructions extend to any general access structure that can be represented efficiently as a monotone Boolean circuit.
2025/1851(PDF)
Last updated:  2026-02-16
Locally Recoverable Data Availability Sampling
Foundations
Seunghyun Cho, Eunyoung Seo, and Young-Sik Kim
Show abstract
Foundations
Data Availability Sampling (DAS) enables light clients to probabilistically verify that a large published object remains retrievable by sampling a small number of coded symbols and checking consistency. Most existing DAS designs are predicated on MDS-style global recovery thresholds and therefore yield an essentially binary availability signal: either sufficient evidence accumulates to enable global reconstruction, or it does not. In this work we introduce Locally Recoverable Data Availability Sampling (LR-DAS), which leverages locally recoverable codes with disjoint recovery groups to provide graded availability evidence. A verifier sequentially certifies groups: once it has verified $r$ distinct openings in a group, it can reconstruct the entire group locally and monotonically increase a certified availability fraction. The central cryptographic challenge is preventing splicing attacks, where an adversary mixes locally consistent views that do not arise from a single global codeword. We define locality-aware erasure-code commitments that formalize position binding, local binding, and an explicit local-global consistency requirement, and we give a construction that enforces local-global consistency using a single out-of-domain algebraic linking check per certified group. We prove graded soundness via overshoot bounds that decompose into a sampling term and a negligible cryptographic failure term, covering both missing-groups and missing-fraction withholding patterns. We provide two concrete instantiations: a succinct pairing-based realization and a transparent realization based on low-degree/proximity testing, and we evaluate the resulting trade-offs against representative DAS and repair-oriented approaches.
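A toy Python sketch of local recovery with disjoint groups, assuming the simplest possible local code (one XOR parity per group of $r$ data symbols): any single erasure in a group is repaired from the remaining $r$ symbols, which is the mechanism behind certifying availability group by group. The paper's commitments and splicing defenses are not modeled here.

    # One recovery group: r data symbols plus a single XOR parity symbol.
    r = 4
    data = [b"\x01", b"\x17", b"\x2a", b"\x3c"]

    def parity(symbols):
        acc = 0
        for s in symbols:
            acc ^= s[0]
        return bytes([acc])

    group = data + [parity(data)]            # r + 1 coded symbols in the group

    # Erase any one symbol; the XOR of the other r recovers it locally,
    # so r verified openings suffice to reconstruct the whole group.
    for missing in range(r + 1):
        rest = [s for i, s in enumerate(group) if i != missing]
        assert parity(rest) == group[missing]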
2026/271(PDF)
Last updated:  2026-02-16
Defining Quantum-Secure Message Authentication
Secret-key cryptography
Ashwin Jha, Mustafa Khairallah, Jannis Leuther, and Stefan Lucks
Show abstract
Secret-key cryptography
The classical EUF-CMA notion for the security of message authentication codes (MACs) is based on "freshness": messages chosen by the adversary are authenticated, and then the adversary has to authenticate a fresh message on its own. In a quantum setting, where classical messages are authenticated but adversaries can make queries in superposition, "freshness" is undefinable. Instead of requiring the adversary to be unable to forge a fresh message, one can require "stability" (the adversary cannot authenticate more messages than queried before), or "exclusiveness" (the adversary cannot authenticate a message from a subset of messages it did not query before). The "plus-one" security definition, proposed by Boneh and Zhandry at Eurocrypt 2013, maintains stability, but fails at exclusiveness. The "blind unforgeability" notion from Alagic et al. (Eurocrypt 2020) maintains exclusiveness, but it is unknown if it maintains stability. This paper proposes a new security definition: "splitting unforgeability" (SU). A MAC is SU-secure if it maintains both stability and exclusiveness. Against classical adversaries, SU is equivalent to EUF-CMA. Against quantum adversaries, SU implies both plus-one security and blind unforgeability. With respect to $q$-query adversaries, $(2q-1)$-wise independent functions do not suffice for SU, but $(3q+1)$-wise independent functions do, as does a qPRF. Boneh and Zhandry's "Quantum Carter-Wegman MAC" (BZq-MAC), which combines a qPRF and an $\epsilon$-AXU hash function, is SU-secure up to the quantum birthday bound. We additionally analyse the security of different instantiations of the Hash-then-MAC composition of a SU-secure MAC $F$ and a hash function $H$ which is either collapsing or Bernoulli-preserving.
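For concreteness, a standard way to realize the $(3q+1)$-wise independent functions mentioned above is a uniformly random polynomial of degree $3q$ over a prime field. The Python sketch below is the generic textbook instantiation under that assumption, not code from the paper.

    import secrets

    # (3q+1)-wise independent function: a random polynomial of degree 3q,
    # since a degree-3q polynomial is determined by exactly 3q+1 point-value
    # pairs, any 3q+1 outputs are jointly uniform. Toy parameters.
    q_queries = 2
    degree = 3 * q_queries
    p = (1 << 61) - 1                 # Mersenne prime field

    coeffs = [secrets.randbelow(p) for _ in range(degree + 1)]

    def f(x: int) -> int:
        # Horner evaluation of the random polynomial at x.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc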
2026/243(PDF)
Last updated:  2026-02-16
Towards Making Doubly-Efficient PIR Practical
Cryptographic protocols
Pan Xiao, Heng Zhang, Rending Ouyang, Cong Zhang, Jian Liu, Kui Ren, and Chun Chen
Show abstract
Cryptographic protocols
Doubly-efficient private information retrieval (DEPIR) enables sublinear per-query work (in the database size $N$) for both client and server, while requiring no client state. Despite its theoretical promise, single-server DEPIR exhibits a prohibitive concrete efficiency gap: for $N=2^{23}$, the state-of-the-art construction (Eurocrypt '25) requires a 733TB server state and over $2^{37}$ online RAM/disk reads, rendering it infeasible to execute. This paper advances single-server DEPIR towards practicality through a series of algorithmic innovations. Compared with the state-of-the-art, we achieve a 4 orders of magnitude reduction in server state and a 6 orders of magnitude reduction in query time. In particular, for a database of the same size ($N=2^{23}$), querying $5461$ elements in a single batch requires only 171GB of server state and $2^{24}$ online RAM/disk reads, yielding a 112s total query time and a 21ms amortized query time.
2026/270(PDF)
Last updated:  2026-02-16
Pseudorandomness of Knapsacks over a Number Ring
Foundations
Biswajit Mandal and Shashank Singh
Show abstract
Foundations
In this work, we study the pseudorandomness of the bounded Knapsack function family defined over a number ring $R$. We establish that if the Knapsack function family is one-way and certain related folded Knapsack function families are pseudorandom, then the original Knapsack is pseudorandom. This can be seen as a generalisation of the work of Micciancio and Mol presented at CRYPTO 2011 to the case of Knapsack function families defined over an arbitrary number ring.
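A minimal Python sketch of the bounded Knapsack function family, written over $\mathbb{Z}_q$ for concreteness (the paper works over a number ring $R$; replacing the scalar products by ring multiplications gives the ring version). All parameters are illustrative.

    import secrets

    # Bounded Knapsack: f_a(x) = sum_i a_i * x_i mod q with |x_i| <= beta.
    m, q, beta = 16, 2**16 + 1, 4          # m weights, modulus q, input bound

    a = [secrets.randbelow(q) for _ in range(m)]   # public knapsack weights

    def f(x):
        assert all(abs(xi) <= beta for xi in x)
        return sum(ai * xi for ai, xi in zip(a, x)) % q

    x = [secrets.randbelow(2 * beta + 1) - beta for _ in range(m)]
    y = f(x)
    # Pseudorandomness asks: is (a, y) indistinguishable from (a, uniform)?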
2025/1869(PDF)
Last updated:  2026-02-16
Just How Secure is SRP, Really?
Cryptographic protocols
Jiayu Xu and Zhiyuan Zhao
Show abstract
Cryptographic protocols
An asymmetric Password-Authenticated Key Exchange (aPAKE) protocol allows a client and a server to agree upon a cryptographic key, with the only information shared in advance being a low-entropy password (memorized by the client) and a corresponding password file (stored in the server). The standard security definition for aPAKE is in the Universally Composable (UC) framework. Since aPAKE does not rely on a PKI, such protocols have received much attention in recent years due to their potential to replace the traditional “password-over-TLS” approach for client-server authentication on the Internet. One of the only two aPAKE protocols currently deployed in practice is Secure Remote Password (SRP) (Wu, NDSS 1998), which is used by millions of users on the Internet. A formal security analysis of SRP has long been elusive; the only security proof to date is the one by Dayanikli and Lehmann (CSF 2024), which is in a (somewhat non-standard) variant of UC called UC with angels. This framework allows the simulator access to an additional oracle that solves certain hard cryptographic problems. In this work, we present a comprehensive new analysis of SRP, and clarify a number of ambiguities about its security:
1. We formally prove that SRP is UC-secure if and only if one computational problem in the field is hard and one decisional problem is easy. As the decisional problem is likely hard in the field, this strongly suggests that SRP is not UC-secure and hence justifies the usage of UC with angels in the security analysis;
2. On the other hand, we show that the “angel” given to the simulator in the Dayanikli–Lehmann analysis is stronger than necessary, and can be replaced by a weaker oracle;
3. Finally, we prove that UC-with-angels security is still stronger than game-based security for aPAKE, i.e., SRP is still game-based secure under some reasonable assumptions.
Overall, we pinpoint the exact conditions under which SRP can be proven secure, reducing its security to a number of underlying hardness problems. Along the way we also identify and bridge multiple gaps in the Dayanikli–Lehmann analysis — most notably, the concrete security bound — and apply novel proof techniques that might find application in other contexts.
2025/2037(PDF)
Last updated:  2026-02-16
On the Simulation-Extractability of Proof-Carrying Data
Cryptographic protocols
Behzad Abdolmaleki, Matteo Campanelli, Quang Dao, and Hamidreza Khoshakhlagh
Show abstract
Cryptographic protocols
With proof-carrying data (PCD), nodes in a distributed computation can certify its correct execution, obtaining proofs with low verification overhead (relative to the complexity of the computation). As PCD systems—and their special case, incrementally verifiable computation (IVC)—see rapid adoption in practice, understanding their robustness against malleability attacks becomes crucial. In particular, it remains unexplored whether recursive proof systems satisfy simulation extractability (SIM-EXT)—a property ensuring non-malleability and composability. This work provides the first systematic study of simulation extractability for PCD. We begin by observing that the standard SIM-EXT notion for non-recursive zkSNARKs does not directly extend to PCD/IVC settings. To address this, we propose a new, tailored definition of SIM-EXT for proof-carrying data that accounts for their idiosyncratic features. Using this framework, we prove two general results: (1) that a simulation-extractable SNARK implies a simulation-extractable PCD when used recursively, and (2) that even lighter PCD constructions—built from a (not necessarily succinct) argument of knowledge (NARK) combined with a split-accumulation scheme—achieve SIM-EXT of PCD by requiring SIM-EXT only from the underlying NARK. Our results show that many modern PCD systems are already simulation-extractable by design.
2025/1992(PDF)
Last updated:  2026-02-16
Improved Concurrent-Secure Blind Schnorr Signatures
Public-key cryptography
Pierpaolo Della Monica and Ivan Visconti
Show abstract
Public-key cryptography
Since the work of Chaum in ’82, the problem of designing secure blind signature protocols for existing signature schemes has been of great interest. In particular, when considering Schnorr signatures, nowadays used in Bitcoin, designing corresponding efficient and secure blind signatures is very challenging in light of the ROS attack [BLL+21] (Eurocrypt’21), which affected all previous efficient constructions. Currently, the main positive result on concurrent-secure blind Schnorr signatures is that of Fuchsbauer and Wolf [FW24] (Eurocrypt’24). Their construction is quite demanding: indeed, it requires trusted parameters, non-interactive zero-knowledge arguments, and CPA-secure public-key encryption. Moreover, it is proven secure under a game-based definition only, is limited to computational blindness, and is vulnerable to harvest-now “link”-later quantum attacks. Nicely, their construction is also a predicate blind signature (PBS) scheme, allowing signers to have some partial control over the content of the blindly signed message. In this work, we show neat improvements to the state of the art, presenting a new construction for concurrent-secure blind Schnorr signatures that relies on milder/reduced cryptographic assumptions, enjoys statistical blindness, replaces the problematic trusted setup with a non-programmable random oracle (NPRO), and also satisfies a one-sided simulation-based property providing deniability guarantees to users involved in PBSs.
2026/145(PDF)
Last updated:  2026-02-16
Round-Optimal GUC-Secure Blind Signatures from Minimal Computational and Setup Assumptions
Cryptographic protocols
Michele Ciampi, Pierpaolo Della Monica, and Ivan Visconti
Show abstract
Cryptographic protocols
A blind signature scheme is an interactive protocol that enables a user to obtain a signature on a message without revealing any information about the message–signature pair to the signer. Despite more than 40 years of research, all existing constructions suffer from at least two of the following limitations.
1. The protocol is not round-optimal, requiring more than two messages to be exchanged during the signature phase.
2. There is only game-based security and/or a lack of composability with global and observable setup (i.e., there is a need for trusted parameters or to program random oracles).
3. There is a need (unlike for regular signatures) for demanding hardness assumptions, especially when targeting post-quantum security.
In this work, we bypass prior results by showing how to blindly sign a message while simultaneously overcoming all three of the above limitations. Specifically, we construct a (Global) Universally Composable (UC), two-round (optimal) blind signature protocol that relies only on one-way functions (optimal), without trusted parameters. The only deviation from the plain model is the need for a global non-programmable random oracle (NPRO). Nicely, our scheme can be instantiated from a variety of assumptions believed to be post-quantum secure (e.g., AES). A central technical component of our scheme is the construction of a novel commitment scheme that enjoys a special (mild) form of composability, which may be of independent interest. Finally, we argue that a concrete instantiation of our scheme has signature sizes and communication complexity suitable for practical applications.
2025/286(PDF)
Last updated:  2026-02-16
Verifiable Computation for Approximate Homomorphic Encryption Schemes
Cryptographic protocols
Ignacio Cascudo, Anamaria Costache, Daniele Cozzo, Dario Fiore, Antonio Guimarães, and Eduardo Soria-Vazquez
Show abstract
Cryptographic protocols
We address the problem of proving the validity of computation on ciphertexts of homomorphic encryption (HE) schemes, a feature that enables outsourcing of data and computation while ensuring both data privacy and integrity. We propose a new solution that handles computations in RingLWE-based schemes, particularly the CKKS scheme for approximate arithmetic. Our approach efficiently handles ciphertext arithmetic in the polynomial ring $R_q$ without emulation overhead and manages ciphertext maintenance operations, such as modulus switching, key switching, and rescaling, at small cost. Our main result is a succinct argument that efficiently handles arithmetic computations and range checks over the ring $R_q$. To build this argument system, we construct new polynomial interactive oracle proofs (PIOPs) and multilinear polynomial commitments supporting polynomials over $R_q$, unlike prior work which focused on finite fields. We validate the concrete complexity of our approach through implementation and experimentation. Compared to the current state-of-the-art on verifiable HE for RNS schemes, we present similar performance for small circuits while being able to efficiently scale to larger ones, which was a major challenge for previous constructions since it requires verifying procedures such as relinearization.
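As one concrete example of the ciphertext maintenance operations mentioned above, here is a Python sketch of CKKS-style rescaling applied to a bare coefficient vector: divide each coefficient by the scaling factor and round, using centered representatives. Toy moduli; no actual scheme or proof system is modeled.

    # CKKS-style rescaling: map c mod q to round(c / delta) mod (q / delta).
    q, delta = 2**40, 2**20            # current modulus and scaling factor

    def rescale(coeffs, q, delta):
        q_next = q // delta
        out = []
        for c in coeffs:
            # Use centered representatives so rounding is symmetric around 0.
            centered = c - q if c > q // 2 else c
            out.append(round(centered / delta) % q_next)
        return out

    ct = [123456789, q - 987654321, 42]
    print(rescale(ct, q, delta))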
2026/268(PDF)
Last updated:  2026-02-16
One Pair to Rule Them All: An Optimal Algorithm for Solving Code Equivalence via Codeword Search
Attacks and cryptanalysis
Alessandro Budroni and Andre Esser
Show abstract
Attacks and cryptanalysis
Two linear codes $\mathcal{C},\mathcal{C}'$ over $\mathbb{F}_q$ are linearly equivalent if one can be mapped to the other via a monomial transformation. Recovering this monomial from $\mathcal{C}$ and $\mathcal{C}'$ is known as the Linear Code Equivalence (LCE) problem. The most efficient algorithms to solve the LCE problem follow a common framework based on finding low-weight codewords. This framework admits a natural lower bound obtained by assuming that among the found low-weight codewords, a single equivalent codeword pair can be identified and used to reconstruct the monomial without overhead. Whether this lower bound can be achieved by a constructive instantiation has remained an open problem. Existing algorithms all require multiple equivalent pairs for monomial reconstruction, resulting in both concrete and asymptotic gaps to the lower bound. In this work, we answer the question of whether there exists such an optimal framework instantiation in the affirmative. We introduce a canonical labeling technique, as a generalization of canonical forms, that allows for monomial reconstruction from a single pair of equivalent codewords. Crucially, this labeling procedure, even if not necessarily polynomial-time, can be embedded into the codeword-search framework to identify equivalent codewords and perform final monomial recovery without overhead. This gives rise to the first framework instantiation that meets its lower bound both asymptotically and concretely up to negligible tolerance. For the parameter sets proposed for the LESS signature scheme, an active second-round contender in the NIST PQC standardization process, our analysis reduces the estimated bit security by up to 15 bits.
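A small Python sketch of the objects involved, under one common convention: a monomial transformation is a coordinate permutation combined with nonzero column scalings, and applying it to a generator matrix yields a linearly equivalent code. The LCE problem asks to recover perm and scales from the two codes; the algorithmic framework itself is not shown.

    import random

    # Apply a monomial transformation to a generator matrix over F_q. Toy q.
    q, n, k = 7, 6, 3
    random.seed(1)

    G = [[random.randrange(q) for _ in range(n)] for _ in range(k)]

    perm = list(range(n)); random.shuffle(perm)           # coordinate permutation
    scales = [random.randrange(1, q) for _ in range(n)]   # nonzero scalings

    def apply_monomial(G, perm, scales):
        # Column j of the new matrix is scales[j] times column perm[j] of G.
        return [[(scales[j] * row[perm[j]]) % q for j in range(n)] for row in G]

    G2 = apply_monomial(G, perm, scales)   # generator of the equivalent code C'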
2026/267(PDF)
Last updated:  2026-02-16
Beyond the Linear Barrier: Secret Sharing for Evolving (Weighted) Threshold Access Structures with Poly-logarithmic Share Size
Foundations
Danilo Francati, Sara Giammusso, and Daniele Venturi
Show abstract
Foundations
Evolving secret sharing allows a dealer to share a secret message among a growing number $n$ of parties. Crucially, the dealer does not know an upper bound on $n$, nor does it know the access structure before party $n$ arrives; furthermore, the dealer is not allowed to update the shares of old parties. We construct new secret sharing schemes for so-called evolving (weighted) threshold access structures, in which the arrival of party $n$ determines the number of parties $t_n \le n$ that are required in order to reconstruct the secret. We also consider the more general case in which party $n$ is assigned a weight $w_n$ of logarithmic size, and the authorized subsets of parties are those for which the sum of the corresponding weights exceeds the current threshold $t_n$. In particular, we obtain:
- A secret sharing scheme for evolving threshold access structures with adaptive privacy in the plain model, and with share size $\mathsf{poly}(\lambda,\log n)$. This construction requires one-way functions (OWFs) and indistinguishability obfuscation (iO) for Turing machines.
- A secret sharing scheme for evolving weighted threshold access structures with adaptive privacy in the plain model, and with share size $\mathsf{poly}(\lambda,\log W_n)$, where $W_n$ is the sum of the weights up to party $n$. This construction requires OWFs and iO for Turing machines, and additionally assumes that the weights are fixed (i.e., cannot change over time).
- A secret sharing scheme for evolving weighted threshold access structures with static privacy in the plain model, and with share size $\mathsf{poly}(\lambda,\log W_n)$. This construction allows the weight of old parties to change over time, but it requires somewhere statistically binding hash functions and achieves only static privacy.
Previous constructions of secret sharing schemes for evolving (weighted) threshold access structures achieved (much worse) share sizes linear in $W_n$ (and in the security parameter) and, when considering adaptive privacy, required the random oracle model.
2025/1118(PDF)
Last updated:  2026-02-16
Extracting Some Layers of Deep Neural Networks in the Hard-Label Setting
Attacks and cryptanalysis
Isaac A. Canales-Martínez and David Santos
Show abstract
Attacks and cryptanalysis
Although studied for several years now, parameter extraction of Deep Neural Networks (DNNs) has seen major advances only in recent years. Carlini et al. (Crypto 2020) and Canales-Martínez et al. (Eurocrypt 2024) showed how to extract the parameters of ReLU-based DNNs efficiently (polynomial time and polynomial number of queries, as a function of the number of neurons) in the raw-output setting, i.e., when the attacker has access to the raw output of the DNN. On the other hand, the more realistic hard-label setting gives the attacker access only to the most likely label after the DNN's raw output has been processed. Recently, Carlini et al. (Eurocrypt 2025) presented an efficient parameter extraction attack in the hard-label setting applicable to DNNs having a large number of parameters. The work in Eurocrypt 2025 recovers the parameters of all layers except the output layer. The techniques presented there are not applicable to this layer due to its lack of ReLUs. In this work, we fill this gap and present a technique that allows recovery of the output layer. Additionally, we show parameter extraction methods that are more efficient when the DNN has contractive layers, i.e., when the number of neurons decreases in those layers. We successfully apply our methods to some networks trained on the CIFAR-10 dataset. Asymptotically, our methods have polynomial complexity in time and number of queries. Thus, a complete extraction attack combining the techniques by Carlini et al. and ours retains polynomial complexity. Moreover, real execution time decreases when attacking DNNs with the required contractive architecture.
2026/266(PDF)
Last updated:  2026-02-16
UltraFold: Efficient Distributed BaseFold from Packed Interleaved Merkle Trees
Cryptographic protocols
Wenhao Wang and Fan Zhang
Show abstract
Cryptographic protocols
Transparent, code-based polynomial commitment schemes (PCSs) such as BaseFold (CRYPTO’24) are a compelling building block for large-scale zero-knowledge proofs (ZKPs): they avoid trusted setup, rely on standard hash assumptions, offer post-quantum security, and achieve high performance. As ZKP workloads grow, the polynomials that PCSs must commit to and open increasingly exceed the memory and throughput of a single machine, motivating a scalable distributed version of BaseFold. However, existing distribution attempts either only support polynomials of specific structures, or their proof size grows with the number of workers, or they do not scale to arbitrarily many workers. In this paper, we present UltraFold, the first distributed BaseFold PCS that works with general polynomials, scales to any number of workers, and maintains succinct proofs whose size does not depend on the worker count. To enable efficient distributed commitment and opening, UltraFold introduces an interleaved Merkle leaf layout that is realized via a single all-to-all exchange of partially encoded values, and each worker’s computation becomes local after this exchange. To mitigate hashing overhead, UltraFold further uses packed Merkle trees, reducing both prover time in practice and the resulting proof size. We implement UltraFold and evaluate it in a distributed setting: using 256 single-core workers, we commit to and open a 134M-coefficient polynomial in under 2 seconds, with a proof size of 216 KB.
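A minimal Python sketch of leaf packing in a Merkle tree, assuming SHA-256 and a toy serialization: several codeword symbols are hashed into one leaf, cutting the number of hash calls and openings. UltraFold's interleaved layout and all-to-all distribution logic are not modeled here.

    import hashlib

    PACK = 4  # symbols serialized per leaf (illustrative)

    def leaf_hash(symbols):
        data = b"".join(s.to_bytes(8, "big") for s in symbols)
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        # Assumes a power-of-two number of leaves for simplicity.
        level = leaves
        while len(level) > 1:
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    symbols = list(range(64))                      # 64 codeword symbols
    packed = [leaf_hash(symbols[i:i + PACK])
              for i in range(0, len(symbols), PACK)]
    root = merkle_root(packed)                     # 16 leaves instead of 64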
2025/2135(PDF)
Last updated:  2026-02-16
Robust Elections and More: Fast MPC in the Preprocessing Model
Cryptographic protocols
Charanjit S. Jutla, Nathan Manohar, and Arnab Roy
Show abstract
Cryptographic protocols
In this paper, we present an MPC protocol in the preprocessing model with essentially the same concrete online communication and rounds as the state-of-the-art MPC protocols such as online-BGW (with precomputed Beaver tuples) for $t < n/3$ malicious corruptions. However, our protocol additionally guarantees robustness and correctness against up to $t < n/2$ malicious corruptions while the privacy threshold remains at $n/3$. This is particularly useful in settings (e.g. commodity/stock market auctions, national elections) where it is paramount that the correct outcome is certified, while maintaining the best possible fast-tracked online speed. In addition, this honest-majority correctness allows us to use optimistic Berlekamp-Welch decoding in contrast to BGW. Moreover, just like online-BGW, our protocol is responsive until a final attestation phase. We also give a complementary verifiable input-sharing scheme for the multi-client distributed-server setting which satisfies both robustness and correctness against up to $t < n/2$ malicious servers. This is accomplished by having the servers first run a preprocessing phase that does not involve the clients. The novelty of this input-sharing scheme is that a client only interacts for one round, and hence need not be online, which, again, is highly desirable in applications such as elections/auctions. We prove our results in the universally-composable model with statistical security against static corruptions. Our protocol is achieved by combining global authenticators of SPDZ with an augmented Reed-Solomon code in a novel manner. This augmented code enables honest-majority decoding of degree $n/2$ Reed-Solomon codes. Our particular augmentation (often referred to as robust sharing) has the additional property that the preprocessing phase can generate this augmented sharing with a factor $n$ speedup over prior information-theoretic robust sharing schemes.
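For background, the Reed-Solomon-style sharing that the robust augmentation builds on is plain Shamir sharing; the Python sketch below shows sharing and Lagrange reconstruction over a toy field, with none of the paper's authenticator or robustness machinery.

    import secrets

    # Degree-t Shamir sharing: shares are evaluations of a random polynomial,
    # i.e., a Reed-Solomon codeword. Toy field and parameters.
    p, t, n = 2**31 - 1, 2, 7

    def share(secret):
        poly = [secret] + [secrets.randbelow(p) for _ in range(t)]
        return [(i, sum(c * pow(i, e, p) for e, c in enumerate(poly)) % p)
                for i in range(1, n + 1)]

    def reconstruct(points):
        # Lagrange interpolation at 0 from any t+1 shares.
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if j != i:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, p - 2, p)) % p
        return total

    shares = share(123456)
    assert reconstruct(shares[:t + 1]) == 123456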
2023/1435(PDF)
Last updated:  2026-02-16
Identity-Based Matchmaking Encryption with Enhanced Privacy Against Chosen-Ciphertext Attacks
Public-key cryptography
Sohto Chiku, Keitaro Hashimoto, Keisuke Hara, and Junji Shikata
Show abstract
Public-key cryptography
Identity-based matchmaking encryption (IB-ME) proposed by Ateniese et al. [CRYPTO 2019; Journal of Cryptology 2021] enables users to communicate privately, anonymously, and authentically. Following the seminal work of Ateniese et al., considerable research has been conducted on the security and construction of IB-ME schemes, but no scheme with enhanced privacy against chosen-ciphertext attacks is known. In this paper, we propose two IB-ME schemes that simultaneously achieve enhanced privacy against chosen-ciphertext attacks. The first scheme is built on the bilinear Diffie–Hellman assumption in the random oracle model. Inspired by the scheme of Ateniese et al., our construction attains stronger security guarantees while providing more compact decryption keys and ciphertexts than the scheme of Ateniese et al. The second scheme is a generic construction built upon anonymous identity-based encryption, digital signatures, NIZK systems, and reusable extractors. This scheme is obtained by generalizing the specific scheme of Francati et al. [INDOCRYPT 2021] and constitutes the first IB-ME scheme achieving the stronger security notions in the standard model. As an additional contribution, we classify existing authenticity notions for IB-ME into four categories—no message attacks (NMA) and chosen message attacks (CMA), each considered against both insiders and outsiders. This taxonomy enables a precise comparison among existing IB-ME schemes.
2025/115(PDF)
Last updated:  2026-02-16
Signatures with Tight Adaptive Corruptions from Search Assumptions
Public-key cryptography
Keitaro Hashimoto, Wakaha Ogata, and Yusuke Sakai
Show abstract
Public-key cryptography
In this paper, we show that tightly secure signature schemes in the multi-user setting with adaptive corruptions can be achieved from various weak assumptions, such as classical discrete logarithm, RSA, factoring, and even the post-quantum group action discrete logarithm assumption, at the cost of signature length. To this end, we introduce a new framework to construct a signature scheme. Our construction is a combination of an identification scheme with multiple secret keys per public key (with additional specific features) and the (randomized) Fischlin transformation. From the straight-line extractability of the transformation, unforgeability can intuitively be tightly reduced to the soundness of the identification scheme in the programmable random oracle model (PROM). Our careful security proof achieves a tight reduction in the non-programmable random oracle model (NPROM). As a complementary study, we analyze the multi-user security of hash-based signatures. Although hash-based signatures seem to be tightly secure in the multi-user setting with adaptive corruptions (at the cost of signature lengths similar to our construction), their security has not been closely analyzed. We show tight security based on the collision resistance of the underlying hash function by modeling the pseudo-random function for key generation as a (programmable) random oracle. By using a hash function whose security is tightly reduced to other assumptions, such as the discrete logarithm problem, we also obtain a tightly secure signature scheme based on them. However, the concrete signature size is larger than our proposal based on an identification scheme, and the security reduction is in the PROM in contrast to our new framework.
2024/636(PDF)
Last updated:  2026-02-15
Regev Factoring Beyond Fibonacci: Optimizing Prefactors
Foundations
Seyoon Ragavan
Show abstract
Foundations
In this note, we improve the space-efficient variant of Regev's quantum factoring algorithm [Reg23] proposed by Ragavan and Vaikuntanathan [RV24] by constant factors in space and/or size. This allows us to bridge the significant gaps in concrete efficiency between the circuits by [Reg23] and [RV24]; [Reg23] uses far fewer gates, while [RV24] uses far fewer qubits. The main observation is that the space-efficient quantum modular exponentiation technique by [RV24] can be modified to work with more general sequences of integers than the Fibonacci numbers. We parametrize this in terms of a linear recurrence relation, and through this formulation construct three different circuits for quantum factoring:
- A circuit that uses $\approx 10.4n$ qubits and $\approx 45.7n^{1/2}$ multiplications of $n$-bit integers.
- A circuit that uses $(9+\epsilon)n$ qubits and $O_\epsilon(n^{1/2})$ multiplications of $n$-bit integers, for any $\epsilon > 0$.
- A circuit that uses $(24+\epsilon)n^{1/2}$ multiplications of $n$-bit integers and $O_\epsilon(n)$ qubits, for any $\epsilon > 0$.
In comparison, the original circuit by [Reg23] uses at least $\approx 3n^{3/2}$ qubits and $\approx 6n^{1/2}$ multiplications of $n$-bit integers, while the space-efficient variant by [RV24] uses $\approx 10.32n$ qubits and $\approx 129.6n^{1/2}$ multiplications of $n$-bit integers (although a very simple modification of their Fibonacci-based circuit uses $\approx 11.32n$ qubits and only $\approx 86.4n^{1/2}$ multiplications of $n$-bit integers). The improvements proposed in this note take effect for sufficiently large values of $n$; it remains to be seen whether they can also provide benefits for practical problem sizes.
2023/1501(PDF)
Last updated:  2026-02-15
Space-Efficient and Noise-Robust Quantum Factoring
Foundations
Seyoon Ragavan and Vinod Vaikuntanathan
Show abstract
Foundations
We provide two improvements to Regev's quantum factoring algorithm (Journal of the ACM 2025), addressing its space efficiency and its noise tolerance. Our first contribution is to improve the quantum space efficiency of Regev's algorithm while keeping the circuit size the same. Our main result constructs a quantum factoring circuit using $O(n \log n)$ qubits and $O(n^{3/2} \log n)$ gates. We achieve the best of Shor and Regev (up to a logarithmic factor in the space complexity): on the one hand, Regev's circuit requires $O(n^{3/2})$ qubits and $O(n^{3/2} \log n)$ gates, while Shor's circuit requires $O(n^2 \log n)$ gates but only $O(n \log n)$ qubits. As with Regev, to factor an $n$-bit integer $N$, we run our circuit independently $O(\sqrt{n})$ times and apply Regev's classical postprocessing procedure. Our optimization is achieved by implementing efficient and reversible exponentiation with Fibonacci numbers in the exponent, rather than the usual powers of 2, adapting work by Kaliski (arXiv:1711.02491) from the classical reversible setting to the quantum setting. This technique also allows us to perform quantum modular exponentiation that is efficient in both space and size without requiring significant precomputation, a result that may be useful for other quantum algorithms. A key ingredient of our exponentiation implementation is an efficient circuit for a function resembling \emph{in-place} quantum-quantum modular multiplication. This implementation works with only black-box access to any quantum circuit for \emph{out-of-place} modular multiplication, which we believe is yet another result of potentially broader interest. Our second contribution is to show that Regev's classical postprocessing procedure can be modified to tolerate a constant fraction of the quantum circuit runs being corrupted by errors. In contrast, Regev's analysis of his classical postprocessing procedure requires all $O(\sqrt{n})$ runs to be successful. In a nutshell, we achieve this using lattice reduction techniques to detect and filter out corrupt samples.
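The classical analogue of the Fibonacci-exponent technique fits in a few lines of Python: keeping two running registers $(x^{F_{i-1}}, x^{F_i})$ and multiplying one into the other walks the exponent up the Fibonacci sequence without a powers-of-2 ladder, which is what enables the in-place, space-efficient quantum implementation. Toy modulus; this illustrates the arithmetic idea only, not the quantum circuit.

    # Compute x^{F_k} mod N using only two running registers.
    N, x = 10**9 + 7, 5

    def fib_power(x, k, N):
        # Returns x^{F_k} mod N, where F_1 = F_2 = 1 and k >= 2.
        a, b = x % N, x % N                    # x^{F_1}, x^{F_2}
        for _ in range(k - 2):
            a, b = b, (a * b) % N              # exponents step to (F_i, F_{i+1})
        return b

    def fib(k):
        a, b = 1, 1
        for _ in range(k - 2):
            a, b = b, a + b
        return b

    k = 20
    assert fib_power(x, k, N) == pow(x, fib(k), N)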
2024/1826(PDF)
Last updated:  2026-02-15
Cloning Games, Black Holes and Cryptography
Foundations
Alexander Poremba, Seyoon Ragavan, and Vinod Vaikuntanathan
Show abstract
Foundations
In this work, we introduce a new toolkit for analyzing \emph{cloning games}, a notion that captures stronger and more quantitative versions of the celebrated quantum no-cloning theorem. This framework allows us to analyze a new cloning game based on \emph{binary phase states}. Our results provide evidence that these games may be able to overcome important limitations of previous candidates based on BB84 states and subspace coset states: in a model where the adversaries are restricted to making a single oracle query, we show that the binary phase variant is $t$-copy secure when $t=o(n/\log n)$. Moreover, for constant $t$, we obtain the \emph{first} optimal bounds of $O(2^{-n})$, asymptotically matching the value attained by a trivial adversarial strategy. We also show a worst-case to average-case reduction which allows us to show the same quantitative results for the new and natural notion of \emph{Haar cloning games}. Our analytic toolkit, which we believe will find further applications, is based on binary subtypes and uses novel bounds on the operator norms of block-wise tensor products of matrices. To illustrate the effectiveness of these new techniques, we present two applications: first, in black-hole physics, where our asymptotically optimal bound offers quantitative insights into information scrambling in idealized models of black holes; and second, in unclonable cryptography, where we (a) construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, and (b) propose and provide evidence for the security of multi-copy unclonable encryption schemes.
2025/2008(PDF)
Last updated:  2026-02-15
Two-Server Private Information Retrieval in Sublinear Time and Quasilinear Space
Cryptographic protocols
Alexandra Henzinger and Seyoon Ragavan
Show abstract
Cryptographic protocols
We build two-server private information retrieval (PIR) that achieves information-theoretic security and strong double-efficiency guarantees. On a database of $n > 10^6$ bits, the servers store a preprocessed data structure of size $1.5 \sqrt{\log_2 n} \cdot n$ bits and then answer each PIR query by probing $12 \cdot n^{0.82}$ bits in this data structure. To our knowledge, this is the first information-theoretic PIR with any constant number of servers that has quasilinear server storage $n^{1+o(1)}$ and polynomially sublinear server time $n^{1-\Omega(1)}$. Our work builds on the PIR-with-preprocessing protocol of Beimel, Ishai, and Malkin (CRYPTO 2000). The insight driving our improvement is a compact data structure for evaluating a multivariate polynomial and its derivatives. Our data structure and PIR protocol leverage the fact that Hasse derivatives can be efficiently computed on-the-fly by taking finite differences between the polynomial's evaluations. We further extend our techniques to improve the state-of-the-art in PIR with three or more servers, building on recent work by Ghoshal, Li, Ma, Dai, and Shi (TCC 2025). On an 11 GB database with 1-byte records, our two-server PIR encodes the database into a 1 TB data structure – which is 4,500,000$\times$ smaller than that of prior two-server PIR-with-preprocessing schemes, while maintaining the same communication and time per query. To answer a PIR query, the servers fetch and send back 4.4 MB from this data structure, requiring 2,560$\times$ fewer memory accesses than linear-time PIR. The main limitation of our protocol is its large communication complexity, which we show how to shrink to $n^{0.31} \cdot \mathsf{poly}(\lambda)$ using compact linearly homomorphic encryption.
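The abstract's key observation, that Hasse derivatives fall out of finite differences of a polynomial's evaluations, can be seen in a few lines. Over any field, $f(x+a) = \sum_k D^{(k)}f(x)\, a^k$, so the Hasse derivatives at $x$ are exactly the coefficients of the shifted polynomial $f(x+z)$ in $z$. The univariate Python sketch below is our own illustration over a small prime field (the paper works with multivariate polynomials); it recovers every $D^{(k)}f(x)$ by interpolating through evaluations $f(x+a)$ and checks the result against the direct formula $D^{(k)}f(x) = \sum_{j \ge k} \binom{j}{k} f_j x^{j-k}$.

```python
# Illustration (ours, univariate case): Hasse derivatives from
# evaluations. Since f(x+z) = sum_k D_k f(x) z^k, interpolating the
# map a |-> f(x+a) recovers all Hasse derivatives D_k f(x) at once.
from math import comb

p = 101                          # small prime field GF(p)
f = [3, 1, 4, 1, 5]              # f(X) = 3 + X + 4X^2 + X^3 + 5X^4
d = len(f) - 1

def ev(poly, x):                 # evaluate poly at x over GF(p)
    return sum(c * pow(x, j, p) for j, c in enumerate(poly)) % p

def hasse(k, x):                 # direct formula: sum_j C(j,k) f_j x^{j-k}
    return sum(comb(j, k) * f[j] * pow(x, j - k, p)
               for j in range(k, d + 1)) % p

def solve_mod(A, b):             # Gauss-Jordan elimination over GF(p)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] % p)
        M[i], M[piv] = M[piv], M[i]
        inv = pow(M[i][i], p - 2, p)
        M[i] = [v * inv % p for v in M[i]]
        for r in range(n):
            if r != i and M[r][i]:
                M[r] = [(vr - M[r][i] * vi) % p for vr, vi in zip(M[r], M[i])]
    return [row[-1] for row in M]

x = 7
pts = list(range(d + 1))                                   # offsets a
A = [[pow(a, k, p) for k in range(d + 1)] for a in pts]    # Vandermonde
b = [ev(f, (x + a) % p) for a in pts]                      # f(x + a)
coeffs = solve_mod(A, b)               # coefficients of f(x + z) in z
assert all(coeffs[k] == hasse(k, x) for k in range(d + 1))
```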
2025/1887(PDF)
Last updated:  2026-02-15
Parallel Spooky Pebbling Makes Regev Factoring More Practical
Foundations
Gregory D. Kahanamoku-Meyer, Seyoon Ragavan, and Katherine Van Kirk
Show abstract
Foundations
"Pebble games," an abstraction from classical reversible computing, have found use in the design of quantum circuits for inherently sequential tasks. Gidney showed that allowing Hadamard basis measurements during pebble games can dramatically improve costs---an extension termed "spooky pebble games" because the measurements leave temporary phase errors called ghosts. In this work, we define and study parallel spooky pebble games. Previous work by Blocki, Holman, and Lee (TCC 2022) and Gidney studied the benefits offered by either parallelism or spookiness individually; here we show that these resources can yield impressive gains when used together. First, we show by construction that a line graph of length $\ell$ can be pebbled in depth $2\ell$ (which is exactly optimal) using space $\leq 2.47\log \ell$. Then, to explore pebbling schemes using even less space, we use a highly optimized $A^*$ search implemented in Julia to find the lowest-depth parallel spooky pebbling possible for a range of concrete line graph lengths $\ell$ given a constant number of pebbles $s$.We show that these techniques can be applied to Regev's factoring algorithm (Journal of the ACM 2025) to significantly reduce the cost of its arithmetic. For example, we find that 4096-bit integers $N$ can be factored in multiplication depth 193, which outperforms the 680 required of previous variants of Regev and the 444 reported by Ekerå and Gärtner for Shor's algorithm (IACR Communications in Cryptology 2025). While space-optimized implementations of Shor's algorithm remain likely the best candidates for first quantum factorization of large integers, our results show that Regev's algorithm may have practical importance in the future, especially given the possibility of further optimization. Finally, we believe our pebbling techniques will find applications in quantum cryptanalysis beyond integer factorization.