All papers in 2026 (307 results)

2026/308(PDF)
Last updated:  2026-02-18
Anamorphic E-Voting: Coercion-Resistant Through Fake and Real Votes
Cryptographic protocols
Antonis Michalas
The paper addresses the challenging and timely issue of vote buying in electronic voting. Electronic voting is a well-established process in modern democracies; however, problems such as vote buying continue to pose a significant challenge. This paper addresses the issue by leveraging the novel concept of Anamorphic Encryption to enable voters to cast their original vote together with a hidden "fake" one. In this way, voters can pursue their true preferences unobstructed while providing a compliance proof to potential vote buyers. The security of our construction is proved through a formal security analysis that considers powerful adversaries who aim at breaching user privacy and influencing votes. We believe that this innovative e-voting approach can enhance the overall security of e-voting systems and pave the way to avoiding electoral manipulation.
2026/307(PDF)
Last updated:  2026-02-18
Composition Theorems for Zero-Knowledge IOPs
Foundations
Himanshu Vashishth and Mor Weiss
Interactive Oracle Proofs (IOPs) enable a probabilistic verifier interacting with a prover to verify NP statements while reading only a few bits from the prover messages. Zero-Knowledge IOPs (ZK-IOPs) have the additional guarantee that a query-bounded (possibly malicious) verifier learns nothing about the NP witness. We initiate a systematic study of ZK preservation under IOP composition, and prove general composition theorems for ZK-IOPs in the 2- and multi-IOP setting. Our main result shows that ZK is preserved in the setting of perfect, black-box, straight-line ZK (the standard setting for ZK-IOPs), if the outer IOP has an additional mild property that is satisfied by existing ZK-IOPs. Contrary to common belief, this does not follow from composition theorems for multiparty protocols (Kushilevitz, Lindell, and Rabin, STOC '06). Our composition theorems show that ZK-IOPs can be modularly designed by composing sub-protocols, and ZK of the composed system follows seamlessly from the ZK guarantees of its building blocks. Using our composition theorems, we easily derive both new and known results on ZK-IOPs in various settings, including ZK preservation under parallel/sequential composition, ZK of IOPs for sumcheck and code-switching, ZK of IOPs for NP using arithmetization and sumcheck, and ZK preservation under IOP proof composition (reproving a result of Bootle, Chiesa, and Liu, EC '22).
2026/306(PDF)
Last updated:  2026-02-18
Skipping Class: Algebraic Attacks exploiting weak matrices and operation modes of Poseidon2(b)
Attacks and cryptanalysis
Simon-Philipp Merz and Àlex Rodríguez García
We present new algebraic attacks on Poseidon2 and Poseidon2b. We exploit the specific structure of the matrices that define the linear layers of the hash function, which allows us to improve round-skipping for the constrained-input constrained-output (CICO) problem. The security of many circuit-friendly hash functions has been measured by their resistance against attacks on the CICO problem. However, we show how to boost our round-skipping attack when directly modelling algebraic preimage attacks on Poseidon2(b) in compression and sponge mode. To the best of our knowledge, our attack provides the first examples where finding preimages is easier than solving the corresponding CICO problem in Poseidon2(b). Furthermore, we describe the first algebraic collision attack that outperforms its algebraic preimage counterpart. We improve over state-of-the-art algebraic attacks for a range of parameters; e.g., for one recommended $128$-bit parameter set we improve over previous state-of-the-art algebraic collision attacks by a factor of $2^{106}$. However, due to the algebraic security margin, this does not mean the primitive falls short of its claimed security level. Finally, we discuss how our attacks can be mitigated without affecting the efficiency of Poseidon2(b).
2026/305(PDF)
Last updated:  2026-02-18
Quantum Truncated Differential Attacks using Convolutions
Attacks and cryptanalysis
Aurel Pichollet-Mugnier and André Schrottenloher
This paper focuses on quantum key-recovery attacks on block ciphers. Previous works on quantum differential and truncated differential attacks like [Kaplan et al., ToSC 2016] have shown that classical algorithms for key-recovery, typically based on generating differential pairs and sieving them, can be accelerated by up to a quadratic speedup using variants of quantum search, quantum amplitude amplification, and quantum collision-finding. In this paper, we introduce a new quantum truncated differential key-recovery attack, which leverages the quantum convolution algorithm introduced in [Schrottenloher, CRYPTO 2022] and previously used in linear cryptanalysis. We adapt this algorithm to the case of differential cryptanalysis, by rewriting the probability of a differential of an $n$-bit cipher as a convolution of functions with $2n$-bit input. We then construct a quantum state whose amplitudes encode the probability of the differential for different key guesses, and use this as the starting point of a quantum search. In some cases (although not on practical ciphers so far), the speedup is better than quadratic compared to classical attacks. We also extend the framework to related-key differential attacks. We give applications to a 9-round attack on QARMAv2-64 adapted from [Ahmadian et al., DCC 2024] and a 12-round related-key attack on AES-256 from [Boura et al., CRYPTO 2023], which show improvement over classical attacks and over Kaplan et al.'s strategy when taking into account the amount of memory and the type of quantum memory used (as our attack requires only quantum-accessible classical memory).
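As a worked illustration of the rewriting step above (notation mine, not taken from the paper): for a fixed key $k$, let $E_k : \{0,1\}^n \to \{0,1\}^n$ be the cipher and let $f_k(x,y) = 1$ if $E_k(x) = y$ and $0$ otherwise. For a differential $(\delta_{\mathrm{in}}, \delta_{\mathrm{out}})$,
$$\Pr[\delta_{\mathrm{in}} \to \delta_{\mathrm{out}}] = \frac{1}{2^n} \sum_{x} \mathbf{1}\big[E_k(x) \oplus E_k(x \oplus \delta_{\mathrm{in}}) = \delta_{\mathrm{out}}\big] = \frac{1}{2^n} \sum_{x,y} f_k(x,y)\, f_k\big((x,y) \oplus (\delta_{\mathrm{in}}, \delta_{\mathrm{out}})\big) = \frac{1}{2^n}\,(f_k \star f_k)(\delta_{\mathrm{in}}, \delta_{\mathrm{out}}),$$
where $\star$ denotes convolution over the group $(\{0,1\}^{2n}, \oplus)$. This is the sense in which the differential probability becomes a convolution of functions on $2n$-bit inputs.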
2026/304(PDF)
Last updated:  2026-02-18
Linear-Communication ACSS with Guaranteed Termination and Lower Amortized Bound
Uncategorized
Chengyi Qin, Mingqiang Wang, and Haiyang Xue
An Asynchronous Complete Secret Sharing (ACSS) protocol enables a dealer to distribute \( N \) Shamir shares such that all parties eventually receive their shares over an asynchronous network. It serves as a fundamental building block for asynchronous Secure Multiparty Computation (MPC) and Byzantine Agreement. In this work, we focus on statistically secure ACSS with optimal resilience (\(t < n/3\)). Recently, Ji, Li, and Song [CRYPTO'24] proposed the first ACSS protocol with amortized linear communication. However, their scheme lacks guaranteed termination, and they identified the construction of a linear-communication ACSS with guaranteed termination as an open problem. Furthermore, their protocol requires a large amortized bound of \( N = \Omega(n^{11} \kappa) \), where \( n \) is the number of parties and \( \kappa \) is the size of the secret. In this work, we resolve the open problem and significantly reduce the amortized bound by presenting a linear-communication ACSS protocol with guaranteed termination and a lower bound of \( N = \Omega(n^{4}+n\kappa) \). Our ACSS protocol can be directly applied to asynchronous MPC protocols, ensuring both guaranteed termination and improved communication per multiplication gate, as well as to asynchronous Byzantine Agreement.
2026/303(PDF)
Last updated:  2026-02-18
$\mathsf{TalonG}$: Bandwidth-Efficient Two-Round Threshold Signatures from Lattices
Public-key cryptography
Liming Gao, Guofeng Tang, Dingding Jia, Yijian Liu, Bingqian Liu, Xianhui Lu, Kunpeng Wang, and Yongjian Yin
Threshold signatures split the secret key among $n$ parties, where any subset of at least $t$ parties can collaboratively produce a valid signature. They have been widely deployed in applications such as blockchain systems. Lattice-based threshold signatures have also attracted considerable attention due to their post-quantum security guarantees. However, existing lattice-based constructions still face significant efficiency challenges, particularly when the number of parties becomes large. Recent lattice-based threshold signatures such as TRaccoon (Eurocrypt’24) and Ringtail (S&P’25) support large thresholds, but they either require three interaction rounds or incur heavy communication costs in the two-round setting, limiting their practicality. In this paper, we present $\mathsf{TalonG}$, a novel two-round lattice-based threshold signature that overcomes these limitations via a new trapdoor semi-commitment technique. This variant of commitment relaxes the standard binding requirement to a weaker form, allowing an efficient instantiation from the NTRU assumption and enabling a compact two-round signing protocol with low communication. For $t=1024$ and 128-bit security, $\mathsf{TalonG}$ achieves significant improvements among existing lattice-based threshold signatures: its total communication per party and public key size are both minimal, at 26.9 KB and 2.0 KB, respectively. While the resulting signature size is larger (17.7 KB), it remains practical and highly competitive. $\mathsf{TalonG}$ is thus well-suited for real-world large-scale deployments where both round efficiency and communication load are critical.
2026/302(PDF)
Last updated:  2026-02-18
Non Interactive MPC, (Quantumly) Revisited
Foundations
Prabhanjan Ananth, Divyanshu Bhardwaj, and Aparna Gupte
Classical non-interactive secure computation, despite being extensively studied, suffers from an inherent barrier: adversaries can learn the entire residual function via resetting attacks. We investigate whether quantum resources can circumvent this barrier and restrict adversarial leakage. Our results are as follows: 1. $\textbf{Definitions}$: We introduce new security definitions for the one-message MPC and 2PC settings that restrict the amount of adversarial leakage compared to prior classical definitions. 2. $\textbf{MPC}$: There exist information-theoretically secure one-message multi-party computation protocols in the oracle model in both the quantum pre-processing and classical pre-processing settings. 3. $\textbf{2PC}$: There exist semi-honest secure one-message two-party computation for (randomized) pseudorandom functionalities in the plain model based on LWE and maliciously secure one-message two-party computation for (randomized) constrained functionalities in the CRS model based on iO. Prior work by [Gupte, Liu, Raizes, Roberts, and Vaikuntanathan, STOC 2025] achieved semi-honest security based on iO. Our results demonstrate the power of quantum information to circumvent barriers in classical secure computation.
2026/301(PDF)
Last updated:  2026-02-18
Blind Leakage: Rethinking Deep Learning-based Non-Profiled Side-Channel Analysis
Attacks and cryptanalysis
Jintong Yu
Deep Learning-based Non-profiled Side-Channel Analysis (DL-NSCA) enables automated feature extraction and obviates the need for a profiling device. However, existing methods mostly rely on leakage from non-linear operations and require prior knowledge of the target algorithm and its plaintexts/ciphertexts, which restricts their applicability to black-box scenarios and proprietary implementations. Motivated by the ubiquity of plaintext-key XOR operations in cryptographic algorithms, this paper presents a new SCA perspective based on leakage from linear operations to enable cross-algorithm attacks. We first theoretically demonstrate that leakage from invertible linear operations, referred to as blind leakage, introduces two symmetric maxima of the correlations between key guesses and leakage (one for non-blind leakage), rendering existing distinguishers ineffective. To address this issue, we then propose a distinguisher, VS-GBA, applicable to both blind and non-blind leakage, and improve its robustness using a Gaussian Borda voting scheme. Experimental results on a high-noise ARM Cortex-M4 device demonstrate that our method enables effective cross-algorithm attacks on masked AES, ASCON, and PRESENT, and outperforms non-blind leakage attacks. Furthermore, attacking XTS-AES demonstrates that blind leakage exploitation extends the boundaries of DL-NSCA to scenarios where plaintexts/ciphertexts are XORed with a tweak.
2026/300(PDF)
Last updated:  2026-02-18
Quantum One Time Programs: Less Assumptions, More Feasibility and One Message 2PC
Foundations
Prabhanjan Ananth and Divyanshu Bhardwaj
The notion of quantum one-time programs, introduced by Broadbent, Gutowski, and Stebila [CRYPTO 2013], refers to a compiler that takes as input a program $f$ and converts it into a quantum state $\sigma_f$ such that, given $\sigma_f$, it is possible to obtain the output of $f$ on only a single input. Our goal is to investigate the class of functionalities for which we can design one-time programs in the plain model. We show the following: 1. We show that, assuming learning with errors, one-time programs for (randomized) pseudorandom functionalities exist in the plain model. In contrast, the result by Gupte, Liu, Raizes, Roberts, and Vaikuntanathan [STOC 2025] relied upon indistinguishability obfuscation. 2. We show that, assuming indistinguishability obfuscation, one-time programs for any class of randomized functionalities exist in the plain model, as long as the class of functionalities satisfies a natural security notion called constrained security. Additionally, we give constructions of one-time encryption and one-time message authentication codes based on learning with errors. Finally, we show that a novel variant of one-time programs implies one-message secure two-party computation protocols for non-trivial functionalities in the CRS model.
2026/299(PDF)
Last updated:  2026-02-18
Weak Zero-Knowledge and One-Way Functions
Foundations
Rohit Chatterjee, Yunqi Li, and Prashant Nalini Vasudevan
We study the implications of the existence of weak Zero-Knowledge (ZK) protocols for worst-case hard languages. These are protocols that have completeness, soundness, and zero-knowledge errors (denoted $\epsilon_c$, $\epsilon_s$, and $\epsilon_z$, respectively) that might not be negligible. Under the assumption that there are worst-case hard languages in NP, we show the following: 1. If all languages in NP have NIZK proofs or arguments satisfying $ \epsilon_c+\epsilon_s+\epsilon_z < 1 $, then One-Way Functions (OWFs) exist. This covers all possible non-trivial values for these error rates. It additionally implies that if all languages in NP have such NIZK proofs and $\epsilon_c$ is negligible, then they also have NIZK proofs where all errors are negligible. Previously, these results were known under the more restrictive condition $ \epsilon_c+\sqrt{\epsilon_s}+\epsilon_z < 1 $ [Chakraborty et al., CRYPTO 2025]. 2. If all languages in NP have $k$-round public-coin ZK proofs or arguments satisfying $ \epsilon_c+\epsilon_s+(2k-1)\epsilon_z < 1 $, then OWFs exist. 3. If, for some constant $k$, all languages in NP have $k$-round public-coin ZK proofs or arguments satisfying $ \epsilon_c+\epsilon_s+k\epsilon_z < 1 $, then infinitely-often OWFs exist.
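To illustrate the gain over the prior condition in item 1 (the numbers are mine, chosen only for illustration): take $\epsilon_c = \epsilon_s = \epsilon_z = 0.3$. Then $\epsilon_c + \epsilon_s + \epsilon_z = 0.9 < 1$, so such NIZKs for NP imply OWFs by the new result, whereas the earlier requirement evaluates to $\epsilon_c + \sqrt{\epsilon_s} + \epsilon_z \approx 0.3 + 0.548 + 0.3 = 1.148 > 1$, so this parameter regime was not covered by [Chakraborty et al., CRYPTO 2025].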
2026/298(PDF)
Last updated:  2026-02-18
Key Recovery Attacks on UOV Using $p^\ell$-truncated Polynomial Rings
Attacks and cryptanalysis
Hiroki Furue and Yasuhiko Ikematsu
The unbalanced oil and vinegar signature scheme (UOV) was proposed by Kipnis et al. in 1999 as a multivariate-based scheme. UOV is regarded as one of the most promising candidates for post-quantum cryptography owing to its short signatures and fast performance. Recently, Ran proposed a new key recovery attack on UOV over a field of even characteristic, reducing the security of its proposed parameters. Furthermore, Jin et al. generalized Ran’s attack to schemes over a field of arbitrary characteristic by exploiting the structure of the symmetric algebra. In this work, we propose a new framework for recovering the secret subspace of UOV over a finite field $\mathbb{F}_{p^e}$ by generalizing these preceding results. First, we show that a key recovery against UOV can be successfully performed using the XL algorithm by exploiting the structure of the $p$-truncated polynomial ring $R^{(p)}=\mathbb{F}_{p^e}[x_1,\dots,x_n]/ \langle x_1^p,\dots,x_n^p\rangle$. This result simplifies the description of the attacks proposed by Jin et al. by formulating them in terms of the polynomial ring, independent of the structure of the symmetric algebra. Second, we generalize this result to the polynomial rings of more general forms, namely, the $p^\ell$-truncated polynomial rings $R^{(p^\ell)}$ for any $1 \le \ell \le e$. This result is due to our description in terms of the polynomial ring and can relax the constraints on the solving degree of the XL algorithm using $R^{(p^\ell)}$ by taking a larger $\ell$. Finally, we consider performing the reconciliation and intersection attacks using the $p^\ell$-truncated polynomial rings against UOV. In particular, we newly take into account the intersection attack using this framework, which has not been considered in previous analyses. Based on our complexity estimation, we confirm that the optimal complexity of the reconciliation attack using the proposed framework is consistent with that of the symmetric-algebra attack by Jin et al. We further show that the intersection attack using the proposed framework outperforms the reconciliation attack against the proposed parameters of UOV and reduces the security of multiple parameters compared to their claimed security levels. In addition, we show that our complexity estimation of the reconciliation attack using the proposed framework reduces the security of multiple parameters of SNOVA compared to previously known attacks.
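As a concrete instance of the rings above (example parameters chosen by me for illustration): take $p = 3$ and $e = 2$, i.e., the base field $\mathbb{F}_9$, so that $\ell \in \{1, 2\}$ is allowed. The framework then works with $R^{(3)} = \mathbb{F}_{9}[x_1,\dots,x_n]/\langle x_1^{3},\dots,x_n^{3}\rangle$ for $\ell = 1$, which corresponds to the $p$-truncated ring used to rephrase the earlier attacks, and with $R^{(9)} = \mathbb{F}_{9}[x_1,\dots,x_n]/\langle x_1^{9},\dots,x_n^{9}\rangle$ for $\ell = 2$, the larger quotient in which the constraint on the solving degree of XL is relaxed.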
2026/297(PDF)
Last updated:  2026-02-18
Scaling Sparse Matrix Computation for Secure Outsourced Computing
Applications
Wujie Xiong, Hao Zhou, Yutong Ye, Ruoming Jin, and Lei Xu
Sparse General Matrix-Matrix Multiplication (SpGEMM) is a fundamental but computationally intensive operation that underpins many scientific workloads, including numerous AI applications. With the increasing demands for data security, privacy-preserving computation techniques such as Fully Homomorphic Encryption (FHE) have gained significant attention for their ability to process sensitive data without decryption. Nonetheless, executing SpGEMM within the framework of FHE presents significant challenges. The most effective SpGEMM algorithms exploit matrix sparsity to minimize computational costs; however, FHE obscures both the data values and the sparsity structures. Prior FHE-based privacy-preserving computation frameworks either ignore the inherent sparsity of matrices and rely on dense General Matrix-Matrix Multiplication (GEMM), incurring substantial overhead from redundant homomorphic multiplications, or they attempt to exploit sparsity by encrypting only the non-zero values, which inadvertently exposes sensitive positional information. To address this gap and achieve a better balance between efficiency and privacy, we propose an efficient FHE-compatible SpGEMM framework that enables oblivious data and position processing under a Single Instruction Multiple Data (SIMD) scheme. Moreover, we extend our method to support an arbitrary number of sparse matrices (FHE-SpGEMCM). The efficiency analysis shows that our method achieves an average homomorphic computation cost of $(n_A n_B)^2 / (n^2 N)$, where $n_A$ and $n_B$ represent the number of nonzero elements in $A$ and $B$, respectively, $n$ is the shared inner dimension of the multiplication, and $N$ denotes the batch size used in FHE. Experimental results demonstrate that for square matrices of scale $2^9$, our scheme achieves an average speedup of $439.25\times$ and a $10.68\times$ reduction in memory consumption compared to state-of-the-art baselines that ignore sparsity. Furthermore, when the scale increases to $2^{13}$, our method yields up to a $1201.77\times$ speedup over baselines that only exploit the sparsity of a single matrix.
2026/296(PDF)
Last updated:  2026-02-18
Navigating the Deep: End-to-End Extraction on Deep Neural Networks
Attacks and cryptanalysis
Haolin Liu, Adrien Siproudhis, Samuel Experton, Peter Lorenz, Christina Boura, and Thomas Peyrin
Neural network model extraction has recently emerged as an important security concern, as adversaries attempt to recover a network’s parameters via black-box queries. Carlini et al. proposed in CRYPTO’20 a model extraction approach inspired by differential cryptanalysis, consisting of two steps: signature extraction, which extracts the absolute values of network weights layer by layer, and sign extraction, which determines the signs of these signatures. However, in practice this signature-extraction method is limited to very shallow networks only, and the proposed sign-extraction method is exponential in time. Recently, Canales-Martínez et al. (Eurocrypt’24) proposed a polynomial-time sign-extraction method, but it assumes the corresponding signatures have already been successfully extracted and can fail on so-called low-confidence neurons. In this work, we first revisit and refine the signature extraction process by systematically identifying and addressing for the first time critical limitations of Carlini et al.'s signature-extraction method. These limitations include rank deficiency and noise propagation from deeper layers. To overcome these challenges, we propose efficient algorithmic solutions for each of the identified issues, greatly improving the capabilities of signature extraction. Our approach permits the extraction of much deeper networks than previously possible. In addition, we propose new methods to improve numerical precision in signature extraction, and enhance the sign extraction part by combining two polynomial methods to avoid exponential exhaustive search in the case of low-confidence neurons. This leads to the very first end-to-end model extraction method that runs in polynomial time. We validate our attack through extensive experiments on ReLU-based neural networks, demonstrating significant improvements in extraction depth. For instance, our attack extracts consistently at least eight layers of neural networks trained on either the MNIST or CIFAR-10 datasets, while previous works could barely extract the first three layers of networks of similar width. Our results represent a crucial step toward practical attacks on larger and more complex neural network architectures.
2026/295(PDF)
Last updated:  2026-02-17
From OT to OLE with Almost-Linear Communication
Cryptographic protocols
Geoffroy Couteau and Naman Kumar
Following the recent work of Doerner et al. (CCS 2025), we introduce an improved reduction of OLE to OT. We prove the following: there is a perfectly-secure OLE-to-OT reduction over any $n$-bit field $\mathbb{F}$ using almost-linear communication $n\cdot 2^{O(\log^{*} n)}$ and almost-constant rounds $O((\log^{*} n)^2)$. In the course of proving our result, we also introduce new perfectly-secure protocols for computing shares of the equality and greater-than predicate on $n$-bit strings in the OT-hybrid model using $O(n)$ communication and $\log^{*} n$ rounds. These results are of independent interest.
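For readers unfamiliar with the target functionality, the following minimal Python sketch (mine, purely illustrative) shows what a single OLE instance computes: the sender holds $(a, b)$, the receiver holds $x$, and the receiver learns only $a \cdot x + b$ over the field. The "ideal" function below stands in for the functionality that the paper realizes from OT; the toy prime field is my choice.

# Toy illustration of the OLE (oblivious linear evaluation) functionality
# over a small prime field; ideal_ole models the ideal functionality that
# the OLE-to-OT reduction implements.

P = 2**61 - 1  # a Mersenne prime, standing in for the n-bit field F

def ideal_ole(sender_a: int, sender_b: int, receiver_x: int) -> int:
    """Ideal OLE: the receiver obtains a*x + b mod P, the sender learns nothing."""
    return (sender_a * receiver_x + sender_b) % P

# Example run: the sender's pair (a, b) stays hidden behind the single output value.
a, b = 123456789, 987654321
x = 42
print(ideal_ole(a, b, x))  # -> (a*x + b) mod P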
2026/294(PDF)
Last updated:  2026-02-17
Post-Quantum Adaptor Signatures with Strong Security from Cryptographic Group Actions
Cryptographic protocols
Ryann Cartor, Nathan Daly, Giulia Gaggero, Jason T. LeGrow, Andrea Sanguineti, and Silvia Sconza
We present One Round "Cheating" Adaptor Signatures (OR-CAS): a novel and efficient construction of adaptor signature schemes from CSI-FiSh. Our protocol improves substantially on existing group-action-based schemes: unlike IAS (Tairi et al., FC 2021), our scheme does not require expensive non-interactive zero-knowledge proofs, and unlike adaptor MCSI-FiSh (Jana et al., CANS 2024) our construction does not require any modification to the underlying digital signature scheme. We prove the protocol’s security under the strong security notions of Dai et al. (Indocrypt 2022) and Gerhart et al. (Eurocrypt 2024).
2026/293(PDF)
Last updated:  2026-02-17
Quantum Oracle Distribution Switching and its Applications to Fully Anonymous Ring Signatures
Public-key cryptography
Marvin Beckmann and Christian Majenz
Ring signatures are a powerful primitive that allows a member to sign on behalf of a group, without revealing their identity. Recently, ring signatures have received additional attention as an ingredient for post-quantum deniable authenticated key exchange, e.g., for a post-quantum version of the Signal protocol, employed by virtually all end-to-end-encrypted messenger services. While several ring signature constructions from post-quantum assumptions offer suitable security and efficiency for use in deniable key exchange, they are currently proven secure in the random oracle model (ROM) only, which is insufficient for post-quantum security. In this work, we provide four security reductions in the quantum-accessible random oracle model (QROM) for two generic ring signature constructions: two for the AOS framework and two for a construction paradigm based on ring trapdoors, whose generic backbone we formalize. The two security proofs for AOS ring signatures differ in their requirements on the underlying sigma protocol and their tightness. The two reductions for the ring-trapdoor-based ring signatures exhibit various differences in requirements and the security they provide. We employ the measure-and-reprogram technique, QROM straightline extraction tools based on the compressed oracle, history-free reductions and QROM reprogramming tools. To make use of Rényi divergence properties in the QROM, we study the behavior of quantum algorithms that interact with an oracle whose distribution is based on one of two different distributions over the set of outputs. We provide tight bounds for the statistical distance, show that the Rényi divergence cannot be used to replace the entire oracle, and provide a workaround.
2026/292(PDF)
Last updated:  2026-02-18
Crossing with Confidence: Formal Analysis and Model Checking of Blockchain Bridges
Cryptographic protocols
Pyrros Chaidos, Pooya Farshim, Denis Firsov, Dimitar Jetchev, Aggelos Kiayias, Markulf Kohlweiss, and Anca Nitulescu
We develop formal code-based security definitions for blockchain bridges and apply them to several bridge architectures deployed in practice. We derive, on the one hand, traditional pen-and-paper proofs and, on the other, automated guarantees against bounded counterexamples. The latter is achieved via bounded model checking of our formally specified properties, implemented in Quint, a specification language and model checker closely related to TLA+. Our definitions are expressed in a precise, code-based variant of the Universal Composition (UC) framework. This enables the modular use of hybrid functionalities (even for property-based definitions) and is essential for bounded model checking, where underlying primitives must be idealized. Accordingly, we idealize and model-check all building blocks used in our protocols. Notably, we formulate a novel UC ideal functionality for Advanced Threshold Signatures (ATS) and model-check it for attacks to ensure its robustness.
2026/291(PDF)
Last updated:  2026-02-17
Tight Reductions for SIS-with-Hints Assumptions with Applications to Anonymous Credentials
Public-key cryptography
Ngoc Khanh Nguyen and Jan Niklas Siemer
In this work, we investigate the landscape of emerging lattice-based assumptions tailored for anonymous credentials, focusing on variants of the Short Integer Solution (SIS) problem augmented with auxiliary hints. We provide a tight reduction from the Generalised ISISf (GenISISf) (Dubois et al., PKC 2025) assumption to its interactive variant IntGenISISf, enabling the construction of proof-friendly signature schemes without incurring the significant efficiency loss observed in prior works. In particular, our results directly apply to the anonymous credential scheme proposed by Bootle et al. (CRYPTO 2023), and circumvent the 4x blow-up in the credential size due to their security loss. We also identify families of functions $f$ for which GenISISf is as hard as SIS, leading to the first (strongly) unforgeable standard-model signature scheme from SIS without relying on chameleon hash functions. Moreover, we analyse the “one-more”-type lattice assumptions, showing in particular that Randomised One-More-ISIS (Baldimtsi et al., ASIACRYPT 2024) is at least as hard as standard One-More-ISIS (Agrawal et al., ACM CCS 2022). Further, we inspect different, yet equivalent, variations of Randomised One-More-ISIS which could be of independent interest. Finally, we compare the structural properties of GenISISf and One-More-ISIS, highlighting both shared techniques and fundamental differences. We believe our results contribute to a clearer understanding of the assumptions underpinning efficient, lattice-based anonymous credential systems.
2026/290(PDF)
Last updated:  2026-02-17
Improved Cryptanalysis of HFERP
Attacks and cryptanalysis
Max Cartor, Ryann Cartor, Hiroki Furue, and Daniel Smith-Tone
In this paper we introduce a new attack on the multivariate encryption scheme HFERP, a big field scheme including an extra variable set, additional equations of the UOV or Rainbow shape, as well as additional random polynomials. Our attack brings several parameter sets well below their claimed security levels. The attack combines novel methods applicable to multivariate schemes with multiple equation types with insights from the Simple Attack that broke Rainbow in early 2022, though interestingly the technique is applied in an orthogonal way. In addition to this attack, we apply support minors techniques on a MinRank instance drawing coefficients from the big field, which was effective against other multivariate big field schemes. This work demonstrates that there exist previously unknown impacts of the above works well beyond the scope in which they were derived.
2026/289(PDF)
Last updated:  2026-02-17
Zero-Knowledge Proof-Carrying Data from Accumulation Schemes
Cryptographic protocols
Tianyu Zheng, Shang Gao, and Xun Liu
Proof-carrying data (PCD) is a powerful cryptographic primitive for ensuring computational integrity in distributed settings. State-of-the-art PCD constructions based on accumulation schemes achieve practical prover efficiency and support a wide range of applications. However, realizing zero-knowledge for accumulation-based PCD remains challenging, particularly for high-degree relations. Existing solutions often incur substantial overhead due to the need for zero-knowledge in both the underlying non-interactive arguments of knowledge (NARKs) and accumulation schemes. In this work, we present new theoretical and practical improvements for zero-knowledge PCD. First, we propose a novel construction that eliminates the need for zero-knowledge NARKs by separating the compliance predicate and accumulation verification, thereby reducing proof size and improving efficiency. Second, we design an efficient zero-knowledge accumulation scheme for special sound protocols, introducing techniques such as masking vectors and zero-knowledge sum-check protocols to ensure privacy with minimal overhead. The theoretical analysis demonstrates that our construction achieves logarithmic proof size and verification time for d-degree NP relations, and outperforms existing solutions in both asymptotic and concrete complexity.
2026/288(PDF)
Last updated:  2026-02-17
Bypassing the Random-Probing Model in Masking Security Proofs
Implementation
Julien Béguinot, Gianluca Brian, and Loïc Masure
Masking, i.e., computing over secret-shared data, is one of the main countermeasures against side-channel analysis, provably secure in the standard noisy-leakage model. However, all the state-of-the-art security proofs rely on a reduction to a more abstract model called random probing (Eurocrypt’14). As a result, the noise requirements of such proofs must scale with the field size of the circuit which, beyond not reflecting the real-world physics of target devices, is often prohibitive, especially in the post-quantum era. That is why it is critical to find alternative strategies to the reduction to the random-probing model. In this paper, we establish for the first time a masking security proof bypassing this reduction, answering the above question positively. Contrary to the common belief that directly working in the noisy-leakage model is not convenient, we show how to reach this goal by leveraging an extension of the XOR lemma. We also show how to leverage the IOS framework (CHES’21) in order to derive composable security directly in the noisy-leakage model. As a result, our bound relies on relaxed noise requirements, characterized in a weaker metric than the state of the art so far (Crypto’24, ’19). Moreover, our new proof strategy allows us to derive a security bound for circuits masked with the first-order ISW compiler, for which the optimal noise requirement scales as \(\Theta\left(\left|\mathbb{F}\right|^{1/2}\right)\), whereas proofs using the random-probing model work in a regime of \(\mathcal{O}\left(\left|\mathbb{F}\right|\right)\). The latter contribution illustrates how some current design choices for masked implementations could be concretely affected: we exhibit two exemplary masked implementations such that the former one is more secure in the (more abstract) random-probing model, whereas the latter one is more secure in the (more realistic) noisy-leakage model.
2026/287(PDF)
Last updated:  2026-02-17
Network-Agnostic Multidimensional Approximate Agreement with Optimal Resilience
Cryptographic protocols
Diana Ghinea, Darya Melnyk, and Tijana Milentijević
The Multidimensional Approximate Agreement problem ($D$-AA) considers a setting with $n$ parties with inputs in $\mathbb{R}^D$. Out of the $n$ parties, up to $t$ may be byzantine (malicious). The goal is for the honest parties to obtain $\varepsilon$-close outputs that lie in the honest inputs' convex hull. While tight bounds on the resilience of $D$-AA have been found for the purely synchronous and asynchronous models, this is still an open question for the network-agnostic model. Here, the type of network is not known a priori: it may be synchronous, and then the number of byzantine parties is up to $t_s$, or asynchronous, and then the number of byzantine parties is up to $t_a \leq t_s$. In this model, it is known that $n > (D + 1) \cdot t_s + t_a$ is sufficient for deterministic protocols [GLW, SPAA'23], tight for $D = 1$ [GLW, PODC'22], while $n > \max\{(D + 1) \cdot t_s, t_s + (D + 1) \cdot t_a\}$ is tight for randomized protocols concerned with exact agreement [CGWW, DISC'24]. In this work, we establish that, for $D > 1$, the condition $n > \max\{(D + 1) \cdot t_s, t_s + (D + 1) \cdot t_a\}$ is tight for deterministic protocols as well. We identify that the gap in prior deterministic protocols is not geometric, but stems from an asymmetry in the communication primitive that produces parties' "views". The core technical contribution of our work is hence a strengthened network-agnostic Gather primitive that enforces a global structural property on the number of values received by honest parties, eliminating the problematic asymmetry, so that standard safe-area geometric convergence arguments apply under the optimal thresholds.
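A concrete comparison of the two resilience conditions (numbers mine, chosen only for illustration): for $D = 2$, $t_s = 3$, $t_a = 1$, the earlier sufficient condition $n > (D+1) \cdot t_s + t_a = 10$ requires $n \geq 11$ parties, whereas the condition shown tight here, $n > \max\{(D+1) \cdot t_s,\; t_s + (D+1) \cdot t_a\} = \max\{9, 6\} = 9$, already allows $n = 10$.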
2026/286(PDF)
Last updated:  2026-02-17
Upper Bound on Information-Theoretic Security of Permutation-Based Pseudorandom Functions
Secret-key cryptography
Chun Guo, Jian Guo, Xinnian Li, and Wenjie Nan
We present the first general upper bound on permutation-based pseudorandom functions in the information-theoretic setting. We show that any non-compressing PRF, with input and output domain at least \([N]\), making \(t\) black-box calls to any \(t\) public permutations on \([N]\), can be distinguished from a random function over the output domain with at most \(\widetilde{O}\big(N^{t/(t+1)}\big)\) total queries to the PRF and the permutations. Our results suggest that the designs of Chen et al. (Crypto 2019) are optimal, among all possible constructions, in terms of information-theoretic security. In particular, we propose the generalized key alternating construction, which captures permutation-based PRFs. We then prove that, for any such construction, there exists an explicit distinguisher achieving the tradeoff \(Q_f Q_p^{t} = \widetilde{O}\big((2t^2)^{t+1} N^{t}\big)\) with constant advantage, where \(Q_f\) counts PRF queries and \(Q_p\) counts queries to each public permutation. We further extend our bound to blockcipher-based PRFs and to an adaptive setting in which each round may adaptively choose a permutation from a public family of permutations \(\mathcal P\). In this case, the general upper bound becomes \(\widetilde{O}\big(|\mathcal P|\,N^{t/(t+1)}\big)\).
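For concreteness (worked out from the stated bounds, not an additional result of the paper): with \(t = 1\) call to a single public permutation, the general bound \(\widetilde{O}\big(N^{t/(t+1)}\big)\) becomes \(\widetilde{O}\big(N^{1/2}\big)\) total queries, and the explicit tradeoff reads \(Q_f Q_p = \widetilde{O}(4N)\), balanced at \(Q_f \approx Q_p \approx \widetilde{O}\big(\sqrt{N}\big)\); with \(t = 2\) the bound becomes \(\widetilde{O}\big(N^{2/3}\big)\) and the tradeoff reads \(Q_f Q_p^{2} = \widetilde{O}\big(8^3 N^{2}\big)\).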
2026/285(PDF)
Last updated:  2026-02-17
How (not) to Switch FHE Schemes: Framework and Attacks in the IND-CPA-D Model
Public-key cryptography
Giacomo Santato and Riccardo Zanotto
In this paper, we study the IND-CPA-D security of FHE schemes when combined through scheme switching. We introduce a formal framework that captures security in this setting and provide a rigorous analysis of different constructions. Within this framework, we identify sufficient conditions for the switching algorithm under which the combined schemes achieve IND-CPA-D security, assuming the individual schemes are IND-CPA-D-secure. We then focus on the specific case of scheme switching from CKKS to TFHE. We analyze how existing switching algorithms designed for vanilla CKKS can be modified to preserve IND-CPA-D security. We demonstrate an attack against the IND-CPA-D security of the PEGASUS construction [LHHMQ21] and we implement a proof-of-concept of this attack against the CKKS-to-FHEW switching mechanism in the OpenFHE library. Finally, we present a generic transformation that converts a vanilla CKKS-based switch into an IND-CPA-D-secure one.
2026/284(PDF)
Last updated:  2026-02-17
Knowledge Soundness of Polynomial Commitments in the Algebraic Group Model Does Not Guarantee Extractability
Foundations
Petr Chmel, Pavel Hubáček, and Dominik Stejskal
The Algebraic Group Model (AGM) has become a standard framework for analyzing the knowledge soundness of group-based polynomial commitment schemes. In this work, we formally establish inherent limitations of this methodology. We isolate a structural property satisfied by essentially all practical group-based polynomial commitments, which we term AGM-clarity. We prove that for AGM-clear schemes, evaluation binding implies knowledge soundness in the AGM. This collapse reveals that the AGM definition of knowledge soundness does not capture a distinct security property, but is merely a structural consequence of evaluation binding. To precisely characterize the guarantees on extractability provided by the AGM, we introduce Weak Interpolation Knowledge Soundness (WIKS) in the standard model, which is an extreme relaxation of extractability. We show that WIKS is implied by standard evaluation binding and prove that, for AGM-clear schemes, knowledge soundness in the AGM is equivalent to WIKS. Ultimately, our results demonstrate that proofs in the AGM for practical polynomial commitment schemes do not certify "knowledge" in the sense of immediate extractability, as they yield no guarantees beyond those already implied by standard model evaluation binding.
2026/283(PDF)
Last updated:  2026-02-17
Malicious Security Comes Free in SPDZ
Cryptographic protocols
Junru Li and Yifan Song
In this work, we study the concrete efficiency of SPDZ-type MPC protocols in the dishonest majority setting with maximum corruption, where $t=n-1$ out of $n$ parties can be corrupted. In the semi-honest setting, the state-of-the-art SPDZ protocol achieves an amortized communication cost of $4n$ field elements per multiplication gate, assuming a pseudorandom correlation generator (PCG) that prepares random Beaver triples silently in the offline phase. However, achieving security against malicious adversaries typically incurs a substantial overhead. Existing works either require communication of at least $10n$ field elements per gate, or additionally require an additively homomorphic encryption scheme. In this work, we construct a maliciously secure SPDZ-type protocol with the same amortized communication complexity as the semi-honest variant, i.e., $4n$ field elements per multiplication gate, assuming a PCG for tensor-product correlations. We note that the tensor-product correlations come free as by-products from almost all existing constructions of PCGs for random Beaver triples. These by-products allow us to construct an efficient preprocessing phase and a recursive check protocol, which enables an online evaluation of the circuit without authenticating all the wire values. The correctness of the online computation can also be checked efficiently with a sublinear communication cost in the circuit size.
2026/282(PDF)
Last updated:  2026-02-17
Unforgeable Watermarks for Language Models via Robust Signatures
Foundations
Huijia Lin, Kameron Shahabi, and Min Jae Song
Language models now routinely produce text that is difficult to distinguish from human writing, raising the need for robust tools to verify content provenance. Watermarking has emerged as a promising countermeasure, with existing work largely focused on model quality preservation and robust detection. However, current schemes provide limited protection against false attribution. We strengthen the notion of soundness by introducing two novel guarantees: unforgeability and recoverability. Unforgeability prevents adversaries from crafting false positives, texts that are far from any output from the watermarked model but are nonetheless flagged as watermarked. Recoverability provides an additional layer of protection: whenever a watermark is detected, the detector identifies the source text from which the flagged content was derived. Together, these properties strengthen content ownership by linking content exclusively to its generating model, enabling secure attribution and fine-grained traceability. We construct the first undetectable watermarking scheme that is robust, unforgeable, and recoverable with respect to substitutions (i.e., perturbations in Hamming metric). The key technical ingredient is a new cryptographic primitive called robust (or recoverable) digital signatures, which allow verification of messages that are close to signed ones, while preventing forgery of messages that are far from all previously signed messages. We show that any standard digital signature scheme can be boosted to a robust one using property-preserving hash functions (Boyle, LaVigne, and Vaikuntanathan, ITCS 2019).
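A minimal Python sketch of the generic boosting mentioned in the last sentence, under my own simplifying assumptions: ToyPPH below is a stand-in for a real property-preserving hash for Hamming distance (here the "digest" is just the message itself, so the closeness predicate is checked exactly and there is no compression), and Ed25519 from the cryptography package plays the role of the standard signature scheme. The point is only the structure: sign the PPH digest, and accept any candidate message that is close to the signed one.

# Sketch: "robust signatures" from a standard signature plus a
# property-preserving hash (PPH) for Hamming distance. ToyPPH is a
# placeholder; a real PPH would output a short digest that still
# supports the closeness predicate.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b)) + 8 * abs(len(a) - len(b))

class ToyPPH:
    threshold = 16  # predicate: Hamming distance of at most 16 bits
    @staticmethod
    def digest(msg: bytes) -> bytes:
        return msg  # placeholder; a real PPH compresses the message
    @staticmethod
    def close(digest: bytes, candidate: bytes) -> bool:
        return hamming(digest, candidate) <= ToyPPH.threshold

def robust_sign(sk: Ed25519PrivateKey, msg: bytes):
    d = ToyPPH.digest(msg)
    return d, sk.sign(d)              # the signature binds the PPH digest

def robust_verify(pk, candidate: bytes, sig) -> bool:
    d, s = sig
    try:
        pk.verify(s, d)               # standard verification on the digest
    except InvalidSignature:
        return False
    return ToyPPH.close(d, candidate)  # accept messages close to the signed one

sk = Ed25519PrivateKey.generate()
pk = sk.public_key()
sig = robust_sign(sk, b"watermarked model output ...")
print(robust_verify(pk, b"watermarked model outpet ...", sig))  # True: one flipped bit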
2026/281(PDF)
Last updated:  2026-02-17
Do Androids Dream of a Dead Internet: Interactive Watermarks for Bot Detection
Cryptographic protocols
Brennon Brimhall, Harry Eldridge, Maurice Shih, Ian Miers, and Matthew Green
A number of recent works propose watermarking the outputs of large language models (LLMs) but fail to describe who is authorized to watermark the text or check for a watermark. To resolve these problems, we propose interactive watermarking schemes. Our technique leverages the fact that, for many of the cases in which detecting synthetic text is useful, the detector is able to control some part of the prompt that is passed to the LLM. In other words, we propose poisoning the prompt, through which the examining user establishes a steganographic channel with the LLM provider. This lets the user and the LLM provider agree on a shared key, which is then used to embed a symmetric watermark and permits the end-user examiner to learn if the entity they are conversing with is a bot. Because the steganographic prompt and the LLM response are indistinguishable from their natural distributions, this approach simultaneously sidesteps prior impossibility results from Zhang et al. [ICML'24] and resolves the authorization questions unanswered by previous work. Our primary construction is based on elliptic curve Diffie-Hellman; we sketch a more sophisticated version using broadcast encryption. Our secondary construction uses a symmetric key protocol with a pre-shared key. To improve efficiency, we introduce steganographic synchronization codes. We experimentally validate our theoretical findings and outline directions for future work.
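The following Python sketch (my illustration, not the paper's protocol) shows only the shared-key step that a Diffie-Hellman-based interactive watermark can build on: both sides derive a common symmetric key that can then seed a standard symmetric watermarking scheme. X25519 and HKDF are my choices of instantiation; the steganographic encoding of the public key into the poisoned prompt and the watermark embedding itself are omitted.

# Illustration of the key-agreement step: examiner and LLM provider
# derive the same 32-byte watermark key from an ECDH exchange.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def derive_watermark_key(my_priv: X25519PrivateKey, their_pub) -> bytes:
    shared = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"interactive-watermark-key").derive(shared)

examiner = X25519PrivateKey.generate()   # key material the examiner would encode into the prompt
provider = X25519PrivateKey.generate()   # the LLM provider's key

k_examiner = derive_watermark_key(examiner, provider.public_key())
k_provider = derive_watermark_key(provider, examiner.public_key())
assert k_examiner == k_provider          # both sides hold the same watermark key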
2026/280(PDF)
Last updated:  2026-02-17
Reducing the Number of Qubits in Quantum Discrete Logarithms on Elliptic Curves
Attacks and cryptanalysis
Clémence Chevignard, Pierre-Alain Fouque, and André Schrottenloher
Solving the Discrete Logarithm problem on the group of points of an elliptic curve is one of the major cryptographic applications of Shor's algorithm. However, current estimates for the number of qubits required remain relatively high, and notably, higher than the best recent estimates for factoring of RSA moduli. For example, recent work by Gidney (arXiv 2025) estimates 2043 logical qubits for breaking 3072-bit RSA, while previous work by Häner et al. (PQCrypto 2020) estimates a requirement of 2124 logical qubits for solving discrete logarithm instances on 256-bit elliptic curves over prime fields. Indeed, for an $n$-bit elliptic curve, the most space-optimized implementation by Proos and Zalka (Quant. Inf. Comput. 2003) gives $5n + o(n)$ qubits, as additional space is required to store the coordinates of points and compute the addition law. In this paper, we propose an alternative approach to the computation of point multiplication in Shor's algorithm (on input $k$, computing $k P$ where $P$ is a fixed point). Instead of computing the point multiplication explicitly, we use a Residue Number System to compute directly the projective coordinates of $k P$ with low space usage. Then, to avoid performing any modular inversion, we compress the result to a single bit using a Legendre symbol. This strategy allows us to obtain the most space-efficient polynomial-time algorithm for the ECDLP to date, with only $3.12n + o(n)$ qubits, at the expense of an increase in gate count, from $\mathcal{O}(n^3)$ to $\widetilde{\mathcal{O}}(n^3)$. For $n = 256$ we estimate that 1098 qubits would be necessary, with 22 independent runs, using $2^{38.10}$ Toffoli gates each. This represents a much higher gate count than the previous estimate by Häner et al. (roughly $2^{30}$), but half of the corresponding number of qubits (2124).
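To put the qubit counts side by side (arithmetic only, using the figures quoted above): the Proos-Zalka baseline of $5n + o(n)$ qubits gives $5 \cdot 256 = 1280$ leading-order qubits at $n = 256$, while the $3.12n + o(n)$ of this work gives $3.12 \cdot 256 \approx 799$, with the concrete estimate of 1098 qubits reflecting the lower-order terms; the quoted Häner et al. figure of 2124 qubits is roughly twice the new estimate.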
2026/279(PDF)
Last updated:  2026-02-16
On the Concrete Hardness Gap Between MLWE and LWE
Attacks and cryptanalysis
Tabitha Ogilvie
Concrete security estimates for Module-LWE (MLWE) over an appropriate ring are often obtained by translating to an "equivalent" unstructured LWE instance, which implicitly treats algebraic structure as a pure efficiency gain with no impact on security. We show that this heuristic fails for realistic parameters. In common MLWE/RLWE instantiations, an attacker can exploit symmetries to obtain hybrid attacks that are strictly stronger than the best corresponding attack on LWE, translating to a concrete hardness gap between MLWE and LWE. Our starting point is the observation that many cryptographically relevant rings admit coefficient isometries: ring elements whose multiplication acts as a signed permutation on coefficient vectors and preserves the secret and error distributions of interest. Multiplying an MLWE instance by such an isometry creates many derived instances that share the same public matrix and are therefore compatible with the same expensive offline preprocessing in hybrid attacks. We formalise this mechanism and incorporate it into both primal and dual hybrid frameworks. We instantiate coefficient isometries for power-of-two cyclotomic rings, and quantify the resulting advantage in two regimes. For sparse-secret RLWE (popular in homomorphic encryption), isometry-enabled hybrids yield gaps of up to 15 bits over LWE-based estimates. For the standardised Kyber/ML-KEM parameters, we obtain a consistent 2--3 bit gap under standard cost models. Our results demonstrate that the widely assumed equivalence between LWE and MLWE in power-of-two cyclotomics does not hold, with real-world consequences for deployed schemes.
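As a self-contained illustration of a coefficient isometry in the power-of-two cyclotomic case (example mine): in $R = \mathbb{Z}_q[x]/(x^n + 1)$, multiplication by $x$ maps the coefficient vector $(a_0, \dots, a_{n-1})$ to $(-a_{n-1}, a_0, \dots, a_{n-2})$, i.e., it acts as a signed (negacyclic) permutation, so it permutes the coordinates of the secret and error up to sign. The toy parameters below are my own choice.

# Multiplication by x in Z_q[x]/(x^n + 1) acts as a signed permutation
# (negacyclic shift) on coefficient vectors.
q, n = 3329, 8  # toy parameters: Kyber modulus, tiny dimension

def polymul_mod(a, b):
    """Schoolbook product of a and b in Z_q[x]/(x^n + 1), given as coefficient lists."""
    res = [0] * (2 * n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    # reduce modulo x^n + 1, using x^(n+k) = -x^k
    return [(res[k] - res[k + n]) % q for k in range(n)]

a = [5, 7, 11, 13, 17, 19, 23, 29]
x = [0, 1] + [0] * (n - 2)            # the ring element x

shifted = polymul_mod(a, x)
negacyclic = [(-a[-1]) % q] + a[:-1]
assert shifted == negacyclic          # x * a is a signed rotation of a's coefficients
print(shifted)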
2026/278(PDF)
Last updated:  2026-02-16
Exploiting PDF Obfuscation in LLMs, arXiv, and More
Applications
Zhongtang Luo, Jianting Zhang, and Zheng Zhong
Many modern systems parse PDF files to extract semantic information, including multimodal large language models and academic submission platforms. We show that this practice is extremely vulnerable in real-world use cases. By exploiting standard-compliant features of the PDF page description language, an adversary can craft PDFs whose parsed content differs arbitrarily from what is visually rendered to human readers and whose metadata can be manipulated to mislead automated systems. We demonstrate two concrete vulnerabilities. First, we build adversarial PDFs using font-level glyph remapping that cause several widely deployed multimodal language models to extract incorrect text, while remaining visually indistinguishable from benign documents. Across six platforms, most systems that rely on PDF text extraction are vulnerable, whereas OCR-based pipelines are robust. Second, we analyze arXiv's TeX-detection mechanism and show that it relies on brittle metadata and font heuristics, which can be fully bypassed without changing the visual output. Our findings reveal a potential risk arising from discrepancies between automated PDF parsing and human-visible semantics. We argue that rendering-based interpretation, followed by computer vision, is a better approach for security-sensitive PDF interpretation.
2026/277(PDF)
Last updated:  2026-02-16
Collusion-Minimized TLS Attestation Protocol for Decentralized Applications
Cryptographic protocols
Uğur Şen, Murat Osmanoğlu, Oğuz Yayla, Ali Aydın Selçuk, and Ali Doğanaksoy
Transport Layer Security (TLS) attestation protocols are a key building block for decentralized applications that require authenticated off-chain data. However, existing Designed Commitment TLS (DCTLS) constructions rely on designated verifiers, which prevents public verifiability and enables prover–verifier collusion in on-chain settings. To address these limitations, we propose a collusion-minimized TLS attestation framework $\Pi_{\textsf{coll-min}}$ that enables jointly verifiable attestations with distributed verifiers. The framework combines two complementary components: a modified and exportable variant of DCTLS, denoted as dx-DCTLS, which enables third-party verification by replacing non-verifiable components with verifiable counterparts, and a decentralized validation layer based on distributed verifiable random functions (DVRF) and a threshold signature scheme (TSS). Together, these two components allow multiple verifiers to jointly validate TLS attestations while minimizing prover–verifier collusion. In this study, we formalize a threshold attestation unforgeability notion capturing adversarial behaviors in multi-verifier environments and prove security under standard assumptions. Specifically, by transitioning from independent multi-session validations, as commonly employed in decentralized oracle networks (DONs), to a unified and exportable attestation framework, we achieve a reduction in the prover complexity from $O(n)$ to $O(1)$. To evaluate practicality, we provide a prototype implementation of the DVRF–TSS component and a performance analysis of dx-DCTLS. The results show that the proposed framework remains efficient at high threshold sizes and introduces only modest additional overhead, demonstrating the feasibility of collusion-minimized and jointly verifiable TLS attestations for smart contract environments.
2026/276(PDF)
Last updated:  2026-02-16
On the conversion of module representations for higher dimensional supersingular isogenies
Foundations
Aurel Page, Damien Robert, and Julien Soumier
We expand the well-developed toolbox between quaternionic ideals and supersingular elliptic curves into its higher dimensional version, namely (Hermitian) modules and maximal supersingular principally polarised abelian varieties. One of our main results is an efficient algorithm to compute an unpolarised isomorphism $A \simeq E_0^g$ given the abstract module representation of $A$. This algorithm relies on a subroutine that solves the Principal Ideal Problem in matrix rings over quaternion orders, combined with a higher dimensional generalisation of the Clapotis algorithm. To illustrate the flexibility of our framework, we also use it to reduce the degree of the output of the KLPT$^2$ algorithm, from $O(p^{25})$ to $O(p^{15.5})$.
2026/275(PDF)
Last updated:  2026-02-16
PhantomCrypt: Second-Order Deniable Encryption with Post-Quantum Security
Cryptographic protocols
Shahzad Ahmad, Stefan Rass, and Zahra Seyedi
Traditional deniable encryption primarily focuses on denying the $content$ of secret communications, allowing plausible alternative plaintexts to be presented in the event of coercion. However, even the recognizable use of deniable encryption may already defeat its purpose, making any revealed plaintext suspicious to a coercer. Hence, for practical deniability, not only does the content need to be deniable, but also the entire use of deniable encryption must be considered. We call this second-order deniability. This notion combines hiding the use of deniable encryption itself, making covert communication indistinguishable from innocuous data, with enhanced content deniability, enabling multiple, cryptographically plausible decryptions that are computationally indistinguishable from the true message. To show its practicality, we present PhantomCrypt, which combines conventional deniable encryption (DE) with steganographic methods to hide the use of DE itself, while retaining the ability to decrypt a ciphertext into several distinct plaintexts, even if not under pressure. We prove the security of PhantomCrypt using a formalization of second-order deniability in standard cryptographic terms.
2026/274(PDF)
Last updated:  2026-02-16
From linear regression to generative model for explainable non profiled side-channel attacks
Attacks and cryptanalysis
Sana Boussam, Mathieu Carbone, Benoît Gérard, Guénaël Renault, and Gabriel Zaid
Non profiled side-channel attacks aim at exploiting leakage traces from a targeted embedded system to extract secret information, without a priori knowledge on the true leakage model of the device. To automate and simplify attacks, deep learning techniques have thus been introduced in the side-channel community. Most of the works published have mainly explored the use of discriminative models in the profiled or non profiled context. However, the lack of interpretability and explainability of these models constitutes a major limitation for security laboratories. Indeed, the lack of theoretical results regarding the choice of neural network architectures or the selection of appropriate loss functions poses significant challenges for evaluators seeking to construct a relevant and suitable model for the targeted leakage traces. To address the aforementioned limitations, we propose in this work a novel conditional generative model, specifically designed to carry out non profiled attacks, which is fully interpretable and explainable. To do so, we develop a novel model that fits the non profiled context. To guarantee the interpretability and explainability of our model, we provide theoretical results to justify both its architecture and a new loss function for its optimization process. We further propose a key recovery strategy based on our model that requires no leakage model assumptions. As a consequence, our work thus represents the first interpretable and generic (i.e. no a priori knowledge on the leakage model is required) non profiled deep learning-based side-channel attack. Moreover, to emphasise the benefits of our new model in comparison with conventional linear regression based attacks (LRA), we also provide a theoretical comparative analysis on the deterministic part estimation for different Gaussian noise configurations. Finally, we experimentally validate and compare the attack performances of our model with LRA and state-of-the-art discriminative-models-based non profiled attacks using simulations and various publicly available datasets.
2026/273(PDF)
Last updated:  2026-02-16
Weighted Cryptography with Weight-Independent Complexity
Cryptographic protocols
Aarushi Goel, Swagata Sasmal, and Mingyuan Wang
Show abstract
Cryptographic protocols
Cryptographic primitives involving multiple participants, such as secure multiparty computation (MPC), threshold signatures, and threshold encryption, are typically designed under the assumption that at least a threshold number of participants remain honest and non-colluding. However, many real-world applications require more expressive access structures beyond simple thresholds. A prominent example is the weighted threshold access structure, where each party is assigned a weight and security holds as long as the total weight of corrupted parties does not exceed a specified threshold. Despite the practical relevance of such access structures, our understanding of efficient constructions supporting them remains limited. For instance, existing approaches for weighted MPC and weighted threshold encryption incur costs that scale with the total weight assigned to all parties, or rely on non-black-box use of cryptography. In this work, we present the first black-box constructions of the following weighted cryptosystems with weight-independent complexity in the trusted setup model: (i) a weighted MPC protocol with guaranteed output delivery, (ii) a semi-honest weighted threshold encryption scheme, and (iii) a semi-honest weighted threshold Schnorr signature scheme. At the heart of our constructions is a new succinct computational secret sharing scheme with linear homomorphism for weighted threshold access structures. We provide two concrete instantiations of this primitive, based on the Decisional Composite Residuosity (DCR) assumption and the Learning With Errors (LWE) assumption, respectively. Furthermore, our constructions extend to any general access structure that can be represented efficiently as a monotone Boolean circuit.
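To make the weighted threshold access structure in the abstract above concrete, here is a minimal Python sketch (an editorial illustration, not part of the paper's construction): a subset of parties is authorized when its total weight exceeds the threshold, and privacy must hold for any subset whose weight stays at or below it.

    # Illustrative sketch (not from the paper): a weighted threshold access structure.
    # Party i has weight weights[i]; a subset is authorized iff its total weight exceeds t.
    def total_weight(weights, subset):
        return sum(weights[i] for i in subset)

    def is_authorized(weights, subset, t):
        return total_weight(weights, subset) > t

    weights = {"A": 5, "B": 3, "C": 1, "D": 1}
    t = 5
    assert is_authorized(weights, {"A", "C"}, t)           # weight 6 > 5: may reconstruct
    assert not is_authorized(weights, {"B", "C", "D"}, t)  # weight 5 <= 5: privacy must hold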
2026/272(PDF)
Last updated:  2026-02-19
On the Complexity of Interactive Arguments
Foundations
Idan Baril and Iftach Haitner
Show abstract
Foundations
We study the minimal hardness assumptions required for constructing interactive arguments for NP, focusing on succinct arguments—the total number of bits sent by the prover is smaller than the witness size—and on relaxed forms of zero knowledge, such as witness indistinguishability. Known constructions of succinct arguments rely on collision-resistant hash functions (Kilian, STOC 92), indistinguishability obfuscation (Sahai and Waters, STOC 14), the discrete logarithm assumption (Bootle et al., Eurocrypt 16), and lattice-based assumptions (Baum et al., Crypto 18), while known constructions of witness indistinguishability require one-way functions (OWFs) (Goldreich et al., FOCS 86, JACM 91). This suggests that succinct witness-indistinguishable interactive arguments may require the existence of OWFs. Somewhat surprisingly, we prove that the existence of fully black-box reductions from OWFs to succinct witness-indistinguishable interactive arguments is unlikely: the existence of such a reduction would imply (unconditionally) the existence of OWFs. More generally, we consider fully black-box reductions from OWFs to succinct witness-indistinguishable interactive arguments combined with an additional hardness assumption G (e.g., NP $\not\subseteq$ P/poly). Such reductions would show that any algorithm breaking the candidate one-way function can be transformed into an algorithm that either breaks the soundness of the interactive argument or violates the assumption G. We prove that the existence of such a reduction already implies a black-box reduction from OWFs to G alone.
2026/271(PDF)
Last updated:  2026-02-16
Defining Quantum-Secure Message Authentication
Secret-key cryptography
Ashwin Jha, Mustafa Khairallah, Jannis Leuther, and Stefan Lucks
Show abstract
Secret-key cryptography
The classical EUF-CMA notion for the security of message authentication codes (MACs) is based on "freshness": messages chosen by the adversary are authenticated, and then the adversary has to authenticate a fresh message on its own. In a quantum setting, where classical messages are authenticated but adversaries can make queries in superposition, "freshness" is undefinable. Instead of requiring the adversary to be unable to forge a fresh message, one can require "stability" (the adversary cannot authenticate more messages than queried before), or "exclusiveness" (the adversary cannot authenticate a message from a subset of messages it did not query before). The "plus-one" security definition, proposed by Boneh and Zhandry at Eurocrypt 2013, maintains stability, but fails at exclusiveness. The "blind unforgeability" notion from Alagic et al. (Eurocrypt 2020) maintains exclusiveness, but it is unknown if it maintains stability. This paper proposes a new security definition: "splitting unforgeability" (SU). A MAC is SU-secure if it maintains both stability and exclusiveness. Against classical adversaries, SU is equivalent to EUF-CMA. Against quantum adversaries, SU implies both plus-one security and blind unforgeability. With respect to $q$-query adversaries, $(2q-1)$-wise independent functions do not suffice for SU, but $(3q+1)$-wise independent functions do, as does a qPRF. Boneh and Zhandry's "Quantum Carter-Wegman MAC" (BZq-MAC), which combines a qPRF and an $\epsilon$-AXU hash function, is SU-secure up to the quantum birthday bound. We additionally analyse the security of different instantiations of the Hash-then-MAC composition of an SU-secure MAC $F$ and a hash function $H$ which is either collapsing or Bernoulli-preserving.
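The $(3q+1)$-wise independent functions mentioned in the abstract above can be instantiated in the textbook way, by evaluating a uniformly random polynomial of degree below $k$ over a prime field. The sketch below is an editorial illustration of that standard construction, not code from the paper.

    import random

    # Textbook k-wise independent family: f(x) = c_0 + c_1*x + ... + c_{k-1}*x^{k-1} mod p,
    # with uniformly random coefficients. Any k distinct inputs yield uniform, independent outputs.
    def sample_k_wise_independent(k, p):
        coeffs = [random.randrange(p) for _ in range(k)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):   # Horner evaluation mod p
                acc = (acc * x + c) % p
            return acc
        return f

    p = (1 << 61) - 1        # a Mersenne prime, convenient but arbitrary
    q = 4                    # number of adversarial queries, in the abstract's notation
    f = sample_k_wise_independent(3 * q + 1, p)
    print(f(0), f(1), f(2))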
2026/270(PDF)
Last updated:  2026-02-16
Pseudorandomness of Knapsacks over a Number Ring
Foundations
Biswajit Mandal and Shashank Singh
Show abstract
Foundations
In this work, we study the pseudorandomness of the bounded Knapsack function family defined over a number ring ${R}$. We establish that if the Knapsack function family is one-way and certain related folded Knapsack function families are pseudorandom, then the original Knapsack is pseudorandom. This can be seen as a generalisation of the work of Micciancio and Mol presented at CRYPTO 2011 to the case of Knapsack function families defined over an arbitrary number ring.
2026/269(PDF)
Last updated:  2026-02-18
Exact Error Analysis for Blind Rotation in Fully Homomorphic Encryption
Public-key cryptography
Sin Kim, Seunghwan Lee, Dohyuk Kim, and Dong-Joon Shin
Show abstract
Public-key cryptography
Blind rotation is the computational core of GINX, AP, and AP+ bootstrappings, yet its error behavior has not been precisely characterized. Prior analyses rely on heuristic independence assumptions that fail to capture the distinct error accumulation patterns of different algorithmic variants. We prove that no additional assumptions are needed: the (M)LWE assumption guaranteeing ciphertext indistinguishability also implies the independence properties required for an exact second-moment characterization of blind rotation error. We derive the first closed-form formulas covering all four combinations of decomposition type (full vs. approximated) and algorithmic variant (vanilla vs. unrolled). Our analysis reveals three structural phenomena previously overlooked: (i) a message-dependent residual term that halves the effective decomposition variance, (ii) a doubling coefficient inherent to unrolled implementations, and (iii) bounded cross-term correlations whose contribution remains constant in the security parameter. Our formulas match experimental measurements within 1%, whereas prior estimators deviate by up to 50% for unrolled blind rotation. Our results show that the (M)LWE assumption—already required for security— is sufficient to determine the exact error behavior of GINX, AP, and AP+ bootstrapping.
2026/268(PDF)
Last updated:  2026-02-16
One Pair to Rule Them All: An Optimal Algorithm for Solving Code Equivalence via Codeword Search
Attacks and cryptanalysis
Alessandro Budroni and Andre Esser
Show abstract
Attacks and cryptanalysis
Two linear codes $\mathcal{C},\mathcal{C}'$ over $\mathbb{F}_q$ are linearly equivalent if one can be mapped to the other via a monomial transformation. Recovering this monomial from $\mathcal{C}$ and $\mathcal{C}'$ is known as the Linear Code Equivalence (LCE) problem. The most efficient algorithms to solve the LCE problem follow a common framework based on finding low-weight codewords. This framework admits a natural lower bound obtained by assuming that among the found low-weight codewords, a single equivalent codeword pair can be identified and used to reconstruct the monomial without overhead. Whether this lower bound can be achieved by a constructive instantiation has remained an open problem. Existing algorithms all require multiple equivalent pairs for monomial reconstruction, resulting in both concrete and asymptotic gaps to the lower bound. In this work, we answer the question of whether there exists such an optimal framework instantiation in the affirmative. We introduce a canonical labeling technique, as a generalization of canonical forms, that allows for monomial reconstruction from a single pair of equivalent codewords. Crucially, this labeling procedure, even if not necessarily polynomial-time, can be embedded into the codeword-search framework to identify equivalent codewords and perform final monomial recovery without overhead. This gives rise to the first framework instantiation that meets its lower bound both asymptotically and concretely up to negligible tolerance. For the parameter sets proposed for the LESS signature scheme, an active second-round contender in the NIST PQC standardization process, our analysis reduces the estimated bit security by up to 15 bits.
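For readers less familiar with the terminology in the abstract above, a monomial transformation permutes the coordinates of a code and scales each coordinate by a nonzero field element. The toy sketch below (illustrative only, not the paper's algorithm) applies such a map to a generator matrix over GF(7); the row space of the result is a code linearly equivalent to the original, and recovering the map from the two codes is exactly the LCE problem.

    import random

    # Illustrative sketch: apply a random monomial transformation to a generator matrix G over GF(q).
    # Columns are permuted and scaled by nonzero elements.
    def random_monomial(G, q):
        n = len(G[0])
        perm = list(range(n))
        random.shuffle(perm)
        scales = [random.randrange(1, q) for _ in range(n)]
        return [[(row[perm[j]] * scales[j]) % q for j in range(n)] for row in G]

    q = 7
    G = [[1, 0, 2, 3],
         [0, 1, 4, 5]]
    G_equiv = random_monomial(G, q)   # generates an equivalent code
    print(G_equiv)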
2026/267(PDF)
Last updated:  2026-02-16
Beyond the Linear Barrier: Secret Sharing for Evolving (Weighted) Threshold Access Structures with Poly-logarithmic Share Size
Foundations
Danilo Francati, Sara Giammusso, and Daniele Venturi
Show abstract
Foundations
Evolving secret sharing allows a dealer to share a secret message between a growing number of $n$ parties. Crucially, the dealer does not know an upper bound on $n$, nor does it know the access structure before party $n$ arrives; furthermore, the dealer is not allowed to update the shares of old parties. We construct new secret sharing schemes for so-called evolving (weighted) threshold access structures, in which the arrival of party $n$ determines the number of parties $t_n \le n$ that are required in order to reconstruct the secret. We also consider the more general case in which party $n$ has an associated weight $w_n$ of logarithmic size, and the authorized subsets of parties are those for which the sum of the corresponding weights exceeds the current threshold $t_n$. In particular, we obtain: - A secret sharing scheme for evolving threshold access structures with adaptive privacy in the plain model, and with share size $\mathsf{poly}(\lambda,\log n)$. This construction requires one-way functions (OWFs) and indistinguishability obfuscation (iO) for Turing machines. - A secret sharing scheme for evolving weighted threshold access structures with adaptive privacy in the plain model, and with share size $\mathsf{poly}(\lambda,\log W_n)$ where $W_n$ is the sum of the weights up to party $n$. This construction requires OWFs and iO for Turing machines, and additionally assumes that the weights are fixed (i.e., cannot change over time). - A secret sharing scheme for evolving weighted threshold access structures with static privacy in the plain model, and with share size $\mathsf{poly}(\lambda,\log W_n)$. This construction allows the weight of old parties to change over time, but it requires somewhere statistically binding hash functions and achieves only static privacy. Previous constructions of secret sharing schemes for evolving (weighted) threshold access structures achieved (much worse) share sizes linear in $W_n$ (and in the security parameter) and, when considering adaptive privacy, they required the random oracle model.
2026/266(PDF)
Last updated:  2026-02-16
UltraFold: Efficient Distributed BaseFold from Packed Interleaved Merkle Trees
Cryptographic protocols
Wenhao Wang and Fan Zhang
Show abstract
Cryptographic protocols
Transparent, code-based polynomial commitment schemes (PCSs) such as BaseFold (CRYPTO’24) are a compelling building block for large-scale zero-knowledge proofs (ZKPs): they avoid trusted setup, rely on standard hash assumptions, offer post-quantum security, and achieve high performance. As ZKP workloads grow, the polynomials that PCSs must commit to and open increasingly exceed the memory and throughput of a single machine, motivating a scalable distributed version of BaseFold. However, existing distribution attempts either only support polynomials of specific structures, or their proof size grows with the number of workers, or they do not scale to arbitrarily many workers. In this paper, we present UltraFold, the first distributed BaseFold PCS that works with general polynomials, scales to any number of workers, and maintains succinct proofs whose size does not depend on the worker count. To enable efficient distributed commitment and opening, UltraFold introduces an interleaved Merkle leaf layout that is realized via a single all-to-all exchange of partially encoded values, and each worker’s computation becomes local after this exchange. To mitigate hashing overhead, UltraFold further uses packed Merkle trees, reducing both prover time in practice and the resulting proof size. We implement UltraFold and evaluate it in a distributed setting: using 256 single-core workers, we commit to and open a 134M-coefficient polynomial in under 2 seconds, with a proof size of 216 KB.
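As a rough intuition for the packing idea mentioned in the abstract above (a generic sketch, not UltraFold's interleaved, distributed leaf layout), hashing several symbols into one Merkle leaf reduces the number of hash invocations and the number of tree layers.

    import hashlib

    # Generic sketch: commit to a vector by packing `pack` symbols per Merkle leaf.
    # This is a plain binary Merkle tree; the paper's interleaved layout across workers is more involved.
    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root_packed(symbols, pack=4):
        leaves = [h(b"".join(s.to_bytes(8, "little") for s in symbols[i:i + pack]))
                  for i in range(0, len(symbols), pack)]
        level = leaves
        while len(level) > 1:
            if len(level) % 2:                       # duplicate last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    print(merkle_root_packed(list(range(32)), pack=4).hex())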
2026/265(PDF)
Last updated:  2026-02-18
Catalytic Tree Evaluation From Matching Vectors
Foundations
Alexandra Henzinger, Edward Pyne, and Seyoon Ragavan
Show abstract
Foundations
We give new algorithms for tree evaluation (S. Cook et al. TOCT 2012) in the catalytic-computing model (Buhrman et al. STOC 2014). Two existing approaches aim to solve tree evaluation in low space: on the one hand, J. Cook and Mertz (STOC 2024) give an algorithm for TreeEval running in super-logarithmic space $O(\log n\log\log n)$ and super-polynomial time $n^{O(\log\log n)}$. On the other hand, a simple reduction from TreeEval to circuit evaluation, combined with the result of Buhrman et al. (STOC 2014), gives a catalytic algorithm for TreeEval running in logarithmic $O(\log n)$ free space and polynomial time, but with polynomial catalytic space.We show that the latter result can be improved. We give a catalytic algorithm for TreeEval with logarithmic $O(\log n)$ free space, polynomial runtime, and subpolynomial $2^{\log^\epsilon n}$ catalytic space (for any $\epsilon > 0$). Our result opens a new line of attack on putting TreeEval in logspace, and immediately implies an improved simulation of time by catalytic space, by the reduction of Williams (STOC 2025). Our catalytic TreeEval algorithm is inspired by a connection to matching-vector families and private information retrieval, and improved constructions of (uniform) matching-vector families would imply improvements to our algorithm.
2026/264(PDF)
Last updated:  2026-02-15
WillowFold: Secure Aggregation with a Lightweight Committee
Cryptographic protocols
Hossein Hafezi, Kasra Abbaszadeh, Adrià Gascón, Phillipp Schoppmann, Mariana Raykova, and Benedikt Bünz
Show abstract
Cryptographic protocols
Secure aggregation enables a server to learn the sum of private inputs of clients, while revealing no additional information beyond the final sum. Recent work, Willow (CRYPTO 2025), achieves one-shot secure aggregation in the single-server model with dynamic client participation. To ensure security under these features, Willow relies on an auxiliary committee to verify the correctness of the aggregation. Although this verification requires no private information---broadening the set of parties eligible to participate in the committee---the committee’s total work scales linearly with the number of clients, which poses a challenge for large-scale deployments. This linear committee cost limitation is also shared by other state-of-the-art single-server secure aggregation protocols, including Flamingo (IEEE S&P 2023) and OPA (CRYPTO 2025). In this paper, we introduce WillowFold, a secure aggregation scheme based on Willow that enables lightweight verification of the aggregation and requires only logarithmic committee cost in the number of clients. Our scheme additionally supports streaming aggregation and reduces per-client server storage to a single ID (4 bytes), rather than storing full contributions as in Willow (1.5 KB), which facilitates distributed execution of the server-side computation. Finally, we close a gap in Willow’s security proof by showing that the zero-knowledge proofs underlying the scheme must be simulation extractable in order to rule out correlated ciphertext attacks. WillowFold employs proof-carrying data (PCD)---a primitive for incremental verification of distributed computations---to achieve a lightweight verification cost. In particular, we present a concretely efficient instantiation of WillowFold using a variant of Spartan+KZH-Fold (CCS 2025) as the underlying PCD scheme. As independent contributions, we show a zero-knowledge variant of Spartan+KZH, which is the SNARK underlying Spartan+KZH-Fold, that uses logarithmic randomness, and we further prove that this variant satisfies simulation extractability. We implement and evaluate WillowFold, showing that it supports $8$ million clients with proof size $<60$~KB and verification time under one second, a $10^{5}$-fold improvement over Willow.
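For intuition about the functionality computed above (the sum, and nothing else), the sketch below shows the textbook pairwise-masking idea behind many secure-aggregation protocols. It only illustrates the goal; it is not WillowFold's mechanism, which instead relies on a committee and proof-carrying data.

    import random

    # Textbook pairwise masking (illustration of the secure-aggregation goal, not WillowFold):
    # each pair of clients (i, j), i < j, agrees on a random mask; client i adds it and client j
    # subtracts it. The masks cancel in the sum, so the server learns only the total.
    def mask_inputs(inputs, modulus=2**32):
        n = len(inputs)
        masked = list(inputs)
        for i in range(n):
            for j in range(i + 1, n):
                m = random.randrange(modulus)        # in practice derived from a shared key
                masked[i] = (masked[i] + m) % modulus
                masked[j] = (masked[j] - m) % modulus
        return masked

    inputs = [3, 14, 15, 9, 26]
    masked = mask_inputs(inputs)
    assert sum(masked) % 2**32 == sum(inputs) % 2**32   # server recovers only the sum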
2026/263(PDF)
Last updated:  2026-02-18
Compact and Statistical NIZK Proofs of Knowledge for Disjunctions from $\Sigma$-Protocols
Cryptographic protocols
Gennaro Avitabile, Luisa Siniscalchi, and Ivan Visconti
Show abstract
Cryptographic protocols
The classical results of Cramer et al. [Crypto '94] showed how to compose $n$ $\Sigma$-protocols with statistical HVZK to obtain an efficient proof of knowledge of their disjunction maintaining statistical HVZK without adding hardness assumptions. The Fiat-Shamir (FS) transform applied to their construction produces a statistical NIZK proof of knowledge in the random oracle (RO) model, but, unfortunately, the proof size in their case is linear in $n$. Recently, there has been increasing interest in solving the major open problem of obtaining statistical NIZK proofs of knowledge for disjunctions starting from $\Sigma$-protocols, with improved communication and minimizing hardness assumptions. The current best results are due to 1) Goel et al. [Eurocrypt '22], which, unfortunately, requires Dlog-based assumptions, and 2) Boudgoust and Simkin [TCC '24], which, unfortunately, achieves computational ZK only. In this work, we solve the above open problem, showing, for a large class of $\Sigma$-protocols, how to obtain a non-interactive compact statistical NIZK proof of knowledge without adding hardness assumptions, therefore only relying on random oracles. More precisely, the communication complexity of our construction is $O(\lambda^2\log{n})+\mathsf{CC}(\Sigma)$ where $\lambda$ is the security parameter and $\mathsf{CC}(\Sigma)$ is the communication of a single run of the underlying $\Sigma$-protocol, and this is obtained via calls to a random oracle and to a prover executing a stand-alone instance of the $\Sigma$-protocol.
2026/262(PDF)
Last updated:  2026-02-14
Fuzzy Private Set Intersection from Density-Bounded Assumptions
Cryptographic protocols
Seunghun Paik and Jae Hong Seo
Show abstract
Cryptographic protocols
We propose a new fuzzy private set intersection (FPSI) protocol, a two-party functionality that securely identifies similar items across private sets. Our design departs from the prevailing separation-based paradigm in existing constructions. Rather than enforcing common separation-based assumptions, e.g., ``apart by $2\delta$'' or disjoint projection assumptions, we adopt a fundamentally different perspective: we explicitly allow collisions within $\delta$-balls, but require that their multiplicity be bounded per axis. We formalize this by introducing a new structural assumption called \textit{$k$-max collision}, which upper-bounds the number of items intersecting any $\delta$-ball along each axis by a parameter $k$. This new assumption intrinsically reflects the axis-wise distinctness of the data through density, rather than pairwise separability. Following prior analyses of disjoint projection assumptions, we show the plausibility of $k$-max collision by proving that uniformly distributed data satisfies this property. Under the $k$-max collision assumption, we propose an FPSI protocol with computation/communication complexities scaling linearly with both parties' set sizes and the data dimension for a fixed $k$. Our results demonstrate that separation assumptions are not necessary for efficient FPSI. Notably, our construction accommodates the intermediate-dimension regime ($1\ll d \ll \kappa$ for the data dimension $d$ and the statistical security parameter $\kappa$), which has remained neither efficiently supported nor well-justified under existing separation-based FPSI paradigms.
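The $k$-max collision condition in the abstract above is easy to state operationally. The snippet below (an illustrative check, not the protocol) verifies, per axis, that no interval of radius $\delta$ contains more than $k$ item coordinates; since the densest such interval can always be anchored at one of the points, a sliding window over sorted coordinates suffices.

    from bisect import bisect_right

    # Illustrative check of the k-max collision property: along every axis, no window
    # of width 2*delta contains more than k coordinates of the data set.
    def satisfies_k_max_collision(items, delta, k):
        dim = len(items[0])
        for axis in range(dim):
            coords = sorted(p[axis] for p in items)
            for i, x in enumerate(coords):
                # densest window starting at coords[i]: count points in [x, x + 2*delta]
                if bisect_right(coords, x + 2 * delta) - i > k:
                    return False
        return True

    items = [(0.0, 1.0), (0.3, 5.0), (2.0, 5.2), (2.1, 9.0)]
    print(satisfies_k_max_collision(items, delta=0.25, k=2))   # True for this toy data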
2026/261(PDF)
Last updated:  2026-02-14
Logarithmic-Depth Pseudorandom Functions from Well-Founded Code-Based Assumptions
Foundations
Youlong Ding, Aayush Jain, and Ilan Komargodski
Show abstract
Foundations
We give the first $\mathsf{NC}^1$-computable Pseudorandom Function (PRF) constructions from well-founded (non-ad-hoc) code-based assumptions. Specifically, we give two constructions based on two different classical variants of the learning parity with noise (LPN) assumption: (1) An $\mathsf{NC}^1$-computable PRF from hardness of Sparse-LPN [Alekhnovich, FOCS~'03] (with respect to a sublinear-depth explicit expander graph family). This PRF is also key-homomorphic. (2) An $\mathsf{NC}^1$-computable PRF from hardness of Ring-LPN [Heyse et al., FSE~'12]. Both of these assumptions have been used and studied for many years. Notably, within the range of parameters that we need, none of these assumptions is known to imply collision-resistant hashing, and the first is not known to imply public-key encryption. Prior constructions of code-based PRFs (let alone key-homomorphic) are either super-logarithmic depth, or rely on ad-hoc and new assumptions. As a bonus, we give a similar result relying on the \emph{classical} LPN assumption, albeit with quasi-polynomial hardness: - A key-homomorphic $\mathsf{NC}^1$-computable PRF from quasi-polynomial hardness of classical LPN [Blum et al., CRYPTO~'93]. Technically, all of our results are obtained via a refinement and substantial extension of the recent ideas of [Ding, Jain, and Komargodski, STOC~'25].
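For context on the assumption family named above, a classical LPN sample is a noisy inner product over $\mathbb{F}_2$; the sketch below (an editorial illustration, not code from the paper) generates such samples, while Sparse-LPN and Ring-LPN impose additional structure on the vectors $a$.

    import random

    # Illustrative LPN sample generator over F_2: output (a, <a, s> + e) where a is uniform
    # in {0,1}^n and e is Bernoulli(tau) noise. The assumption is that these samples are pseudorandom.
    def lpn_samples(s, tau, num):
        n = len(s)
        out = []
        for _ in range(num):
            a = [random.getrandbits(1) for _ in range(n)]
            e = 1 if random.random() < tau else 0
            b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
            out.append((a, b))
        return out

    n = 64
    secret = [random.getrandbits(1) for _ in range(n)]
    print(lpn_samples(secret, tau=0.125, num=2))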
2026/260(PDF)
Last updated:  2026-02-14
Investigating the Wedge Map on SNOVA
Public-key cryptography
Po-En Tseng, Lih-Chung Wang, Peigen Li, and Yen-Liang Kuan
Show abstract
Public-key cryptography
This paper investigates the algebraic structure of SNOVA, a NIST PQC Round 2 candidate, with a specific focus on the kernel dimension of the wedge map. We employ lifting techniques to transform the public key from the matrix ring $\mathbb{F}_q^{l \times l}$ to an equivalent representation over the extension field $\mathbb{F}_{q^l}$, establishing that the rank of the wedge map is a structural invariant. A key contribution of this work is the derivation of a generating function that explicitly characterizes the wedge map's kernel dimension. This algebraic analysis provides a rigorous understanding of SNOVA's geometry, verifying the safety of specific parameter sets and enabling future refined adjustments to guarantee structural security.
2026/259(PDF)
Last updated:  2026-02-14
Blockchain Stacking Fraud and Deterrence
Cryptographic protocols
Tong Cao, Man Ho Au, and Xiapu Luo
Show abstract
Cryptographic protocols
As crypto-assets become increasingly integrated into global financial systems, they are now employed to demonstrate solvency, establish creditworthiness, and satisfy wealth verification requirements for various financial and regulatory purposes. A fundamental challenge in these applications is proving that a given wallet address is controlled exclusively by a single legal entity, as fraudulent multiple claims can undermine the integrity of asset-based verification systems. In this paper, we propose a binding of wallet address to legal identity that makes conflicting claims cryptographically inconsistent. In the main design, claimants post to a public, privacy-preserving bulletin board a zkSNARK that proves knowledge of both the private key and the address–identity binding without revealing either; proofs are structured so that duplicate claims for the same address are publicly detectable. When a shared bulletin board is not available, we introduce a self-deterrence proof: a zero-knowledge protocol in which producing two valid proofs for the same address under different legal identities would enable efficient extraction of the private key, creating a strong economic disincentive against double claiming. These cryptographic primitives provide effective countermeasures against blockchain stacking fraud attacks and enhance the reliability of crypto-assets in traditional financial applications, contributing to the broader integration of digital assets into established economic systems.
2026/258(PDF)
Last updated:  2026-02-17
Lightning, Field-Agnostic Super-Efficient Polynomial Commitment Scheme
Cryptographic protocols
Wenjie Qu, Yanpei Guo, and Jiaheng Zhang
Show abstract
Cryptographic protocols
Polynomial commitment schemes (PCS) are a fundamental building block of modern zkSNARKs. In this paper, we propose Lightning, a new coding-based PCS that achieves state-of-the-art prover efficiency. Our main technical contribution is a new linear code family, the Lightning code, which can be instantiated from any base code with constant relative distance. Compared to the base code, Lightning code significantly reduces encoding cost by trading off relative distance. We integrate Lightning code into the standard coding-based PCS framework of Ligero and Brakedown. Experimental results show that Lightning PCS reduces prover commitment time by up to $2.7\times$ compared to the fastest prover configuration of Brakedown, at the cost of a $2.4\times$ increase in proof size. Overall, Lightning provides a practical mechanism for trading proof size for prover efficiency in coding-based PCS constructions.
2026/257(PDF)
Last updated:  2026-02-13
Dishonest-Majority Secure Computation via PIR-Authenticated Multiplication Triples
Cryptographic protocols
Elette Boyle, Niv Gilboa, Matan Hamilis, Yuval Ishai, and Ariel Nof
Show abstract
Cryptographic protocols
We revisit the question of minimizing the overhead of security against malicious parties in dishonest-majority secure computation. A leading approach, pioneered by the SPDZ line of protocols, uses homomorphic MACs to authenticate computation: Parties effectively compute a MAC on the computation output using authenticated multiplication triples (AMT). However, securely generating these AMTs presently sits as the cost bottleneck. In this work, we introduce a new technique for enabling SPDZ-style verification via homomorphic MACs, while bypassing the need for AMT. We instead rely on the specific structure of state-of-the-art fast pseudorandom correlation generators (PCG) for generating standard (unauthenticated) multiplication triples (MT). Parties authenticate the computation result via an authenticated variant of private information retrieval (PIR), relying on the sparse representation of MT produced by these PCGs. This opens the door to a wide range of PIR optimizations and tradeoffs from the literature, resulting in asymptotic and concrete improvements over the traditional AMT-based approach. For example, in the Boolean 2-party case with $\sigma=40$, we get a $3\times$ to $8\times$ computation improvement (and comparable communication) over best approaches using PCG to generate AMT, where the latter exploits variants of the Stationary Syndrome Decoding assumption of Kolesnikov et al. (Crypto 2025). With $m\ge 3$ parties we obtain even larger improvements while reducing the asymptotic dependence on $m$ from cubic to quadratic.
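As background on the object discussed above (a sketch of the standard Beaver-triple technique, not of this paper's PIR-based authentication), a multiplication triple $(a, b, c)$ with $c = ab$ lets two parties multiply additively shared values by opening only masked differences.

    import random

    # Standard Beaver-triple multiplication over Z_p between two parties holding additive shares
    # (illustration of the correlation being generated, not of the paper's protocol).
    p = 2**61 - 1

    def share(x):
        r = random.randrange(p)
        return r, (x - r) % p

    def beaver_multiply(x_shares, y_shares, a_shares, b_shares, c_shares):
        # Both parties open d = x - a and e = y - b (computed here directly from the shares).
        d = (sum(x_shares) - sum(a_shares)) % p
        e = (sum(y_shares) - sum(b_shares)) % p
        z_shares = []
        for i in range(2):
            zi = (c_shares[i] + d * b_shares[i] + e * a_shares[i]) % p
            if i == 0:                      # exactly one party adds the public term d*e
                zi = (zi + d * e) % p
            z_shares.append(zi)
        return z_shares

    x, y, a, b = 7, 11, random.randrange(p), random.randrange(p)
    z = beaver_multiply(share(x), share(y), share(a), share(b), share(a * b % p))
    assert sum(z) % p == (x * y) % p        # xy = c + d*b + e*a + d*e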
2026/256(PDF)
Last updated:  2026-02-13
Adams Bridge Accelerator: Bridging the Post-Quantum Transition
Implementation
Mojtaba Bisheh-Niasar, Emre Karabulut, Kiran Upadhyayula, Michael Norris, and Bharat Pillilli
Show abstract
Implementation
Quantum computing threatens widely deployed public-key cryptosystems, driving the urgent adoption of post-quantum cryptography (PQC) in cloud and hardware-accelerated security infrastructures. This paper presents Adams Bridge, an industry-grade hardware accelerator for lattice-based PQC that integrates ML-KEM and ML-DSA within a unified architecture to maximize hardware reuse and silicon efficiency. The design features a staged, pipelined datapath that exploits multi-level parallelism to accelerate polynomial operations shared by both schemes. An optimized NTT/INTT and point-wise multiplication engine is tightly coupled with a high-throughput Keccak core and efficient hardware sampling, reducing memory overhead and eliminating pipeline stalls. Synthesized in 5 nm technology and operating at 600 MHz, Adams Bridge achieves the best reported normalized Area-Time (AT) efficiency among unified designs, offering a 27% AT improvement for ML-DSA compared to state-of-the-art architectures. The three phases of ML-DSA complete in 26, 61, and 31 $\mu$s, respectively, while ML-KEM takes 11, 14, and 20 $\mu$s for its corresponding stages. To address physical attack vectors, the accelerator embeds hardware-level side-channel countermeasures, including masking, shuffling, and constant-time control and arithmetic, mitigating information leakage without compromising performance. Empirical TVLA evaluation up to one million traces confirms the elimination of first-order leakage in critical datapaths. Targeted for deployment within the open-source Caliptra Root of Trust (RoT), Adams Bridge represents the first open-source PQC accelerator under the Apache 2.0 license designed for real-world hardware security systems.
2026/255(PDF)
Last updated:  2026-02-13
On Compressing Non-Additive Correlations
Cryptographic protocols
Geoffroy Couteau, Alexander Koch, Nikolas Melissaris, Peter Scholl, Sacha Servan-Schreiber, and Xiaxi Ye
Show abstract
Cryptographic protocols
Compressing correlated randomness is a key component of secure computation protocols in the silent preprocessing model and has received significant attention in recent years. However, to date, all known constructions are restricted to generating additive correlations, where the parties receive additive shares of a relation. The only known exceptions are constructions from indistinguishability obfuscation. In this work, we initiate the study of compressing useful forms of correlated randomness for non-additive correlations, without resorting to indistinguishability obfuscation (iO). We provide several constructions for pseudorandom correlation generators and functions, overcoming the long-standing "iO-barrier" in the field. Concretely, we focus on two types of non-additive correlations in this work. First, we introduce pseudorandom permutation functions, where each party obtains a portion of a pseudorandom permutation $\pi$ over the set $\{1,\dots, n\}$, such that no subset of $n-2$ colluding parties learns the full permutation $\pi$. We give a three-party construction of a pseudorandom correlation function (PCF) for permutations from homomorphic secret sharing satisfying a special share programmability property. This construction can be instantiated from a variety of standard assumptions and can even be concretely efficient (generating pseudorandom permutations in a fraction of a second). We describe two applications of pseudorandom permutation functions: one in anonymous broadcast and the other in single secret leader election. Second, we introduce pseudorandom correlation generators for classical correlations (such as Beaver triples) where additive secret sharing is replaced with threshold secret sharing. We obtain a pseudorandom correlation function for constant-degree Shamir shares of low-degree correlations from a variety of standard assumptions. We also introduce two new $t$-out-of-$n$ PCG constructions where either $t$ or $n-t$ is a constant and $t$ denotes the threshold. These constructions rely on the conjunction of the LPN and MQ assumptions.
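As a reminder of the threshold sharing targeted by the second contribution above (a textbook Shamir sketch, not the paper's pseudorandom correlation generator), a random degree-$(t-1)$ polynomial with the secret as constant term gives $t$-out-of-$n$ shares.

    import random

    # Textbook Shamir t-out-of-n sharing over a prime field.
    p = 2**61 - 1

    def shamir_share(secret, t, n):
        coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
        def eval_poly(x):
            acc = 0
            for c in reversed(coeffs):   # Horner evaluation mod p
                acc = (acc * x + c) % p
            return acc
        return [(i, eval_poly(i)) for i in range(1, n + 1)]

    def shamir_reconstruct(shares):
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:                # Lagrange coefficient evaluated at 0
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            secret = (secret + yi * num * pow(den, -1, p)) % p
        return secret

    shares = shamir_share(42, t=3, n=5)
    assert shamir_reconstruct(shares[:3]) == 42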
2026/254(PDF)
Last updated:  2026-02-13
Key Committing Security of HCTR2, Revisited
Secret-key cryptography
Donghoon Chang, Yu Long Chen, Yukihito Hiraga, Kazuhiko Minematsu, Nicky Mouha, Yusuke Naito, Yu Sasaki, and Takeshi Sugawara
Show abstract
Secret-key cryptography
This paper presents improved attacks and proofs for the key committing security of EtE-HCTR2, a robust authenticated encryption scheme constructed from HCTR2 and the Encode-then-Encipher (EtE) framework, in light of the ongoing standardization effort of cryptographic accordions by NIST. We improve attacks on the instantiations with two common encodings, where zeros are either appended or prepended to the message, namely EtE_A-HCTR2 and EtE_P-HCTR2. Compared with the state-of-the-art attack by Chen et al. in ToSC 2023(4), our EtE_A-HCTR2 attack reduces the complexity from $O(2^{\tau/2})$ to $O(2^{\max\{\tau/3, \tau-n\}})$ for an $n$-bit block cipher and $\tau$-bit zero padding, which degrades EtE_A-HCTR2's security below the birthday bound. Meanwhile, our EtE_P-HCTR2 attack reduces the complexity from $O(2^{\min \{n/2, \tau\}})$ to $O(2^{\tau/2})$, which is tight with our new security proof. We verify these computationally-bounded attacks by experimentally generating concrete vectors for both EtE_A-HCTR2 with $\tau=96$ and EtE_P-HCTR2 with $\tau=64$, each instantiated with $n=128$, in less than 15 minutes. We consider yet another padding scheme that appends zeros to the first message block, namely EtE_S-HCTR2, and prove that it has a tight committing security bound of $O(2^{\tau/2})$ by avoiding the issue in EtE_A-HCTR2.
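To make the two encodings discussed above concrete, here is a schematic sketch of the padding-and-check step only: EtE_A appends zero bits to the message while EtE_P prepends them, and decryption rejects unless the redundancy is intact. The names `encipher` and `decipher` are hypothetical placeholders for a wide-block cipher such as HCTR2; this is not an interface from the paper, and tau is given in bytes here for simplicity.

    # Schematic Encode-then-Encipher padding check (illustration only).
    TAU = 12  # 96 bits of zero padding

    def ete_a_encrypt(encipher, key, tweak, msg: bytes) -> bytes:
        return encipher(key, tweak, msg + b"\x00" * TAU)          # EtE_A: append zeros

    def ete_a_decrypt(decipher, key, tweak, ct: bytes):
        padded = decipher(key, tweak, ct)
        if not padded.endswith(b"\x00" * TAU):
            return None                                           # reject on bad redundancy
        return padded[:-TAU]

    def ete_p_encrypt(encipher, key, tweak, msg: bytes) -> bytes:
        return encipher(key, tweak, b"\x00" * TAU + msg)          # EtE_P: prepend zeros

    def ete_p_decrypt(decipher, key, tweak, ct: bytes):
        padded = decipher(key, tweak, ct)
        if not padded.startswith(b"\x00" * TAU):
            return None
        return padded[TAU:]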
2026/253(PDF)
Last updated:  2026-02-13
Cryptanalytic Extraction of Deep Neural Networks with Non-Linear Activations
Attacks and cryptanalysis
Roderick Asselineau, Patrick Derbez, Pierre-Alain Fouque, and Brice Minaud
Show abstract
Attacks and cryptanalysis
Deep neural networks (DNNs) are today’s central machine learning engines, yet their parameters represent valuable intellectual property exposed to extraction through black-box queries. While existing cryptanalytic attacks have primarily targeted ReLU-based architectures, this work extends model-stealing techniques to a broad class of non-linear activation functions, including GELU, SiLU, SELU, Sigmoid, and others. We present the first universal black-box attack capable of recovering both weights and biases from networks whose activations converge to linear behavior outside narrow non-linear regions. Our method generalizes prior geometric approaches by leveraging higher-order derivatives and adjacent linear zone analysis, bypassing the need for non-differentiability. We show that, for several activations, neuron signatures can be recovered more easily than in the ReLU case, and we further demonstrate that activation functions themselves can be identified when not publicly known. Our results broaden the scope of cryptanalytic model extraction, revealing that the secrecy of activation functions or smoothness of nonlinearities does not provide effective protection against black-box recovery attacks.
2026/252(PDF)
Last updated:  2026-02-13
At-Compromise Security: The Case for Alert Blindness
Foundations
Martin R. Albrecht, Simone Colombo, Benjamin Dowling, and Rikke Bjerg Jensen
Show abstract
Foundations
We start from the observation in prior work that cryptography broadly intuits security goals – as modelled in games or ideal functionalities – while claiming realism. This stands in contrast to cryptography’s attentive approach towards examining assumptions and constructions through cryptanalysis and reductions. To close this gap, we introduce a technique for determining security goals. Given that games and ideal functionalities model specific social relations between various honest and adversarial parties, our methodology is ethnography: a careful social science methodology for studying social relations in their contexts. As a first application of this technique, i.e. ethnography in cryptography, we study security at-compromise (neither pre- nor post-) and introduce the security goal of alert blindness. Specifically, in our 2024/2025 six-and-a-half-month ethnographic fieldwork with protesters in Kenya, we observed that alert blindness captures a security goal of abducted persons who were taken by Kenyan security forces for their presumed activism. We show this notion is achievable under standard assumptions by providing a construction secure in our model. We discussed both the notion and the construction with some interlocutors in Kenya.
2026/251(PDF)
Last updated:  2026-02-13
OpenAC: Open Design for Transparent and Lightweight Anonymous Credentials
Applications
Liam Eagen, Hy Ngo, Vikas Rushi, Ying Tong, Moven Tsai, and Janabel Xia
Show abstract
Applications
Digital identity systems require mechanisms for verifiable, privacy-preserving presentations of user attestations. The trivial approach of utilizing selective disclosure by presenting individually signed attestations introduces persistent linkability that compromises user anonymity. Existing anonymous credential systems come with practical drawbacks. Some depend on trusted setups, while others require substantial modifications to an issuer’s established issuance flow. We propose an open, transparent, and lightweight anonymous credential design that addresses these limitations with the use of zero-knowledge proofs. Our construction is modular, requires no trusted setup and integrates with existing workflows without the need for substantial changes to existing cryptographic mechanisms, procedure overhauls, or hardware devices. It delivers unlinkability while maintaining broad applicability across heterogeneous digital-identity ecosystems and current verifiable credential standards. To demonstrate practicality, we provide a proof-of-concept implementation and benchmarks on mobile devices. Our results show best-in-class proving times, with a focus on efficient client-side proving, an essential requirement for usability in digital identity wallets. OpenAC was purposely constructed to be compatible with the European Digital Identity Architecture and Reference Framework (EUDI ARF). In the appendix, we map EUDI ARF’s functional, privacy, and interoperability requirements, illustrating how OpenAC satisfies regulatory constraints while preserving strong user privacy.
2026/250(PDF)
Last updated:  2026-02-13
On the Concrete Hardness of LWR with a Power of Two Modulus
Secret-key cryptography
Jules Baudrin, Rachelle Heim Boissier, and François-Xavier Standaert
Show abstract
Secret-key cryptography
LWR has been introduced by Banerjee et al. in 2012 as a deterministic variant of LWE. Since then, it has found many applications in the design of symmetric primitives and post-quantum schemes. Despite its deterministic nature, LWR is usually analyzed as LWE, under the (implicit) assumption that no improved attack can take advantage of the additional structure it provides. In this paper, we tackle this assumption in the context of power-of-two moduli and investigate the security of LWR against algebraic attacks in depth. For this purpose, we model its samples as the outputs of a vectorial Boolean function. We first observe that there are corner cases where the state-of-the-art linearisation attack of Arora & Ge does not apply. In contrast, we propose the first LWR-specific attack, which applies in any parameter regime as long as the modulus is a power of two. We combine analyses of standard criteria such as the algebraic degree and the number of monomials in the secret with an analysis of the algebraic normal form, that we are able to express exactly in a compact representation by leveraging group action theory. Our results exhibit specificities in the structure of LWR that we are able to exploit. They allow refining and strengthening the understanding of this important problem, and systematically improve over the attack of Arora & Ge. They also put forward new tools for (symmetric) cryptanalysis, which we believe can be of independent interest.
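For context on the primitive analyzed above, an LWR sample deterministically rounds the inner product that LWE would instead perturb with explicit noise. The sketch below (an editorial illustration, not the paper's Boolean-function model) generates samples with power-of-two moduli, using floor rounding as one common convention.

    import random

    # Illustrative LWR sample: b = floor((p/q) * (<a, s> mod q)) mod p, with q and p powers of two.
    # The deterministic rounding replaces the explicit noise of LWE.
    def lwr_sample(s, q, p):
        n = len(s)
        a = [random.randrange(q) for _ in range(n)]
        inner = sum(ai * si for ai, si in zip(a, s)) % q
        b = (inner * p // q) % p
        return a, b

    q, p, n = 2**12, 2**8, 16
    secret = [random.randrange(q) for _ in range(n)]
    print(lwr_sample(secret, q, p))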
2026/249(PDF)
Last updated:  2026-02-13
Have Your CKAKE and Eat it, Too: Efficient, Composable KEM-Authenticated Key Exchange
Cryptographic protocols
Myrto Arapinis, Christopher Battarbee, and Mina Doosti
Show abstract
Cryptographic protocols
We report on a novel authenticated key-exchange (AKE) protocol where the authentication is achieved entirely by key-encapsulation mechanisms (KEMs). Techniques to achieve AKE with KEMs have been known for some time, but have received renewed attention in a post-quantum world; in contrast to classical cryptography, the data sizes of the NIST post-quantum KEM standard offer a significant saving in bandwidth compared to those of the signature standard. Previous KEM-authenticated AKE protocols are not known to be composable; our protocol offers similar security guarantees, plus composability, while being more efficient in terms of bandwidth than non-composable KEM-based AKE protocols and composable signature-based AKE protocols. Our protocol features a modular design, and a full security proof in the Constructive Cryptography (CC) framework, one of the major composable security frameworks. We also prove the forward secrecy of our protocol, and introduce generic techniques to prove forward secrecy in CC, which may be of independent interest.
2026/248(PDF)
Last updated:  2026-02-13
Lightweight PQ KEM and Hybrid MQTT Protocol for 8-bit AVR Sensor Nodes
Implementation
Yifan Dong, YoungBeom Kim, Jieyu Zheng, Zhichuang Liang, Boyue Fang, Seog Chung Seo, Maire O'Neill, and Yunlei Zhao
Show abstract
Implementation
Most PQC schemes remain too resource-intensive for ultra-constrained 8-bit AVR wireless sensor nodes. In this work, we present a comprehensive approach to practical lightweight PQC for such devices, covering scheme design, implementation optimization, and protocol integration. Our contributions are threefold: (i) We propose CTRU-Light, a lattice-based KEM specifically tailored for IoT sensor nodes. It combines small moduli, low-degree polynomials, and NTT-friendly arithmetic for high efficiency, with ASCON used for lightweight symmetric operations. (ii) We explore NTT-friendly moduli for the first time to accelerate modular multiplication on 8-bit AVR platforms and design optimized variants of Montgomery and Barrett multiplication. We show that K-RED2X multiplication exhibits approximate equivalence to Montgomery multiplication under small NTT-friendly moduli. We apply these optimizations to the latest implementations of Kyber (ASIACCS 2025) and Saber (CHES 2025), achieving significant improvements in both speed and code size. Furthermore, we present a highly optimized AVR assembly implementation of CTRU-Light that delivers high efficiency and low stack usage. (iii) We design a Hybrid KEM–MQTT protocol that integrates classical ECDH with post-quantum KEMs. We present the first implementation of this protocol and provide a detailed empirical analysis of its performance. Experiments show that CTRU-Light is the only scheme capable of supporting both pure PQ and hybrid KEM–MQTT on 8-bit WSNs, achieving lower handshake latency than Kyber-512 and LightSaber.
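As background for the modular-multiplication optimizations mentioned above, the sketch below shows generic Montgomery reduction (REDC) in Python: it computes $T \cdot R^{-1} \bmod N$ without any division by $N$. This is an editorial illustration of the standard algorithm, not the paper's AVR assembly or its K-RED2X variant.

    # Generic Montgomery reduction REDC (illustration only).
    def montgomery_setup(N, R):
        assert N % 2 == 1 and R > N and (R & (R - 1)) == 0   # R a power of two, N odd
        return (-pow(N, -1, R)) % R                          # N' with N * N' == -1 (mod R)

    def redc(T, N, R, N_prime):
        # Input T < N*R; output T * R^{-1} mod N, using only masks, shifts and one subtraction.
        m = ((T & (R - 1)) * N_prime) & (R - 1)
        t = (T + m * N) >> (R.bit_length() - 1)
        return t - N if t >= N else t

    N, R = 3329, 2**16                                       # Kyber's modulus, as an example
    N_prime = montgomery_setup(N, R)
    x, y = 1234, 2345
    xR, yR = (x * R) % N, (y * R) % N                        # move operands to the Montgomery domain
    prod_mont = redc(xR * yR, N, R, N_prime)                 # equals (x*y)*R mod N
    assert redc(prod_mont, N, R, N_prime) == (x * y) % N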
2026/247(PDF)
Last updated:  2026-02-13
Efficient Pairing-Based Batch Arguments for NP with a Constant-Size Proof
Cryptographic protocols
Zhe Jiang, Kai Zhang, Junqing Gong, and Haifeng Qian
Show abstract
Cryptographic protocols
Non-interactive Batch arguments (BARGs) for NP relations enable a prover to generate a single succinct proof for multiple NP instances, significantly amortizing verification costs. While recent pairing-based BARGs achieve impressive results, their practical efficiency remains limited by proof sizes and verifier pairing operations that scale linearly with the size of the Boolean circuit computing the NP relation. In this work, we present a novel pairing-based BARG construction that achieves constant proof size and a constant number of pairing operations, independent of both the number of instances and the circuit size. Our approach leverages a bivariate polynomial commitment to compactly encode all wire values across instances, and introduces a new efficient bi-to-uni variate sumcheck protocol, called BuLosum. BuLosum improves upon prior techniques by reducing proving costs to a single multi-scalar multiplication and verification to only three pairings. Using BuLosum, we further design an optimized matrix multiplication protocol that minimizes both proof size and verification overhead. By integrating these components, we obtain the first pairing-based BARG achieving constant-size proofs and constant number of pairing operations.
2026/246(PDF)
Last updated:  2026-02-13
Highly Efficient and Round-Optimal Asymmetric PAKE
Cryptographic protocols
Zachary Barbanell and Jiayu Xu
Show abstract
Cryptographic protocols
An asymmetric Password-Authenticated Key Exchange (aPAKE) protocol allows a client, who holds a raw password, and a server, who holds a one-way mapping of the password, to jointly establish a cryptographically strong session key, without an authenticated channel. The standard security definition for aPAKE is in the Universal Composability (UC) framework. Despite its great potential of being used in practice, existing aPAKE protocols are either not round-optimal, computationally inefficient, or not proven secure.In this work, we present two aPAKE protocols that are 1 simultaneous round, UC-secure, have reasonable computational costs, and rely on mild cryptographic assumptions. Our first protocol is in the Random Oracle Model and the Algebraic Group Model (ROM+AGM), secure under the CDH assumption, and has computational cost comparable to the most efficient aPAKE protocols in the literature (1 simultaneous round or not). Our second protocol is in the ROM only and secure under the DDH assumption, and also has significantly lower computational cost than the only other UC-secure 1 simultaneous round aPAKE protocol. Interestingly, even our second protocol does not rely on the gap Diffie-Hellman assumption, which requires creative application of the forking lemma.
2026/245(PDF)
Last updated:  2026-02-13
A note on adversary running times
Foundations
Amit Sahai
Show abstract
Foundations
In this note, we consider a perspective on adversary running times that fixes the adversary's running time to be $2^\kappa$, and then asks: to achieve security against such an adversary, what running time in terms of $\kappa$ do honest parties need? This perspective gives rise to a new natural class of adversary running times that we call \emph{quasi-exponential} time adversaries.
2026/244(PDF)
Last updated:  2026-02-13
Revisit Unravelled Linearization with Erhart (quasi-)Polynomial
Attacks and cryptanalysis
Yansong Feng, Yiming Gao, Honggang Hu, Abderrahmane Nitaj, Yanbin Pan, and Mengce Zheng
Show abstract
Attacks and cryptanalysis
We study lattice constructions arising from Coppersmith’s method combined with the Unravelled Linearization technique introduced by Herrmann and May (ASIACRYPT~2009) for solving structured polynomial equations, which can yield improved asymptotic bounds. This work makes contributions in two aspects, theoretical and practical. Theoretical contribution: While Unravelled Linearization can lead to improved asymptotic bounds, computing these bounds is technically delicate. Previous works typically assume that the lattice dimension is a polynomial function of the scaling parameter $m$ and therefore rely on Lagrange interpolation to derive asymptotic bounds. We show that this assumption does not hold in general. In particular, the interpolation approach used in Herrmann and May (ASIACRYPT~2009) and in \texttt{cuso} (EUROCRYPT~2025) may yield incorrect results: in the former case, the exponent in the determinant may be a quasi-polynomial instead of a polynomial, while in the latter it becomes polynomial only beyond a certain threshold. To address this issue, we present a rigorous analysis based on Ehrhart theory for computing asymptotic bounds. As applications, we improve five cryptanalytic results, including attacks on linear congruential generators, quadratic generators, the subset-sum pseudorandom generator, and a recent partial-key attack on isogenies in the P\`oke setting. In the latter case, we reduce the required leakage of most significant bits from $85.29\%$ to $79.81\%$, and further to $79.08\%$ in certain special cases. Practical contribution: Asymptotic bounds alone are insufficient for practical attacks, since one must also determine an appropriate lattice dimension. To avoid trivially guessing the lattice dimension from small to large, we introduce an estimator, $\mathsf{DimOracle}$, based on Ehrhart theory. Our approach simultaneously estimates the lattice dimension and determinant via the associated Ehrhart (quasi-)polynomials. Thanks to Barvinok's algorithm, any fixed number of leading coefficients of these (quasi-)polynomials can be computed in polynomial time by reducing the problem to volume computations of faces of the Newton polytope. Using these coefficients, $\mathsf{DimOracle}$ provides accurate predictions of the required lattice size in practice. We implement $\mathsf{DimOracle}$ and demonstrate its effectiveness through experimental results.
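For readers unfamiliar with the tool named above, the Ehrhart polynomial of a lattice polytope counts lattice points in its integer dilations. The brute-force check below is a toy example, unrelated to the paper's specific lattices: the standard 2-simplex has Ehrhart polynomial $L(t) = \binom{t+2}{2}$, whereas more general polytopes give the quasi-polynomials discussed in the abstract.

    # Toy Ehrhart example: the t-th dilation of conv{(0,0),(1,0),(0,1)} contains
    # exactly (t+1)(t+2)/2 integer points, a polynomial in t.
    def lattice_points_in_dilated_simplex(t):
        return sum(1 for x in range(t + 1) for y in range(t + 1) if x + y <= t)

    for t in range(8):
        assert lattice_points_in_dilated_simplex(t) == (t + 1) * (t + 2) // 2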
2026/243(PDF)
Last updated:  2026-02-16
Towards Making Doubly-Efficient PIR Practical
Cryptographic protocols
Pan Xiao, Heng Zhang, Rending Ouyang, Cong Zhang, Jian Liu, Kui Ren, and Chun Chen
Show abstract
Cryptographic protocols
Doubly-efficient private information retrieval (DEPIR) enables sublinear per-query work (in the database size $N$) for both client and server, while requiring no client state. Despite its theoretical promise, single-server DEPIR exhibits a prohibitive concrete efficiency gap: for $N=2^{23}$, the state-of-the-art construction (Eurocrypt '25) requires a 733TB server state and over $2^{37}$ online RAM/disk reads, rendering it infeasible to execute. This paper advances single-server DEPIR towards practicality through a series of algorithmic innovations. Compared with the state-of-the-art, we achieve a 4 orders of magnitude reduction in server state and a 6 orders of magnitude reduction in query time. In particular, for a database of the same size ($N=2^{23}$), querying $5461$ elements in a single batch requires only 171GB of server state and $2^{24}$ online RAM/disk reads, yielding a 112s total query time and a 21ms amortized query time.
2026/242(PDF)
Last updated:  2026-02-13
Neo and SuperNeo: Post-quantum folding with pay-per-bit costs over small fields
Cryptographic protocols
Wilson Nguyen and Srinath Setty
Show abstract
Cryptographic protocols
We construct the first folding scheme that simultaneously achieves six desirable properties: plausible post-quantum security, pay-per-bit commitment costs, field-native arithmetic (the sum-check and norm checks run purely over a small field), support for general (non-SIMD) constraint systems, small-field support (e.g., Goldilocks), and low recursion overheads. No existing scheme satisfies all six: group-based schemes (e.g., HyperNova) lack post-quantum security and are tied to large elliptic-curve fields; lattice-based schemes (e.g., LatticeFold) require expensive ring arithmetic, lose pay-per-bit costs, and impose SIMD constraints; and hash-based schemes (e.g., Arc) incur prohibitively large verifier circuits.We present two lattice-based folding schemes for CCS—an NP-complete relation generalizing R1CS, Plonkish, and AIR—called Neo and SuperNeo. Neo satisfies five of the six properties but requires SIMD constraint systems; SuperNeo removes this restriction and satisfies all six. Both run a single invocation of the sum-check protocol over a small field extension and achieve pay-per-bit costs via new folding-friendly instantiations of Ajtai commitments under the Module-SIS assumption. At the core of our constructions are two new norm-preserving embeddings of field vectors into ring vectors that respect an evaluation homomorphism required for folding. We also introduce interactive reductions, a framework that generalizes reductions of knowledge and enables modular security proofs for composed lattice-based protocols.
2026/241(PDF)
Last updated:  2026-02-13
Algebraic Attack on Convolutional Neural Network with Max Pooling
Attacks and cryptanalysis
Zirui Chen, Shi Tang, Zhengchao Gao, Yongjia Su, Lingyue Qin, and Xiaoyang Dong
Show abstract
Attacks and cryptanalysis
Recovering the weights and biases of deep neural networks (DNNs) via black-box input-output queries - known as parameter extraction attacks - has been extensively studied for ReLU-based fully connected neural networks (FCNNs), but remains unexplored for Convolutional Neural Networks (CNNs) with max pooling function, a core architecture for computer vision and multimedia processing. The key challenge lies in CNN’s max pooling layer, which introduces an additional nonlinearity and hides ReLU critical points, rendering existing FCNN extraction methods inapplicable. To address this gap, we propose the first cryptanalytic extraction attack tailored for CNNs with the max pooling function. First, we establish an algebraic representation of CNNs, formally proving that CNNs are piecewise linear functions - enabling the extension of linearity-based extraction principles. We then identify two novel types of critical points in CNNs: (1) ReLU-Pooling Critical Points (RPCPs), where a ReLU neuron is at its zero-input critical point and its output is selected by max pooling; and (2) Pooling Switching Points (PSPs), where two neurons within a local receptive field yield identical maximum outputs, triggering a switch in the pooling selection. Leveraging these critical points, we design complementary extraction techniques: a pattern matching method for RPCPs to recover partial signatures and signs (exploiting the property that unselected pooling neurons have negative outputs), and an internal differential extraction attack for PSPs - inspired by cryptographic internal differential analysis - to recover high-accuracy signatures. Given that PSPs are far more abundant and efficient than RPCPs (verified by experiments), and RPCPs are capable of bias recovery, we integrate both methods: PSPs enable efficient signature extraction, while a single RPCP recovers the sign and bias. We evaluate our attack on multiple CNN architectures, including the LeNet-5 with modern architectures, trained on random data, MNIST, and CIFAR-10. Experimental results demonstrate that our approach achieves high extraction accuracy using polynomial query complexity and runtime, even for deep CNN layers. This work fills a research gap in CNN security.
2026/240(PDF)
Last updated:  2026-02-13
Do not Mix Models: Revisiting Generic Transforms for Committing Authenticated Encryption
Secret-key cryptography
Kazuhiko Minematsu and Akiko Inoue
Show abstract
Secret-key cryptography
Committing security for authenticated encryption (AE) captures the difficulty of constructing a distinct input tuple, including the key, that yields the same ciphertext. This notion is relatively new but has attracted significant attention due to its practical relevance. A promising direction is to design generic transforms that convert any AE scheme into a committing one.A common approach to generic transforms, initiated by the CTX transform (Chan and Rogaway, ESORICS 2022), is to add a hash function that uses part of the AE input/output, assuming the hash is ideal, i.e., a random oracle. Because the baseline AE is assumed to be secure in the standard model, this approach inherently mixes standard-model and idealized-model assumptions. We revisit this approach. We show that a number of state-of-the-art generic transforms relying on a mixed model (Chen and Karadžić, Eurocrypt 2025, and Bhattacharjee et al., ePrint 2024), proposed after CTX, are vulnerable once the hash function is instantiated, by presenting practical attacks against them. Our attacks exploit the fact that the baseline AE may depend on the instantiation of the generic transform, whereas the opposite is not true for the principle of the generic transform. In most cases, the attacks are effective with any instantiation, and the baseline AEs in the attacks have a natural structure, such as Enc-then-MAC with a counter mode encryption. We also demonstrate how to rectify these broken transforms with minimal algorithmic modifications, relying solely on the standard-model assumptions.
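The hash-based approach revisited above can be pictured schematically as follows: the baseline AE's tag $T$ is replaced by a commitment $T^* = H(K, N, A, T)$, in the spirit of the CTX transform. The sketch below is an editorial illustration with placeholder interfaces (`ae_encrypt` and the SHA-256 call are assumptions, not the paper's definitions); the paper's point is precisely that instantiating the hash and the baseline AE must be done with care.

    import hashlib

    # Schematic CTX-style transform: run the baseline AE, then replace its tag T with
    # T* = H(K, N, A, T). A deployed scheme needs an injective encoding of the hash input.
    def ctx_like_encrypt(ae_encrypt, key: bytes, nonce: bytes, ad: bytes, msg: bytes):
        core, tag = ae_encrypt(key, nonce, ad, msg)   # baseline AE ciphertext core and tag
        tag_star = hashlib.sha256(key + nonce + ad + tag).digest()
        return core, tag_star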
2026/239(PDF)
Last updated:  2026-02-14
Optimal Best-of-Both-Worlds Consensus
Cryptographic protocols
Fatima Elsheimy, Simon Holmgaard Kamp, Julian Loss, and Jesper Buus Nielsen
Consensus protocols face a fundamental trade-off: synchrony enables higher fault tolerance, whereas asynchrony provides responsiveness (latency proportional to actual network delays) and security under arbitrary delays. In particular, synchronous consensus achieves optimal resilience up to $t<n/2$ corruptions but relies on worst-case delay bounds that affect both latency and security, whereas asynchronous consensus tolerates only $t<n/3$ corruptions. Two approaches have sought orthogonal compromises between the two. Optimistically responsive protocols [Pass and Shi, EC'18] achieve resilience beyond the feasibility limits of asynchronous security, and provide responsiveness under optimistic conditions. Network-agnostic protocols [Blum–Katz–Loss, TCC'19] are secure in both synchronous and asynchronous networks but are non-responsive due to relying on conservative worst-case waiting and, to date, do not achieve constant time in asynchrony. We reconcile these approaches by constructing the first Byzantine agreement (BA) and validated Byzantine agreement (VBA) protocols that achieve network-agnostic security and optimistic responsiveness with optimal resilience tradeoffs. Concretely, for thresholds $t_s > t_r > t_a$ satisfying $n > 2t_s + t_a$ and $n > t_s + 2t_r$, our protocols satisfy: (1) Both protocols are synchronously secure for up to $t_s$ corruptions and asynchronously secure for up to $t_a$ corruptions. (2) Our VBA protocol is responsive when the number of corruptions is at most $t_r$, while our BA protocol is responsive if additionally at least $t_s+1$ honest parties begin with the same input. (3) Both protocols run in expected constant time with quadratic communication complexity, regardless of network conditions. We prove matching impossibility results showing that the resilience tradeoffs are optimal, and our protocols additionally achieve optimal expected time and communication complexity.
2026/238(PDF)
Last updated:  2026-02-12
PAC-Private Databases
Applications
Mayuri Sridhar, Michael A. Noguera, Chaitanyasuma Jain, Kevin Kristensen, Srinivas Devadas, Hanshen Xiao, and Xiangyao Yu
As data collection and sharing become more prevalent, quantifying leakage about released data is an increasingly crucial privacy issue. Prior work in private database analytics demonstrates how to provide strong theoretical privacy guarantees through differential privacy (DP). However, these techniques are often limited to specific queries; to the best of our knowledge, among the 19 queries in the TPC-H benchmark which do not directly leak customer information, prior work in DP can handle at most 9 queries without additional analyst effort. In this work, we apply the recently proposed Probably Approximately Correct (PAC) Privacy mechanism in order to provide a black-box technique to privatize general SQL queries against membership inference attacks. Naively applying PAC Privacy would allow us to privatize any individual query. However, databases are an interactive process: a user queries the database, views the response, and then chooses their next query. Prior work in PAC Privacy cannot provide any theoretical guarantees in this setting; instead, users would be required to provide all the queries a priori, which is a fundamental usability limitation. We construct the first algorithm to allow users to query the database adaptively and prove that algorithmic modifications via independent randomness provide automatic privatization guarantees. Our privatization layer, PAC-DB, does not require any human analysis in order to privately return a response for a general SQL query. PAC-DB is compatible with any database management system and does not require a trusted data curator. We provide an open-source implementation, where we privatize all of the 19 queries under consideration from the TPC-H benchmark with customers as our privacy concern. We provide both relative errors and initial performance estimates.
2026/237(PDF)
Last updated:  2026-02-12
Exploiting SNOVA’s Structure in the Wedge Product Attack
Attacks and cryptanalysis
Maxime Bros, Thai Hung Le, Jacob Lichtinger, Brice Minaud, Ray Perlner, Daniel Smith-Tone, and Cristian Valenzuela
Post-quantum cryptography (PQC) aims to develop cryptographic schemes secure against quantum adversaries. One promising class of digital signature schemes is based on multivariate quadratic equations, where Unbalanced Oil and Vinegar (UOV) is a leading example. UOV has been extensively studied since its introduction in 1999, and it has remained secure. It offers very small signatures but suffers from very large public keys; to remedy this, some schemes (such as MAYO, QR-UOV, and SNOVA) add structure to reduce the size of the public key. These four multivariate schemes are candidates that made it to the Second Round of the National Institute of Standards and Technology PQC Additional Call for Post-Quantum Signature Schemes. In this work, we revisit a recently proposed algebraic attack by Ran on UOV and extend this approach to a new attack on SNOVA by exploiting its block-ring structure. In addition to improving the attack complexity, our exploitation of the block-ring structure rules out the spurious solutions that prevent the generic version of Ran's attack from applying to SNOVA. Our attack breaks 6 of the 11 currently proposed SNOVA parameter sets and improves on the previous best result for an additional 2 sets; it is significantly more effective against larger $\ell$ than several earlier attacks. For example, for SNOVA-V with parameters $(v,o,\ell) = (29,6,5)$, the estimated security drops to $181$ bits, compared to $310$ bits for the previous best known attack.
2026/236(PDF)
Last updated:  2026-02-18
Sharing a Secret Anamorphically: Secret Shares Dressed Up as Signatures
Public-key cryptography
Gennaro Avitabile, Vincenzo Botta, and Daniele Friolo
Anamorphic Encryption (Persiano, Phan and Yung, Eurocrypt '22) allows private communication in a challenging setting where encryption is severely controlled by a central authority (henceforth the dictator) who can demand that users surrender their secret keys. Anamorphic Signatures (AS) (Kutylowski, Persiano, Phan, Yung, Zawada, Crypto '23) face the even more restrictive world where only authentication is allowed but users still want to send secret messages despite the repressive control of the dictator holding their signing key. Several constructions have been proposed so far, but they all come with some limitations. We propose a new flexible setting where digital signatures are used to covertly embed secret shares which $t$ out of $N$ designated receivers can combine to recover a covert secret. We formalize this new primitive, called Anamorphic Secret Sharing Signatures (ASSS), together with new robustness, forward secrecy, and private unforgeability notions, and we give a construction for Schnorr-like signatures overcoming the limitations of previous constructions. ASSS additionally imply (regular single-receiver) AS, but two signatures (instead of one) are needed to recover the covert message.
2026/235(PDF)
Last updated:  2026-02-12
Optimized Implementations of Keccak, Kyber, and Dilithium on the MSP430 Microcontroller
Implementation
DongHyun Shin, YoungBeom Kim, Ayesha Khalid, Máire O'Neill, and Seog Chung Seo
Post-quantum cryptography (PQC) typically requires more memory and computational power than conventional public-key cryptography. Until now, most active research in PQC optimization for embedded devices has focused on 32-bit and 64-bit ARM architectures, specifically Cortex-M0/M3/M4 and ARMv8. To enable a smooth migration of PQC algorithms in Internet of Things environments, optimization research is also required for devices with lower computational capabilities. To address this gap, we present optimized implementation methodologies for CRYSTALS–Kyber and CRYSTALS–Dilithium, the National Institute of Standards and Technology (NIST) standardized key-encapsulation mechanism (KEM) and digital signature algorithm (DSA), on a widely used 16-bit MSP430 microcontroller. We review the current state-of-the-art implementation methodologies for Keccak, Kyber, and Dilithium, and carefully redesign them to suit the MSP430 architecture. For Number-Theoretic Transform (NTT)-based polynomial multiplication, we redesign optimal modular arithmetic, layer merging, and point-wise multiplication by taking full advantage of the characteristics of the MSP430. As a result, compared with the reference implementations in C, the optimized 16-bit NTT achieves performance improvements of 134%, 249%, and 210% for NTT, inverse NTT, and point-wise multiplication, respectively, while the optimized 32-bit NTT achieves improvements of 91%, 96%, and 56% for the same operations. Furthermore, for Keccak, we propose twisting and zig-zag techniques tailored to the MSP430, aimed at optimizing memory accesses. As a result, compared with the reference implementation in C, the optimized Keccak achieves a performance improvement of 57%. Moreover, compared with the reference implementations in C, our Kyber implementation achieves performance improvements of 46.1%–51.3%, 45.6%–60.0%, and 46.2%–62.3% for key generation (KeyGen), encapsulation (Encaps), and decapsulation (Decaps), respectively, and our Dilithium implementation achieves improvements of 44.5%–48.3%, 57.5%–65.0%, and 46.1%–50.0% for key generation (KeyGen), signing (Sign), and verification (Verify), respectively.
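For reference, the pipeline being optimized (forward NTT, point-wise multiplication, inverse NTT) can be prototyped in a few lines of plain Python over a toy NTT-friendly prime; the parameters below (q = 257, n = 16, cyclic convolution) are illustrative only, and none of the MSP430-specific techniques appear here.

# Plain-Python reference for the NTT pipeline discussed above (forward NTT,
# point-wise multiplication, inverse NTT), with a toy NTT-friendly prime --
# not the Kyber/Dilithium parameters, and no MSP430-specific optimization.
import random

q, n = 257, 16
w = pow(3, (q - 1) // n, q)           # 3 is a primitive root mod 257, so w has order n

def ntt(a, root):                     # O(n^2) evaluation form, enough for a reference
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q for i in range(n)]

def intt(a_hat):
    n_inv = pow(n, -1, q)
    a = ntt(a_hat, pow(w, -1, q))     # inverse transform uses the inverse root
    return [(x * n_inv) % q for x in a]

def cyclic_mul_schoolbook(a, b):      # reference: multiplication modulo x^n - 1
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
via_ntt = intt([x * y % q for x, y in zip(ntt(a, w), ntt(b, w))])
assert via_ntt == cyclic_mul_schoolbook(a, b)
print("NTT-based product matches schoolbook cyclic convolution")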
2026/234(PDF)
Last updated:  2026-02-12
When Trying to Catch Cheaters Breaks the MPC (Full Version)
Cryptographic protocols
Andreas Brüggemann and Thomas Schneider
The paper is currently under embargo as it identifies vulnerabilities that have yet to be fixed. The paper will be released on March 15, 2026.
2026/233(PDF)
Last updated:  2026-02-12
FHE for SIMD Arithmetic Logic Units with Amortized $O(1)$ Bootstrapping per Ciphertext
Public-key cryptography
Mingyu Gao and Hongren Zheng
The evaluation of both arithmetic and logic operations on machine words (e.g., 64-bit registers) in homomorphic settings is an attractive problem due to its close alignment with real-world programming models. Existing FHE schemes require *iterative* bootstrapping operations, with the iteration number scaling with the machine word bit-width $n$. Prior approaches incur either quadratic cost for multiplication (DM/CGGI), linear cost for logic operations (REFHE [Eurocrypt'26] and Kim25c [eprint/2025/1440]), or logarithmic cost for multiplication (CPL [eprint/2025/1740]). We mitigate this inherent barrier by amortizing $O(n)$ iterations across $O(n)$ ciphertexts for scenarios with sufficiently many machine words. We introduce a triangle encoding such that, in its conversion to boolean mode, each iteration on a combined ciphertext processes multiple ciphertexts, resulting in an amortized constant cost per ciphertext. Additionally, the triangle encoding supports leveled arithmetic, and its refreshing requires only two CKKS bootstrapping operations. We reuse discrete CKKS as our boolean mode, which supports lightweight conversion to the triangle encoding. We also introduce a method to induce SIMD structure for the triangle encoding from the RLWE ring. Even when we bootstrap after each multiplication, we achieve 3$\times$ to 3.8$\times$ higher throughput than the state-of-the-art CPL scheme, which requires 3 to 5 bootstrapping operations (approximately $\log (n/4)$ with $n$ ranging from 64 to 256). We further estimate 15$\times$ and 491$\times$ higher throughput for multiplication and bitwise operations, respectively, compared with the REFHE scheme. We make our OpenFHE-based prototype open source for ease of reference.
2026/232(PDF)
Last updated:  2026-02-12
Collision Attacks on SHA-256 up to 37 Steps with Improved Trail Search
Attacks and cryptanalysis
Zhuolong Zhang, Muzhou Li, Lei Gao, and Meiqin Wang
As a NIST-standardized hash function, SHA-256 has been extensively analyzed in the context of collision attacks. Although Mendel et al. introduced the first 31-step collision attack at EUROCRYPT 2013, the number of attacked steps has not been increased for more than a decade. After a thorough review of existing attacks, we identify that progressing beyond 31 steps necessitates new local collisions in the message expansion. To date, such local collisions have been constructed manually, which is both time-consuming and technically challenging. Moreover, even the latest automated models for searching signed differential trails fail to accurately account for the bit conditions imposed by Boolean functions, potentially overlooking high-quality trails. In this paper, we enhance the existing framework by overcoming these two limitations. First, following the main idea of EUROCRYPT 2013, we give an automated tool that efficiently identifies high-quality local collisions. Second, we introduce two new models of Boolean functions that accurately capture bit conditions. Leveraging these improvements, we finally reach the first 37-step collision attack on SHA-256, extending the number of attacked steps by six; this is the first such advance in 12 years.
2026/231(PDF)
Last updated:  2026-02-11
RAGtime-PIANO: Efficient Secure Remote RAG
Cryptographic protocols
Antonia Januszewicz, Jiachen Zhao, Meng Jiang, and Taeho Jung
Retrieval Augmented Generation (RAG) can enhance the performance of Large Language Models (LLMs) when used in conjunction with a comprehensive knowledge database. However, the space required to store the necessary information can be taxing when RAG is used locally. As such, the concept of RAG-as-a-Service (RaaS) has emerged, in which third-party servers can be used to process client queries via an external database. Unfortunately, using such a service would expose the client's query to a third party, making the product unsuitable for processing sensitive queries. Our scheme ensures that, throughout the entire RAG processing, neither the query, the computed distances, nor any retrieval information is known to the database-hosting server. Using a two-pronged approach, we employ Fully Homomorphic Encryption (FHE) and Private Information Retrieval (PIR) to ensure complete security during RAG processing. FHE is used to maintain privacy during initial query processing, during which the query embedding is encrypted and sent to the server for k-means centroid scoring to obtain a similarity ranking. Then, a series of PIR queries is used to privately retrieve the centroid-associated embeddings and the top-ranked documents. The result, RAGtime-PIANO, is a first-of-its-kind, lightweight, fully secure protocol that enables efficient secure RAG.
2026/230(PDF)
Last updated:  2026-02-11
Rule Variant Restrictions for the Tamarin Prover
Cryptographic protocols
Felix Linker
We introduce an optimization to the Tamarin prover that reduces its search space. The optimization applies to protocol models that use equational theories with cancellative operators, for example, when modelling Diffie-Hellman groups or bilinear pairings. We prove the optimization's soundness and evaluate its performance.
2026/229(PDF)
Last updated:  2026-02-17
ANIMAGUS: A Provably Secure Accordion Mode of Operation
Secret-key cryptography
Gülnihal Öztürk, Onur Koçak, and Oğuz Yayla
Block ciphers are designed to operate on fixed-length blocks of bits. A block cipher mode of operation is used in order to encrypt variable-length input. These modes process multiple data blocks and ensure information security through the application of block cipher algorithms. While there are several NIST-approved block cipher modes, they exhibit certain inherent limitations and security vulnerabilities. Hence, the need for a novel mode has emerged. NIST aims to design an accordion mode, characterized as a tweakable variable-input-length strong pseudorandom permutation, offering enhanced security and performance compared to existing modes. Most recently, NIST proposed to develop accordion modes from different variants of the HCTR2 algorithm. In this article, we therefore propose a new accordion-mode design based on the HCTR2 structure. The proposed design accommodates variable-length inputs while ensuring strong pseudorandom permutation security. It can also be instantiated with any underlying block cipher and supports variable-length tweaks. This ultimately provides a more versatile framework than numerous existing schemes in the literature.
2026/228(PDF)
Last updated:  2026-02-11
SCA-MQDSA: Side-Channel Analysis of Multivariate Digital Signature Implementations
Attacks and cryptanalysis
N.K. Vishwaajith, Anindya Ganguly, Debranjan Pal, Trevor Yap, Puja Mondal, Suparna Kundu, Sayandeep Saha, Shivam Bhasin, Ingrid Verbauwhede, and Angshuman Karmakar
The rapid progress of Internet-of-Things (IoT) systems and network protocols has strengthened the demand for digital signature schemes with compact signatures and low computational overhead. However, standardized post-quantum signature schemes, such as ML-DSA, SLH-DSA, and Falcon, incur relatively large signature sizes, which limit their practicality on resource-constrained devices (RCD). To address this challenge, NIST issued an additional call in its post-quantum digital signature standardization process, which renewed interest in alternative constructions. Signature schemes based on multivariate quadratic equations have emerged as promising candidates due to their comparatively small signature sizes and efficient verification. At the same time, these schemes are often deployed on platforms with limited physical protections, making side-channel security a critical concern. Four multivariate signature schemes, namely UOV, MAYO, QR-UOV, and SNOVA, reached the second round of the NIST post-quantum signature standardization process for further evaluation. In this work, we analyze implementations of multivariate digital signature algorithms submitted to the NIST process, with a particular focus on the MAYO signature scheme. We present an \textit{inexpensive} side-channel attack targeting nibble-sliced implementations of MAYO. Our attack operates under a minimal threat model: it does not require physical possession of the target device and succeeds using leakage obtained from a single signing execution. We perform our attacks on a ChipWhisperer-Lite platform with a 32-bit STM32F3 ARM Cortex-M4 target mounted on a CW308 UFO board, and we provide the corresponding reference code along with target traces.
2026/227(PDF)
Last updated:  2026-02-15
Analysis and Vulnerabilities in zkLogin
Applications
Sofia Celi, Hamed Haddadi, and Kyle Den Hartog
Zero-Knowledge Authorization (ZKA) systems allow users to prove possession of externally issued credentials (e.g., JSON Web Tokens) without revealing the credentials in full via the usage of Zero-Knowledge Proofs (ZKP). They are increasingly promoted as privacy-preserving and decentralized alternatives for authorization, and are already deployed in practice, with proposals for higher-stakes settings such as government access-control frameworks. In this work, we show that the security and privacy of zkLogin—the most widely deployed ZKA system—cannot be reduced to the underlying ZKP alone. Instead, zkLogin critically depends on non-cryptographic assumptions about JWT/JSON parsing, issuer trust policy, architectural binding, and execution-environment integrity, none of which are specified or enforced as protocol-level properties. Via an analysis of the public documentation and source code, together with surveys of wallets and public endpoints, we identify three broad classes of vulnerabilities in zkLogin: (i) permissive, non-canonical claim extraction that admits malformed JWTs; (ii) transformation of short-lived authentication artifacts into durable authorization credentials without enforcing their issuance context (issuer, audience, subject and temporal validity binding), which enables cross-application impersonation and misuse, particularly in browser-based deployments that expose the system's material; and (iii) systemic centralization and privacy risks arising from reliance on a small set of issuers and outsourced proving infrastructure, including disclosure of user identity attributes to third-party services without consent. We note that none of the vulnerabilities identified are cryptographic in nature. Overall, our findings demonstrate that zkLogin inherits, and in some cases amplifies, fragilities of web-based authentication ecosystems, and that the security of the system cannot be reduced to the ZKPs alone.
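The first class of issues (permissive, non-canonical claim extraction) is easy to illustrate independently of zkLogin's own code: a JWT-style payload with a duplicated claim is accepted by common JSON parsers, and which value "wins" is parser-dependent. The snippet below (illustrative only, with made-up claim values) shows Python's json module silently keeping the last occurrence.

# Illustration of the non-canonical parsing hazard discussed above (not zkLogin code):
# a JWT payload with a duplicated claim is accepted by most JSON parsers, but which
# value wins is parser-dependent -- Python's json module silently keeps the last one.
import base64, json

payload = b'{"iss": "https://accounts.example.com", "sub": "alice", "sub": "mallory"}'
token_part = base64.urlsafe_b64encode(payload).rstrip(b"=")      # JWT-style base64url segment

decoded = base64.urlsafe_b64decode(token_part + b"=" * (-len(token_part) % 4))
claims = json.loads(decoded)
print(claims["sub"])        # -> "mallory": the duplicate, last-seen value is used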
2026/226(PDF)
Last updated:  2026-02-12
Round-Optimal Identity-Based Blind Signature from Module Lattice Assumptions
Public-key cryptography
Arup Mazumder, Mrittika Nandi, and Shashank Singh
This work presents a round-optimal identity-based blind signature scheme based on module lattices. Our construction extends Fischlin's two-round blind signature framework [CRYPTO'06] to the identity-based setting. The construction uses the GPV signature scheme based on Micciancio and Peikert's G-trapdoor techniques and NIZK proofs [CRYPTO'22] in the random oracle model. The scheme is secure under the MLWE and MSIS assumptions. Optimized parameters targeting $128$-bit security are also provided. To the best of our knowledge, this is the first round-optimal identity-based blind signature scheme whose security relies on module lattice problems.
2026/225(PDF)
Last updated:  2026-02-11
Solving SIS in any norm via Gaussian sampling
Attacks and cryptanalysis
Amaury Pouly and Yixin Shen
The short integer solution (SIS) problem is an important problem in lattice-based cryptography. In this paper, we construct a natural and simple algorithm that allows us to solve the problem for any norm in the case where the norm bound $\ell$ is smaller than half the modulus $q$. The algorithm consists of using a discrete Gaussian sampler on the SIS $q$-ary lattice to sample lattice vectors, and requires estimating the probability that the sampled vector is non-zero and falls into a ball of radius $\ell$ in the given norm. For the latter, we improve upon previous analyses of random $q$-ary lattices by obtaining tight bounds on the expected value and variance of the Gaussian mass of the entire lattice and of an $\ell_p$-norm ball, for any $p\in(0,\infty]$. These bounds require new technical results on the discrete Gaussian, but also rely on a conjecture which we have extensively verified. Aside from the conjecture, the remaining part of the algorithm is provably correct. When instantiated with a Markov chain Monte Carlo (MCMC)-based discrete Gaussian sampler, the complexity of the algorithm can be estimated precisely. Although our algorithm does not break Dilithium, it is at least 50 bits faster than the recent algorithm of Ducas, Engelberts and Loyer at Crypto 2025 for all security levels.
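A brute-force toy check (ours, not the paper's sampler or bounds) of the key quantity: for a small random q-ary lattice, enumerate points in a box and compare the discrete-Gaussian mass of the nonzero points inside an $\ell_p$ ball of radius $\ell$ against the mass of the whole lattice. All parameters below are illustrative.

# Toy brute-force illustration (not the paper's algorithm): for a small random
# q-ary lattice L = {x in Z^m : A x = 0 mod q}, estimate the probability that a
# discrete-Gaussian-distributed lattice point is non-zero and lies in an
# l_p ball of radius ell -- the acceptance probability the sampling approach needs.
import itertools, math, random

q, n_rows, m = 7, 2, 4          # toy parameters only
s, ell, p = 3.0, 4.0, 2         # Gaussian parameter, ball radius, l_p norm
box = 10                        # enumerate lattice points with coordinates in [-box, box]

random.seed(1)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n_rows)]

def in_lattice(x):
    return all(sum(a * xi for a, xi in zip(row, x)) % q == 0 for row in A)

def gauss_weight(x):
    return math.exp(-math.pi * sum(xi * xi for xi in x) / (s * s))

total, hit = 0.0, 0.0
for x in itertools.product(range(-box, box + 1), repeat=m):
    if not in_lattice(x):
        continue
    w = gauss_weight(x)
    total += w
    if any(x) and sum(abs(xi) ** p for xi in x) ** (1 / p) <= ell:
        hit += w

print(f"estimated Pr[nonzero and ||x||_{p} <= {ell}] ~= {hit / total:.4f}")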
2026/224(PDF)
Last updated:  2026-02-17
Usage of Mixed Integer Linear Programming in Cryptanalysis of Block Ciphers
Attacks and cryptanalysis
Halil İbrahim Kaplan
This paper presents a comprehensive approach to the cryptanalysis of block ciphers using Mixed Integer Linear Programming (MILP). By formulating the cipher's components, including substitution boxes, linear layers, and key schedules, as systems of linear inequalities, MILP enables the automated discovery of optimal cryptanalytic characteristics. We demonstrate the MILP modelling methodology through the analysis of the ITUbee cipher under three attack models: differential, linear, and related-key differential cryptanalysis. This work highlights the potential of MILP as a robust, generalizable tool for evaluating cipher security and guiding the design of resilient cryptographic primitives.
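A MILP model of an S-box is typically derived from its difference distribution table (DDT): the feasible (input difference, output difference) transitions are exactly what the linear inequalities must allow. The snippet below (an illustrative preprocessing step only, not the paper's models) computes the DDT of a 4-bit S-box, using PRESENT's S-box purely as a familiar example.

# MILP models of an S-box are typically built from its difference distribution
# table (DDT): transitions with a nonzero DDT entry are the points the model must
# allow; all others are cut off by linear inequalities.  The S-box below is
# PRESENT's, chosen only as a familiar 4-bit example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

ddt = [[0] * 16 for _ in range(16)]
for din in range(16):
    for x in range(16):
        dout = SBOX[x] ^ SBOX[x ^ din]
        ddt[din][dout] += 1

feasible = [(din, dout) for din in range(16) for dout in range(16) if ddt[din][dout] > 0]
print(f"{len(feasible)} feasible differential transitions out of 256")
best = max((ddt[d][o], d, o) for d in range(1, 16) for o in range(16))
print(f"best nontrivial transition: {best[1]:x} -> {best[2]:x} with probability {best[0]}/16")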
2026/223(PDF)
Last updated:  2026-02-10
Nested MuSig2
Public-key cryptography
Nadav Kohen
Bitcoin Improvement Proposal 327 specifies a variant of the MuSig2 multi-signature protocol that is becoming widely adopted in Bitcoin applications. This protocol enables multiple participants to collaboratively compute (BIP 340) Schnorr signatures for a single aggregate public key efficiently, while preventing external parties from distinguishing whether multiple signers were involved. It has been widely proposed that it should be secure to allow MuSig2 participant keys to themselves be "nested" MuSig2-aggregated keys. No security argument has previously been presented for this practice, though various applications have been proposed that assume the security of such an operation.In this work, we propose NestedMuSig2, a recursive variant of MuSig2 that enables a tree of nested cosigners to privately generate aggregate Schnorr signatures while maintaining all of the efficiency and security benefits of MuSig2, including non-interactive public key aggregation. Nested signers in this scheme cannot distinguish between cosigners that are using further nesting and those that are not. In particular, this means that NestedMuSig2 is compatible with all existing protocols that use MuSig2. We reduce the security of NestedMuSig2 to the AOMDL assumption in the random oracle model. Similarly, we reduce the security of a more efficient and compact variant to the AOMDL assumption in the random oracle model used in conjunction with the algebraic group model.
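The key-aggregation step that makes nesting plausible can be sketched in a few lines. The snippet below is a toy, non-cryptographic illustration only: a small prime-order subgroup stands in for secp256k1, a plain hash stands in for BIP 340 tagged hashes, and no nonce or signing logic is shown. Each coefficient is a hash of the key list and the key, the aggregate key is the coefficient-weighted product, and an aggregate key can itself join an outer aggregation as an ordinary participant key.

# Toy sketch of the nested key-aggregation shape (NOT BIP 327 / secp256k1):
# coefficients a_i = H(L, X_i), aggregate key X~ = prod X_i^{a_i}, and an
# aggregate key can itself act as one participant key of an outer group.
import hashlib

p, q, g = 227, 113, 4                      # toy group: order-113 subgroup of Z_227^*

def H(*parts):                             # hash to a coefficient in Z_q
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def agg(pubkeys):
    L = sorted(pubkeys)
    coeffs = [H(L, X) for X in pubkeys]
    X_agg = 1
    for X, a in zip(pubkeys, coeffs):
        X_agg = (X_agg * pow(X, a, p)) % p
    return X_agg, coeffs

# inner group of two signers, whose aggregate key joins an outer group with a third key
x1, x2, x3 = 17, 42, 99                    # toy secret keys
X1, X2, X3 = pow(g, x1, p), pow(g, x2, p), pow(g, x3, p)

X_inner, (a1, a2) = agg([X1, X2])
X_outer, (b_in, b3) = agg([X_inner, X3])

# the nested signers can jointly use the secret  b_in*(a1*x1 + a2*x2) + b3*x3
eff = (b_in * (a1 * x1 + a2 * x2) + b3 * x3) % q
assert X_outer == pow(g, eff, p)
print("outer aggregate key matches g^(effective nested secret)")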
2026/222(PDF)
Last updated:  2026-02-10
ITSAKE: an unconditionally secure authenticated key establishment protocol
Cryptographic protocols
Pierre-Alain Jacqmin and Jean Liénardy
We introduce $\textrm{ITSAKE}$, a 2-message Information-Theoretic Secure Authenticated Key Establishment protocol. That is, $\textrm{ITSAKE}$ is a protocol to establish keys between two parties from a pre-shared secret. Besides correctness and key indistinguishability, it offers entity authentication and perfect forward secrecy. Moreover, synchronization problems are avoided because $\textrm{ITSAKE}$ satisfies the weak synchronization robustness property. The main advantage of $\textrm{ITSAKE}$ is that all these security properties are unconditionally proved, i.e., they do not rely on unproven mathematical assumptions or on limitations on the power of the adversary. We provide complete and detailed security proofs in a well-defined multi-party security model for symmetric-key authenticated key establishment protocols admitting concurrent runs, in which the adversary has complete control over the communication channel. The main drawback of $\textrm{ITSAKE}$ is the length of the pre-shared keys that the parties need to hold. To give an example, we present an instantiation of the protocol for which the master key is just slightly longer than the total keying material that can be established. At this cost, one obtains fresher keys, authenticity of the partner, information-theoretic security, and avoids synchronization problems. $\textrm{ITSAKE}$ is meant to be used to protect highly confidential information, for which trust in unproven assumptions or in limitations on the adversary is undesired. Moreover, as memory devices are typically much less expensive than dedicated quantum apparatus, and as physical key exchanges performed occasionally may be more convenient than maintaining such apparatus, $\textrm{ITSAKE}$ is a practical and cost-effective alternative to Quantum Key Distribution (QKD) in many of its potential use-cases, while offering the same security guarantees. Thus, $\textrm{ITSAKE}$ challenges the necessity of QKD in most settings where these guarantees are the main objective.
2026/221(PDF)
Last updated:  2026-02-10
Spinel: A Post-Quantum Signature Scheme Based on $\mathrm{SL}_n(\mathbb{F}_p)$ Hashing
Cryptographic protocols
Asmaa Cherkaoui, Faraz Heravi, Delaram Kahrobaei, and Siamak F. Shahandashti
The advent of quantum computation compels the cryptographic community to design digital signature schemes whose security extends beyond the classical hardness assumptions. In this work, we introduce Spinel, a post-quantum digital signature scheme that combines the proven security of SPHINCS+ (CCS 2019) with a new family of algebraic hash functions (Adv. Math. Commun. 2025) derived from the Tillich-Zémor paradigm (Eurocrypt 2008) with security rooted in the hardness of navigating expander graphs over $\mathrm{SL}_n(\mathbb{F}_p)$, a problem believed to be hard even for quantum adversaries. We first provide empirical evidence of the security of this hash function, complementing the original theoretical analysis. We then show how the hash function can be integrated within the SPHINCS+ framework to give a secure signature scheme. We then model and analyze the security degradation of the proposed scheme, which informs the parameter selection we discuss next. Finally, we provide an implementation of the hash function and the proposed signature scheme Spinel as well as detailed empirical results for the performance of Spinel showing its feasibility in practice. Our approach lays the foundations for the design of algebraic hash-based signature schemes, expanding the toolkit of post-quantum cryptography.
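The Tillich-Zémor paradigm referenced above hashes a bit string as a product of fixed matrix generators. The sketch below shows only that general Cayley-hash idea over $\mathrm{SL}_2(\mathbb{F}_p)$ with textbook generators; the paper's hash functions live in $\mathrm{SL}_n(\mathbb{F}_p)$ with their own generators and parameters, which this snippet does not reproduce.

# Sketch of the Tillich-Zemor / Cayley-hash paradigm referenced above: each input
# bit selects one of two fixed generators of SL_2(F_p) and the hash is the matrix
# product.  Generators and p are textbook choices, not the paper's parameters.
p = (1 << 127) - 1                          # a Mersenne prime, picked only for the demo

A = ((1, 1), (0, 1))                        # classic generators of SL_2(Z), reduced mod p
B = ((1, 0), (1, 1))

def matmul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def cayley_hash(bits):
    H = ((1, 0), (0, 1))                    # identity
    for b in bits:
        H = matmul(H, A if b == 0 else B)
    return H

msg = bytes.fromhex("deadbeef")
bits = [(byte >> i) & 1 for byte in msg for i in range(8)]
digest = cayley_hash(bits)
print(digest)
# determinant 1 confirms the digest stays in SL_2(F_p)
assert (digest[0][0] * digest[1][1] - digest[0][1] * digest[1][0]) % p == 1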
2026/220(PDF)
Last updated:  2026-02-10
Optimizing Differential Privacy in Federated Analytics under Known Input Distributions
Foundations
Ferran Alborch, Andreas Athanasiou, and Pascal Reisert
Differential privacy (DP) is one of the most efficient tools for protecting the privacy of individual data holders under computation. This property guarantees that the computation outputs for every pair of adjacent input sets are statistically indistinguishable with respect to a given parameter ε, independently of how likely specific inputs are to occur. While the distribution of input sets is generally unknown, in some use cases (approximate) information about it might be available. If the latter is the case, two adjacent inputs of one individual are sometimes already obfuscated by other inputs and the computation itself (i.e., without any additional noise). For example, the sum of n independent and identically distributed uniformly random bits is approximately n/2, and both values for the first bit remain (almost) equally likely for large n. Based on this observation, we present a new DP mechanism that uses an estimate of the input distribution to reduce the noise addition (compared to standard DP) and hence improves the accuracy of the output. We first explore this idea in the central model, where a single central party collects all data. Then, we provide a new technique (possibly of independent interest) that allows multiple entities to jointly generate reduced noise, using the property of infinite divisibility. This allows each party to individually add noise to their respective inputs, e.g., in Federated Analytics applications. We apply our theoretical results, both for the single- and multi-party setups, to perform data analysis over human resources data from different subsidiaries within a corporate group. Our benchmarks show that our new DP mechanism provides more accurate outputs while retaining the same privacy level as state-of-the-art DP approaches using the geometric mechanism.
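The motivating example in the abstract can be checked directly: for n i.i.d. uniform bits, Bayes' rule gives Pr[x1 = 1 | sum = s] = s/n, so releasing a typical sum near n/2 leaves the two adjacent values of any single bit nearly equally likely. The snippet below is only this worked check, not the paper's mechanism.

# Worked check of the motivating example above: for n i.i.d. uniform bits, the
# posterior of the first bit given the released sum s is s/n, so an output near
# n/2 already hides the two adjacent values of that bit almost perfectly.
from math import comb

def posterior_first_bit(n, s):
    # Pr[x1 = 1 | sum = s] by Bayes, with a uniform prior on each bit
    num = comb(n - 1, s - 1)          # ways the remaining bits sum to s-1
    den = comb(n - 1, s - 1) + comb(n - 1, s)
    return num / den                  # simplifies to s / n

n = 1000
for s in (480, 500, 520):
    print(f"n={n}, sum={s}: Pr[x1=1 | sum] = {posterior_first_bit(n, s):.3f}")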
2026/219(PDF)
Last updated:  2026-02-10
$\phi(n)$-evaluation algorithm: a novel approach for an efficient retrieval of Euler's totient of an RSA Modulus
Public-key cryptography
Jay Mehta and Hitarth Rana
In this paper, we propose an algorithm to efficiently retrieve the value of Euler's totient function of an RSA modulus, consequently breaking the RSA encryption. Furthermore, we show that the proposed algorithm is significantly faster and more effective when the prime factors of the RSA modulus are close to each other. We conjecture a relation between the difference of the two prime factors of the RSA modulus and the number of steps required by the algorithm.
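The abstract does not spell out its algorithm, so the snippet below instead shows the classical Fermat factorization method, the textbook illustration of why RSA moduli with close prime factors are weak; once p and q are recovered, $\phi(n) = (p-1)(q-1)$ follows immediately. It is not the paper's algorithm.

# Classical Fermat factorization (not the paper's algorithm), shown because it is
# the textbook reason RSA moduli with close prime factors are weak: once p and q
# are found, phi(n) = (p-1)*(q-1) follows immediately.
from math import isqrt

def fermat_factor(n):
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b           # n = (a-b)(a+b)
        a += 1

p, q = 104723, 104729                     # two close primes (toy sizes)
n = p * q
fp, fq = fermat_factor(n)
assert {fp, fq} == {p, q}
print("phi(n) =", (fp - 1) * (fq - 1))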
2026/218(PDF)
Last updated:  2026-02-13
Isochronous Fixed-Weight Sampling in Hardware
Implementation
Adrian Marotzke
We present hardware implementations of the recently proposed isochronous fixed-weight sampling algorithm by Décio Luiz Gazzoni Filho, Tomás S. R. Silva, and Julio López (CiC vol. 1, 2024) and apply them to the post-quantum cryptographic algorithms NTRU-HPS, Streamlined NTRU Prime, and Classic McEliece. We offer multiple implementations optimized for different targets: a high-area, high-performance implementation; a lightweight, low-area implementation; and a side-channel-secure implementation using gadget-based masking. We verify the security of our masked implementation using the PROLEAD leakage detection tool. We show that the sampling algorithm results in highly efficient and effective implementations for the NTRU-like schemes, outperforming existing hardware implementations for fixed-weight sampling.
2026/217(PDF)
Last updated:  2026-02-10
Cavefish: Communication-Optimal Light Client Protocol for UTxO Ledgers
Cryptographic protocols
Aggelos Kiayias, Marc Roeschlin, Polina Vinogradova, and Pyrros Chaidos
Blockchain light clients (LCs) are agents with limited resources that are not able or not willing to maintain a fully validated copy of the ledger. They rely on service providers (SPs), typically full nodes, to access data required for tasks such as constructing transactions or interacting with off-chain applications. We introduce Cavefish, a novel protocol for UTxO-based platforms that enables LCs to submit transactions with minimal trust, storage, and computation overheads. Cavefish defines a two-party computation protocol between an LC and an SP, in which the LC specifies a transaction and the SP constructs it. The LC only receives a blinded version of the transaction, preventing modification and reuse, but allowing the LC to verify the transaction against the original intent. The SP is compensated inside the constructed transaction, eliminating the need for a separate protocol or exchange. To support this, we propose a variant of the predicate blind signature (PBS) scheme of Fuchsbauer and Wolf (Eurocrypt 2024), letting the SP obtain a valid signature on the unblinded transaction, which can then be posted on chain, since the resulting signatures verify as standard Schnorr signatures. When combined with hierarchical deterministic (HD) wallets, the LC can provide a single public key and chain code to the SP, reducing the communication footprint to a minimum. To further reduce overheads, our PBS variant relaxes the unlinkability requirements of traditional blind signatures in favor of efficiency, as transactions only need to stay private until posted. On top of this, we define a multi-SP version of Cavefish that provides strong liveness and optimality guarantees. We benchmark the Non-interactive Argument of Knowledge (NArg) component of Cavefish on two major UTxO-based blockchains. Our results show that proving and verification times, as well as circuit sizes, are practical for real-world deployment.
2026/216(PDF)
Last updated:  2026-02-12
ECHO: Efficient Covertly-Secure Three-party Computation with Applications to Private Machine Learning
Cryptographic protocols
Yufei Duan, Yun Li, Zhicong Huang, Cheng Hong, Tao Wei, and Chao Zhang
Secure three-party computation with an honest majority is one of the most efficient settings in secure computation, and has been widely adopted in practical applications. However, achieving malicious security in this setting incurs significant concrete efficiency penalties, which can be an order of magnitude worse than those of the semi-honest counterparts. Covert security offers a potential security-efficiency trade-off by detecting malicious behavior with a certain probability (such as $50\%$), deterring rational adversaries through the risk of detection and loss of credibility. Yet, existing covert security research primarily focuses on two-party or general $n$-party protocols in the dishonest-majority setting, with limited progress toward efficient three-party solutions. This work presents the first comprehensive framework, $\mathsf{ECHO}$, for covertly secure, honest-majority three-party computation with applications to privacy-preserving machine learning. We systematically explore the design space of cheating detection and cheater identification techniques, and propose a suite of novel protocols for both arithmetic and Boolean circuits. Each protocol is engineered for a distinct performance goal: minimal online latency, high end-to-end efficiency, or low communication. Notably, for arithmetic circuits over rings, we introduce a protocol leveraging asymmetric message authentication codes, achieving an online phase that is only $1.26\times$ slower than the semi-honest baseline and over three times faster than its maliciously secure counterpart. For Boolean circuits, our novel cut-and-choose-based method outperforms the best previous maliciously secure protocol by a factor of five. In practical PPML benchmarks, our framework achieves near semi-honest performance while delivering up to $8\times$ speedup over maliciously secure protocols on real-world tasks.
2026/215(PDF)
Last updated:  2026-02-12
Endomorphisms via splittings
Attacks and cryptanalysis
Sabrina Kunzweiler and Min-Yi Shen
One of the fundamental hardness assumptions underlying isogeny-based cryptography is the problem of finding a non-trivial endomorphism of a given supersingular elliptic curve. We show that this problem is related to the problem of finding a good splitting of a principally polarized superspecial abelian surface. We provide formal security reductions, as well as a proof-of-concept implementation of an algorithm to compute endomorphisms of elliptic curves by solving the splitting problem.
2026/214(PDF)
Last updated:  2026-02-10
Cavern: Efficient Honest-Majority Maliciously Secure $(2+1)$-PC for $\mathbb{Z}_{2^n}$ via DPF
Cryptographic protocols
Yang Liu and Liang Feng Zhang
We introduce Cavern, a new maliciously secure $(2+1)$-PC protocol for efficient piecewise polynomial (i.e., spline) evaluation on additively secret shared inputs over the ring $\mathbb{Z}_{2^n}$ in the preprocessing model, where parties obtain input-independent correlated randomness in an offline phase, which they then use to run an efficient protocol in the input-dependent online phase. This $(2+1)$-party structure can alternatively be instantiated between two parties with the aid of a (possibly untrusted) dealer. At the technical level, we introduce a new primitive called verifiable incremental distributed point function (VIDPF) and build on a novel combination of the VIDPF and authenticated secret sharing, providing an efficient method to detect the malicious behavior of the dealer or one of the parties. We implement and benchmark our protocol against the state-of-the-art semi-honest protocol Grotto (CCS 2023), and the trusted-dealer-based maliciously secure 2PC protocol Shark (S&P 2025). The results indicate that Cavern only imposes a constant-factor overhead on top of Grotto and Shark, while providing stronger security guarantees.
2026/213(PDF)
Last updated:  2026-02-10
Orbit: Optimizing Rescale and Bootstrap Placement with Integer Linear Programming Techniques for Secure Inference
Implementation
Zikai Zhou, William Seo, Edward Chen, Alex Ozdemir, Fraser Brown, and Wenting Zheng
Fully Homomorphic Encryption (FHE) allows computation on encrypted data without decrypting it. In theory, FHE makes privacy-preserving machine learning possible. In practice, however, it remains impractically slow for real workloads. A major source of slowdown is bootstrap operations; in CKKS, a popular FHE scheme for tensor workloads, the slowdown is compounded by scale management and rescale operations. FHE compilers aim to make bootstrap placement and scale management efficient and easy by compiling high-level programs into low-level, optimized FHE computations. Unfortunately, existing approaches miss crucial optimization opportunities because they overlook a key property of CKKS programs: bootstrap and rescale placement are fundamentally coupled through the level budget. In this paper, we present Orbit, an FHE compiler that jointly optimizes bootstrap and rescale placement through a novel Integer Linear Programming (ILP) formulation that reasons about both ciphertext level and scale constraints. To make this formulation tractable for end-to-end programs, we introduce three techniques that reduce ILP complexity while preserving optimality. Across five convolutional neural networks and multiple cryptographic parameter configurations, Orbit achieves a geometric-mean speedup of up to 1.19× over DaCapo, 1.73× over Orion, and 1.52× over ReSBM, keeps compilation under 6 minutes, and retains model accuracy within 0.3% of plaintext execution.
2026/212(PDF)
Last updated:  2026-02-10
PANCAKE: A SNARK with Plonkish Constraints, Almost-Free Additions, No Permutation Check, and a Linear-Time Prover
Cryptographic protocols
Yuxi Xue, Peimin Gao, Xingye Lu, and Man Ho Au
This paper presents \(\mathsf{Pancake}\), a linear-time SNARK with a circuit-specific setup that eliminates the explicit representation and separate verification of addition gates in Plonkish constraint systems. Specifically, we consolidate wiring constraints and addition-gate constraints into a single family of general linear constraints, which can be enforced efficiently via a single sumcheck protocol. As a result, \(\mathsf{Pancake}\) achieves ``almost-free'' addition gates, which significantly reduces the witness size and directly improves prover efficiency while preserving full support for high-degree custom gates.Our implementation shows that \(\mathsf{Pancake}\) outperforms the state-of-the-art Plonkish SNARK \(\mathsf{HyperPlonk}\) (Chen et al., EUROCRYPT 2023) in terms of prover efficiency. For a circuit size of $2^{24}$ where half the gates are additions, \(\mathsf{Pancake}\) achieves prover speedups of $1.67\times$ (single-threaded) and $2.43\times$ (32-threaded), while also generating smaller proofs and maintaining comparable verification time.
2026/211(PDF)
Last updated:  2026-02-10
A Generalized $\chi_n$-Function
Secret-key cryptography
Cheng Lyu, Mu Yuan, Dabin Zheng, Siwei Sun, and Shun Li
The mapping $\chi_n$ from $\mathbb{F}_{2}^{n}$ to itself defined by $y=\chi_n(x)$ with $y_i=x_i+x_{i+2}(1+x_{i+1})$, where the indices are computed modulo $n$, has been widely studied for its applications in lightweight cryptography. However, $\chi_n $ is bijective on $\mathbb{F}_2^n$ only when $n$ is odd, restricting its use to odd-dimensional vector spaces over $\mathbb{F}_2$. To address this limitation, we introduce and analyze the generalized mapping $\chi_{n, m}$ defined by $y=\chi_{n,m}(x)$ with $y_i=x_i+x_{i+m} (x_{i+m-1}+1)(x_{i+m-2}+1) \cdots (x_{i+1}+1)$, where $m$ is a fixed integer with $m\nmid n$. To investigate such mappings, we further generalize $\chi_{n,m}$ to $\theta_{m, k}$, where $\theta_{m, k}$ is given by $y_i=x_{i+mk} \prod_{\substack{j=1,\,\, m \nmid j}}^{mk-1} \left(x_{i+j}+1\right), \,\,{\rm for }\,\, i\in \{0,1,\ldots,n-1\}$. We prove that these mappings generate an abelian group isomorphic to the group of units in $\mathbb{F}_2[z]/(z^{\lfloor n/m\rfloor +1})$. This structural insight enables us to construct a broad class of permutations over $\mathbb{F}_2^n$ for any positive integer $n$, along with their inverses. We rigorously analyze algebraic properties of these mappings, including their iterations, fixed points, and cycle structures. Additionally, we provide a comprehensive database of the cryptographic properties for iterates of $\chi_{n,m}$ for small values of $n$ and $m$. Finally, we conduct a comparative security and implementation cost analysis among $\chi_{n,m}$, $\chi_n$, $\chi\chi_n$ and their variants, and prove Conjecture 1 proposed in [Belkheyar et al., 2025] as a by-product of our study. Our results lead to generalizations of $\chi_n$, providing alternatives to $\chi_n$ and $\chi\chi_n$.
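The defining formula for $\chi_{n,m}$ translates directly into code. The sketch below evaluates it over $\mathbb{F}_2^n$ and brute-forces a bijectivity check for a few small $(n, m)$ with $m \nmid n$; it is an illustrative transcription only, not the paper's analysis.

# Direct transcription of the defining formula for chi_{n,m} over F_2^n
# (indices mod n), plus a brute-force bijectivity check for small (n, m).
from itertools import product

def chi(x, m):
    n = len(x)
    y = []
    for i in range(n):
        prod_term = 1
        for j in range(1, m):                      # (x_{i+1}+1) ... (x_{i+m-1}+1)
            prod_term &= x[(i + j) % n] ^ 1
        y.append(x[i] ^ (x[(i + m) % n] & prod_term))
    return tuple(y)

for n, m in [(6, 4), (7, 2), (8, 3)]:              # small cases with m not dividing n
    images = {chi(x, m) for x in product((0, 1), repeat=n)}
    print(f"n={n}, m={m}: bijective = {len(images) == 2 ** n}")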
2026/210(PDF)
Last updated:  2026-02-09
How to Classically Verify a Quantum Cat without Killing It
Cryptographic protocols
Yael Tauman Kalai, Dakshita Khurana, and Justin Raizes
Existing protocols for classical verification of quantum computation (CVQC) consume the prover's witness state, requiring a new witness state for each invocation. Because QMA witnesses are not generally clonable, destroying the input witness means that amplifying soundness and completeness via repetition requires many copies of the witness. Building CVQC with low soundness error that uses only *one* copy of the witness has remained an open problem so far.We resolve this problem by constructing a CVQC that uses a single copy of the QMA witness, has negligible completeness and soundness errors, and does *not* destroy its witness. The soundness of our CVQC is based on the post-quantum Learning With Errors (LWE) assumption.To obtain this result, we define and construct two primitives (under the post-quantum LWE assumption) for non-destructively handling superpositions of classical data, which we believe are of independent interest:- A *state preserving* classical argument for NP.- Dual-mode trapdoor functions with *state recovery*.
2026/209(PDF)
Last updated:  2026-02-09
Post-Quantum Security of Block Cipher Constructions
Secret-key cryptography
Gorjan Alagic, Chen Bai, Christian Majenz, and Kaiyan Shi
Block ciphers are versatile cryptographic ingredients that are used in a wide range of applications ranging from secure Internet communications to disk encryption. While post-quantum security of public-key cryptography has received significant attention, the case of symmetric-key cryptography (and block ciphers in particular) remains a largely unexplored topic. In this work, we set the foundations for a theory of post-quantum security for block ciphers and associated constructions. Leveraging our new techniques, we provide the first post-quantum security proofs for the key-length extension scheme FX, the tweakable block ciphers LRW and XEX, and most block cipher encryption and authentication modes. Our techniques can be used for security proofs in both the plain model and the quantum ideal cipher model. Our work takes significant initial steps in establishing a rigorous understanding of the post-quantum security of practical symmetric-key cryptography.
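Of the constructions named above, FX has the shortest description: FX_{k,k1,k2}(x) = E_k(x XOR k1) XOR k2, i.e., pre- and post-whitening around a core cipher. The sketch below shows that data flow with a stand-in toy Feistel "cipher" so it runs without dependencies; the stand-in is of course not a real block cipher, and nothing here reflects the paper's post-quantum security proofs.

# The FX key-length extension named above: FX_{k,k1,k2}(x) = E_k(x XOR k1) XOR k2.
# The inner "cipher" is a toy 64-bit Feistel permutation with a SHA-256-based
# round function, used purely as a stand-in so the example is self-contained.
import hashlib

MASK32 = (1 << 32) - 1

def f(key, rnd, half):
    data = key + rnd.to_bytes(1, "big") + half.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def toy_encrypt(key, block64):                      # 4-round Feistel permutation
    L, R = block64 >> 32, block64 & MASK32
    for rnd in range(4):
        L, R = R, L ^ f(key, rnd, R)
    return (L << 32) | R

def fx_encrypt(k, k1, k2, block64):
    return toy_encrypt(k, block64 ^ k1) ^ k2        # whitening before and after

k = b"inner key bytes!"
k1, k2 = 0x0123456789abcdef, 0xfedcba9876543210    # whitening keys
print(hex(fx_encrypt(k, k1, k2, 0xdeadbeefcafef00d)))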