Is there a construction which has the following properties? I want a function $f$ that is slow to compute and a function $g$ that is fast to compute and verifies the computation that $f$ did. In other words, assume some parameter $N$; then we have functions:
- $f(x) = y$
- $g(x,y) = $ true or false
- $f$ is $N$ times slower to compute than $g$.
The main use for this would be in DDoS protection systems, by adding to requests a ticket that is expensive to generate but fast to verify on a filter appliance.
EDIT: One potential approach is to use $x$ as a seed for generating an instance of some NP-complete problem, since these have verifiers in P. However, the resource asymmetry is not controllable.
P.S. I couldn't think of a tag that matches the topic.
- I think what you are looking for is cryptographic Proof of Work (PoW), as used for example in Hashcash. A very simple idea: the server sends a challenge and the client has to find an input to a cryptographic hash which matches the challenge. Verification is fast (just compute the hash), while getting the right input for the challenge is hard (often just brute force over all possible values). – Steffen Ullrich (2025-12-15)
- Wasted CPU cycles. You might as well perform some mining op for a cryptocurrency, include the date/time for starting this on a ledger (possibly with a remark that it is for your system) and then collect. At least that would give you coin. Beware that many mining ops such as Bitcoin have non-deterministic runtime though. (2025-12-15)
- Do you need $f$ to be a true function, that is, $y$ is always the same for a given $x$? If so, do you further need that $f(x)$ is the only $y$ such that $g(x,y)$ is true? [Of course, if there are other $y$, they should be at least as hard to find as it is to compute $f(x)$.] Is it fine that there is often a large variation in the cost of evaluating $f$ / finding a $y$? Is it acceptable that $f$ is embarrassingly parallel, thus making it possible, if costly, to compute $y$ fast? (2025-12-15)
- Cross-posted: crypto.stackexchange.com/q/119179/351, cstheory.stackexchange.com/q/56903/5038. Please do not post the same question on multiple sites. – D.W. (2025-12-15)
- The problem with using proof-of-work for DDoS protection is the amount of resources available. A botnet operator does not really care if they put high CPU load on the systems they have captured, and the load per request cannot be made arbitrarily large without hurting legitimate users. – jpa (2025-12-16)
1 Answer
> The main use for this would be in DDoS protection systems, by adding to requests a ticket that is expensive to generate but fast to verify on a filter appliance.
This kind of problem is commonly tackled with Proof of Work (PoW) functions. A real-world example is Hashcash. The Wikipedia page for PoW provides fairly detailed information, including links to different PoW functions.
Note that for this problem you are unlikely to need a function which is exactly and always $N$ times slower; it's more that there should be a significant cost difference on average. It might also be useful for the cost to be dynamically tunable, i.e. accept a low cost difference when the load is low (no attack) in order to be friendly to legitimate clients, but increase the cost difference with load or only for specific client IP ranges.
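To make the cost asymmetry concrete, here is a minimal hashcash-style sketch in Python (the function names and the SHA-256 / leading-zero-bit target are illustrative assumptions, not any particular library's API). The prover needs about $2^{\text{difficulty}}$ hash evaluations on average, while verification is a single hash, so the ratio $N$ is tuned by raising or lowering `difficulty`:

```python
import hashlib
import itertools
import os

def leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a hash digest."""
    value = int.from_bytes(digest, "big")
    return 8 * len(digest) - value.bit_length()

def prove(challenge: bytes, difficulty: int) -> int:
    """Slow f: brute-force a nonce so that SHA-256(challenge || nonce)
    starts with `difficulty` zero bits (~2**difficulty hashes on average)."""
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Fast g: a single hash evaluation."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

challenge = os.urandom(16)   # must come from the server, see the comments below
difficulty = 20              # ~10^6 hashes on average; raise under load
nonce = prove(challenge, difficulty)
assert verify(challenge, nonce, difficulty)
```

As the comments point out, the prover's cost is only an expectation (the runtime varies a lot from instance to instance) and the search parallelizes trivially, so this bounds average effort rather than wall-clock latency.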
- I wanted to rule out in my question schemes that require a challenge-response interaction. In other words, the proof-of-work schemes I'm aware of are such that the client/attacker can't both generate the problem instance and prove they did the work. – eof (2025-12-16)
- @eof: It does not need to be a challenge-response, but it needs to be an $x$ which cannot be defined or predicted by the attacker. Otherwise the attacker could simply compute the value once and replay it without further effort, which would make it unusable for DDoS protection. Or the server must track which values were used (within some time window if $x$ is time-based) so that it detects replay, which would further add load to the server. And if $x$ is somehow predictable or the attacker can get lots of $x$ values in advance, they can precompute all the results before mounting the DDoS attack. – Steffen Ullrich (2025-12-16)
- This is problematic for client-server, since the timing is not deterministic. For tickets, there is always a simple solution: not_valid_before. – kelalaka (2025-12-16)
- @kelalaka: How does not_valid_before help against precomputed values, which might take the attacker some time to create but can then all be sent at once? I also don't see how not_valid_before by itself helps against replay. – Steffen Ullrich (2025-12-16)
- Set not_valid_before = 10:00 pm and is_used = false. Then, if someone comes earlier, reject. After 10:00 pm, check is_used. Standard approach. In distributed settings one may need PoW; in client-server, the server is the boss. If the OP has some other issues in mind, well, then the question is not clear. – kelalaka (2025-12-16)
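Putting this comment thread together, here is a hedged sketch of how a filter appliance might combine a server-issued, time-limited challenge (the not_valid_before idea) with a replay cache (the is_used idea). All names and the HMAC-based challenge format are assumptions for illustration, and `verify()` is the same fast check as in the answer's sketch above:

```python
import hashlib
import hmac
import os
import time

SERVER_KEY = os.urandom(32)   # secret of the filter appliance
VALIDITY_WINDOW = 60          # seconds a challenge remains acceptable
seen = {}                     # (challenge, nonce) -> expiry time, replay cache

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    # Same fast check as in the answer's sketch: leading zero bits of the hash.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (8 * len(digest) - difficulty) == 0

def issue_challenge() -> bytes:
    """Bind the challenge to the current time so tickets cannot be precomputed
    far in advance, and MAC it so clients cannot mint their own challenges."""
    not_valid_before = int(time.time()).to_bytes(8, "big")
    tag = hmac.new(SERVER_KEY, not_valid_before, "sha256").digest()[:16]
    return not_valid_before + tag

def accept(challenge: bytes, nonce: int, difficulty: int) -> bool:
    now = int(time.time())
    issued = int.from_bytes(challenge[:8], "big")
    expected = hmac.new(SERVER_KEY, challenge[:8], "sha256").digest()[:16]
    if not hmac.compare_digest(challenge[8:], expected):
        return False                     # not a challenge we issued
    if not issued <= now < issued + VALIDITY_WINDOW:
        return False                     # outside the not_valid_before window
    for key in [k for k, exp in seen.items() if exp <= now]:
        del seen[key]                    # keep the replay cache bounded
    if (challenge, nonce) in seen:
        return False                     # is_used: replayed ticket
    seen[(challenge, nonce)] = issued + VALIDITY_WINDOW
    return verify(challenge, nonce, difficulty)
```

A real deployment would presumably shard or persist the replay cache rather than keep it in a process-local dict, but the point stands: the appliance only does one HMAC check, one hash, and one set lookup per request, while the client pays the proof-of-work cost.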