
Differential privacy

Methods of safely sharing general data
An informal definition of differential privacy

Differential privacy (DP) is a mathematically rigorous framework for releasing statistical information about datasets while protecting the privacy of individual data subjects. It enables a data holder to share aggregate patterns of the group while limiting information that is leaked about specific individuals.[1][2] This is done by injecting carefully calibrated noise into statistical computations such that the utility of the statistic is preserved while provably limiting what can be inferred about any individual in the dataset.

Another way to describe differential privacy is as a constraint on the algorithms used to publish aggregate information about a statistical database which limits the disclosure of private information of records in the database. For example, differentially private algorithms are used by some government agencies to publish demographic information or other statistical aggregates while ensuring confidentiality of survey responses, and by companies to collect information about user behavior while controlling what is visible even to internal analysts.

Roughly, an algorithm is differentially private if an observer seeing its output cannot tell whether a particular individual's information was used in the computation. Differential privacy is often discussed in the context of identifying individuals whose information may be in a database. Although it does not directly refer to identification and reidentification attacks, differentially private algorithms provably resist such attacks.[3]

ε-differential privacy

A formal definition of ε-differential privacy. D₁ is a dataset without the private data, and D₂ is one with it. This is "pure ε-differential privacy", meaning δ = 0.

The 2006 article by Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith[3] introduced the concept of ε-differential privacy, a mathematical definition for the privacy loss associated with any data release drawn from a statistical database.[4] (Here, the term statistical database means a set of data that are collected under the pledge of confidentiality for the purpose of producing statistics that, by their production, do not compromise the privacy of those individuals who provided the data.)

The definition of ε-differential privacy requires that a change to one entry in a database only creates a small change in the probability distribution of the outputs of measurements, as seen by the attacker.[3] The intuition for the definition of ε-differential privacy is that a person's privacy cannot be compromised by a statistical release if their data are not in the database.[5] In differential privacy, each individual is given roughly the same privacy that would result from having their data removed.[5] That is, the statistical functions run on the database should not be substantially affected by the removal, addition, or change of any individual in the data.[5]

How much any individual contributes to the result of a database query depends in part on how many people's data are involved in the query. If the database contains data from a single person, that person's data contributes 100%. If the database contains data from a hundred people, each person's data contributes just 1%. The key insight of differential privacy is that as the query is made on the data of fewer and fewer people, more noise needs to be added to the query result to produce the same amount of privacy. Hence the name of the 2006 paper, "Calibrating noise to sensitivity in private data analysis."[citation needed]

Definition


Let ε be a positive real number and 𝒜 be a randomized algorithm that takes a dataset as input (representing the actions of the trusted party holding the data). Let im 𝒜 denote the image of 𝒜.

The algorithm 𝒜 is said to provide (ε, δ)-differential privacy if, for all datasets D₁ and D₂ that differ on a single element (i.e., the data of one person), and all subsets S of im 𝒜:

Pr[𝒜(D₁) ∈ S] ≤ exp(ε) · Pr[𝒜(D₂) ∈ S] + δ,

where the probability is taken over the randomness used by the algorithm.[6] This definition is sometimes called "approximate differential privacy", with "pure differential privacy" being a special case when δ = 0. In the latter case, the algorithm is commonly said to satisfy ε-differential privacy (i.e., omitting δ = 0).[citation needed]
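
As an illustrative sketch (not from the article), the definition can be checked directly for a mechanism whose output distribution is easy to enumerate. The Python snippet below assumes a one-bit randomized-response mechanism (the procedure is described later in this article) and verifies the pure ε-differential-privacy inequality for every output; the function names are hypothetical.

    import math

    def output_probs(bit):
        """Randomized response on one bit: report the true bit with
        probability 3/4 and the flipped bit with probability 1/4."""
        return {bit: 0.75, 1 - bit: 0.25}

    def verify_pure_dp(epsilon):
        """Check Pr[A(D1) = s] <= exp(epsilon) * Pr[A(D2) = s] for every
        output s and both orderings of the neighboring inputs 0 and 1."""
        p0, p1 = output_probs(0), output_probs(1)
        bound = math.exp(epsilon)
        return all(
            p_a[s] <= bound * p_b[s]
            for p_a, p_b in [(p0, p1), (p1, p0)]
            for s in (0, 1)
        )

    print(verify_pure_dp(math.log(3)))  # True: the mechanism is ln(3)-differentially private
    print(verify_pure_dp(1.0))          # False: ln(3) ≈ 1.0986 > 1.0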

Differential privacy offers strong and robust guarantees that facilitate modular design and analysis of differentially private mechanisms due to its composability, robustness to post-processing, and graceful degradation in the presence of correlated data.[citation needed]

Example


According to this definition, differential privacy is a condition on the release mechanism (i.e., the trusted party releasing information about the dataset) and not on the dataset itself. Intuitively, this means that for any two datasets that are similar, a given differentially private algorithm will behave approximately the same on both datasets. The definition gives a strong guarantee that the presence or absence of an individual will not affect the final output of the algorithm significantly.

For example, assume we have a database of medical records D₁ where each record is a pair (Name, X), where X is a Boolean denoting whether a person has diabetes or not. For example:

Name        Has Diabetes (X)
Ross        1
Monica      1
Joey        0
Phoebe      0
Chandler    1
Rachel      0

Now suppose a malicious user (often termed an adversary) wants to find out whether Chandler has diabetes or not. Suppose he also knows in which row of the database Chandler resides. Now suppose the adversary is only allowed to use a particular form of query Qᵢ that returns the partial sum of the first i rows of column X in the database. In order to find Chandler's diabetes status the adversary executes Q₅(D₁) and Q₄(D₁), then computes their difference. In this example, Q₅(D₁) = 3 and Q₄(D₁) = 2, so their difference is 1. This indicates that the "Has Diabetes" field in Chandler's row must be 1. This example highlights how individual information can be compromised even without explicitly querying for the information of a specific individual.

Continuing this example, if we construct D₂ by replacing (Chandler, 1) with (Chandler, 0), then this malicious adversary will be able to distinguish D₂ from D₁ by computing Q₅ − Q₄ for each dataset. If the adversary were required to receive the values Qᵢ via an ε-differentially private algorithm, for a sufficiently small ε, then he or she would be unable to distinguish between the two datasets.
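
A minimal sketch of this differencing attack, and of a noisy variant that blunts it, is shown below. The code is illustrative rather than from the article; it uses the Laplace mechanism described later, and the dataset and helper names are made up.

    import numpy as np

    rng = np.random.default_rng(0)

    # Column X in row order: Ross, Monica, Joey, Phoebe, Chandler, Rachel
    D1 = [1, 1, 0, 0, 1, 0]
    D2 = [1, 1, 0, 0, 0, 0]  # Chandler's entry changed to 0

    def Q(db, i):
        """Exact partial-sum query over the first i rows."""
        return sum(db[:i])

    def Q_noisy(db, i, epsilon):
        """Partial sum with Laplace noise; the query has sensitivity 1."""
        return Q(db, i) + rng.laplace(scale=1.0 / epsilon)

    # Exact answers leak Chandler's bit deterministically:
    print(Q(D1, 5) - Q(D1, 4))  # 1 -> Chandler's field is 1 in D1
    print(Q(D2, 5) - Q(D2, 4))  # 0 -> the change between D1 and D2 is plainly visible

    # With a small epsilon the noisy answers overlap heavily, so a single
    # difference no longer reveals which dataset was used:
    eps = 0.1
    print(Q_noisy(D1, 5, eps) - Q_noisy(D1, 4, eps))
    print(Q_noisy(D2, 5, eps) - Q_noisy(D2, 4, eps))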

Composability and robustness to post processing


Composability refers to the fact that the joint distribution of the outputs of (possibly adaptively chosen) differentially private mechanisms satisfies differential privacy.[3]

The other important property for modular use of differential privacy is robustness to post-processing. This is defined to mean that for any deterministic or randomized function F defined over the image of the mechanism 𝒜, if 𝒜 satisfies ε-differential privacy, so does F(𝒜).[3]

The property of composition permits modular construction and analysis of differentially private mechanisms[3] and motivates the concept of the privacy loss budget.[citation needed] If all elements of a complex mechanism that access sensitive data are separately differentially private, then so is their combination, followed by arbitrary post-processing.[3]
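
Basic (sequential) composition says that running an ε₁-differentially private mechanism followed by an ε₂-differentially private one on the same data is (ε₁ + ε₂)-differentially private, which is what makes a "privacy loss budget" meaningful. The sketch below is a hypothetical budget accountant for this basic rule, not the API of any particular library; tighter "advanced composition" bounds also exist.

    class PrivacyBudget:
        """Track cumulative epsilon under basic sequential composition."""

        def __init__(self, total_epsilon):
            self.total = total_epsilon
            self.spent = 0.0

        def charge(self, epsilon):
            """Record one epsilon-DP release; refuse it if the budget would be exceeded."""
            if self.spent + epsilon > self.total:
                raise RuntimeError("privacy budget exceeded")
            self.spent += epsilon

    budget = PrivacyBudget(total_epsilon=1.0)
    budget.charge(0.4)    # first differentially private release
    budget.charge(0.4)    # second release; 0.8 of the budget is now spent
    # budget.charge(0.4)  # would raise: 1.2 > 1.0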

Group privacy


In general, ε-differential privacy is designed to protect the privacy between neighboring databases which differ only in one row. This means that no adversary with arbitrary auxiliary information can know if one particular participant submitted their information. However, this is also extendable.[3] We may want to protect databases differing in c rows, which amounts to an adversary with arbitrary auxiliary information knowing whether c particular participants submitted their information. This can be achieved because if c items change, the probability dilation is bounded by exp(εc) instead of exp(ε),[8] i.e., for D₁ and D₂ differing on c items:

Pr[𝒜(D₁) ∈ S] ≤ exp(εc) · Pr[𝒜(D₂) ∈ S].

Thus setting ε instead to ε/c achieves the desired result (protection of c items).[3] In other words, instead of having each item ε-differentially private protected, now every group of c items is ε-differentially private protected (and each item is (ε/c)-differentially private protected).[3]
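
As a quick numerical check of this scaling (illustrative, not from the article): to protect groups of c = 5 individuals at an overall level of ε = 1, each release is run at ε/c = 0.2, and the dilation bound for databases differing in 5 rows is then exp(0.2 × 5) = exp(1).

    import math

    c, group_epsilon = 5, 1.0
    per_item_epsilon = group_epsilon / c        # run the mechanism at epsilon / c
    dilation = math.exp(per_item_epsilon * c)   # bound for datasets differing in c rows
    print(per_item_epsilon, dilation)           # 0.2, e^1 ≈ 2.718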

Hypothesis testing interpretation


One can think of differential privacy as bounding the error rates in a hypothesis test. Consider two hypotheses:

  • H₀: the individual's data is not in the dataset;
  • H₁: the individual's data is in the dataset.

Then, there are two error rates:

  • the false positive rate α (type I error): concluding that the individual's data is in the dataset when it is not;
  • the false negative rate β (type II error): concluding that the individual's data is not in the dataset when it is.

Ideal protection would mean that observing the output gives the attacker no advantage over blind guessing, so that the two error rates could only trade off along α + β = 1. For a fixed (ε, δ) setting, the error rates of any test the attacker can run are constrained by[9]

α + exp(ε) · β ≥ 1 − δ   and   exp(ε) · α + β ≥ 1 − δ,

so both errors cannot be made small at once unless ε is large or δ is non-negligible.
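
A small sketch, assuming the constraints above, of the lowest false-negative rate an attacker could hope for at a given false-positive rate (illustrative only):

    import math

    def min_false_negative(alpha, epsilon, delta=0.0):
        """Lower bound on the false-negative rate of any test against an
        (epsilon, delta)-differentially private mechanism, at false-positive rate alpha."""
        return max(0.0,
                   1 - delta - math.exp(epsilon) * alpha,
                   (1 - delta - alpha) / math.exp(epsilon))

    print(min_false_negative(0.05, epsilon=0.1))  # ≈ 0.94: the attacker is nearly blind
    print(min_false_negative(0.05, epsilon=5.0))  # ≈ 0.006: little protection remains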

ε-differentially private mechanisms


Since differential privacy is a probabilistic concept, any differentially private mechanism is necessarily randomized. Some of these, like the Laplace mechanism described below, rely on adding controlled noise to the function that we want to compute. Others, like the exponential mechanism[10] and posterior sampling,[11] sample from a problem-dependent family of distributions instead.

An important definition with respect to ε-differentially private mechanisms is sensitivity.[3] Let d be a positive integer, 𝒟 be a collection of datasets, and f : 𝒟 → ℝᵈ be a function. One definition of the sensitivity of a function, denoted Δf, is:[3]

Δf = max ‖f(D₁) − f(D₂)‖₁,

where the maximum is over all pairs of datasets D₁ and D₂ in 𝒟 differing in at most one element, and ‖·‖₁ denotes the L1 norm.[3] In the example of the medical database above, if we consider f to be the function Qᵢ, then the sensitivity of the function is one, since changing any one of the entries in the database causes the output of the function to change by either zero or one. This can be generalized to other metric spaces (measures of distance), and must be to make certain differentially private algorithms work, including adding noise from the Gaussian distribution (which requires the L2 norm) instead of the Laplace distribution.[3]
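
A brute-force sketch of this definition for a tiny counting query (illustrative only; in practice the sensitivity is derived analytically rather than by enumeration):

    from itertools import combinations

    def partial_sum(db, i):
        """The counting query Q_i: sum of the first i Boolean entries."""
        return sum(db[:i])

    def l1_sensitivity(query, databases):
        """Max |query(D1) - query(D2)| over all pairs of databases of equal
        length that differ in exactly one position."""
        worst = 0
        for d1, d2 in combinations(databases, 2):
            if len(d1) == len(d2) and sum(a != b for a, b in zip(d1, d2)) == 1:
                worst = max(worst, abs(query(d1) - query(d2)))
        return worst

    # All length-3 Boolean databases; the partial sum over the first two rows.
    dbs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    print(l1_sensitivity(lambda db: partial_sum(db, 2), dbs))  # 1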

There are techniques, described below, by which we can create a differentially private algorithm for such functions, with parameters that vary depending on their sensitivity.[3]

Laplace mechanism

See also: Additive noise mechanisms
Laplace mechanism offering 0.5-differential privacy for a function with sensitivity 1.

The Laplace mechanism adds Laplace noise, i.e. noise from the Laplace distribution, which can be expressed by the probability density function noise(y) ∝ exp(−|y|/λ) and which has mean zero and standard deviation √2·λ. In our case we define the output function of 𝒜 as a real-valued function (called the transcript output by 𝒜)

T_𝒜(x) = f(x) + Y,

where Y ~ Lap(λ) and f is the original real-valued query/function we planned to execute on the database. Now clearly T_𝒜(x) can be considered to be a continuous random variable, where

pdf(T_{𝒜,D₁}(x) = t) / pdf(T_{𝒜,D₂}(x) = t) = noise(t − f(D₁)) / noise(t − f(D₂)),

which is at most exp(|f(D₁) − f(D₂)|/λ) ≤ exp(Δ(f)/λ). We can consider Δ(f)/λ to be the privacy factor ε. Thus T follows a differentially private mechanism (as can be seen from the definition above). If we apply this to our diabetes example, then in order for 𝒜 to be an ε-differentially private algorithm we need to have λ = 1/ε. Though we have used Laplace noise here, other forms of noise, such as Gaussian noise, can be employed, but they may require a slight relaxation of the definition of differential privacy.[8]
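
A minimal sketch of the Laplace mechanism as just described, assuming a query with known L1 sensitivity and using NumPy's Laplace sampler (a production implementation would also need to address the floating-point issues discussed under "Attacks in practice" below):

    import numpy as np

    rng = np.random.default_rng()

    def laplace_mechanism(true_value, sensitivity, epsilon):
        """Release true_value + Lap(sensitivity / epsilon) noise, which gives
        epsilon-differential privacy for a query with the given L1 sensitivity."""
        scale = sensitivity / epsilon
        return true_value + rng.laplace(loc=0.0, scale=scale)

    # Diabetes example: a counting query has sensitivity 1.
    exact_count = 3
    print(laplace_mechanism(exact_count, sensitivity=1, epsilon=0.5))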

Randomized response

See also: Local differential privacy

A simple example, especially developed in the social sciences,[12] is to ask a person to answer the question "Do you own the attribute A?", according to the following procedure:

  1. Toss a coin.
  2. If heads, then toss the coin again (ignoring the outcome), and answer the question honestly.
  3. If tails, then toss the coin again and answer "Yes" if heads, "No" if tails.

(The seemingly redundant extra toss in the first case is needed in situations where just the act of tossing a coin may be observed by others, even if the actual result stays hidden.) The confidentiality then arises from the refutability of the individual responses.

But, overall, these data with many responses are significant, since positive responses are given by a quarter of the people who do not have attribute A and by three-quarters of the people who actually possess it. Thus, if p is the true proportion of people with A, then we expect to obtain (1/4)(1 − p) + (3/4)p = (1/4) + p/2 positive responses. Hence it is possible to estimate p.
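
The following is a small simulation of this procedure and of the resulting estimator for p (a sketch under the coin probabilities stated above; the names are made up):

    import random

    def randomized_response(truth):
        """One respondent's answer under the two-coin procedure above."""
        if random.random() < 0.5:     # first coin heads: (toss again, ignore it) answer honestly
            return truth
        return random.random() < 0.5  # first coin tails: second coin decides the answer

    def estimate_p(answers):
        """Invert E[fraction of "Yes"] = 1/4 + p/2 to recover an estimate of p."""
        yes_rate = sum(answers) / len(answers)
        return 2 * (yes_rate - 0.25)

    random.seed(1)
    true_p = 0.3
    population = [random.random() < true_p for _ in range(100_000)]
    answers = [randomized_response(t) for t in population]
    print(round(estimate_p(answers), 3))  # close to 0.3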

In particular, if attribute A is synonymous with illegal behavior, then answering "Yes" is not incriminating, insofar as every person has some probability of a "Yes" response, whatever their true status.

Although this example, inspired by randomized response, might be applicable to microdata (i.e., releasing datasets with each individual response), by definition differential privacy excludes microdata releases and is only applicable to queries (i.e., aggregating individual responses into one result), as a microdata release would violate the requirements, more specifically the plausible deniability that a subject participated or not.[13][14]

Stable transformations


A transformation T is c-stable if the Hamming distance between T(A) and T(B) is at most c times the Hamming distance between A and B for any two databases A, B.[citation needed] If there is a mechanism M that is ε-differentially private, then the composite mechanism M ∘ T is (ε × c)-differentially private.[7]

This could be generalized to group privacy, as the group size could be thought of as the Hamming distance h between A and B (where A contains the group and B does not). In this case M ∘ T is (ε × c × h)-differentially private.[citation needed]
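
A sketch of how a stable transformation composes with a private count (illustrative and not from the article; under an add/remove notion of neighboring databases, the flat_map below is 2-stable because each input record produces at most two output rows, so by the claim above the composed release is (2ε)-differentially private):

    import numpy as np

    rng = np.random.default_rng()

    def flat_map_visits(records):
        """2-stable transformation: each raw record yields at most two output
        rows, so adding or removing one input record changes at most two rows."""
        out = []
        for person, sites in records:
            out.extend((person, s) for s in sites[:2])
        return out

    def dp_count(rows, epsilon):
        """epsilon-DP count of rows via the Laplace mechanism (sensitivity 1)."""
        return len(rows) + rng.laplace(scale=1.0 / epsilon)

    records = [("alice", ["a.com", "b.com"]), ("bob", ["a.com"])]
    rows = flat_map_visits(records)     # transformation T with stability c = 2
    print(dp_count(rows, epsilon=0.5))  # overall guarantee: (2 × 0.5)-DP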

Research


Early research leading to differential privacy


In 1977, Tore Dalenius, a Swedish statistician, formalized the mathematics of cell suppression.[15] His 1977 paper made a key point about statistical databases: a database should not reveal information about an individual that is not otherwise accessible.[16] He also defined a typology for statistical disclosures.[4]

In 1979, Dorothy Denning, Peter J. Denning and Mayer D. Schwartz formalized the concept of a tracker, an adversary that could learn the confidential contents of a statistical database by creating a series of targeted queries and remembering the results.[17] This and future research showed that privacy properties in a database could only be preserved by considering each new query in light of (possibly all) previous queries. This line of work is sometimes called query privacy, with the final result being that tracking the impact of a query on the privacy of individuals in the database was NP-hard.[citation needed]

21st century


In 2003, Kobbi Nissim and Irit Dinur demonstrated that it is impossible to publish arbitrary queries on a private statistical database without revealing some amount of private information, and that the entire information content of the database can be revealed by publishing the results of a surprisingly small number of random queries, far fewer than was implied by previous work.[18] The general phenomenon is known as the Fundamental Law of Information Recovery, and its key insight, namely that in the most general case, privacy cannot be protected without injecting some amount of noise, led to the development of differential privacy.[citation needed]

In 2006, Cynthia Dwork, Frank McSherry, Kobbi Nissim and Adam D. Smith published an article[3] formalizing the amount of noise that needed to be added and proposing a generalized mechanism for doing so.[citation needed] This paper also created the first formal definition of differential privacy.[4] Their work was a co-recipient of the 2016 TCC Test-of-Time Award[19] and the 2017 Gödel Prize.[20]

Since then, subsequent research has shown that there are many ways to produce very accurate statistics from the database while still ensuring high levels of privacy.[1]

Adoption in real-world applications

See also: Implementations of differentially private analyses

To date there are over 12 real-world deployments of differential privacy, the most noteworthy being:

  • the U.S. Census Bureau, for showing commuting patterns in its OnTheMap tool;[21]
  • Google, in the RAPPOR system for telemetry in the Chrome browser, and for sharing historical traffic statistics;[22][23][24]
  • Apple, announced for iOS 10, to improve its intelligent personal assistant technology;[25]
  • Microsoft, for collecting telemetry in Windows;[26]
  • Facebook, together with Social Science One, for the privacy-protected full URLs dataset released to election researchers;[27][28]
  • the U.S. Census Bureau, for the disclosure avoidance system of the 2020 Census.[29]

Public purpose considerations


There are several public purpose considerations regarding differential privacy that are important to consider, especially for policymakers and policy-focused audiences interested in the social opportunities and risks of the technology:[30]

  • Data utility and accuracy. The main concern with differential privacy is the trade-off between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less "noise" is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more "noise" is injected into the system). It is important for policymakers to consider the trade-offs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy-preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this trade-off.
  • Data privacy and security. Differential privacy provides a quantified measure of privacy loss and an upper bound on it, and allows curators to choose the explicit trade-off between privacy and accuracy. It is robust to still-unknown privacy attacks. However, it encourages greater data sharing, which, if done poorly, increases privacy risk. Differential privacy implies that privacy is protected, but this depends very much on the privacy loss parameter chosen and may instead lead to a false sense of security. Finally, though it is designed to be robust against unforeseen future privacy attacks, an attack that we cannot predict may still be devised.

Attacks in practice


Because differential privacy techniques are implemented on real computers, they are vulnerable to various attacks not possible to compensate for solely in the mathematics of the techniques themselves. In addition to standard defects of software artifacts that can be identified using testing or fuzzing, implementations of differentially private mechanisms may suffer from the following vulnerabilities:

  • Subtle algorithmic or analytical mistakes.[31][32]
  • Timing side-channel attacks.[33] In contrast with timing attacks against implementations of cryptographic algorithms, which typically have a low leakage rate and must be followed by non-trivial cryptanalysis, a timing channel may lead to a catastrophic compromise of a differentially private system, since a targeted attack can be used to exfiltrate the very bit that the system is designed to hide.
  • Leakage through floating-point arithmetic.[34] Differentially private algorithms are typically presented in the language of probability distributions, which most naturally leads to implementations using floating-point arithmetic. The abstraction of floating-point arithmetic is leaky, and without careful attention to detail, a naive implementation may fail to provide differential privacy. (This is particularly the case for ε-differential privacy, which does not allow any probability of failure, even in the worst case.) For example, the support of a textbook sampler of the Laplace distribution (required, for instance, for the Laplace mechanism) is less than 80% of all double-precision floating-point numbers; moreover, the supports of distributions with different means are not identical. A single sample from a naïve implementation of the Laplace mechanism allows distinguishing between two adjacent datasets with probability more than 35%.
  • Timing channel through floating-point arithmetic.[35] Unlike operations over integers, which are typically constant-time on modern CPUs, floating-point arithmetic exhibits significant input-dependent timing variability.[36] Handling of subnormals can be particularly slow, by as much as a factor of 100 compared to the typical case.[37]

See also


References

  1. ^abHilton, M; Cal (2012)."Differential Privacy: A Historical Survey".Semantic Scholar.S2CID 16861132. Retrieved31 December 2023.
  2. ^Dwork, Cynthia (2008-04-25)."Differential Privacy: A Survey of Results". In Agrawal, Manindra; Du, Dingzhu; Duan, Zhenhua; Li, Angsheng (eds.).Theory and Applications of Models of Computation. Lecture Notes in Computer Science. Vol. 4978. Springer Berlin Heidelberg. pp. 1–19.doi:10.1007/978-3-540-79228-4_1.ISBN 978-3-540-79227-7.S2CID 2887752.
  3. ^abcdefghijklmnopCalibrating Noise to Sensitivity in Private Data Analysis by Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith. In Theory of Cryptography Conference (TCC), Springer, 2006.doi:10.1007/11681878_14. Thefull version appears in Journal of Privacy and Confidentiality, 7 (3), 17-51.doi:10.29012/jpc.v7i3.405
  4. ^abcHilton, Michael.Differential Privacy: A Historical Survey(PDF).S2CID 16861132. Archived fromthe original(PDF) on 2017-03-01.
  5. ^abcDwork, Cynthia (2008)."Differential Privacy: A Survey of Results". In Agrawal, Manindra; Du, Dingzhu; Duan, Zhenhua; Li, Angsheng (eds.).Theory and Applications of Models of Computation. Lecture Notes in Computer Science. Vol. 4978. Berlin, Heidelberg: Springer. pp. 1–19.doi:10.1007/978-3-540-79228-4_1.ISBN 978-3-540-79228-4.
  6. ^The Algorithmic Foundations of Differential Privacy by Cynthia Dwork and Aaron Roth. Foundations and Trends in Theoretical Computer Science. Vol. 9, no. 3–4, pp. 211‐407, Aug. 2014.doi:10.1561/0400000042
  7. ^abcPrivacy integrated queries: an extensible platform for privacy-preserving data analysis by Frank D. McSherry. In Proceedings of the 35th SIGMOD International Conference on Management of Data (SIGMOD), 2009.doi:10.1145/1559845.1559850
  8. ^abDifferential Privacy by Cynthia Dwork, International Colloquium on Automata, Languages and Programming (ICALP) 2006, p. 1–12.doi:10.1007/11787006_1
  9. ^Kairouz, Peter, Sewoong Oh, and Pramod Viswanath. "The composition theorem for differential privacy." International conference on machine learning. PMLR, 2015.link
  10. ^"Microsoft Research – Emerging Technology, Computer, and Software Research".Microsoft Research.
  11. ^Dimitrakakis, Christos; Nelson, Blaine; Zhang, Zuhe; Mitrokotsa, Aikaterini; Rubinstein, Benjamin (December 23, 2016). "Bayesian Differential Privacy through Posterior Sampling".arXiv:1306.1066 [stat.ML].
  12. ^Warner, S. L. (March 1965). "Randomised response: a survey technique for eliminating evasive answer bias".Journal of the American Statistical Association.60 (309).Taylor & Francis:63–69.doi:10.1080/01621459.1965.10480775.JSTOR 2283137.PMID 12261830.S2CID 35435339.
  13. ^Dwork, Cynthia. "A firm foundation for private data analysis." Communications of the ACM 54.1 (2011): 86–95, supra note 19, page 91.
  14. ^Bambauer, Jane, Krishnamurty Muralidhar, and Rathindra Sarathy. "Fool's gold: an illustrated critique of differential privacy." Vand. J. Ent. & Tech. L. 16 (2013): 701.
  15. ^Tore Dalenius (1977)."Towards a methodology for statistical disclosure control".Statistik Tidskrift.15.hdl:1813/111303.
  16. ^Dwork, Cynthia (2006)."Differential Privacy". In Bugliesi, Michele; Preneel, Bart; Sassone, Vladimiro; Wegener, Ingo (eds.).Automata, Languages and Programming. Lecture Notes in Computer Science. Vol. 4052. Berlin, Heidelberg: Springer. pp. 1–12.doi:10.1007/11787006_1.ISBN 978-3-540-35908-1.
  17. ^Dorothy E. Denning; Peter J. Denning; Mayer D. Schwartz (March 1979)."The Tracker: A Threat to Statistical Database Security".ACM Transactions on Database Systems.4 (1):76–96.doi:10.1145/320064.320069.S2CID 207655625.
  18. ^Irit Dinur and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems (PODS '03). ACM, New York, NY, USA, 202–210.doi:10.1145/773153.773173
  19. ^"TCC Test of Time Award".www.iacr.org.
  20. ^Chita, Efi."2017 Gödel Prize".EATCS.
  21. ^Ashwin Machanavajjhala, Daniel Kifer, John M. Abowd, Johannes Gehrke, and Lars Vilhuber. "Privacy: Theory meets Practice on the Map". In Proceedings of the 24th International Conference on Data Engineering, ICDE) 2008.
  22. ^Erlingsson, Úlfar; Pihur, Vasyl; Korolova, Aleksandra (2014)."RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response".Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. pp. 1054–1067.arXiv:1407.6981.doi:10.1145/2660267.2660348.ISBN 978-1-4503-2957-6.
  23. ^google/rappor, GitHub, 2021-07-15
  24. ^Tackling Urban Mobility with Technology by Andrew Eland. Google Policy Europe Blog, Nov 18, 2015.
  25. ^"Apple – Press Info – Apple Previews iOS 10, the Biggest iOS Release Ever".Apple. Retrieved20 June 2023.
  26. ^Collecting telemetry data privately by Bolin Ding, Jana Kulkarni, Sergey Yekhanin. NIPS 2017.
  27. ^Messing, Solomon; DeGregorio, Christina; Hillenbrand, Bennett; King, Gary; Mahanti, Saurav; Mukerjee, Zagreb; Nayak, Chaya; Persily, Nate; State, Bogdan (2020), "Social Sciences",Facebook Privacy-Protected Full URLs Data Set, Zagreb Mukerjee, Harvard Dataverse,doi:10.7910/dvn/tdoapg, retrieved2023-02-08
  28. ^Evans, Georgina; King, Gary (January 2023)."Statistically Valid Inferences from Differentially Private Data Releases, with Application to the Facebook URLs Dataset".Political Analysis.31 (1):1–21.doi:10.1017/pan.2022.1.ISSN 1047-1987.S2CID 211137209.
  29. ^"Disclosure Avoidance for the 2020 Census: An Introduction". 2 November 2021.
  30. ^"Technology Factsheet: Differential Privacy".Belfer Center for Science and International Affairs. Retrieved2021-04-12.
  31. ^McSherry, Frank (25 February 2018)."Uber's differential privacy .. probably isn't".GitHub.
  32. ^Lyu, Min; Su, Dong; Li, Ninghui (1 February 2017). "Understanding the sparse vector technique for differential privacy".Proceedings of the VLDB Endowment.10 (6):637–648.arXiv:1603.01699.doi:10.14778/3055330.3055331.S2CID 5449336.
  33. ^Haeberlen, Andreas; Pierce, Benjamin C.; Narayan, Arjun (2011). "Differential Privacy Under Fire".20th USENIX Security Symposium.
  34. ^Mironov, Ilya (October 2012). "On significance of the least significant bits for differential privacy".Proceedings of the 2012 ACM conference on Computer and communications security(PDF). ACM. pp. 650–661.doi:10.1145/2382196.2382264.ISBN 9781450316514.S2CID 3421585.
  35. ^Andrysco, Marc; Kohlbrenner, David; Mowery, Keaton; Jhala, Ranjit; Lerner, Sorin; Shacham, Hovav (May 2015). "On Subnormal Floating Point and Abnormal Timing".2015 IEEE Symposium on Security and Privacy. pp. 623–639.doi:10.1109/SP.2015.44.ISBN 978-1-4673-6949-7.S2CID 1903469.
  36. ^Kohlbrenner, David; Shacham, Hovav (August 2017). "On the Effectiveness of Mitigations Against Floating-point Timing Channels".Proceedings of the 26th USENIX Conference on Security Symposium. USENIX Association:69–81.
  37. ^Dooley, Isaac; Kale, Laxmikant (September 2006)."Quantifying the interference caused by subnormal floating-point values"(PDF).Proceedings of the Workshop on Operating System Interference in High Performance Applications.

Further reading


Publications


Tutorials
