RFC 9446: Ten Years After
Farrell, et al. | Informational | July 2023
This memo contains the thoughts and recountings of events that transpired during and after the release of information about the United States National Security Agency (NSA) by Edward Snowden in 2013. There are four perspectives: that of someone who was involved with sifting through the information to responsibly inform the public, that of a security area director of the IETF, that of a human rights expert, and that of a computer science and affiliate law professor. The purpose of this memo is to provide some historical perspective, while at the same time offering a view as to what security and privacy challenges the technical community should consider. These essays do not represent a consensus view, but that of the individual authors.
This document is not an Internet Standards Track specification; it is published for informational purposes.
This is a contribution to the RFC Series, independently of any other RFC stream. The RFC Editor has chosen to publish this document at its discretion and makes no statement about its value for implementation or deployment. Documents approved for publication by the RFC Editor are not candidates for any level of Internet Standard; see Section 2 of RFC 7841.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9446.
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
On June 6th, 2013, an article appeared in The Guardian [Guard2013] that was the beginning of a series of what have come to be known as the Snowden revelations, describing certain activities of the United States National Security Agency (NSA). These activities included, amongst others: secret court orders; secret agreements for the receipt of so-called "meta-information" that includes source, destination, and timing of communications; and tapping of communications lines. The breathtaking scope of the operations shocked the Internet technical community and resulted in a sea change within the IETF, IAB, and other standards organizations.
Now that some years have passed, it seems appropriate to reflect on that period of time and to consider what effect the community's actions had, where security has improved, how the threat surface has evolved, what areas haven't improved, and where the community might invest future efforts.
Bruce Schneier begins this compendium of individual essays by bringing us back to 2013, recalling how it was for him and others to report what was happening, and the mindset of those involved. Next, Stephen Farrell reviews the technical community's reactions and in particular the reactions of the IETF community, technical advances, and where threats remain. Then Farzaneh Badii discusses the impact of those advances -- or lack thereof -- on human rights. Finally, Steven M. Bellovin puts the Snowden revelations into an ever-evolving historical context of secrets and secret stealing that spans centuries, closing with some suggestions for the IETF.
Readers are invited to consider what impact we as a community have had, what challenges remain, and what positive contribution the technical community can and should make to address the security and privacy of citizens of the world.
-- Eliot Lear, Independent Submissions Editor for the RFC Series
In 2013 and 2014, I wrote extensively about new revelations regarding NSA surveillance based on the documents provided by Edward Snowden. But I had a more personal involvement as well.
I wrote the essay below in September 2013. The New Yorker agreed to publish it, but The Guardian asked me not to. It was scared of UK law enforcement and worried that this essay would reflect badly on it. And given that the UK police would raid its offices in July 2014, it had legitimate cause to be worried.
Now, ten years later, I offer this as a time capsule of what those early months of Snowden were like.
It's a surreal experience, paging through hundreds of top-secret NSA documents. You're peering into a forbidden world: strange, confusing, and fascinating all at the same time.
I had flown down to Rio de Janeiro in late August at the request of Glenn Greenwald. He had been working on the Edward Snowden archive for a couple of months, and had a pile of more technical documents that he wanted help interpreting. According to Greenwald, Snowden also thought that bringing me down was a good idea.
It made sense. I didn't know either of them, but I have been writing about cryptography, security, and privacy for decades. I could decipher some of the technical language that Greenwald had difficulty with, and understand the context and importance of various documents. And I have long been publicly critical of the NSA's eavesdropping capabilities. My knowledge and expertise could help figure out which stories needed to be reported.
I thought about it a lot before agreeing. This was before David Miranda, Greenwald's partner, was detained at Heathrow airport by the UK authorities; but even without that, I knew there was a risk. I fly a lot -- a quarter of a million miles per year -- and being put on a TSA list, or being detained at the US border and having my electronics confiscated, would be a major problem. So would the FBI breaking into my home and seizing my personal electronics. But in the end, that made me more determined to do it.
I did spend some time on the phone with the attorneys recommended to me by the ACLU and the EFF. And I talked about it with my partner, especially when Miranda was detained three days before my departure. Both Greenwald and his employer, The Guardian, are careful about whom they show the documents to. They publish only those portions essential to getting the story out. It was important to them that I be a co-author, not a source. I didn't follow the legal reasoning, but the point is that The Guardian doesn't want to leak the documents to random people. It will, however, write stories in the public interest, and I would be allowed to review the documents as part of that process. So after a Skype conversation with someone at The Guardian, I signed a letter of engagement.
And then I flew to Brazil.
I saw only a tiny slice of the documents, and most of what I saw was surprisingly banal. The concerns of the top-secret world are largely tactical: system upgrades, operational problems owing to weather, delays because of work backlogs, and so on. I paged through weekly reports, presentation slides from status meetings, and general briefings to educate visitors. Management is management, even inside the NSA. Reading the documents, I felt as though I were sitting through some of those endless meetings.
The meeting presenters try to spice things up. Presentations regularly include intelligence success stories. There were details -- what had been found, and how, and where it helped -- and sometimes there were attaboys from "customers" who used the intelligence. I'm sure these are intended to remind NSA employees that they're doing good. It definitely had an effect on me. Those were all things I want the NSA to be doing.
There were so many code names. Everything has one: every program, every piece of equipment, every piece of software. Sometimes code names had their own code names. The biggest secrets seem to be the underlying real-world information: which particular company MONEYROCKET is; what software vulnerability EGOTISTICALGIRAFFE -- really, I am not making that one up -- is; how TURBINE works. Those secrets collectively have a code name -- ECI, for exceptionally compartmented information -- and almost never appear in the documents. Chatting with Snowden on an encrypted IM connection, I joked that the NSA cafeteria menu probably has code names for menu items. His response: "Trust me when I say you have no idea."
Those code names all come with logos, most of them amateurish and a lot of them dumb. Note to the NSA: take some of that more than ten-billion-dollar annual budget and hire yourself a design firm. Really; it'll pay off in morale.
Once in a while, though, I would see something that made me stop, stand up, and pace around in circles. It wasn't that what I read was particularly exciting, or important. It was just that it was startling. It changed -- ever so slightly -- how I thought about the world.
Greenwald said that that reaction was normal when people started reading through the documents.
Intelligence professionals talk about how disorienting it is living on the inside. You read so much classified information about the world's geopolitical events that you start seeing the world differently. You become convinced that only the insiders know what's really going on, because the news media is so often wrong. Your family is ignorant. Your friends are ignorant. The world is ignorant. The only thing keeping you from ignorance is that constant stream of classified knowledge. It's hard not to feel superior, not to say things like "If you only knew what we know" all the time. I can understand how General Keith Alexander, the director of the NSA, comes across as so supercilious; I only saw a minute fraction of that secret world, and I started feeling it.
It turned out to be a terrible week to visit Greenwald, as he was still dealing with the fallout from Miranda's detention. Two other journalists, one from The Nation and the other from The Hindu, were also in town working with him. A lot of my week involved Greenwald rushing into my hotel room, giving me a thumb drive of new stuff to look through, and rushing out again.
A technician from The Guardian got a search capability working while I was there, and I spent some time with it. Question: when you're given the capability to search through a database of NSA secrets, what's the first thing you look for? Answer: your name.
It wasn't there. Neither were any of the algorithm names I knew, not even algorithms I knew that the US government used.
I tried to talk to Greenwald about his own operational security. It had been incredibly stupid for Miranda to be traveling with NSA documents on the thumb drive. Transferring files electronically is what encryption is for. I told Greenwald that he and Laura Poitras should be sending large encrypted files of dummy documents back and forth every day.
Once, at Greenwald's home, I walked into the backyard and looked for TEMPEST receivers hiding in the trees. I didn't find any, but that doesn't mean they weren't there. Greenwald has a lot of dogs, but I don't think that would hinder professionals. I'm sure that a bunch of major governments have a complete copy of everything Greenwald has. Maybe the black bag teams bumped into each other in those early weeks.
I started doubting my own security procedures. Reading about the NSA's hacking abilities will do that to you. Can it break the encryption on my hard drive? Probably not. Has the company that makes my encryption software deliberately weakened the implementation for it? Probably. Are NSA agents listening in on my calls back to the US? Very probably. Could agents take control of my computer over the Internet if they wanted to? Definitely. In the end, I decided to do my best and stop worrying about it. It was the agency's documents, after all. And what I was working on would become public in a few weeks.
I wasn't sleeping well, either. A lot of it was the sheer magnitude of what I saw. It's not that any of it was a real surprise. Those of us in the information security community had long assumed that the NSA was doing things like this. But we never really sat down and figured out the details, and to have the details confirmed made a big difference. Maybe I can make it clearer with an analogy. Everyone knows that death is inevitable; there's absolutely no surprise about that. Yet it arrives as a surprise, because we spend most of our lives refusing to think about it. The NSA documents were a bit like that. Knowing that it is surely true that the NSA is eavesdropping on the world, and doing it in such a methodical and robust manner, is very different from coming face-to-face with the reality of it and the details of how it is doing it.
I also found it incredibly difficult to keep the secrets. The Guardian's process is slow and methodical. I move much faster. I drafted stories based on what I found. Then I wrote essays about those stories, and essays about the essays. Writing was therapy; I would wake up in the wee hours of the morning, and write an essay. But that put me at least three levels beyond what was published.
Now that my involvement is out, and my first essays are out, I feel a lot better. I'm sure it will get worse again when I find another monumental revelation; there are still more documents to go through.
I've heard it said that Snowden wants to damage America. I can say with certainty that he does not. So far, everyone involved in this incident has been incredibly careful about what is released to the public. There are many documents that could be immensely harmful to the US, and no one has any intention of releasing them. The documents the reporters release are carefully redacted. Greenwald and I repeatedly debated with The Guardian editors the newsworthiness of story ideas, stressing that we would not expose government secrets simply because they're interesting.
The NSA got incredibly lucky; this could have ended with a massive public dump like Chelsea Manning's State Department cables. I suppose it still could. Despite that, I can imagine how this feels to the NSA. It's used to keeping this stuff behind multiple levels of security: gates with alarms, armed guards, safe doors, and military-grade cryptography. It's not supposed to be on a bunch of thumb drives in Brazil, Germany, the UK, the US, and who knows where else, protected largely by some random people's opinions about what should or should not remain secret. This is easily the greatest intelligence failure in the history of ever. It's amazing that one person could have had so much access with so little accountability, and could sneak all of this data out without raising any alarms. The odds are close to zero that Snowden is the first person to do this; he's just the first person to make public that he did. It's a testament to General Alexander's power that he hasn't been forced to resign.
It's not that we weren't being careful about security, it's that our standards of care are so different. From the NSA's point of view, we're all major security risks, myself included. I was taking notes about classified material, crumpling them up, and throwing them into the wastebasket. I was printing documents marked "TOP SECRET/COMINT/NOFORN" in a hotel lobby. And once, I took the wrong thumb drive with me to dinner, accidentally leaving the unencrypted one filled with top-secret documents in my hotel room. It was an honest mistake; they were both blue.
If I were an NSA employee, the policy would be to fire me for that alone.
Many have written about how being under constant surveillance changes a person. When you know you're being watched, you censor yourself. You become less open, less spontaneous. You look at what you write on your computer, dwell on what you've said on the telephone, and wonder how it would sound taken out of context, from the perspective of a hypothetical observer. You're more likely to conform. You suppress your individuality. Even though I have worked in privacy for decades, and already knew a lot about the NSA and what it does, the change was palpable. That feeling hasn't faded. I am now more careful about what I say and write. I am less trusting of communications technology. I am less trusting of the computer industry.
After much discussion, Greenwald and I agreed to write three stories together to start. All of those are still in progress. In addition, I wrote two commentaries on the Snowden documents that were recently made public. There's a lot more to come; even Greenwald hasn't looked through everything.
Since my trip to Brazil (one month before), I've flown back to the US once and domestically seven times -- all without incident. I'm not on any list yet. At least, none that I know about.
As it happened, I didn't write much more with Greenwald or The Guardian. Those two had a falling out, and by the time everything settled and both began writing about the documents independently -- Greenwald at the newly formed website The Intercept -- I got cut out of the process somehow. I remember hearing that Greenwald was annoyed with me, but I never learned the reason. We haven't spoken since.
Still, I was happy with the one story I was part of: how the NSA hacks Tor. I consider it a personal success that I pushed The Guardian to publish NSA documents detailing QUANTUM. I don't think that would have gotten out any other way. And I still use those pages today when I teach cybersecurity to policymakers at the Harvard Kennedy School.
Other people wrote about the Snowden files, and wrote a lot. It was a slow trickle at first, and then a more consistent flow. Between Greenwald, Bart Gellman, and The Guardian reporters, there ended up being a steady stream of news. (Bart brought in Ashkan Soltani to help him with the technical aspects, which was a great move on his part, even if it cost Ashkan a government job later.) More stories were covered by other publications.
It started getting weird. Both Greenwald and Gellman held documents back so they could publish them in their books. Jake Appelbaum, who had not yet been accused of sexual assault by multiple women, was working with Poitras. He partnered with Der Spiegel to release an implant catalog from the NSA's Tailored Access Operations group. To this day, I am convinced that the document was not in the Snowden archives: that Jake got it somehow, and it was released with the implication that it was from Edward Snowden. I thought it was important enough that I started writing about each item in that document in my blog: "NSA Exploit of the Week." That got my website blocked by the DoD: I keep a framed print of the censor's message on my wall.
Perhaps the most surreal document disclosures were when artists started writing fiction based on the documents. This was in 2016, when Laura Poitras built a secure room in New York to house the documents. By then, the documents were years out of date. And now they're over a decade out of date. (They were leaked in 2013, but most of them were from 2012 or before.)
I ended up being something of a public ambassador for the documents. When I got back from Rio, I gave talks at a private conference in Woods Hole, the Berkman Center at Harvard, something called the Congress on Privacy and Surveillance in Geneva, events at both CATO and New America in DC, an event at the University of Pennsylvania, an event at EPIC, a "Stop Watching Us" rally in DC, the RISCS conference in London, the ISF in Paris, and...then...at the IETF meeting in Vancouver in November 2013. (I remember little of this; I am reconstructing it all from my calendar.)
What struck me at the IETF was the indignation in the room, and the calls to action. And there was action, across many fronts. We technologists did a lot to help secure the Internet, for example.
The government didn't do its part, though. Despite the public outcry, investigations by Congress, pronouncements by President Obama, and federal court rulings, I don't think much has changed. The NSA canceled a program here and a program there, and it is now more public about defense. But I don't think it is any less aggressive about either bulk or targeted surveillance. Certainly its government authorities haven't been restricted in any way. And surveillance capitalism is still the business model of the Internet.
And Edward Snowden? We were in contact for a while on Signal. I visited him once in Moscow, in 2016. And I had him do a guest lecture to my class at Harvard for a few years, remotely by Jitsi. Afterwards, I would hold a session where I promised to answer every question he would evade or not answer, explain every response he did give, and be candid in a way that someone with an outstanding arrest warrant simply cannot. Sometimes I thought I could channel Snowden better than he could.
But now it's been a decade. Everything he knows is old and out of date. Everything we know is old and out of date. The NSA suffered an even worse leak of its secrets by the Russians, under the guise of the Shadow Brokers, in 2016 and 2017. The NSA has rebuilt. It again has capabilities we can only surmise.
In 2013, the IETF and, more broadly, the Internet technical, security, and privacy research communities were surprised by the surveillance and attack efforts exposed by the Snowden revelations [Timeline]. While the potential for such was known, it was the scale and pervasiveness of the activities disclosed that was alarming and, I think it fair to say, quite annoying for very many Internet engineers.
As for the IETF's reaction, informal meetings during the July 2013 IETF meeting in Berlin indicated that IETF participants considered that these revelations showed that we needed to do more to improve the security and privacy properties of IETF protocols, and to help ensure deployments made better use of the security and privacy mechanisms that already existed. In August, the IETF set up a new mailing list [Perpass], which became a useful venue for triaging proposals for work on these topics. At the November 2013 IETF meeting, there was a lively and very well attended plenary session [Plenary-video] on "hardening the Internet" against such attacks, followed by a "birds of a feather" session [Perpass-BoF] devoted to more detailed discussion of possible actions in terms of new working groups, protocols, and Best Current Practice (BCP) documents that could help improve matters. This was followed in February/March 2014 by a joint IAB/W3C workshop on "strengthening the Internet against pervasive monitoring" [STRINT] held in London and attended by 150 engineers (still the only IAB workshop in my experience where we needed a waiting list for people after capacity for the venue was reached!). The STRINT workshop report was eventually published as [RFC7687] in 2015, but in the meantime, work proceeded on a BCP document codifying that the IETF community considered that "pervasive monitoring is an attack" [RFC7258] (aka BCP 188). The IETF Last Call discussion for that short document included more than 1000 emails -- while there was broad agreement on the overall message, a number of IETF participants considered enshrining that message in the RFC Series and IETF processes controversial. In any case, the BCP was published in May 2014. The key statement on which rough consensus was reached is in the abstract of RFC 7258 and says "Pervasive monitoring is a technical attack that should be mitigated in the design of IETF protocols, where possible." That document has since been referenced [Refs-to-7258] by many IETF working groups and RFCs as justifying additional work on security and privacy. Throughout that period and beyond, the repercussions of the Snowden revelations remained a major and ongoing agenda item for both of the IETF's main technical management bodies, the IAB and the IESG (on which I served at the time).
So far, I've only described the processes with which the IETF dealt with the attacks, but there was, of course, also much technical work started by IETF participants that was at least partly motivated by the Snowden revelations.
In November 2013, a working group was established to document better practices for using TLS in applications [UTA] so that deployments would be less at risk in the face of some of the attacks related to stripping TLS or having applications misuse TLS APIs or parameters. Similar work was done later to update recommendations for use of cryptography in other protocols in the CURDLE Working Group [CURDLE]. The CURDLE Working Group was, to an extent, created to enable use of a set of new elliptic curves that had been documented by the IRTF Crypto Forum Research Group [CFRG]. That work in turn had been partly motivated by (perhaps ultimately unfounded) concerns about elliptic curves defined in NIST standards, following the DUAL_EC_DRBG debacle [Dual-EC] (described further below) where a NIST random number generator had been deliberately engineered to produce output that could be vulnerable to NSA attack.
Work to develop a new version of TLS was started in 2014, mainly due to concerns that TLS 1.2 and earlier version implementations had been shown to be vulnerable to a range of attacks over the years. The work to develop TLS 1.3 [RFC8446] also aimed to encrypt more of the handshake so as to expose less information to network observers -- a fairly direct result of the Snowden revelations. Work to further improve TLS in this respect continues today using the so-called Encrypted Client Hello (ECH) mechanism [TLS-ECH] to remove one of the last privacy leaks present in current TLS.
Work on ECH was enabled by significant developments to encrypt DNS traffic, using DNS over TLS (DoT) [RFC7858] or DNS Queries over HTTPS (DoH) [RFC8484], which also started as a result of the Snowden revelations. Prior to that, privacy hadn't really been considered when it came to DNS data or (more importantly) the act of accessing DNS data. The trend towards encrypting DNS traffic represents a significant change for the Internet, both in terms of reducing cleartext, but also in terms of moving points-of-control. The latter aspect was, and remains, controversial, but the IETF did its job of defining new protocols that can enable better DNS privacy. Work on HTTP version 2 [RFC9113] and QUIC [RFC9000] further demonstrates the trend in the IETF towards always encrypting protocols as the new norm, at least at and above the transport layer.
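To make the DoH mechanism concrete, the following sketch builds a raw DNS query for an A record and sends it over HTTPS in the style of RFC 8484 (POST with the application/dns-message media type). It is an illustrative sketch only: the choice of resolver URL and the use of Python's standard library are assumptions of this example, not part of any cited specification.

   import struct
   import urllib.request

   def build_query(name: str, qtype: int = 1) -> bytes:
       """Build a minimal DNS query message (RFC 1035), QCLASS=IN."""
       # ID=0 (fine for DoH), flags=0x0100 (recursion desired), 1 question.
       header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
       qname = b"".join(
           bytes([len(label)]) + label.encode("ascii")
           for label in name.rstrip(".").split(".")
       ) + b"\x00"
       return header + qname + struct.pack(">HH", qtype, 1)

   def doh_query(name: str, url: str = "https://dns.google/dns-query") -> bytes:
       """Send the query over HTTPS per RFC 8484 (POST, application/dns-message)."""
       req = urllib.request.Request(
           url,
           data=build_query(name),
           headers={"Content-Type": "application/dns-message",
                    "Accept": "application/dns-message"},
       )
       with urllib.request.urlopen(req) as resp:
           return resp.read()

   if __name__ == "__main__":
       raw = doh_query("example.com")
       answers = struct.unpack(">H", raw[6:8])[0]  # ANCOUNT from the DNS header
       print(f"received {len(raw)} bytes, {answers} answer record(s)")

The point of the exercise: to a network observer, this exchange looks like any other HTTPS traffic, rather than the cleartext UDP port 53 queries that were trivially collectable in 2013.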
Of course, not all such initiatives bore fruit; for example, attempts to define a new MPLS encryption mechanism [MPLS-OPPORTUNISTIC-ENCRYPT] foundered due to a lack of interest and the existence of the already deployed IEEE Media Access Control Security (MACsec) scheme. But there has been a fairly clear trend towards trying to remove cleartext from the Internet as a precursor to providing improved privacy when considering network observers as attackers.
The IETF, of course, forms only one part of the broader Internet technical community, and there were many non-IETF activities triggered by the Snowden revelations, a number of which also eventually resulted in new IETF work to standardise better security and privacy mechanisms developed elsewhere.
In 2013, the web was largely unencrypted despite HTTPS being relatively usable, and that was partly due to problems using the Web PKI at scale. The Let's Encrypt initiative [LE] issued its first certificates in 2015 as part of its aim to try to move the web towards being fully encrypted, and it has been extremely successful in helping achieve that goal. Subsequently, the automation protocols developed for Let's Encrypt were standardised in the IETF's ACME Working Group [ACME].
In 2013, most email transport between mail servers was cleartext, directly enabling some of the attacks documented in the Snowden documents. Significant efforts by major mail services and MTA software developers since then have resulted in more than 90% of email being encrypted between mail servers, and various IETF protocols have been defined in order to improve that situation, e.g., SMTP MTA Strict Transport Security (MTA-STS) [RFC8461].
Lastly, MAC addresses have historically been long-term fixed values visible to local networks (and beyond), which enabled some tracking attacks that were documented in the Snowden documents [Toronto]. Implementers, vendors, and the IEEE 802 standards group recognised this weakness and started work on MAC address randomisation that in turn led to the IETF's MADINAS Working Group [MADINAS], which aims to ensure randomised MAC addresses can be used on the Internet without causing unintentional harm. There is also a history of IETF work on deprecating MAC-address-based IPv6 interface identifiers and advocating pseudorandom identifiers and temporary addresses, some of which pre-dates Snowden [RFC7217] [RFC8064] [RFC8981].
In summary, the significant volume of technical work pursued in the IETF and elsewhere as a result of the Snowden revelations has focussed on two main things: first, decreasing the amount of plaintext that remains visible to network observers and, second, reducing the number of long-term identifiers that enable unexpected identification or re-identification of devices or users. This work is not by any means complete, nor is deployment universal, but significant progress has been made, and the work continues even if the level of annoyance at the attack has faded somewhat over time.
One should also note that there has been pushback against these improvements in security and privacy and the changes they cause for deployments. That has come from more or less two camps: those on whom these improvements force change, who tend to react badly but later figure out how to adjust, and those who seemingly prefer not to strengthen security so as to, for example, continue to achieve what they call "visibility", even in the face of the many engineers who correctly argue that such an anti-encryption approach inevitably leads to worse security overall. The recurring nature of this kind of pushback is nicely illustrated by [RFC1984]. That informational document was published in 1996 as an IETF response to an early iteration of the perennial "encryption is bad" argument. In 2015, the unmodified 1996 text was upgraded to a BCP (BCP 200), as the underlying arguments have not changed, and will not change.
Looking back on all the above from a 2023 vantage point, I think that, as a community of Internet engineers, we got a lot right, but that today there's way more that needs to be done to better protect the security and privacy of people who use the Internet. In particular, we (the technical community) haven't done nearly as good a job at countering surveillance capitalism [Zubhoff2019], which has exploded in the last decade. In part, that's because many of the problems are outside of the scope of bodies such as the IETF. For example, intrusive backend sharing of people's data for advertising purposes can't really be mitigated via Internet protocols.
However, I also think that the real annoyance felt with respect to the Snowden revelations is (in general) not felt nearly as much when it comes to the legal but hugely privacy-invasive activities of major employers of Internet engineers.
It's noteworthy that RFC 7258 doesn't consider that bad actors are limited to governments, and personally, I think many advertising industry schemes for collecting data are egregious examples of pervasive monitoring and hence ought also to be considered an attack on the Internet that ought to be mitigated where possible. However, the Internet technical community clearly hasn't acted in that way over the last decade.
Perhaps that indicates that Internet engineers and the bodies in which they congregate need to place much more emphasis on standards for ethical behaviour than has been the case for the first half-century of the Internet. And while it would be good to see the current leaders of Internet bodies work to make progress in that regard, at the time of writing, it sadly seems more likely that government regulators will be the ones to try to force better behaviour. That of course comes with a significant risk of having regulations that stymie the kind of permissionless innovation that characterised many earlier Internet successes.
So while we got a lot right in our reaction to Snowden's revelations, currently, we have a "worse" Internet. Nonetheless, I do still hope to see a sea change there, as the importance of real Internet security and privacy for people becomes utterly obvious to all, even the most hard-core capitalists and government signals intelligence agencies. That may seem naive, but I remain optimistic that, as a fact-based community, we (and eventually our employers) will recognise that the lesser risk is to honestly aim to provide the best security and privacy practically possible.
It is very difficult to empirically measure the effect of Snowden's revelations on human rights and the Internet. Anecdotally, we have been witnessing dominant regulatory and policy approaches that impact technologies and services that are at the core of protecting human rights on the Internet. (A range of European Union laws aims to address online safety or concentration of data. There are many more regulations that have an impact on the Internet [Masnick2023].) There has been little progress in fixing technical and policy issues that help enable human rights. The Snowden revelations did not revolutionize Internet governance and technical approaches to support human rights such as freedom of expression, freedom of association and assembly, and privacy. They did not decrease the number of Internet shutdowns nor the eagerness of authoritarian (and even, to some extent, democratic) countries to territorialize the Internet. In some cases, the governments argued that they should have more data sovereignty or Internet sovereignty. Perhaps the revelations helped with the evolution of some technical and policy aspects.
After Snowden's revelations 10 years ago, engineers and advocates at the IETF responded in a few ways. One prominent response was the issuance of a BCP document, "Pervasive Monitoring Is an Attack" [RFC7258] by Farrell and Tschofenig. The responses to the Snowden revelations did not mean that the IETF had lost sight of issues such as privacy and surveillance. There were instances of resistance to surveillance in the past by engineers (we do not delve into how successful that was in protecting human rights). However, historically, many engineers believed that widespread and habitual surveillance was too expensive to be practical. The revelations proved them wrong.
Rights-centered activists were also involved with the IETF before the revelations. For example, staff from the Center for Democracy and Technology (CDT) was undertaking work at the IETF (and was a member of the Internet Architecture Board) and held workshops about the challenges of creating privacy-protective protocols and systems. The technical shortcomings that were exploited by the National Security Agency to carry out mass-scale surveillance were recognized by the IETF before the Snowden revelations [Garfinkel1995] [RFC6462]. In 2012, Joy Liddicoat and Avri Doria wrote a report for the Internet Society that extensively discussed the processes and principles of human rights and Internet protocols [Doria2012].
Perhaps the Snowden revelations brought more attention to the IETF and its work as it related to important issues, such as privacy and freedom of expression. They might have also expedited and helped with more easily convening the Human Rights Protocol Considerations (HRPC) Research Group in the Internet Research Task Force (IRTF) in July 2015. The HRPC RG was originally co-chaired by Niels ten Oever (who worked at Article 19 at the time) and Internet governance activist Avri Doria. The charter of the HRPC RG states that the group was established "to research whether standards and protocols can enable, strengthen or threaten human rights, as defined in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR)."
During the past decade, a few successful strides were made to create protocols that, when and if implemented, aim at protecting the privacy of users, as well as helping with reducing pervasive surveillance. These efforts were in keeping with the consensus of the IETF found in RFC 7258. Sometimes these protocols have anti-censorship qualities as well. A few examples immediately come to mind: 1) the encryption of DNS queries (for example, DNS over HTTPS), 2) the ACME protocol underpinning the Let's Encrypt initiative, and 3) the Registration Data Access Protocol (RDAP) [RFC7480] [RFC7481] [RFC8056] [RFC9082] [RFC9083] [RFC9224]. (It is debatable that RDAP had anything to do with the Snowden revelations, but it is still a good example and is finally being implemented.)
The DNS Queries over HTTPS (DoH) protocol aimed to encrypt DNS queries. Four years after RFC 7258, DoH was developed to tackle both active and passive monitoring of DNS queries. It is also a tool that can help with combatting censorship. Before the revelations, encrypting DNS queries would likely have been dismissed as expensive or unnecessary, but the Snowden revelations made it more plausible. Let's Encrypt was not an Internet protocol, but it was an initiative that aimed to encrypt the web, and later on some of the automation protocols were standardized in the IETF ACME Working Group. RDAP could solve a long-term problem: redacting the domain name registrants' (and IP address holders') sensitive, personal data while at the same time enabling legitimate access to the information. As to the work of the HRPC Research Group, it has so far issued [RFC8280] by ten Oever and Cath and a number of informational Internet-Drafts.
While we cannot really argue that all the movements and privacy-preserving protocols and initiatives that enable protecting human rights at the infrastructure layer solely or directly result from the Snowden revelations, I think it is safe to say that the revelations helped with expediting the resolution of some of the "technical" hesitations that had an effect on fixing Internet protocols that enabled protection of human rights.
Unfortunately, the Snowden revelations have not yet helped us meaningfully with adopting a human rights approach. We can't agree on prioritizing human rights in our Internet communities for a host of reasons. This could be due to: 1) human rights are sometimes in conflict with each other; 2) it is simply not possible to mitigate the human rights violation through the Internet protocol; 3) it is not obvious to the engineers in advance how the Internet protocol contributes to enabling human rights protections, or precisely what they ought to do; 4) the protocol is already there, but market, law, and a host of other societal and political issues do not allow for widespread implementation.
The IETF did not purposefully take a long time to adopt and implement protocols that enabled human rights. There were technical and political issues that created barriers. For example, as WHOIS was not capable of accommodating a tiered-access option, the IETF community attempted a few times to create a protocol that would disclose the necessary information of IP holders and domain name registrants while at the same time protecting their data (the Cross Registry Internet Service Protocol (CRISP) and later on the Internet Registry Information Service (IRIS) are examples). However, IRIS was technically very difficult to implement. It was not until RDAP was developed and the General Data Protection Regulation (GDPR) was enacted that the Internet Corporation for Assigned Names and Numbers had to consider instructing registries and registrars to implement RDAP, and its community had to come up with a privacy-compliant policy. Overall, a host of regulatory and market incentives can halt or slow down the implementation of human-rights-enabling protocols, and implementation could depend on other organizations with their own political and stakeholder conflicts. Sometimes the protocol is available, but the regulatory framework and the market do not allow for implementation. Sometimes the surrounding context includes practical dimensions that are easy to overlook in a purely engineering-focused argument.
A curious example of this is sanctions regimes that target transactions involving economically valuable assets. As a result, sanctions might limit sanctioned nations' and entities' access to IPv4 resources (because the existence of a resale market for these addresses causes acquiring them to be interpreted as buying something of value), though the same consideration may not apply to IPv6 address resources. But IPv6 adoption itself depends on a host of complex factors that are by no means limited to technical comparisons of the properties of IPv4 and IPv6. Someone focused only on technical features of protocols may devise an elegant solution but be surprised both by deployment challenges and unintended downstream effects. Sometimes there are arguments over the implementation of a protocol because it is perceived that, while it can protect freedom of expression and reduce surveillance, it can hamper other human rights. For instance, the technical community and some network operators still have doubts about the implementation of DNS over HTTPS, despite its potential to circumvent censorship and its ability to encrypt DNS queries. The arguments against implementation of DoH include protection of children online and lack of law enforcement access to data.
We must acknowledge that sometimes the technical solutions that we use to protect one right (for example, encryption to protect the right to privacy or to prevent surveillance) could potentially affect technical and policy solutions that try to protect other human rights (for example, encryption could prevent financial institutions from monitoring employees' network activities to detect fraudulent behavior). Acknowledging and identifying these conflicts can help us come up with alternative techniques that could protect human rights while not hampering other technical solutions such as encryption. Where such alternative techniques are not possible, acknowledging the shortcoming could clarify and bring to light the trade-offs that we have accepted in our Internet system.
Ironically, we advocate for connectivity and believe expressing oneself on the Internet is a human right, but when a war erupts, we resort to tools that impact that very concept. For example, some believe that, by imposing sanctions on critical properties of the Internet, we can punish the perpetrators of a war. The Regional Internet Registries that are in charge of registration of IP addresses have shown resilience to these requests. However, some tech companies (for example, Cogent [Roth2022]) decided not to serve sanctioned countries and overcomplied with sanctions. Overcompliance with sanctions could hamper ordinary people's access to the Internet [Badii2023].
Perhaps we can solve some of these problems by undertaking a thorough impact assessment and contextualization to reveal how and why Internet protocols affect human rights (something Fidler and I argued for [Badii2021]). Contextualization and impact assessment can reveal how each Internet protocol or each line of code, in which systems, has an impact on which and whose human rights.
The HRPC RG (which I am a part of) and the larger human rights and policy analyst communities are still struggling to analyze legal, social, and market factors alongside the protocols to have a good understanding of what has an impact and what has to be changed. It is hard, but it is not impossible. If we thoroughly document and research the lifecycle of an Internet protocol and contextualize it, we might have a better understanding of which parts of the protocol to fix and how to fix them in order to protect human rights.
Overall, the revelations did, to some extent, contribute to the evolution of our ideas and perspectives. Our next step should be to undertake research on the impact of Internet systems (including Internet protocols) on human rights, promote the implementation of protocols good for human rights through policy and advocacy, and focus on which technical parts we can standardize to help with more widespread implementation of human-rights-enabling Internet protocols.
It's not a secret: many governments in the world don't like it when people encrypt their traffic. More precisely, they like strong cryptography for themselves but not for others, whether those others are private citizens or other countries. But the history is longer and more complex than that.
For much of written history, both governments and individuals used cryptography to protect their messages. To cite just one famous example, Julius Caesar is said to have encrypted messages by shifting letters in the alphabet by 3 [Kahn1996]. In modern parlance, 3 was the key, and each letter was encrypted with
C[i] = (P[i] + 3) mod 23
(The Latin alphabet of his time had only 23 letters.) Known Arabic writings on cryptanalysis go back to at least the 8th century; their sophistication shows that encryption was reasonably commonly used. In the 9th century, Abū Yūsuf Yaʻqūb ibn ʼIsḥāq aṣ-Ṣabbāḥ al-Kindī developed and wrote about frequency analysis as a way to crack ciphers [Borda2011] [Kahn1996].
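As a small illustration of the formula above, here is a minimal sketch of the scheme; the 23-letter classical Latin alphabet used below (no J, U, or W) is the conventional reconstruction, assumed here for illustration.

   # The 23-letter Latin alphabet of Caesar's era (no J, U, or W).
   LATIN = "ABCDEFGHIKLMNOPQRSTVXYZ"

   def caesar(text: str, key: int = 3) -> str:
       """Encrypt by shifting each letter: C[i] = (P[i] + key) mod 23."""
       return "".join(
           LATIN[(LATIN.index(ch) + key) % len(LATIN)] if ch in LATIN else ch
           for ch in text.upper()
       )

   print(caesar("VENI VIDI VICI"))  # -> "ZHQM ZMGM ZMFM"

Note that in this 23-letter alphabet "I" shifts to "M" rather than the "L" a modern 26-letter alphabet would give; the modular arithmetic is the same either way.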
In an era of minimal literacy, though, there wasn't that much use of encryption, simply because most people could neither read nor write. Governments used encryption for diplomatic messages, and cryptanalysts followed close behind. The famed Black Chambers of the Renaissance era read messages from many different governments, while early cryptographers devised stronger and stronger ciphers [Kahn1996]. In Elizabethan times in England, Sir Francis Walsingham's intelligence agency intercepted and decrypted messages from Mary, Queen of Scots; these messages formed some of the strongest evidence against her and eventually led to her execution [Kahn1996].
This pattern continued for centuries. In the United States, Thomas Jefferson invented the so-called wheel cipher in the late 18th century; it was reinvented about 100 years later by Étienne Bazeries and used as a standard American military cipher well into World War II [Kahn1996]. Jefferson and other statesmen of the late 18th and early 19th centuries regularly used cryptography when communicating with each other. An encrypted message was even part of the evidence introduced in Aaron Burr's 1807 trial for treason [Kerr2020] [Kahn1996]. Edgar Allan Poe claimed that he could cryptanalyze any message sent to him [Kahn1996].
The telegraph era upped the ante. In the US, just a year after Samuel Morse deployed his first telegraph line between Baltimore and Washington, his business partner, Francis Smith, published a codebook to help customers protect their traffic from prying eyes [Smith1845]. In 1870, Britain nationalized its domestic telegraph network; in response, Robert Slater published a more sophisticated codebook [Slater1870]. On the government side, Britain took advantage of its position as the central node in the world's international telegraphic networks to read a great deal of traffic passing through the country [Headrick1991] [Kennedy1971]. They used this ability strategically, too -- when war broke out in 1914, the British Navy cut Germany's undersea telegraph cables, forcing them to use radio; an intercept of the so-called Zimmermann telegram, when cryptanalyzed, arguably led to American entry into the war and thence to Germany's defeat. Once the US entered the war, it required users of international telegraph lines to deposit copies of the codebooks they used for compression, so that censors could check messages for prohibited content [Kahn1996].
In Victorian Britain, private citizens, often lovers, used encryption in newspapers' personal columns to communicate without their parents' knowledge. Charles Wheatstone and Charles Babbage used to solve these elementary ciphers routinely for their own amusement [Kahn1996].
This pattern continued for many years. Governments regularly used ciphers and codes, while other countries tried to break them; private individuals would sometimes use encryption, but not often, and rarely well. But the two World Wars marked a sea change, one that would soon reverberate into the civilian world.
The first World War featured vast troop movements by all parties; this in turn required a lot of encrypted communications, often by telegraph or radio. These messages were often easily intercepted in bulk. Furthermore, the difficulty of encrypting large volumes of plaintext led to the development of a variety of mechanical encryption devices, including Germany's famed Enigma machine. World War II amplified both trends. It also gave rise to machine-assisted cryptanalysis, such as the United Kingdom's bombes (derived from an earlier Polish design) and Colossus machine, and the Americans' device for cracking Japan's PURPLE system. The US also used punch card-based tabulators to assist in breaking other Japanese codes, such as the Japanese Imperial Navy's JN-25 [Kahn1996] [Rowlett1998].
These developments set the stage for the postwar SIGINT (Signals Intelligence) environment. Many intragovernmental messages were sent by radio, making them easy to intercept; advanced cryptanalytic machines made cryptanalysis easier. Ciphers were getting stronger, though, and government SIGINT agencies did not want to give up their access to data. While there were undoubtedly many developments, two are well known.
The first involved Crypto AG, a Swedish (and later Swiss) manufacturer of encryption devices. The head of that company, Boris Hagelin, was a friend of William F. Friedman, a pioneering American cryptologist. During the 1950s, Crypto AG sold its devices to other governments; apparently at Friedman's behest, Hagelin weakened the encryption in a way that let the NSA read the traffic [Miller2020].
The story involving the British is less well documented and less clear. When some of Britain's former colonies gained their independence, the British government gave them captured, war-surplus Enigma machines to protect their own traffic. Some authors contend that this was deceptive, in that these former colonies did not realize that the British could read Enigma-protected traffic; others claim that this was obvious but that these countries didn't care: Britain was no longer their enemy; it was neighboring countries they were worried about. Again, though, this concerned governmental use of encryption [Kahn1996] [Baldwin2022]. There was still little private use.
The modern era of conflict between an individual's desire for privacy and governments' desire to read traffic began around 1972. The grain harvest in the USSR had failed; since relations between the Soviet Union and the United States were temporarily comparatively warm, the Soviet grain company -- an arm of the Soviet government, of course -- entered into negotiations with private American companies. Unknown to the Americans at the time, Soviet intelligence was intercepting the phone calls of the American negotiating teams. In other words, private companies had to deal with state actors as a threat. Eventually, US intelligence learned of this and came to a realization: the private sector needed strong cryptography, too, to protect American national interests [Broad1982] [Johnson1998]. This underscored the need for strong cryptography to protect American civilian traffic -- but the SIGINT people were unhappy at the thought of more encryption that they couldn't break.
Meanwhile, the US was concerned about protecting unclassified data [Landau2014]. In 1973 and again in 1974, the National Bureau of Standards (NBS) put out a call for a strong, modern encryption algorithm. IBM submitted Lucifer, an internally developed algorithm based on what has become known as a 16-round Feistel network. The original version used a long key. It seemed quite strong, so NBS sent it off to the NSA to get their take. The eventual design, which was adopted in 1976 as the Data Encryption Standard (DES), differed in some important ways from Lucifer. First, the so-called S-boxes, the source of the cryptologic strength of DES, were changed, and were now demonstrably not composed of random integers. Many researchers alleged that the S-boxes contained an NSA back door. It took nearly 20 years for the truth to come out: the S-boxes were in fact strengthened, not weakened. Most likely, IBM independently discovered the attack now known as differential cryptanalysis, though some scholars suspect that the NSA told them about it. The nonrandom S-boxes protected against this attack. The second change, though, was clearly insisted on by the NSA: the key size was shortened, from Lucifer's 112 bits to DES's 56 bits. We now know that the NSA wanted a 48-bit key size, while IBM wanted 64 bits; they compromised at 56 bits.
Whitfield Diffie and Martin Hellman, at Stanford University, wondered about the 56-bit keys. In 1979, they published a paper demonstrating that the US government, but few others, could afford to build a brute-force cracking machine, one that could try all 2^56 possible keys to crack a message. NSA denied tampering with the design; a Senate investigating committee found that assertion to be correct, but did not discuss the shortened key length issue.
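To put those key sizes in perspective, here is a back-of-the-envelope sketch; the assumed search rate of a billion keys per second is purely illustrative and not a figure from any of the sources.

   # Rough brute-force cost comparison for the key sizes discussed above.
   # The search rate below is an illustrative assumption only.

   SECONDS_PER_YEAR = 365.25 * 24 * 3600

   def years_to_search(key_bits: int, keys_per_second: float) -> float:
       """Expected years to try half the keyspace at a given rate."""
       return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

   for bits in (48, 56, 64, 112):
       # Assume a hypothetical machine testing 10^9 keys per second.
       print(f"{bits:3d}-bit key: ~{years_to_search(bits, 1e9):.3g} years")

At that assumed rate, a 48-bit key falls in hours, a 56-bit key in about a year, while Lucifer's original 112-bit key would take on the order of 10^16 years, which is why each halving of the key length mattered so much.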
This, however, was not Diffie and Hellman's greatest contribution to cryptology. A few years earlier, they had published a paper inventing what is now known as public key cryptography. (In fact, public key encryption had been invented a few years earlier at UK Government Communications Headquarters (GCHQ), but they kept their discovery classified until 1997.) In 1978, Ronald Rivest, Adi Shamir, and Leonard Adleman devised the RSA algorithm, which made it usable. (An NSA employee, acting on his own, sent a letter warning that academic conferences on cryptology might violate US export laws.)
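For readers unfamiliar with RSA, here is a toy sketch of the algorithm using the tiny primes common in textbook examples; it is wholly insecure and for illustration only, as real deployments use moduli thousands of bits long.

   # Toy RSA with tiny primes -- insecure, for illustration only.
   p, q = 61, 53
   n = p * q                  # public modulus: 3233
   phi = (p - 1) * (q - 1)    # 3120
   e = 17                     # public exponent, coprime with phi
   d = pow(e, -1, phi)        # private exponent: 2753 (Python 3.8+)

   m = 65                     # a message, encoded as a number smaller than n
   c = pow(m, e, n)           # encrypt: c = m^e mod n  (gives 2790)
   assert pow(c, d, n) == m   # decrypt: m = c^d mod n
   print(f"n={n}, e={e}, d={d}, ciphertext={c}")

The asymmetry is the point: anyone can encrypt with the public pair (n, e), but decrypting requires d, which is easy to derive only if you know the factors of n.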
Around the same time, George Davida at the University of Wisconsin applied for a patent on a stream cipher; the NSA slapped a secrecy order on the application. This barred him from even talking about his invention. The publicity was devastating; the NSA had to back down.
The Crypto Wars had thus begun: civilians were inventing strong encryption systems, and the NSA was tampering with them or trying to suppress them. Bobby Inman, the then-director of the NSA, tried creating a voluntary review process for academic papers, but very few researchers were interested in participating [Landau1988].
There were few major public battles during the 1980s because there were few new major use cases for civilian cryptography during that time. There was one notable incident, though: Shamir, Amos Fiat, and Uriel Feige invented zero-knowledge proofs and applied for a US patent. In response, the US Army slapped a secrecy order on the patent. After a great deal of public outrage and intervention by, of all organizations, the NSA, the order was lifted on very narrow grounds: the inventors were not American, and they had been discussing their work all over the world [Landau1988].
In the 1990s, though, everything changed.
There were three major developments in cryptography in the early 1990s. First, Phil Zimmermann released PGP (Pretty Good Privacy), a package to encrypt email messages. In 1993, AT&T planned to release the TSD-3600, an easy-to-use phone encryptor aimed at business travelers. Shortly after that, the Netscape Communications Corporation released SSL (Secure Sockets Layer) as a way to enable web-based commerce using their browser and web server. All of these were seen as threats by the NSA and the FBI.
PGP was, at least arguably, covered by what was known as ITAR, the International Traffic in Arms Regulations -- under American law, encryption software was regarded as a weapon, so exports required a license. It was also alleged to infringe the patents on the RSA algorithm. Needless to say, both issues were problematic for what was intended to be open source software. Eventually, the criminal investigation into Zimmermann's role in the spread of PGP overseas was dropped, but the threat of such investigations remained to deter others [Levy2001].
The TSD-3600 was another matter. AT&T was a major corporation that did not want to pick a fight with the US government, but international business travelers were seen as a major market for the device. At the government's "request", the DES chip was replaced with what was known as the Clipper chip. The Clipper chip used Skipjack, a cipher with 80-bit keys; it was thus much stronger against brute-force attacks than DES. However, it provided "key escrow". Without going into any details, the key escrow mechanism allowed US government eavesdroppers to consult a pair of (presumably secure) internal databases and decrypt all communications protected by the chip. The Clipper chip proved to be extremely unpopular with industry; that AT&T Bell Labs' Matt Blaze found a weakness in the design [Blaze1994], one that let you use Skipjack without the key escrow feature, didn't help its reputation.
The third major development, SSL, was even trickier. SSL was aimed at e-commerce, and of course Netscape wanted to be able to sell its products outside the US. That would require an export license, so they made a deal with the government: non-American users would receive a version that used 40-bit keys, a key length far shorter than what the NSA had agreed to 20 years earlier. (To get ahead of the story: there was a compromise mode of operation, wherein an export-grade browser could use strong encryption when talking to a financial institution. This hybrid mode led to cryptographic weaknesses discovered some 20 years later [Adrian2015].)
Technologists and American industry pushed back. The IETF adopted the Danvers Doctrine, described in [RFC3365]:¶
At the 32cd [sic] IETF held in Danvers, Massachusetts during April of 1995 the IESG asked the plenary for a consensus on the strength of security that should be provided by IETF standards. Although the immediate issue before the IETF was whether or not to support "export" grade security (which is to say weak security) in standards the question raised the generic issue of security in general.¶
The overwhelming consensus was that the IETF should standardize on the use of the best security available, regardless of national policies. This consensus is often referred to as the "Danvers Doctrine".¶
Then American companies started losing business to their overseas competitors, who did not have to comply with US export laws. All of this led to what seemed like a happy conclusion: the US government drastically loosened its export rules for cryptographic software. All was well -- or so it seemed...¶
Strong cryptography was here to stay, and it was no longer an American monopoly, if indeed it ever was. The Information Assurance Directorate of the NSA, the part of the agency that is supposed to protect US data, was pleased by the spread of strong cryptography. When the Advanced Encryption Standard (AES) competition was held, there were no allegations of malign NSA interference; in fact, the winning entry was devised by two Europeans, Joan Daemen and Vincent Rijmen. But the NSA and its SIGINT needs did not go away -- the agency merely adopted other techniques.¶
I have often noted that one doesn't go through strong security, one goes around it. When strong encryption became more common and much more necessary, the NSA started going around it, by targeting computers and the software that they run. And it seems clear that they believe that AES is quite strong; they've even endorsed its use for protecting TOP SECRET information. But there was an asterisk attached to that endorsement: AES is suitable if and only if properly used and implemented. Therein lies the rub.¶
The first apparent attempt to tamper with outside cryptographic mechanisms was discovered in 2007, when two Microsoft researchers, Dan Shumow and Niels Ferguson, noted an odd property of a NIST-standardized random number generator, DUAL_EC_DRBG. (The NBS had been renamed to NIST, the National Institute of Standards and Technology.) Random numbers are vital for cryptography, but Shumow and Ferguson showed that if certain constants in DUAL_EC_DRBG were chosen in a particular way, with a known-but-hidden other number, whoever knew that number could predict all future random numbers from a system, given a few sample bytes to start from [Kostyuk2022]. These sample bytes could come from known keys, nonces, or anything else. Where did the constants in DUAL_EC_DRBG come from, and how were they chosen or generated? No one who knows is talking. Cryptographers and security specialists were very suspicious -- Bruce Schneier wrote in 2007, before more facts came out, that "both NIST and the NSA have some explaining to do"; I assigned my students reading on the topic. Even so, the issue didn't really get any traction until six years later, when among the papers that Edward Snowden disclosed was the information that the NSA had indeed tampered with a major cryptographic standard, though published reports did not specifically name DUAL_EC_DRBG or explain what the purpose was.¶
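To make the shape of that trapdoor concrete, consider the following toy analogue in Python. It is a sketch only: it substitutes modular exponentiation for the standard's elliptic-curve arithmetic, omits the output truncation, and uses made-up constants; nothing about it is secure or faithful to the real parameters.¶

   # A toy analogue of the DUAL_EC_DRBG trapdoor (assumed, made-up
   # parameters; nothing here is secure or realistic in size).

   p = 2**32 - 5     # a small prime; real parameters are vastly larger
   Q = 5             # public "base point" analogue
   d = 1234567       # the hidden trapdoor number
   P = pow(Q, d, p)  # published constant; its provenance is the issue

   def step(state):
       """One round: emit an output, then advance the internal state."""
       output = pow(Q, state, p)
       next_state = pow(P, state, p)
       return output, next_state

   # The victim runs the generator normally.
   state = 987654321              # secret seed
   out1, state = step(state)
   out2, state = step(state)

   # Whoever knows d can recover the state from a single output,
   # because P**s == (Q**s)**d: the next state is just out1**d mod p.
   recovered = pow(out1, d, p)
   predicted, _ = step(recovered)
   assert predicted == out2       # every later output is now predictable

The only secret is d; everything the victim sees -- p, P, and Q -- looks like an ordinary public parameter set. That is why the unexplained provenance of the standardized constants mattered so much.¶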
The revelations didn't stop there. There have been allegations that the NSA paid some companies to use DUAL_EC_DRBG in their products. Some people have claimed that there were attempts to modify some IETF standards to make enough random bytes visible to aid in exploiting the random number generator. A major vendor of networking gear, Juniper, did use DUAL_EC_DRBG in some of its products, but with different constants [Checkoway2016]. Where did these come from? Were they from the NSA or some other government? Could their source tree have been hacked by an intelligence agency? There was a different hack of their code at around the same time [Moore2015]. No one is talking.¶
The Snowden revelations also included data suggesting that the NSA had a worldwide eavesdropping network and a group that carried out very specific, targeted hacks on particular targets' systems. In retrospect, neither is surprising: "spies gonna spy". The NSA's business is signals intelligence; of course they're going to try to intercept traffic. Indeed, the DUAL_EC_DRBG tampering is useless to anyone who has not collected messages to decrypt. And targeted hacks are a natural way around strong encryption: collect the data before it is encrypted or after it is decrypted, and don't worry about the strength of the algorithms.¶
The privacy community, worldwide, was appalled, though perhaps they shouldn't have been. It calls to mind the line that Claude Rains' character uttered in the movie Casablanca [Curtiz]: "I'm shocked, shocked to find that gambling is going on in here." The immediate and continuing reaction was to deploy more encryption. The standards have long existed; what was missing was adoption. One barrier was the difficulty and expense of getting certificates to use with TLS, the successor to SSL; that void was filled by Let's Encrypt [LE], which made free certificates easy to get online. Today, most web traffic is encrypted, so much so that Google's search engine down-ranks sites that do not use encryption. Major email providers uniformly use TLS to protect all traffic. Wi-Fi, though a local area issue, now uses much stronger encryption. (It's important to remember that security and insecurity have economic components. Security doesn't have to be perfect to be very useful, if it raises the attackers' costs enough.)¶
The news on the software side is less good. Not a day goes by when one does not read of organizations being hit by ransomware. It goes without saying that any threat actor capable of encrypting disks is also capable of stealing the information on them; indeed, data theft is a frequent accompanying activity, since the threat of disclosure is another incentive to pay, even for sites that do have good enough backups. Major vendors have put a lot of effort into securing their software, but bugs and operational errors by end-user sites persist.¶
Signals intelligence agencies, not just the NSA, but its peers around the globe -- most major countries have their own -- are not going to go away. The challenges that have beset the NSA are common to all such agencies, and their solutions are likely the same. The question is what should be done to protect individual privacy. A number of strong democracies, such as Australia and the United Kingdom, are, in a resumption of the Crypto Wars, moving to restrict encryption. Spurred on by complaints from the FBI and other law enforcement agencies, the US Congress frequently considers bills to do the same.¶
The IETF has long had a commitment to strong, ubiquitous encryption. This is a good thing. It needs to continue, with cryptography and other security features designed into protocols from the beginning. But there is also a need for maintenance. Parameters such as key lengths and modulus sizes age; a value that is acceptable today may not be 10 years hence. (We've already seen apparent problems from 1024-bit moduli specified in an RFC that was not updated when technology improved enough to make attacks on encryption based on those moduli feasible [Adrian2015].) The IETF can do nothing about the code that vendors ship or that sites use, but it can alert the world that it thinks things have changed.¶
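Operators, for their part, can mechanize that kind of alert. Here is a minimal sketch in Python, assuming the third-party "cryptography" package; the probed host and the 2048-bit floor are illustrative choices, and the same idea applies to Diffie-Hellman moduli.¶

   # A minimal sketch: flag TLS servers whose RSA certificate keys fall
   # below an assumed policy floor.
   import socket
   import ssl

   from cryptography import x509
   from cryptography.hazmat.primitives.asymmetric import rsa

   MIN_RSA_BITS = 2048  # yesterday's acceptable value may not be today's

   def rsa_key_bits(host, port=443):
       """Return the server certificate's RSA key size, or None."""
       ctx = ssl.create_default_context()
       with socket.create_connection((host, port)) as sock:
           with ctx.wrap_socket(sock, server_hostname=host) as tls:
               der = tls.getpeercert(binary_form=True)
       key = x509.load_der_x509_certificate(der).public_key()
       return key.key_size if isinstance(key, rsa.RSAPublicKey) else None

   bits = rsa_key_bits("www.example.com")  # hypothetical target
   if bits is not None and bits < MIN_RSA_BITS:
       print(f"weak RSA key: {bits} bits")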
Cryptoagility is of increasing importance. In the next very few years, we will have so-called post-quantum algorithms. Both protocols and key lengths will need to change, perhaps drastically. Is the IETF ready? What will happen to, say, DNSSEC if key lengths become drastically longer? Backwards compatibility will remain important, but that, of course, opens the door to other attacks. We've long thought about them; we need to be sure that our mechanisms work -- we've been surprised in the past [BellovinRescorla2006].¶
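One small engineering pattern that supports agility is to negotiate an algorithm identifier and dispatch through a registry, so that a new entry can be added and an aging one retired without reworking the protocol. A hypothetical sketch, using hash functions only as a stand-in for the pattern:¶

   # A sketch of one cryptoagility pattern: dispatch on a negotiated
   # algorithm identifier instead of hard-coding a single primitive.
   # The names and registry contents here are hypothetical.
   import hashlib

   DIGESTS = {
       "sha256": hashlib.sha256,
       "sha3-256": hashlib.sha3_256,
       # a post-quantum-era entry would be registered here when needed
   }

   def digest(alg: str, data: bytes) -> bytes:
       try:
           return DIGESTS[alg](data).digest()
       except KeyError:
           # Fail closed: an unknown or retired algorithm is an error,
           # not an invitation to fall back to something weaker.
           raise ValueError(f"unsupported algorithm: {alg}")

Failing closed matters precisely because backwards compatibility opens the door to downgrade attacks.¶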
We also need to worry more about metadata. General Michael Hayden, former director of both the NSA and the CIA, once remarked, "We kill people based on metadata" [Ferran2014]. But caution is necessary; attempts to hide metadata can have side effects. To give a trivial example, Tor is quite strong, but if your exit node is in a different country than you are in, web sites that use IP geolocation may present their content in a language foreign to you. Some sites even block connections from known Tor exit nodes. More generally, many attempts to hide metadata involve trusting a different party; that party may turn out to be untrustworthy, or it may itself become a target of attack. As another prominent IETFer has remarked, "Insecurity is like entropy; you can't destroy it, but you can move it around." The IETF has done a lot; it needs to do more. And remember that the risk here is not just governments acting directly; it's also private companies that collect the data and sell it to all comers.¶
Finally, the IETF must remember that its middle name is "Engineering". To me, one of the attributes of engineering is the art of picking the right solution in an over-constrained environment. Intelligence agencies won't go away, nor will national restrictions on cryptography. We have to pick the right path while staying true to our principles.¶
Any of the authors may have forgotten or omitted things, or gotten things wrong. We're sorry if that's the case, but such is the nature of a look-back like this. It is unlikely, though, that any such flaws could worsen anyone's security or privacy.¶
This document has no IANA actions.¶
Susan Landau added many valuable comments to Steve Bellovin's essay.¶
We thank Carsten Bormann, Brian Carpenter, Wendy Grossman, Kathleen Moriarty, Jan Schaumann, Seth David Schoen, and Paul Wouters for comments and review of this text, though that of course doesn't mean that they necessarily agree with it.¶
This document was created at the behest of Eliot Lear, who also herded the cats and did some editing.¶