RFC 9199    Considerations for Large Auth DNS Ops    March 2022
Moura, et al.    Informational
Stream:
Independent Submission
RFC:
9199
Category:
Informational
Published:
March 2022
ISSN:
2070-1721
Authors:
G. Moura
SIDN Labs/TU Delft
W. Hardaker
USC/Information Sciences Institute
J. Heidemann
USC/Information Sciences Institute
M. Davids
SIDN Labs

RFC 9199

Considerations for Large Authoritative DNS Server Operators

Abstract

Recent research work has explored the deployment characteristics and configuration of the Domain Name System (DNS). This document summarizes the conclusions from these research efforts and offers specific, tangible considerations or advice to authoritative DNS server operators. Authoritative server operators may wish to follow these considerations to improve their DNS services.

It is possible that the results presented in this document could be applicable in a wider context than just the DNS protocol, as some of the results may generically apply to any stateless/short-duration anycasted service.

This document is not an IETF consensus document: it is published for informational purposes.

Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes.

This is a contribution to the RFC Series, independently of any other RFC stream. The RFC Editor has chosen to publish this document at its discretion and makes no statement about its value for implementation or deployment. Documents approved for publication by the RFC Editor are not candidates for any level of Internet Standard; see Section 2 of RFC 7841.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9199.

Copyright Notice

Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.


1. Introduction

This document summarizes recent research that explored the deployed DNS configurations and offers derived, specific, tangible advice to DNS authoritative server operators (referred to as "DNS operators" hereafter). The considerations (C1-C6) presented in this document are backed by peer-reviewed research, which used wide-scale Internet measurements to draw their conclusions. This document summarizes the research results and describes the resulting key engineering options. In each section, readers are pointed to the pertinent publications where additional details are presented.

These considerations are designed for operators of "large" authoritative DNS servers, which, in this context, are servers with a significant global user population, like top-level domain (TLD) operators, run by either a single operator or multiple operators. Typically, these networks are deployed on wide anycast networks [RFC1546] [AnyBest]. These considerations may not be appropriate for smaller domains, such as those used by an organization with users in one unicast network or in a single city or region, where operational goals such as uniform, global low latency are less required.

It is possible that the results presented in this document could be applicable in a wider context than just the DNS protocol, as some of the results may generically apply to any stateless/short-duration anycasted service. Because the conclusions of the reviewed studies don't measure smaller networks, the wording in this document concentrates solely on discussing large-scale DNS authoritative services.

This document is not an IETF consensus document: it is published for informational purposes.

2. Background

The DNS has two main types of DNS servers: authoritative servers and recursive resolvers, shown by a representational deployment model in Figure 1. An authoritative server (shown as AT1-AT4 in Figure 1) knows the content of a DNS zone and is responsible for answering queries about that zone. It runs using local (possibly automatically updated) copies of the zone and does not need to query other servers [RFC2181] in order to answer requests. A recursive resolver (Re1-Re3) is a server that iteratively queries authoritative and other servers to answer queries received from client requests [RFC1034]. A client typically employs a software library called a "stub resolver" ("stub" in Figure 1) to issue its query to the upstream recursive resolvers [RFC1034].

        +-----+  +-----+  +-----+  +-----+
        | AT1 |  | AT2 |  | AT3 |  | AT4 |
        +-----+  +-----+  +-----+  +-----+
          ^         ^        ^        ^
          |         |        |        |
          |      +-----+     |        |
          +------| Re1 |----+|        |
          |      +-----+              |
          |         ^                 |
          |         |                 |
          |      +----+   +----+      |
          +------|Re2 |   |Re3 |------+
                 +----+   +----+
                   ^          ^
                   |          |
                   | +------+ |
                   +-| stub |-+
                     +------+

Figure 1: Relationship between Recursive Resolvers (Re) and Authoritative Name Servers (ATn)

DNS queries issued by a client contribute to a user's perceived latency and affect the user experience [Singla2014] depending on how long it takes for responses to be returned. The DNS system has been subject to repeated Denial-of-Service (DoS) attacks (for example, in November 2015 [Moura16b]) in order to specifically degrade the user experience.

To reduce latency and improve resiliency against DoS attacks, the DNS uses several types of service replication. Replication at the authoritative server level can be achieved with the following:

  1. the deployment of multiple servers for the same zone [RFC1035] (AT1-AT4 in Figure 1);
  2. the use of IP anycast [RFC1546] [RFC4786] [RFC7094], which allows the same IP address to be announced from multiple locations (each referred to as an "anycast instance" [RFC8499]); and
  3. the use of load balancers to support multiple servers inside a single (potentially anycasted) instance. As a consequence, there are many possible ways an authoritative DNS provider can engineer its production authoritative server network with multiple viable choices, and there is not necessarily a single optimal design.

3. Considerations

In the next sections, we cover the specific considerations (C1-C6) for conclusions drawn within academic papers about large authoritative DNS server operators. These considerations are conclusions reached from academic work that authoritative server operators may wish to consider in order to improve their DNS service. Each consideration offers different improvements that may impact service latency, routing, anycast deployment, and defensive strategies, for example.

3.1. C1: Deploy Anycast in Every Authoritative Server to Enhance Distribution and Latency

3.1.1. Research Background

Authoritative DNS server operators announce their service using NS records [RFC1034]. Different authoritative servers for a given zone should return the same content; typically, they stay synchronized using DNS zone transfers (authoritative transfer (AXFR) [RFC5936] and incremental zone transfer (IXFR) [RFC1995]), coordinating the zone data they all return to their clients.

As discussed above, the DNS heavily relies upon replication to support high reliability, ensure capacity, and reduce latency [Moura16b]. The DNS has two complementary mechanisms for service replication: name server replication (multiple NS records) and anycast (multiple physical locations). Name server replication is strongly recommended for all zones, and IP anycast is used by many larger zones such as the DNS root [AnyFRoot], most top-level domains [Moura16b], and many large commercial enterprises, governments, and other organizations.

Most DNS operators strive to reduce service latency for users, which is greatly affected by both of these replication techniques. However, because operators only have control over their authoritative servers and not over the client's recursive resolvers, it is difficult to ensure that recursives will be served by the closest authoritative server. Server selection is ultimately up to the recursive resolver's software implementation, and different vendors and even different releases employ different criteria to choose the authoritative servers with which to communicate.

Understanding how recursive resolvers choose authoritative servers is a key step in improving the effectiveness of authoritative server deployments. To measure and evaluate server deployments, [Mueller17b] describes the deployment of seven unicast authoritative name servers in different global locations, which were then queried from more than 9,000 RIPE (Réseaux IP Européens) Atlas vantage points and their respective recursive resolvers.

It was found in [Mueller17b] that recursive resolvers in the wild query all available authoritative servers, regardless of the observed latency. But the distribution of queries tends to be skewed towards authoritatives with lower latency: the lower the latency between a recursive resolver and an authoritative server, the more often the recursive will send queries to that server. These results were obtained by aggregating results from all of the vantage points, and they were not specific to any vendor or version.

The authors believe this behavior is a consequence of combining the two main criteria employed by resolvers when selecting authoritative servers: resolvers regularly check all listed authoritative servers in an NS set to determine which is closer (the least latent), and when one isn't available, it selects one of the alternatives.
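This skewed-but-complete selection behavior can be illustrated with a small toy model (a sketch only; real resolver implementations differ by vendor and version, and the weighting function below is an assumption, not measured behavior):

```python
import random

def pick_authoritative(rtts, rng=random):
    """Pick one server from an NS set, weighting inversely by observed RTT.

    `rtts` maps server name -> smoothed RTT in ms. Lower-latency servers
    are chosen more often, but every server keeps a nonzero probability,
    mirroring the skewed-but-complete distribution reported above.
    """
    weights = {name: 1.0 / rtt for name, rtt in rtts.items()}
    total = sum(weights.values())
    r = rng.uniform(0, total)
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name  # fall through for floating-point edge cases

# Over many draws, the 10 ms server receives roughly 4x the queries
# of the 40 ms server, yet the slower server is still queried.
counts = {"ns1": 0, "ns2": 0}
rng = random.Random(42)
for _ in range(10_000):
    counts[pick_authoritative({"ns1": 10.0, "ns2": 40.0}, rng)] += 1
```

The key property, matching the measurements in [Mueller17b], is that no server's share ever drops to zero: even high-latency authoritatives keep receiving a trickle of queries.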

3.1.2. Resulting Considerations

For an authoritative DNS operator, this result means that the latency of all authoritative servers (NS records) matters, so they all must be similarly capable -- all available authoritatives will be queried by most recursive resolvers. Unicasted services, unfortunately, cannot deliver good latency worldwide (a unicast authoritative server in Europe will always have high latency to resolvers in California and Australia, for example, given its geographical distance).

[Mueller17b] recommends that DNS operators deploy equally strong IP anycast instances for every authoritative server (i.e., for each NS record). Each large authoritative DNS server provider should phase out its usage of unicast and deploy a number of well-engineered anycast instances with good peering strategies so they can provide good latency to their global clients.

As a case study, the ".nl" TLD zone was originally served on seven authoritative servers with a mixed unicast/anycast setup. In early 2018, .nl moved to a setup with four anycast authoritative servers.

The contribution of [Mueller17b] to DNS service engineering shows that because unicast cannot deliver good latency worldwide, anycast needs to be used to provide a low-latency service worldwide.

3.2. C2: Optimizing Routing is More Important than Location Count and Diversity

3.2.1. Research Background

When selecting an anycast DNS provider or setting up an anycast service, choosing the best number of anycast instances [RFC4786] [RFC7094] to deploy is a challenging problem. Selecting the right quantity and set of global locations that should send BGP announcements is tricky. Intuitively, one could naively think that more instances are better and that simply "more" will always lead to shorter response times.

This is not necessarily true, however. In fact, proper route engineering can matter more than the total number of locations, as found in [Schmidt17a]. To study the relationship between the number of anycast instances and the associated service performance, the authors measured the round-trip time (RTT) latency of four DNS root servers. The root DNS servers are implemented by 12 separate organizations serving the DNS root zone at 13 different IPv4/IPv6 address pairs.

The results documented in [Schmidt17a] measured the performance of the {c,f,k,l}.root-servers.net (referred to as "C", "F", "K", and "L" hereafter) servers from more than 7,900 RIPE Atlas probes. RIPE Atlas is an Internet measurement platform with more than 12,000 global vantage points called "Atlas probes", and it is used regularly by both researchers and operators [RipeAtlas15a] [RipeAtlas19a].

In [Schmidt17a], the authors found that the C server, a smaller anycast deployment consisting of only 8 instances, provided very similar overall performance in comparison to the much larger deployments of K and L, with 33 and 144 instances, respectively. The median RTTs for the C, K, and L root servers were all between 30 and 32 ms.

Because RIPE Atlas is known to have better coverage in Europe than other regions, the authors specifically analyzed the results per region and per country (Figure 5 in [Schmidt17a]) and show that the known Atlas bias toward Europe does not change the conclusion that properly selected anycast locations are more important to latency than the number of sites.

3.2.2. Resulting Considerations

The important conclusion from [Schmidt17a] is that when engineering anycast services for performance, factors other than just the number of instances (such as local routing connectivity) must be considered. Specifically, optimizing routing policies is more important than simply adding new instances. The authors showed that 12 instances can provide reasonable latency, assuming they are globally distributed and have good local interconnectivity. However, additional instances can still be useful for other reasons, such as when handling DoS attacks [Moura16b].

3.3. C3: Collect Anycast Catchment Maps to Improve Design

3.3.1. Research Background

An anycast DNS service may be deployed from anywhere from several locations to hundreds of locations (for example, l.root-servers.net has over 150 anycast instances at the time this was written). Anycast leverages Internet routing to distribute incoming queries to a service's nearest distributed anycast locations, measured by the number of routing hops. However, queries are usually not evenly distributed across all anycast locations, as found in the case of L-Root when analyzed using Hedgehog [IcannHedgehog].

Adding locations to or removing locations from a deployed anycast network changes the load distribution across all of its locations. When a new location is announced by BGP, locations may receive more or less traffic than they were engineered for, leading to suboptimal service performance or even stressing some locations while leaving others underutilized. Operators constantly face this scenario when expanding an anycast service. Operators cannot easily directly estimate future query distributions based on proposed anycast network engineering decisions.

To address this need and estimate the query loads of an anycast service undergoing changes (in particular, expansion), [Vries17b] describes the development of a new technique enabling operators to carry out active measurements using an open-source tool called Verfploeter (available at [VerfSrc]). The results allow the creation of detailed anycast maps and catchment estimates. By running Verfploeter combined with a published IPv4 "hit list", operators can precisely calculate which remote prefixes will be matched to each anycast instance in a network. At the time of this writing, Verfploeter still does not support IPv6, as the IPv4 hit lists used are generated via frequent large-scale ICMP echo scans, which is not possible using IPv6.

As proof of concept, [Vries17b] documents how Verfploeter was used to predict both the catchment and query load distribution for a new anycast instance deployed for b.root-servers.net. Using two anycast test instances in Miami (MIA) and Los Angeles (LAX), an ICMP echo query was sent from an IP anycast address to each IPv4 /24 network routing block on the Internet.

The ICMP echo responses were recorded at both sites, analyzed, and overlaid onto a graphical world map, resulting in an Internet-scale catchment map. To calculate expected load once the production network was enabled, the quantity of traffic received by b.root-servers.net's single site at LAX was recorded based on a single day's traffic (2017-04-12, "day in the life" (DITL) datasets [Ditl17]). In [Vries17b], it was predicted that 81.6% of the traffic load would remain at the LAX site. This Verfploeter estimate turned out to be very accurate; the actual measured traffic volume when production service at MIA was enabled was 81.4%.
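The load-prediction step described above, mapping each probed prefix to the site that captured its reply and weighting by observed query volume, can be sketched as follows (this is not Verfploeter itself; the prefixes, site names, and volumes below are made-up illustrations):

```python
def catchment_load(responses, query_volume):
    """Estimate per-site load fractions from a Verfploeter-style probe run.

    `responses` maps a /24 prefix to the anycast site whose instance
    captured that prefix's ICMP echo reply; `query_volume` maps the same
    prefixes to observed DNS query counts (e.g., from a DITL-style trace).
    Returns each site's expected share of total query load.
    """
    totals = {}
    for prefix, site in responses.items():
        totals[site] = totals.get(site, 0) + query_volume.get(prefix, 0)
    grand = sum(totals.values()) or 1  # avoid division by zero
    return {site: count / grand for site, count in totals.items()}

# Illustrative two-site example (documentation prefixes, invented volumes):
shares = catchment_load(
    {"192.0.2.0/24": "LAX", "198.51.100.0/24": "LAX", "203.0.113.0/24": "MIA"},
    {"192.0.2.0/24": 500, "198.51.100.0/24": 300, "203.0.113.0/24": 200},
)
# shares -> {"LAX": 0.8, "MIA": 0.2}
```

Combining a full /24 catchment map with a day of production query counts is what allowed [Vries17b] to predict the 81.6% LAX share before MIA went live.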

Verfploeter can also be used to estimate traffic shifts based on other BGP route engineering techniques (for example, Autonomous System (AS) path prepending or BGP community use) in advance of operational deployment. This was studied in [Vries17b] using prepending with 1-3 hops at each instance, and the results were compared against real operational changes to validate the accuracy of the techniques.

3.3.2. Resulting Considerations

An important operational takeaway [Vries17b] provides is how DNS operators can make informed engineering choices when changing DNS anycast network deployments by using Verfploeter in advance. Operators can identify suboptimal routing situations in advance, with significantly better coverage than when using other active measurement platforms such as RIPE Atlas. To date, Verfploeter has been deployed on an operational anycast testbed [AnyTest] at a large unnamed operator and is run daily at b.root-servers.net [Vries17b].

Operators should use active measurement techniques like Verfploeter in advance of potential anycast network changes to accurately measure the benefits and potential issues ahead of time.

3.4. C4: Employ Two Strategies When under Stress

3.4.1. Research Background

DDoS attacks are becoming bigger, cheaper, and more frequent [Moura16b]. The most powerful recorded DDoS attack against DNS servers to date reached 1.2 Tbps by using Internet of Things (IoT) devices [Perlroth16]. How should a DNS operator engineer its anycast authoritative DNS server to react to such a DDoS attack? [Moura16b] investigates this question using empirical observations grounded with theoretical option evaluations.

An authoritative DNS server deployed using anycast will have many server instances distributed over many networks. Ultimately, the relationship between the DNS provider's network and a client's ISP will determine which anycast instance will answer queries for a given client, given that the BGP protocol maps clients to specific anycast instances using routing information. As a consequence, when an anycast authoritative server is under attack, the load that each anycast instance receives is likely to be unevenly distributed (a function of the source of the attacks); thus, some instances may be more overloaded than others, which is what was observed when analyzing the root DNS events of November 2015 [Moura16b]. Given the fact that different instances may have different capacities (bandwidth, CPU, etc.), making a decision about how to react to stress becomes even more difficult.

In practice, when an anycast instance is overloaded with incoming traffic, operators have two options:

  • They can withdraw its routes, pre-prepend its AS route to some or all of its neighbors, perform other traffic-shifting tricks (such as reducing route announcement propagation using BGP communities [RFC1997]), or communicate with its upstream network providers to apply filtering (potentially using FlowSpec [RFC8955] or the DDoS Open Threat Signaling (DOTS) protocol [RFC8811] [RFC9132] [RFC8783]). These techniques shift both legitimate and attack traffic to other anycast instances (with hopefully greater capacity) or block traffic entirely.
  • Alternatively, operators can become degraded absorbers by continuing to operate, knowing that they are dropping incoming legitimate requests due to queue overflow. However, this approach will also absorb attack traffic directed toward the instance's catchment, hopefully protecting the other anycast instances.
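The choice between these two options hinges on whether the rest of the anycast network has spare capacity to take the displaced traffic. A minimal playbook helper might look like the following sketch (the decision rule and thresholds are assumptions for illustration, not guidance from [Moura16b]):

```python
def choose_strategy(site_load_qps, site_capacity_qps, spare_capacity_qps):
    """Pick a reaction for one overloaded anycast site (illustrative only).

    If sibling sites together have enough spare capacity to absorb this
    site's entire catchment, withdrawing its routes shifts the load away;
    otherwise the site should keep operating as a degraded absorber so
    the attack traffic stays contained within its own catchment.
    """
    excess = site_load_qps - site_capacity_qps
    if excess <= 0:
        return "no-action"  # site is within capacity
    if spare_capacity_qps >= site_load_qps:
        return "withdraw"   # siblings can absorb the whole catchment
    return "absorb"         # stay up and drop excess locally

# A 500 kqps flood against a 200 kqps site, with only 100 kqps spare
# elsewhere, argues for absorbing rather than withdrawing.
decision = choose_strategy(500, 200, 100)
```

Encoding such rules in an operational playbook, fed by real-time per-site load measurements, is in the spirit of the automated management policies that [Moura16b] speculates about below.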

[Moura16b] describes seeing both of these behaviors deployed in practice when studying instance reachability and RTTs in the root DNS events. When withdrawal strategies were deployed, the stress of increased query loads was displaced from one instance to multiple other sites. In other observed events, one site was left to absorb the brunt of an attack, leaving the other sites relatively less affected.

3.4.2. Resulting Considerations

Operators should consider having both an anycast site withdrawal strategy and an absorption strategy ready to be used before a network overload occurs. Operators should be able to deploy one or both of these strategies rapidly. Ideally, these should be encoded into operating playbooks with defined site measurement guidelines for which strategy to employ based on measured data from past events.

[Moura16b] speculates that careful, explicit, and automated management policies may provide stronger defenses to overload events. DNS operators should be ready to employ both common filtering approaches and other routing load-balancing techniques (such as withdrawing routes, prepending Autonomous Systems (ASes), adding communities, or isolating instances), where the best choice depends on the specifics of the attack.

Note that this consideration refers to the operation of just one anycast service point, i.e., just one anycasted IP address block covering one NS record. However, DNS zones with multiple authoritative anycast servers may also expect loads to shift from one anycasted server to another, as resolvers switch from one authoritative service point to another when attempting to resolve a name [Mueller17b].

3.5. C5: Consider Longer Time-to-Live Values Whenever Possible

3.5.1. Research Background

Caching is the cornerstone of good DNS performance and reliability. A 50 ms response to a new DNS query may be considered fast, but a response of less than 1 ms to a cached entry is far faster. In [Moura18b], it was shown that caching also protects users from short outages and even significant DDoS attacks.

Time-to-live (TTL) values [RFC1034] [RFC1035] for DNS records directly control cache durations and affect latency, resilience, and the role of DNS in Content Delivery Network (CDN) server selection. Some early work modeled caches as a function of their TTLs [Jung03a], and recent work has examined cache interactions with DNS [Moura18b], but until [Moura19b], no research had provided considerations about the benefits of various TTL value choices. To study this, Moura et al. [Moura19b] carried out a measurement study investigating TTL choices and their impact on user experiences in the wild. They performed this study independent of specific resolvers (and their caching architectures), vendors, or setups.

First, they identified several reasons why operators and zone owners may want to choose longer or shorter TTLs:

  • Longer TTLs, as discussed, lead to a longer cache life, resulting in faster responses. In [Moura19b], this was measured in the wild, and it was shown that by increasing the TTL for the .uy TLD from 5 minutes (300 s) to 1 day (86,400 s), the latency measured from 15,000 Atlas vantage points changed significantly: the median RTT decreased from 28.7 ms to 8 ms, and the 75th percentile decreased from 183 ms to 21 ms.
  • Longer caching times also result in lower DNS traffic: authoritative servers will experience less traffic with extended TTLs, as repeated queries are answered by resolver caches.
  • Longer caching consequently results in a lower overall cost if the DNS is metered: some providers that offer DNS as a Service charge a per-query (metered) cost (often in addition to a fixed monthly cost).
  • Longer caching is more robust to DDoS attacks on DNS infrastructure. DNS caching was also measured in [Moura18b], which showed that the effects of a DDoS on DNS can be greatly reduced, provided that the caches last longer than the attack.
  • Shorter caching, however, supports deployments that may require rapid operational changes: an easy way to transition from an old server to a new one is to simply change the DNS records. Since there is no method to remotely remove cached DNS records, the TTL duration represents a necessary transition delay to fully shift from one server to another. Thus, low TTLs allow for more rapid transitions. However, when deployments are planned in advance (that is, longer than the TTL), it is possible to lower the TTLs just before a major operational change and raise them again afterward.
  • Shorter caching can also help with a DNS-based response to DDoS attacks. Specifically, some DDoS-scrubbing services use the DNS to redirect traffic during an attack. Since DDoS attacks arrive unannounced, DNS-based traffic redirection requires that the TTL be kept quite low at all times to allow operators to suddenly have their zone served by a DDoS-scrubbing service.
  • Shorter caching helps DNS-based load balancing. Many large services are known to rotate traffic among their servers using DNS-based load balancing. Each arriving DNS request provides an opportunity to adjust the service load by rotating IP address records (A and AAAA) to the lowest unused server. Shorter TTLs may be desired in these architectures to react more quickly to traffic dynamics. Many recursive resolvers, however, have minimum caching times of tens of seconds, placing a limit on this form of agility.
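The traffic-reduction effect of longer TTLs can be illustrated with the classic single-cache model in the spirit of [Jung03a]: with Poisson query arrivals at rate lambda, each cache miss starts a TTL interval during which all further queries are hits, so a fraction 1 / (1 + lambda * TTL) of queries miss and reach the authoritative server. A small sketch (the query rate is an illustrative assumption, not a measurement from [Moura19b]):

```python
def miss_fraction(query_rate_hz, ttl_s):
    """Expected cache-miss fraction for one resolver cache.

    Under a simple Poisson-arrival model, each miss begins a TTL window
    holding on average query_rate_hz * ttl_s hits, so misses make up
    1 / (1 + query_rate_hz * ttl_s) of all queries.
    """
    return 1.0 / (1.0 + query_rate_hz * ttl_s)

# One query per second on average: raising the TTL from 300 s (5 min)
# to 86,400 s (1 day) sharply cuts the authoritative-facing miss rate.
short = miss_fraction(1.0, 300)    # ~0.33% of queries reach the server
long = miss_fraction(1.0, 86_400)  # ~0.001% of queries reach the server
```

This is only a first-order model (real resolver caches impose TTL caps and floors, and arrivals are bursty), but it captures why extended TTLs translate into both lower authoritative traffic and lower metered cost.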

3.5.2. Resulting Considerations

Given these considerations, the proper choice for a TTL depends in part on multiple external factors -- no single recommendation is appropriate for all scenarios. Organizations must weigh these trade-offs and find a good balance for their situation. Still, some guidelines can be reached when choosing TTLs:

  • For general DNS zone owners, [Moura19b] recommends a longer TTL of at least one hour and ideally 4, 8, or 24 hours. Assuming planned maintenance can be scheduled at least a day in advance, long TTLs have little cost and may even provide cost savings.
  • For TLD and other public registration operators (for example, most ccTLDs and .com, .net, and .org) that host many delegations (NS records, DS records, and "glue" records), [Moura19b] demonstrates that most resolvers will use the TTL values provided by the child delegations, while some others will choose the TTL provided by the parent's copy of the record. As such, [Moura19b] recommends longer TTLs (at least an hour or more) for registry operators as well, for child NS and other records.
  • Users of DNS-based load balancing or DDoS-prevention services may require shorter TTLs: TTLs may even need to be as short as 5 minutes, although 15 minutes may provide sufficient agility for many operators. There is always a tussle between using shorter TTLs that provide more agility and using longer TTLs that include all the benefits listed above.
  • Regarding the use of A/AAAA and NS records, the TTLs for A/AAAA records should be shorter than or equal to the TTL for the corresponding NS records for in-bailiwick authoritative DNS servers, since [Moura19b] finds that once an NS record expires, its associated A/AAAA records will also be requeried when glue is required to be sent by the parents. For out-of-bailiwick servers, A, AAAA, and NS records are usually all cached independently, so different TTLs can be used effectively if desired. In either case, short A and AAAA TTLs may still be desired if DDoS mitigation services are required.
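The lower-then-restore maneuver mentioned above for planned changes can be written down as a simple schedule: since caches may hold the old record for a full TTL, the TTL must be lowered at least one old-TTL before the change, and can be restored once the new records have propagated for one low-TTL. A sketch (times and values are illustrative; they are not recommendations from [Moura19b]):

```python
def ttl_change_plan(change_time_s, normal_ttl_s, low_ttl_s):
    """Plan TTL changes around a scheduled record migration (illustrative).

    Lower the TTL at least `normal_ttl_s` before the change so every
    cached copy carries the low TTL by change time; restore it once the
    new records have been live for `low_ttl_s`.
    """
    return {
        "lower_ttl_at": change_time_s - normal_ttl_s,
        "change_records_at": change_time_s,
        "restore_ttl_at": change_time_s + low_ttl_s,
    }

# 24-hour normal TTL, lowered to 5 minutes for a migration at t = 100,000 s:
plan = ttl_change_plan(100_000, 86_400, 300)
```

Note that resolver-imposed TTL caps and floors mean the old record may linger slightly longer in practice; the schedule above is the lower bound implied by the published TTLs alone.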

3.6. C6: Consider the Difference in Parent and Children's TTL Values

3.6.1. Research Background

Multiple record types exist or are related between the parent of a zone and the child. At a minimum, NS records are supposed to be identical in the parent (but often are not), as are corresponding IP addresses in "glue" A/AAAA records that must exist for in-bailiwick authoritative servers. Additionally, if DNSSEC [RFC4033] [RFC4034] [RFC4035] [RFC4509] is deployed for a zone, the parent's DS record must cryptographically refer to a child's DNSKEY record.

Because some information exists in both the parent and a child, it is possible for the TTL values to differ between the parent's copy and the child's. [Moura19b] examines resolver behaviors when these values differed in the wild, as they frequently do -- often, parent zones have de facto TTL values that a child has no control over. For example, NS records for TLDs in the root zone are all set to 2 days (48 hours), but some TLDs have lower values within their published records (the TTL for .cl's NS records from their authoritative servers is 1 hour). [Moura19b] also examines the differences in the TTLs between the NS records and the corresponding A/AAAA records for the addresses of a name server. RIPE Atlas nodes are used to determine what resolvers in the wild do with different information and whether the parent's TTL is used for cache lifetimes ("parent-centric") or the child's ("child-centric").

[Moura19b] found that roughly 90% of resolvers follow the child's view of the TTL, while 10% appear parent-centric. Additionally, it found that resolvers behave differently for cache lifetimes for in-bailiwick vs. out-of-bailiwick NS/A/AAAA TTL combinations. Specifically, when NS TTLs are shorter than the corresponding address records, most resolvers will requery for A/AAAA records for the in-bailiwick resolvers and switch to new address records even if the cache indicates the original A/AAAA records could be kept longer. On the other hand, the inverse is true for out-of-bailiwick resolvers: if the NS record expires first, resolvers will honor the original cache time of the name server's address.

3.6.2. Resulting Considerations

The important conclusion from this study is that operators cannot depend on their published TTL values alone -- the parent's values are also used for timing cache entries in the wild. Operators that are planning on infrastructure changes should assume that an older infrastructure must be left on and operational for at least the maximum of both the parent and child's TTLs.
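The resulting decommissioning rule is a simple maximum over the two published values; a one-line helper makes the point concrete (the .cl-style numbers below are illustrative):

```python
def decommission_wait(parent_ttl_s, child_ttl_s):
    """Minimum time old infrastructure must stay operational after a change.

    Since ~90% of resolvers honor the child's TTL but ~10% are
    parent-centric, the old servers must remain reachable for the longer
    of the two published TTLs before they can be safely retired.
    """
    return max(parent_ttl_s, child_ttl_s)

# Root-zone NS TTL of 2 days (172,800 s) vs. a child NS TTL of 1 hour
# (3,600 s): the parent-centric minority dictates a 2-day wait.
wait = decommission_wait(172_800, 3_600)
```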

4. Security Considerations

This document discusses applying measured research results to operational deployments. Most of the considerations affect operational practice, though a few do have security-related impacts.

Specifically, C4 discusses a couple of strategies to employ when a service is under stress from DDoS attacks and offers operators additional guidance when handling excess traffic.

Similarly, C5 identifies the trade-offs with respect to the operational and security benefits of using longer TTL values.

5. Privacy Considerations

This document does not add any new, practical privacy issues, aside from possible benefits in deploying longer TTLs as suggested in C5. Longer TTLs may help preserve a user's privacy by reducing the number of requests that get transmitted in both client-to-resolver and resolver-to-authoritative cases.

6. IANA Considerations

This document has no IANA actions.

7. References

7.1. Normative References

[RFC1034]
Mockapetris, P., "Domain names - concepts and facilities", STD 13, RFC 1034, DOI 10.17487/RFC1034, <https://www.rfc-editor.org/info/rfc1034>.
[RFC1035]
Mockapetris, P., "Domain names - implementation and specification", STD 13, RFC 1035, DOI 10.17487/RFC1035, <https://www.rfc-editor.org/info/rfc1035>.
[RFC1546]
Partridge, C., Mendez, T., and W. Milliken, "Host Anycasting Service", RFC 1546, DOI 10.17487/RFC1546, <https://www.rfc-editor.org/info/rfc1546>.
[RFC1995]
Ohta, M., "Incremental Zone Transfer in DNS", RFC 1995, DOI 10.17487/RFC1995, <https://www.rfc-editor.org/info/rfc1995>.
[RFC1997]
Chandra, R., Traina, P., and T. Li, "BGP Communities Attribute", RFC 1997, DOI 10.17487/RFC1997, <https://www.rfc-editor.org/info/rfc1997>.
[RFC2181]
Elz, R. and R. Bush, "Clarifications to the DNS Specification", RFC 2181, DOI 10.17487/RFC2181, <https://www.rfc-editor.org/info/rfc2181>.
[RFC4786]
Abley, J. and K. Lindqvist, "Operation of Anycast Services", BCP 126, RFC 4786, DOI 10.17487/RFC4786, <https://www.rfc-editor.org/info/rfc4786>.
[RFC5936]
Lewis, E. and A. Hoenes, Ed., "DNS Zone Transfer Protocol (AXFR)", RFC 5936, DOI 10.17487/RFC5936, <https://www.rfc-editor.org/info/rfc5936>.
[RFC7094]
McPherson, D., Oran, D., Thaler, D., and E. Osterweil, "Architectural Considerations of IP Anycast", RFC 7094, DOI 10.17487/RFC7094, <https://www.rfc-editor.org/info/rfc7094>.
[RFC8499]
Hoffman, P., Sullivan, A., and K. Fujiwara, "DNS Terminology", BCP 219, RFC 8499, DOI 10.17487/RFC8499, <https://www.rfc-editor.org/info/rfc8499>.
[RFC8783]
Boucadair, M., Ed. and T. Reddy.K, Ed., "Distributed Denial-of-Service Open Threat Signaling (DOTS) Data Channel Specification", RFC 8783, DOI 10.17487/RFC8783, <https://www.rfc-editor.org/info/rfc8783>.
[RFC8955]
Loibl, C., Hares, S., Raszuk, R., McPherson, D., and M. Bacher, "Dissemination of Flow Specification Rules", RFC 8955, DOI 10.17487/RFC8955, <https://www.rfc-editor.org/info/rfc8955>.
[RFC9132]
Boucadair, M., Ed., Shallow, J., and T. Reddy.K, "Distributed Denial-of-Service Open Threat Signaling (DOTS) Signal Channel Specification", RFC 9132, DOI 10.17487/RFC9132, <https://www.rfc-editor.org/info/rfc9132>.

7.2. Informative References

[AnyBest]
Woodcock, B., "Best Practices in DNS Service-Provision Architecture", Version 1.2, <https://meetings.icann.org/en/marrakech55/schedule/mon-tech/presentation-dns-service-provision-07mar16-en.pdf>.
[AnyFRoot]
Woolf, S., "Anycasting f.root-servers.net", <https://archive.nanog.org/meetings/nanog27/presentations/suzanne.pdf>.
[AnyTest]
Tangled, "Tangled Anycast Testbed", <http://www.anycast-testbed.com/>.
[Ditl17]
DNS-OARC, "2017 DITL Data", <https://www.dns-oarc.net/oarc/data/ditl/2017>.
[IcannHedgehog]
"hedgehog", commit b136eb0, <https://github.com/dns-stats/hedgehog>.
[Jung03a]
Jung, J., Berger, A., and H. Balakrishnan, "Modeling TTL-based Internet Caches", ACM 2003 IEEE INFOCOM, DOI 10.1109/INFCOM.2003.1208693, <http://www.ieee-infocom.org/2003/papers/11_01.PDF>.
[Moura16b]
Moura, G.C.M., Schmidt, R. de O., Heidemann, J., de Vries, W., Müller, M., Wei, L., and C. Hesselman, "Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event", ACM 2016 Internet Measurement Conference, DOI 10.1145/2987443.2987446, <https://www.isi.edu/~johnh/PAPERS/Moura16b.pdf>.
[Moura18b]
Moura, G.C.M., Heidemann, J., Müller, M., Schmidt, R. de O., and M. Davids, "When the Dike Breaks: Dissecting DNS Defenses During DDoS", ACM 2018 Internet Measurement Conference, DOI 10.1145/3278532.3278534, <https://www.isi.edu/~johnh/PAPERS/Moura18b.pdf>.
[Moura19b]
Moura, G.C.M., Hardaker, W., Heidemann, J., and R. de O. Schmidt, "Cache Me If You Can: Effects of DNS Time-to-Live", ACM 2019 Internet Measurement Conference, DOI 10.1145/3355369.3355568, <https://www.isi.edu/~hardaker/papers/2019-10-cache-me-ttls.pdf>.
[Mueller17b]
Müller, M., Moura, G.C.M., Schmidt, R. de O., and J. Heidemann, "Recursives in the Wild: Engineering Authoritative DNS Servers", ACM 2017 Internet Measurement Conference, DOI 10.1145/3131365.3131366, <https://www.isi.edu/%7ejohnh/PAPERS/Mueller17b.pdf>.
[Perlroth16]
Perlroth, N., "Hackers Used New Weapons to Disrupt Major Websites Across U.S.", <https://www.nytimes.com/2016/10/22/business/internet-problems-attack.html>.
[RFC4033]
Arends, R., Austein, R., Larson, M., Massey, D., and S. Rose, "DNS Security Introduction and Requirements", RFC 4033, DOI 10.17487/RFC4033, <https://www.rfc-editor.org/info/rfc4033>.
[RFC4034]
Arends, R., Austein, R., Larson, M., Massey, D., and S. Rose, "Resource Records for the DNS Security Extensions", RFC 4034, DOI 10.17487/RFC4034, <https://www.rfc-editor.org/info/rfc4034>.
[RFC4035]
Arends, R., Austein, R., Larson, M., Massey, D., and S. Rose, "Protocol Modifications for the DNS Security Extensions", RFC 4035, DOI 10.17487/RFC4035, <https://www.rfc-editor.org/info/rfc4035>.
[RFC4509]
Hardaker, W., "Use of SHA-256 in DNSSEC Delegation Signer (DS) Resource Records (RRs)", RFC 4509, DOI 10.17487/RFC4509, <https://www.rfc-editor.org/info/rfc4509>.
[RFC8811]
Mortensen, A., Ed., Reddy.K, T., Ed., Andreasen, F., Teague, N., and R. Compton, "DDoS Open Threat Signaling (DOTS) Architecture", RFC 8811, DOI 10.17487/RFC8811, <https://www.rfc-editor.org/info/rfc8811>.
[RipeAtlas15a]
RIPE Network Coordination Centre (RIPE NCC), "RIPE Atlas: A Global Internet Measurement Network", <http://ipj.dreamhosters.com/wp-content/uploads/issues/2015/ipj18-3.pdf>.
[RipeAtlas19a]
RIPE Network Coordination Centre (RIPE NCC), "RIPE Atlas", <https://atlas.ripe.net>.
[Schmidt17a]
Schmidt, R. de O., Heidemann, J., and J. Kuipers, "Anycast Latency: How Many Sites Are Enough?", PAM 2017 Passive and Active Measurement Conference, DOI 10.1007/978-3-319-54328-4_14, <https://www.isi.edu/%7ejohnh/PAPERS/Schmidt17a.pdf>.
[Singla2014]
Singla, A., Chandrasekaran, B., Godfrey, P., and B. Maggs, "The Internet at the Speed of Light", 13th ACM Workshop on Hot Topics in Networks, DOI 10.1145/2670518.2673876, <http://speedierweb.web.engr.illinois.edu/cspeed/papers/hotnets14.pdf>.
[VerfSrc]
"Verfploeter Source Code", commit f4792dc, <https://github.com/Woutifier/verfploeter>.
[Vries17b]
de Vries, W., Schmidt, R. de O., Hardaker, W., Heidemann, J., de Boer, P-T., and A. Pras, "Broad and Load-Aware Anycast Mapping with Verfploeter", ACM 2017 Internet Measurement Conference, DOI 10.1145/3131365.3131371, <https://www.isi.edu/%7ejohnh/PAPERS/Vries17b.pdf>.

Acknowledgements

We would like to thank the reviewers of this document who offered valuable suggestions as well as comments at the IETF DNSOP session (IETF 104): Duane Wessels, Joe Abley, Toema Gavrichenkov, John Levine, Michael StJohns, Kristof Tuyteleers, Stefan Ubbink, Klaus Darilion, and Samir Jafferali.

Additionally, we would like to thank those acknowledged in the papers this document summarizes for helping produce the results: RIPE NCC and DNS OARC for their tools and datasets used in this research, as well as the funding agencies sponsoring the individual research.

Contributors

This document is a summary of the main considerations of six research papers written by the authors and the following people, who contributed substantially to the content and should be considered coauthors; this document would not have been possible without their hard work:

Authors' Addresses

Giovane C. M. Moura
SIDN Labs/TU Delft
Meander 501
6825 MD Arnhem
Netherlands
Phone: +31 26 352 5500
Email: giovane.moura@sidn.nl
Wes Hardaker
USC/Information Sciences Institute
PO Box 382
Davis, CA 95617-0382
United States of America
Phone: +1 (530) 404-0099
Email: ietf@hardakers.net
John Heidemann
USC/Information Sciences Institute
4676 Admiralty Way
Marina Del Rey, CA 90292-6695
United States of America
Phone: +1 (310) 448-8708
Email: johnh@isi.edu
Marco Davids
SIDN Labs
Meander 501
6825 MD Arnhem
Netherlands
Phone: +31 26 352 5500
Email: marco.davids@sidn.nl
