| Internet-Draft | QUIC Transport Protocol | December 2020 |
| Iyengar & Thomson | Expires 16 June 2021 | [Page] |
This document defines the core of the QUIC transport protocol. QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances. Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.¶
DO NOT DEPLOY THIS VERSION OF QUIC UNTIL IT IS IN AN RFC. This version is still a work in progress. For trial deployments, please use earlier versions.¶
Discussion of this draft takes place on the QUIC working group mailing list (quic@ietf.org), which is archived at https://mailarchive.ietf.org/arch/search/?email_list=quic.¶
Working Group information can be found at https://github.com/quicwg; source code and issues list for this draft can be found at https://github.com/quicwg/base-drafts/labels/-transport.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 16 June 2021.¶
Copyright (c) 2020 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.¶
QUIC is a secure general-purpose transport protocol. This document defines version 1 of QUIC, which conforms to the version-independent properties of QUIC defined in [QUIC-INVARIANTS].¶
QUIC is a connection-oriented protocol that creates a stateful interaction between a client and server.¶
The QUIC handshake combines negotiation of cryptographic and transport parameters. QUIC integrates the TLS ([TLS13]) handshake, although using a customized framing for protecting packets. The integration of TLS and QUIC is described in more detail in [QUIC-TLS]. The handshake is structured to permit the exchange of application data as soon as possible. This includes an option for clients to send data immediately (0-RTT), which might require prior communication to enable.¶
Endpoints communicate in QUIC by exchanging QUIC packets. Most packets contain frames, which carry control information and application data between endpoints. QUIC authenticates all packets and encrypts as much as is practical. QUIC packets are carried in UDP datagrams ([UDP]) to better facilitate deployment in existing systems and networks.¶
Application protocols exchange information over a QUIC connection via streams, which are ordered sequences of bytes. Two types of stream can be created: bidirectional streams, which allow both endpoints to send data; and unidirectional streams, which allow a single endpoint to send data. A credit-based scheme is used to limit stream creation and to bound the amount of data that can be sent.¶
QUIC provides the necessary feedback to implement reliable delivery and congestion control. An algorithm for detecting and recovering from loss of data is described in [QUIC-RECOVERY]. QUIC depends on congestion control to avoid network congestion. An exemplary congestion control algorithm is also described in [QUIC-RECOVERY].¶
QUIC connections are not strictly bound to a single network path. Connection migration uses connection identifiers to allow connections to transfer to a new network path. Only clients are able to migrate in this version of QUIC. This design also allows connections to continue after changes in network topology or address mappings, such as might be caused by NAT rebinding.¶
Once established, multiple options are provided for connection termination. Applications can manage a graceful shutdown, endpoints can negotiate a timeout period, errors can cause immediate connection teardown, and a stateless mechanism provides for termination of connections after one endpoint has lost state.¶
This document describes the core QUIC protocol and is structured as follows:¶
Streams are the basic service abstraction that QUIC provides.¶
Connections are the context in which QUIC endpoints communicate.¶
Packets and frames are the basic unit used by QUIC to communicate.¶
Finally, encoding details of QUIC protocol elements are described in:¶
Accompanying documents describe QUIC's loss detection and congestion control [QUIC-RECOVERY], and the use of TLS for key negotiation [QUIC-TLS].¶
This document defines QUIC version 1, which conforms to the protocol invariants in [QUIC-INVARIANTS].¶
To refer to QUIC version 1, cite this document. References to the limited set of version-independent properties of QUIC can cite [QUIC-INVARIANTS].¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
Commonly used terms in the document are described below.¶
The transport protocol described by this document. QUIC is a name, not an acronym.¶
An entity that can participate in a QUIC connection by generating, receiving, and processing QUIC packets. There are only two types of endpoint in QUIC: client and server.¶
The endpoint that initiates a QUIC connection.¶
The endpoint that accepts a QUIC connection.¶
A complete processable unit of QUIC that can be encapsulated in a UDP datagram. One or more QUIC packets can be encapsulated in a single UDP datagram.¶
A QUIC packet that contains frames other than ACK, PADDING, and CONNECTION_CLOSE. These cause a recipient to send an acknowledgment; see Section 13.2.1.¶
A unit of structured protocol information. There are multiple frame types, each of which carries different information. Frames are contained in QUIC packets.¶
When used without qualification, the tuple of IP version, IP address, and UDP port number that represents one end of a network path.¶
An identifier that is used to identify a QUIC connection at an endpoint. Each endpoint selects one or more Connection IDs for its peer to include in packets sent towards the endpoint. This value is opaque to the peer.¶
A unidirectional or bidirectional channel of ordered bytes within a QUIC connection. A QUIC connection can carry multiple simultaneous streams.¶
An entity that uses QUIC to send and receive data.¶
This document uses the terms "QUIC packets", "UDP datagrams", and "IP packets" to refer to the units of the respective protocols. That is, one or more QUIC packets can be encapsulated in a UDP datagram, which is in turn encapsulated in an IP packet.¶
Packet and frame diagrams in this document use a custom format. The purpose of this format is to summarize, not define, protocol elements. Prose defines the complete semantics and details of structures.¶
Complex fields are named and then followed by a list of fields surrounded by a pair of matching braces. Each field in this list is separated by commas.¶
Individual fields include length information, plus indications about fixed value, optionality, or repetitions. Individual fields use the following notational conventions, with all lengths in bits:¶
Indicates that x is A bits long¶
Indicates that x uses the variable-length encoding in Section 16¶
Indicates that x can be any length from A to B; A can be omitted to indicate a minimum of zero bits and B can be omitted to indicate no set upper limit; values in this format always end on an octet boundary¶
Indicates that x has a fixed value of C with the length described by ?, as above¶
Indicates that x has a value in the range from C to D, inclusive, with the length described by ?, as above¶
Indicates that x is optional (and has length of E)¶
Indicates that x is repeated zero or more times (and that each instance is length E)¶
This document uses network byte order (that is, big endian) values. Fields are placed starting from the high-order bits of each byte.¶
By convention, individual fields reference a complex field by using the name of the complex field.¶
For example:¶
```
Example Structure {
  One-bit Field (1),
  7-bit Field with Fixed Value (7) = 61,
  Field with Variable-Length Integer (i),
  Arbitrary-Length Field (..),
  Variable-Length Field (8..24),
  Field With Minimum Length (16..),
  Field With Maximum Length (..128),
  [Optional Field (64)],
  Repeated Field (8) ...,
}
```

Figure 1: Example Format¶
When a single-bit field is referenced in prose, the position of that field can be clarified by using the value of the byte that carries the field with the field's value set. For example, the value 0x80 could be used to refer to the single-bit field in the most significant bit of the byte, such as One-bit Field in Figure 1.¶
Streams in QUIC provide a lightweight, ordered byte-stream abstraction to an application. Streams can be unidirectional or bidirectional.¶
Streams can be created by sending data. Other processes associated with stream management - ending, cancelling, and managing flow control - are all designed to impose minimal overheads. For instance, a single STREAM frame (Section 19.8) can open, carry data for, and close a stream. Streams can also be long-lived and can last the entire duration of a connection.¶
Streams can be created by either endpoint, can concurrently send data interleaved with other streams, and can be cancelled. QUIC does not provide any means of ensuring ordering between bytes on different streams.¶
QUIC allows for an arbitrary number of streams to operate concurrently and for an arbitrary amount of data to be sent on any stream, subject to flow control constraints and stream limits; see Section 4.¶
Streams can be unidirectional or bidirectional. Unidirectional streams carry data in one direction: from the initiator of the stream to its peer. Bidirectional streams allow for data to be sent in both directions.¶
Streams are identified within a connection by a numeric value, referred to as the stream ID. A stream ID is a 62-bit integer (0 to 2^62-1) that is unique for all streams on a connection. Stream IDs are encoded as variable-length integers; see Section 16. A QUIC endpoint MUST NOT reuse a stream ID within a connection.¶
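To illustrate, the variable-length integer encoding of Section 16 can be sketched in a few lines. This is an informal illustration, not part of the specification: the two most significant bits of the first byte give the total length of the encoding (00, 01, 10, and 11 for 1, 2, 4, and 8 bytes), and the remaining bits carry the value.¶

```python
# Informal sketch of the variable-length integer encoding (Section 16).
# The two most significant bits of the first byte give the total length
# (00 = 1 byte, 01 = 2, 10 = 4, 11 = 8); the remaining bits hold the value.

def encode_varint(value: int) -> bytes:
    if value < 2**6:
        return value.to_bytes(1, "big")
    if value < 2**14:
        return ((0b01 << 14) | value).to_bytes(2, "big")
    if value < 2**30:
        return ((0b10 << 30) | value).to_bytes(4, "big")
    if value < 2**62:
        return ((0b11 << 62) | value).to_bytes(8, "big")
    raise ValueError("value exceeds 2^62 - 1")

def decode_varint(data: bytes) -> tuple[int, int]:
    """Return (value, number of bytes consumed)."""
    length = 1 << (data[0] >> 6)              # 1, 2, 4, or 8 bytes
    value = int.from_bytes(data[:length], "big")
    value &= (1 << (8 * length - 2)) - 1      # clear the two length bits
    return value, length
```

For example, the single byte 0x25 decodes to 37, and the two-byte sequence 0x7bbd decodes to 15293; stream IDs are carried in this form wherever they appear in frames.¶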
The least significant bit (0x1) of the stream ID identifies the initiator of the stream. Client-initiated streams have even-numbered stream IDs (with the bit set to 0), and server-initiated streams have odd-numbered stream IDs (with the bit set to 1).¶
The second least significant bit (0x2) of the stream ID distinguishes between bidirectional streams (with the bit set to 0) and unidirectional streams (with the bit set to 1).¶
The two least significant bits from a stream ID therefore identify a stream as one of four types, as summarized in Table 1.¶
| Bits | Stream Type |
|---|---|
| 0x0 | Client-Initiated, Bidirectional |
| 0x1 | Server-Initiated, Bidirectional |
| 0x2 | Client-Initiated, Unidirectional |
| 0x3 | Server-Initiated, Unidirectional |
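The classification in Table 1 amounts to two bit tests, sketched here as an illustrative, non-normative helper:¶

```python
# Illustrative helper: classify a stream by the two least significant
# bits of its stream ID, per Table 1.

def stream_type(stream_id: int) -> str:
    initiator = "Server" if stream_id & 0x1 else "Client"
    direction = "Unidirectional" if stream_id & 0x2 else "Bidirectional"
    return f"{initiator}-Initiated, {direction}"
```

For instance, stream ID 0x3 yields "Server-Initiated, Unidirectional", matching the last row of the table.¶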
The stream space for each type begins at the minimum value (0x0 through 0x3, respectively); successive streams of each type are created with numerically increasing stream IDs. A stream ID that is used out of order results in all streams of that type with lower-numbered stream IDs also being opened.¶
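Because stream IDs of a given type differ by 4, the set of lower-numbered streams that an out-of-order stream ID implicitly opens can be computed directly; a hypothetical helper:¶

```python
# Hypothetical helper: using stream ID N of some type out of order also
# opens every lower-numbered stream of the same type (IDs differ by 4).

def implicitly_opened(stream_id: int) -> list[int]:
    first_of_type = stream_id & 0x3          # 0x0 through 0x3
    return list(range(first_of_type, stream_id, 4))
```

For example, a client-initiated bidirectional stream with ID 12 also opens streams 0, 4, and 8 of that type if they are not already open.¶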
STREAM frames (Section 19.8) encapsulate data sent by an application. An endpoint uses the Stream ID and Offset fields in STREAM frames to place data in order.¶
Endpoints MUST be able to deliver stream data to an application as an ordered byte-stream. Delivering an ordered byte-stream requires that an endpoint buffer any data that is received out of order, up to the advertised flow control limit.¶
QUIC makes no specific allowances for delivery of stream data out of order. However, implementations MAY choose to offer the ability to deliver data out of order to a receiving application.¶
An endpoint could receive data for a stream at the same stream offset multiple times. Data that has already been received can be discarded. The data at a given offset MUST NOT change if it is sent multiple times; an endpoint MAY treat receipt of different data at the same offset within a stream as a connection error of type PROTOCOL_VIOLATION.¶
Streams are an ordered byte-stream abstraction with no other structure visible to QUIC. STREAM frame boundaries are not expected to be preserved when data is transmitted, retransmitted after packet loss, or delivered to the application at a receiver.¶
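A receiver's use of the Offset field to rebuild the ordered byte stream can be sketched as follows. This minimal illustration (the names are not from the specification) assumes retransmitted frames repeat earlier data exactly and do not partially overlap; a real implementation must handle arbitrary overlaps and enforce flow control limits.¶

```python
# Minimal sketch of offset-based stream reassembly: frames received out
# of order are buffered by offset; data is handed to the application
# only as a contiguous, ordered byte stream.

class StreamReassembler:
    def __init__(self) -> None:
        self.pending: dict[int, bytes] = {}   # offset -> buffered data
        self.next_offset = 0                  # first byte not yet delivered

    def on_stream_frame(self, offset: int, data: bytes) -> bytes:
        """Buffer one frame's payload; return bytes now deliverable in order."""
        if offset >= self.next_offset:
            self.pending.setdefault(offset, data)   # duplicates discarded
        delivered = bytearray()
        while self.next_offset in self.pending:
            chunk = self.pending.pop(self.next_offset)
            delivered += chunk
            self.next_offset += len(chunk)
        return bytes(delivered)
```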
An endpoint MUST NOT send data on any stream without ensuring that it is within the flow control limits set by its peer. Flow control is described in detail in Section 4.¶
Stream multiplexing can have a significant effect on application performance if resources allocated to streams are correctly prioritized.¶
QUIC does not provide a mechanism for exchanging prioritization information. Instead, it relies on receiving priority information from the application.¶
A QUIC implementation SHOULD provide ways in which an application can indicate the relative priority of streams. An implementation uses information provided by the application to determine how to allocate resources to active streams.¶
This document does not define an API for QUIC, but instead defines a set of functions on streams that application protocols can rely upon. An application protocol can assume that a QUIC implementation provides an interface that includes the operations described in this section. An implementation designed for use with a specific application protocol might provide only those operations that are used by that protocol.¶
On the sending part of a stream, an application protocol can:¶
On the receiving part of a stream, an application protocol can:¶
An application protocol can also request to be informed of state changes on streams, including when the peer has opened or reset a stream, when a peer aborts reading on a stream, when new data is available, and when data can or cannot be written to the stream due to flow control.¶
This section describes streams in terms of their send or receive components. Two state machines are described: one for the streams on which an endpoint transmits data (Section 3.1), and another for streams on which an endpoint receives data (Section 3.2).¶
Unidirectional streams use the applicable state machine directly. Bidirectional streams use both state machines. For the most part, the use of these state machines is the same whether the stream is unidirectional or bidirectional. The conditions for opening a stream are slightly more complex for a bidirectional stream because the opening of either the send or receive side causes the stream to open in both directions.¶
The state machines shown in this section are largely informative. This document uses stream states to describe rules for when and how different types of frames can be sent and the reactions that are expected when different types of frames are received. Though these state machines are intended to be useful in implementing QUIC, these states are not intended to constrain implementations. An implementation can define a different state machine as long as its behavior is consistent with an implementation that implements these states.¶
In some cases, a single event or action can cause a transition through multiple states. For instance, sending STREAM with a FIN bit set can cause two state transitions for a sending stream: from the Ready state to the Send state, and from the Send state to the Data Sent state.¶
Figure 2 shows the states for the part of a stream that sends data to a peer.¶
```
       o
       | Create Stream (Sending)
       | Peer Creates Bidirectional Stream
       v
   +-------+
   | Ready | Send RESET_STREAM
   |       |-----------------------.
   +-------+                       |
       |                           |
       | Send STREAM /             |
       |      STREAM_DATA_BLOCKED  |
       |                           |
       | Peer Creates              |
       |      Bidirectional Stream |
       v                           |
   +-------+                       |
   | Send  | Send RESET_STREAM     |
   |       |---------------------->|
   +-------+                       |
       |                           |
       | Send STREAM + FIN         |
       v                           v
   +-------+                   +-------+
   | Data  | Send RESET_STREAM | Reset |
   | Sent  |------------------>| Sent  |
   +-------+                   +-------+
       |                           |
       | Recv All ACKs             | Recv ACK
       v                           v
   +-------+                   +-------+
   | Data  |                   | Reset |
   | Recvd |                   | Recvd |
   +-------+                   +-------+
```

Figure 2: States for Sending Parts of Streams¶
The sending part of a stream that the endpoint initiates (types 0 and 2 for clients, 1 and 3 for servers) is opened by the application. The "Ready" state represents a newly created stream that is able to accept data from the application. Stream data might be buffered in this state in preparation for sending.¶
Sending the first STREAM or STREAM_DATA_BLOCKED frame causes a sending part of a stream to enter the "Send" state. An implementation might choose to defer allocating a stream ID to a stream until it sends the first STREAM frame and enters this state, which can allow for better stream prioritization.¶
The sending part of a bidirectional stream initiated by a peer (type 0 for a server, type 1 for a client) starts in the "Ready" state when the receiving part is created.¶
In the "Send" state, an endpoint transmits - and retransmits as necessary - stream data in STREAM frames. The endpoint respects the flow control limits set by its peer, and continues to accept and process MAX_STREAM_DATA frames. An endpoint in the "Send" state generates STREAM_DATA_BLOCKED frames if it is blocked from sending by stream or connection flow control limits (Section 4.1).¶
After the application indicates that all stream data has been sent and a STREAM frame containing the FIN bit is sent, the sending part of the stream enters the "Data Sent" state. From this state, the endpoint only retransmits stream data as necessary. The endpoint does not need to check flow control limits or send STREAM_DATA_BLOCKED frames for a stream in this state. MAX_STREAM_DATA frames might be received until the peer receives the final stream offset. The endpoint can safely ignore any MAX_STREAM_DATA frames it receives from its peer for a stream in this state.¶
Once all stream data has been successfully acknowledged, the sending part of the stream enters the "Data Recvd" state, which is a terminal state.¶
From any of the "Ready", "Send", or "Data Sent" states, an application can signal that it wishes to abandon transmission of stream data. Alternatively, an endpoint might receive a STOP_SENDING frame from its peer. In either case, the endpoint sends a RESET_STREAM frame, which causes the stream to enter the "Reset Sent" state.¶
An endpoint MAY send a RESET_STREAM as the first frame that mentions a stream; this causes the sending part of that stream to open and then immediately transition to the "Reset Sent" state.¶
Once a packet containing a RESET_STREAM has been acknowledged, the sending part of the stream enters the "Reset Recvd" state, which is a terminal state.¶
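The transitions described above can be collected into a small table. This is an informative sketch of the sending-part state machine rather than a normative implementation; the event names are invented for illustration.¶

```python
# Informative sketch of the sending-part state machine.
# Event names are illustrative; terminal states have no outgoing edges.

SEND_TRANSITIONS = {
    ("Ready", "send STREAM / STREAM_DATA_BLOCKED"): "Send",
    ("Ready", "send RESET_STREAM"): "Reset Sent",
    ("Send", "send STREAM + FIN"): "Data Sent",
    ("Send", "send RESET_STREAM"): "Reset Sent",
    ("Data Sent", "recv all ACKs"): "Data Recvd",
    ("Data Sent", "send RESET_STREAM"): "Reset Sent",
    ("Reset Sent", "recv ACK"): "Reset Recvd",
}

def next_send_state(state: str, event: str) -> str:
    return SEND_TRANSITIONS[(state, event)]
```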
Figure 3 shows the states for the part of a stream that receives data from a peer. The states for a receiving part of a stream mirror only some of the states of the sending part of the stream at the peer. The receiving part of a stream does not track states on the sending part that cannot be observed, such as the "Ready" state. Instead, the receiving part of a stream tracks the delivery of data to the application, some of which cannot be observed by the sender.¶
```
       o
       | Recv STREAM / STREAM_DATA_BLOCKED / RESET_STREAM
       | Create Bidirectional Stream (Sending)
       | Recv MAX_STREAM_DATA / STOP_SENDING (Bidirectional)
       | Create Higher-Numbered Stream
       v
   +-------+
   | Recv  | Recv RESET_STREAM
   |       |-----------------------.
   +-------+                       |
       |                           |
       | Recv STREAM + FIN         |
       v                           |
   +-------+                       |
   | Size  | Recv RESET_STREAM     |
   | Known |---------------------->|
   +-------+                       |
       |                           |
       | Recv All Data             |
       v                           v
   +-------+ Recv RESET_STREAM +-------+
   | Data  |--- (optional) --->| Reset |
   | Recvd |  Recv All Data    | Recvd |
   +-------+<-- (optional) ----+-------+
       |                           |
       | App Read All Data         | App Read RST
       v                           v
   +-------+                   +-------+
   | Data  |                   | Reset |
   | Read  |                   | Read  |
   +-------+                   +-------+
```

Figure 3: States for Receiving Parts of Streams¶
The receiving part of a stream initiated by a peer (types 1 and 3 for a client, or 0 and 2 for a server) is created when the first STREAM, STREAM_DATA_BLOCKED, or RESET_STREAM frame is received for that stream. For bidirectional streams initiated by a peer, receipt of a MAX_STREAM_DATA or STOP_SENDING frame for the sending part of the stream also creates the receiving part. The initial state for the receiving part of a stream is "Recv".¶
The receiving part of a stream enters the "Recv" state when the sending part of a bidirectional stream initiated by the endpoint (type 0 for a client, type 1 for a server) enters the "Ready" state.¶
An endpoint opens a bidirectional stream when a MAX_STREAM_DATA or STOP_SENDING frame is received from the peer for that stream. Receiving a MAX_STREAM_DATA frame for an unopened stream indicates that the remote peer has opened the stream and is providing flow control credit. Receiving a STOP_SENDING frame for an unopened stream indicates that the remote peer no longer wishes to receive data on this stream. Either frame might arrive before a STREAM or STREAM_DATA_BLOCKED frame if packets are lost or reordered.¶
Before a stream is created, all streams of the same type with lower-numbered stream IDs MUST be created. This ensures that the creation order for streams is consistent on both endpoints.¶
In the "Recv" state, the endpoint receives STREAM and STREAM_DATA_BLOCKED frames. Incoming data is buffered and can be reassembled into the correct order for delivery to the application. As data is consumed by the application and buffer space becomes available, the endpoint sends MAX_STREAM_DATA frames to allow the peer to send more data.¶
When a STREAM frame with a FIN bit is received, the final size of the stream is known; see Section 4.5. The receiving part of the stream then enters the "Size Known" state. In this state, the endpoint no longer needs to send MAX_STREAM_DATA frames; it only receives any retransmissions of stream data.¶
Once all data for the stream has been received, the receiving part enters the "Data Recvd" state. This might happen as a result of receiving the same STREAM frame that causes the transition to "Size Known". After all data has been received, any STREAM or STREAM_DATA_BLOCKED frames for the stream can be discarded.¶
The "Data Recvd" state persists until stream data has been delivered to the application. Once stream data has been delivered, the stream enters the "Data Read" state, which is a terminal state.¶
Receiving a RESET_STREAM frame in the "Recv" or "Size Known" state causes the stream to enter the "Reset Recvd" state. This might cause the delivery of stream data to the application to be interrupted.¶
It is possible that all stream data has already been received when a RESET_STREAM is received (that is, in the "Data Recvd" state). Similarly, it is possible for remaining stream data to arrive after receiving a RESET_STREAM frame (the "Reset Recvd" state). An implementation is free to manage this situation as it chooses.¶
Sending RESET_STREAM means that an endpoint cannot guarantee delivery of stream data; however, there is no requirement that stream data not be delivered if a RESET_STREAM is received. An implementation MAY interrupt delivery of stream data, discard any data that was not consumed, and signal the receipt of the RESET_STREAM. A RESET_STREAM signal might be suppressed or withheld if stream data is completely received and is buffered to be read by the application. If the RESET_STREAM is suppressed, the receiving part of the stream remains in "Data Recvd".¶
Once the application receives the signal indicating that the stream was reset, the receiving part of the stream transitions to the "Reset Read" state, which is a terminal state.¶
The sender of a stream sends just three frame types that affect the state of a stream at either sender or receiver: STREAM (Section 19.8), STREAM_DATA_BLOCKED (Section 19.13), and RESET_STREAM (Section 19.4).¶
A sender MUST NOT send any of these frames from a terminal state ("Data Recvd" or "Reset Recvd"). A sender MUST NOT send a STREAM or STREAM_DATA_BLOCKED frame for a stream in the "Reset Sent" state or any terminal state, that is, after sending a RESET_STREAM frame. A receiver could receive any of these three frames in any state, due to the possibility of delayed delivery of packets carrying them.¶
The receiver of a stream sends MAX_STREAM_DATA (Section 19.10) and STOP_SENDING frames (Section 19.5).¶
The receiver only sends MAX_STREAM_DATA in the "Recv" state. A receiver MAY send STOP_SENDING in any state where it has not received a RESET_STREAM frame; that is, states other than "Reset Recvd" or "Reset Read". However, there is little value in sending a STOP_SENDING frame in the "Data Recvd" state, since all stream data has been received. A sender could receive either of these two frames in any state as a result of delayed delivery of packets.¶
A bidirectional stream is composed of sending and receiving parts. Implementations can represent states of the bidirectional stream as composites of sending and receiving stream states. The simplest model presents the stream as "open" when either sending or receiving parts are in a non-terminal state and "closed" when both sending and receiving streams are in terminal states.¶
Table 2 shows a more complex mapping of bidirectional stream states that loosely correspond to the stream states in HTTP/2 [HTTP2]. This shows that multiple states on sending or receiving parts of streams are mapped to the same composite state. Note that this is just one possibility for such a mapping; this mapping requires that data is acknowledged before the transition to a "closed" or "half-closed" state.¶
| Sending Part | Receiving Part | Composite State |
|---|---|---|
| No Stream/Ready | No Stream/Recv *1 | idle |
| Ready/Send/Data Sent | Recv/Size Known | open |
| Ready/Send/Data Sent | Data Recvd/Data Read | half-closed (remote) |
| Ready/Send/Data Sent | Reset Recvd/Reset Read | half-closed (remote) |
| Data Recvd | Recv/Size Known | half-closed (local) |
| Reset Sent/Reset Recvd | Recv/Size Known | half-closed (local) |
| Reset Sent/Reset Recvd | Data Recvd/Data Read | closed |
| Reset Sent/Reset Recvd | Reset Recvd/Reset Read | closed |
| Data Recvd | Data Recvd/Data Read | closed |
| Data Recvd | Reset Recvd/Reset Read | closed |
A stream is considered "idle" if it has not yet been created, or if the receiving part of the stream is in the "Recv" state without yet having received any frames.¶
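The mapping in Table 2 can be expressed as a function of the two part states. This sketch is one possible, non-normative rendering; per the footnote, it returns "idle" for the Ready/Recv combination only under the assumption that no frames have yet been exchanged.¶

```python
# One possible rendering of Table 2: map (sending part, receiving part)
# states to an HTTP/2-style composite state. Non-normative; the "idle"
# case additionally assumes no frames have been exchanged yet (*1).

def composite_state(send: str, recv: str) -> str:
    if send in ("No Stream", "Ready") and recv in ("No Stream", "Recv"):
        return "idle"
    send_done = send in ("Data Recvd", "Reset Sent", "Reset Recvd")
    recv_done = recv in ("Data Recvd", "Data Read", "Reset Recvd", "Reset Read")
    if send_done and recv_done:
        return "closed"
    if send_done:
        return "half-closed (local)"
    if recv_done:
        return "half-closed (remote)"
    return "open"
```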
If an application is no longer interested in the data it is receiving on a stream, it can abort reading the stream and specify an application error code.¶
If the stream is in the "Recv" or "Size Known" state, the transport SHOULD signal this by sending a STOP_SENDING frame to prompt closure of the stream in the opposite direction. This typically indicates that the receiving application is no longer reading data it receives from the stream, but it is not a guarantee that incoming data will be ignored.¶
STREAM frames received after sending a STOP_SENDING frame are still counted toward connection and stream flow control, even though these frames can be discarded upon receipt.¶
A STOP_SENDING frame requests that the receiving endpoint send a RESET_STREAM frame. An endpoint that receives a STOP_SENDING frame MUST send a RESET_STREAM frame if the stream is in the "Ready" or "Send" state. If the stream is in the "Data Sent" state, the endpoint MAY defer sending the RESET_STREAM frame until the packets containing outstanding data are acknowledged or declared lost. If any outstanding data is declared lost, the endpoint SHOULD send a RESET_STREAM frame instead of retransmitting the data.¶
An endpoint SHOULD copy the error code from the STOP_SENDING frame to the RESET_STREAM frame it sends, but MAY use any application error code. An endpoint that sends a STOP_SENDING frame MAY ignore the error code in any RESET_STREAM frames subsequently received for that stream.¶
STOP_SENDING SHOULD only be sent for a stream that has not been reset by the peer. STOP_SENDING is most useful for streams in the "Recv" or "Size Known" state.¶
An endpoint is expected to send another STOP_SENDING frame if a packet containing a previous STOP_SENDING is lost. However, once either all stream data or a RESET_STREAM frame has been received for the stream - that is, the stream is in any state other than "Recv" or "Size Known" - sending a STOP_SENDING frame is unnecessary.¶
An endpoint that wishes to terminate both directions of a bidirectional stream can terminate one direction by sending a RESET_STREAM frame, and it can encourage prompt termination in the opposite direction by sending a STOP_SENDING frame.¶
It is necessary to limit the amount of data that a receiver could buffer, to prevent a fast sender from overwhelming a slow receiver, or to prevent a malicious sender from consuming a large amount of memory at a receiver. To enable a receiver to limit memory commitment to a connection and to apply back pressure on the sender, streams are flow controlled both individually and as an aggregate. A QUIC receiver controls the maximum amount of data the sender can send on a stream at any time, as described in Section 4.1 and Section 4.2.¶
Similarly, to limit concurrency within a connection, a QUIC endpoint controls the maximum cumulative number of streams that its peer can initiate, as described in Section 4.6.¶
Data sent in CRYPTO frames is not flow controlled in the same way as stream data. QUIC relies on the cryptographic protocol implementation to avoid excessive buffering of data; see [QUIC-TLS]. To avoid excessive buffering at multiple layers, QUIC implementations SHOULD provide an interface for the cryptographic protocol implementation to communicate its buffering limits.¶
QUIC employs a limit-based flow-control scheme where a receiver advertises the limit of total bytes it is prepared to receive on a given stream or for the entire connection. This leads to two levels of data flow control in QUIC:¶
Senders MUST NOT send data in excess of either limit.¶
A receiver sets initial limits for all streams through transport parameters during the handshake (Section 7.4). Subsequently, a receiver sends MAX_STREAM_DATA (Section 19.10) or MAX_DATA (Section 19.9) frames to the sender to advertise larger limits.¶
A receiver can advertise a larger limit for a stream by sending a MAX_STREAM_DATA frame with the corresponding stream ID. A MAX_STREAM_DATA frame indicates the maximum absolute byte offset of a stream. A receiver could determine the flow control offset to be advertised based on the current offset of data consumed on that stream.¶
A receiver can advertise a larger limit for a connection by sending a MAX_DATA frame, which indicates the maximum of the sum of the absolute byte offsets of all streams. A receiver maintains a cumulative sum of bytes received on all streams, which is used to check for violations of the advertised connection or stream data limits. A receiver could determine the maximum data limit to be advertised based on the sum of bytes consumed on all streams.¶
Once a receiver advertises a limit for the connection or a stream, it MAY advertise a smaller limit, but this has no effect.¶
A receiver MUST close the connection with a FLOW_CONTROL_ERROR error (Section 11) if the sender violates the advertised connection or stream data limits.¶
A sender MUST ignore any MAX_STREAM_DATA or MAX_DATA frames that do not increase flow control limits.¶
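The sender-side rules above (two independent limits, with non-increasing updates ignored) can be sketched as follows; the names are illustrative, not from the specification.¶

```python
# Illustrative sketch of sender-side flow control accounting: the sender
# tracks the latest advertised limits and never sends beyond either one.

class SendLimits:
    def __init__(self, max_stream_data: int, max_data: int) -> None:
        self.max_stream_data = max_stream_data  # per-stream limit (bytes)
        self.max_data = max_data                # connection-wide limit (bytes)

    def on_max_stream_data(self, limit: int) -> None:
        # Frames that do not increase the limit are ignored.
        self.max_stream_data = max(self.max_stream_data, limit)

    def on_max_data(self, limit: int) -> None:
        self.max_data = max(self.max_data, limit)

    def sendable(self, stream_sent: int, total_sent: int) -> int:
        """Bytes that may be sent on this stream right now."""
        return max(0, min(self.max_stream_data - stream_sent,
                          self.max_data - total_sent))
```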
If a sender has sent data up to the limit, it will be unable to send new data and is considered blocked. A sender SHOULD send a STREAM_DATA_BLOCKED or DATA_BLOCKED frame to indicate to the receiver that it has data to write but is blocked by flow control limits. If a sender is blocked for a period longer than the idle timeout (Section 10.1), the receiver might close the connection even when the sender has data that is available for transmission. To keep the connection from closing, a sender that is flow control limited SHOULD periodically send a STREAM_DATA_BLOCKED or DATA_BLOCKED frame when it has no ack-eliciting packets in flight.¶
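The sender-side decision above can be sketched as a small helper; the function and parameter names are illustrative, not from any real implementation:

```python
def blocked_frame_to_send(bytes_sent, max_data, pending_data, ack_eliciting_in_flight):
    """Return the control frame to send, if any, for connection-level credit.

    A sender is blocked when it has data to write but has already consumed
    all advertised credit. While blocked with no ack-eliciting packets in
    flight, it periodically sends DATA_BLOCKED to keep the idle timeout
    from firing.
    """
    blocked = pending_data > 0 and bytes_sent >= max_data
    if blocked and not ack_eliciting_in_flight:
        return "DATA_BLOCKED"
    return None
```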
Implementations decide when and how much credit to advertise in MAX_STREAM_DATA and MAX_DATA frames, but this section offers a few considerations.¶
To avoid blocking a sender, a receiver MAY send a MAX_STREAM_DATA or MAX_DATA frame multiple times within a round trip or send it early enough to allow time for loss of the frame and subsequent recovery.¶
Control frames contribute to connection overhead. Therefore, frequently sending MAX_STREAM_DATA and MAX_DATA frames with small changes is undesirable. On the other hand, if updates are less frequent, larger increments to limits are necessary to avoid blocking a sender, requiring larger resource commitments at the receiver. There is a trade-off between resource commitment and overhead when determining how large a limit is advertised.¶
A receiver can use an autotuning mechanism to tune the frequency and amount of advertised additional credit based on a round-trip time estimate and the rate at which the receiving application consumes data, similar to common TCP implementations. As an optimization, an endpoint could send frames related to flow control only when there are other frames to send, ensuring that flow control does not cause extra packets to be sent.¶
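One possible shape for such an autotuning step, as a hypothetical sketch: the doubling policy (grow the window when the application consumed more than half of it within roughly one RTT) is an assumption of this example, not something this document specifies.

```python
def next_max_data(consumed, advertised_limit, window):
    """Return (new_limit, new_window, should_send_update).

    consumed: bytes the application has consumed so far.
    advertised_limit: the MAX_DATA value last sent to the peer.
    window: current amount of credit kept ahead of consumption.
    """
    remaining = advertised_limit - consumed
    if remaining < window / 2:
        window *= 2  # consumption outpaced the window: grow it
        return consumed + window, window, True
    return advertised_limit, window, False
```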
A blocked sender is not required to send STREAM_DATA_BLOCKED or DATA_BLOCKED frames. Therefore, a receiver MUST NOT wait for a STREAM_DATA_BLOCKED or DATA_BLOCKED frame before sending a MAX_STREAM_DATA or MAX_DATA frame; doing so could result in the sender being blocked for the rest of the connection. Even if the sender sends these frames, waiting for them will result in the sender being blocked for at least an entire round trip.¶
When a sender receives credit after being blocked, it might be able to send a large amount of data in response, resulting in short-term congestion; see Section 6.9 in [QUIC-RECOVERY] for a discussion of how a sender can avoid this congestion.¶
If an endpoint cannot ensure that its peer always has available flow control credit that is greater than the peer's bandwidth-delay product on this connection, its receive throughput will be limited by flow control.¶
Packet loss can cause gaps in the receive buffer, preventing the application from consuming data and freeing up receive buffer space.¶
Sending timely updates of flow control limits can improve performance. Sending packets only to provide flow control updates can increase network load and adversely affect performance. Sending flow control updates along with other frames, such as ACK frames, reduces the cost of those updates.¶
Endpoints need to eventually agree on the amount of flow control credit that has been consumed on every stream, to be able to account for all bytes for connection-level flow control.¶
On receipt of a RESET_STREAM frame, an endpoint will tear down state for the matching stream and ignore further data arriving on that stream.¶
RESET_STREAM terminates one direction of a stream abruptly. For a bidirectional stream, RESET_STREAM has no effect on data flow in the opposite direction. Both endpoints MUST maintain flow control state for the stream in the unterminated direction until that direction enters a terminal state.¶
The final size is the amount of flow control credit that is consumed by a stream. Assuming that every contiguous byte on the stream was sent once, the final size is the number of bytes sent. More generally, this is one higher than the offset of the byte with the largest offset sent on the stream, or zero if no bytes were sent.¶
A sender always communicates the final size of a stream to the receiver reliably, no matter how the stream is terminated. The final size is the sum of the Offset and Length fields of a STREAM frame with a FIN flag, noting that these fields might be implicit. Alternatively, the Final Size field of a RESET_STREAM frame carries this value. This guarantees that both endpoints agree on how much flow control credit was consumed by the sender on that stream.¶
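A sketch of deriving the final size from the frame that closes the stream, using hypothetical frame representations (the real wire encodings may leave Offset and Length implicit):

```python
from dataclasses import dataclass

@dataclass
class StreamFrame:
    offset: int   # byte offset of the frame's data within the stream
    length: int   # number of data bytes carried
    fin: bool     # FIN flag marks the end of the stream

@dataclass
class ResetStreamFrame:
    final_size: int  # Final Size field

def final_size(frame):
    """Final size: Offset + Length of the FIN-bearing STREAM frame, or the
    Final Size field of a RESET_STREAM frame."""
    if isinstance(frame, ResetStreamFrame):
        return frame.final_size
    if isinstance(frame, StreamFrame) and frame.fin:
        return frame.offset + frame.length
    raise ValueError("frame does not establish a final size")
```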
An endpoint will know the final size for a stream when the receiving part of the stream enters the "Size Known" or "Reset Recvd" state (Section 3). The receiver MUST use the final size of the stream to account for all bytes sent on the stream in its connection-level flow controller.¶
An endpoint MUST NOT send data on a stream at or beyond the final size.¶
Once a final size for a stream is known, it cannot change. If a RESET_STREAM or STREAM frame is received indicating a change in the final size for the stream, an endpoint SHOULD respond with a FINAL_SIZE_ERROR error; see Section 11. A receiver SHOULD treat receipt of data at or beyond the final size as a FINAL_SIZE_ERROR error, even after a stream is closed. Generating these errors is not mandatory, because requiring that an endpoint generate these errors also means that the endpoint needs to maintain the final size state for closed streams, which could mean a significant state commitment.¶
An endpoint limits the cumulative number of incoming streams a peer can open. Only streams with a stream ID less than (max_stream * 4 + initial_stream_id_for_type) can be opened; see Table 1. Initial limits are set in the transport parameters; see Section 18.2. Subsequent limits are advertised using MAX_STREAMS frames; see Section 19.11. Separate limits apply to unidirectional and bidirectional streams.¶
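The limit formula above can be illustrated as follows. The low two bits of a stream ID encode its type (0x0: client bidirectional, 0x1: server bidirectional, 0x2: client unidirectional, 0x3: server unidirectional), and that value is also the first stream ID of the type; the helper name is illustrative:

```python
def may_open(stream_id, max_streams):
    """True if stream_id is below the cumulative limit for its stream type."""
    stream_type = stream_id & 0x3  # initial_stream_id_for_type
    return stream_id < max_streams * 4 + stream_type
```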
If a max_streams transport parameter or a MAX_STREAMS frame is received with a value greater than 2^60, this would allow a maximum stream ID that cannot be expressed as a variable-length integer; see Section 16. If either is received, the connection MUST be closed immediately with a connection error of type TRANSPORT_PARAMETER_ERROR if the offending value was received in a transport parameter or of type FRAME_ENCODING_ERROR if it was received in a frame; see Section 10.2.¶
Endpoints MUST NOT exceed the limit set by their peer. An endpoint that receives a frame with a stream ID exceeding the limit it has sent MUST treat this as a connection error of type STREAM_LIMIT_ERROR (Section 11).¶
Once a receiver advertises a stream limit using the MAX_STREAMS frame, advertising a smaller limit has no effect. A receiver MUST ignore any MAX_STREAMS frame that does not increase the stream limit.¶
As with stream and connection flow control, this document leaves implementations to decide when and how many streams should be advertised to a peer via MAX_STREAMS. Implementations might choose to increase limits as streams are closed, to keep the number of streams available to peers roughly consistent.¶
An endpoint that is unable to open a new stream due to the peer's limits SHOULD send a STREAMS_BLOCKED frame (Section 19.14). This signal is considered useful for debugging. An endpoint MUST NOT wait to receive this signal before advertising additional credit, since doing so will mean that the peer will be blocked for at least an entire round trip, and potentially indefinitely if the peer chooses not to send STREAMS_BLOCKED frames.¶
A QUIC connection is shared state between a client and a server.¶
Each connection starts with a handshake phase, during which the two endpoints establish a shared secret using the cryptographic handshake protocol [QUIC-TLS] and negotiate the application protocol. The handshake (Section 7) confirms that both endpoints are willing to communicate (Section 8.1) and establishes parameters for the connection (Section 7.4).¶
An application protocol can use the connection during the handshake phase with some limitations. 0-RTT allows application data to be sent by a client before receiving a response from the server. However, 0-RTT provides no protection against replay attacks; see Section 9.2 of [QUIC-TLS]. A server can also send application data to a client before it receives the final cryptographic handshake messages that allow it to confirm the identity and liveness of the client. These capabilities allow an application protocol to offer the option of trading some security guarantees for reduced latency.¶
The use of connection IDs (Section 5.1) allows connections to migrate to a new network path, both as a direct choice of an endpoint and when forced by a change in a middlebox. Section 9 describes mitigations for the security and privacy issues associated with migration.¶
For connections that are no longer needed or desired, there are several ways for a client and server to terminate a connection, as described in Section 10.¶
Each connection possesses a set of connection identifiers, or connection IDs, each of which can identify the connection. Connection IDs are independently selected by endpoints; each endpoint selects the connection IDs that its peer uses.¶
The primary function of a connection ID is to ensure that changes in addressing at lower protocol layers (UDP, IP) do not cause packets for a QUIC connection to be delivered to the wrong endpoint. Each endpoint selects connection IDs using an implementation-specific (and perhaps deployment-specific) method that will allow packets with that connection ID to be routed back to the endpoint and to be identified by the endpoint upon receipt.¶
Connection IDs MUST NOT contain any information that can be used by an external observer (that is, one that does not cooperate with the issuer) to correlate them with other connection IDs for the same connection. As a trivial example, this means the same connection ID MUST NOT be issued more than once on the same connection.¶
Packets with long headers include Source Connection ID and Destination Connection ID fields. These fields are used to set the connection IDs for new connections; see Section 7.2 for details.¶
Packets with short headers (Section 17.3) only include the Destination Connection ID and omit the explicit length. The length of the Destination Connection ID field is expected to be known to endpoints. Endpoints using a load balancer that routes based on connection ID could agree with the load balancer on a fixed length for connection IDs, or agree on an encoding scheme. A fixed portion could encode an explicit length, which allows the entire connection ID to vary in length and still be used by the load balancer.¶
A Version Negotiation (Section 17.2.1) packet echoes the connection IDs selected by the client, both to ensure correct routing toward the client and to demonstrate that the packet is in response to a packet sent by the client.¶
A zero-length connection ID can be used when a connection ID is not needed to route to the correct endpoint. However, multiplexing connections on the same local IP address and port while using zero-length connection IDs will cause failures in the presence of peer connection migration, NAT rebinding, and client port reuse. An endpoint MUST NOT use the same IP address and port for multiple connections with zero-length connection IDs, unless it is certain that those protocol features are not in use.¶
When an endpoint uses a non-zero-length connection ID, it needs to ensure that the peer has a supply of connection IDs from which to choose for packets sent to the endpoint. These connection IDs are supplied by the endpoint using the NEW_CONNECTION_ID frame (Section 19.15).¶
Each connection ID has an associated sequence number to assist in detecting when NEW_CONNECTION_ID or RETIRE_CONNECTION_ID frames refer to the same value. The initial connection ID issued by an endpoint is sent in the Source Connection ID field of the long packet header (Section 17.2) during the handshake. The sequence number of the initial connection ID is 0. If the preferred_address transport parameter is sent, the sequence number of the supplied connection ID is 1.¶
Additional connection IDs are communicated to the peer using NEW_CONNECTION_ID frames (Section 19.15). The sequence number on each newly issued connection ID MUST increase by 1. The connection ID randomly selected by the client in the Initial packet and any connection ID provided by a Retry packet are not assigned sequence numbers unless a server opts to retain them as its initial connection ID.¶
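A sketch of sequence-number assignment under these rules; the `CidIssuer` class is hypothetical and uses random 8-byte connection IDs purely for illustration:

```python
import os

class CidIssuer:
    def __init__(self, uses_preferred_address=False):
        self.next_seq = 0
        self.active = {}              # sequence number -> connection ID bytes
        self.issue()                  # handshake connection ID, sequence 0
        if uses_preferred_address:
            self.issue()              # preferred_address connection ID, sequence 1

    def issue(self):
        """Issue the next connection ID; sequence numbers increase by 1."""
        seq = self.next_seq
        self.active[seq] = os.urandom(8)  # 8-byte CID; length is a local choice
        self.next_seq += 1
        return seq
```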
When an endpoint issues a connection ID, it MUST accept packets that carry this connection ID for the duration of the connection or until its peer invalidates the connection ID via a RETIRE_CONNECTION_ID frame (Section 19.16). Connection IDs that are issued and not retired are considered active; any active connection ID is valid for use with the current connection at any time, in any packet type. This includes the connection ID issued by the server via the preferred_address transport parameter.¶
An endpoint SHOULD ensure that its peer has a sufficient number of available and unused connection IDs. Endpoints advertise the number of active connection IDs they are willing to maintain using the active_connection_id_limit transport parameter. An endpoint MUST NOT provide more connection IDs than the peer's limit. An endpoint MAY send connection IDs that temporarily exceed a peer's limit if the NEW_CONNECTION_ID frame also requires the retirement of any excess, by including a sufficiently large value in the Retire Prior To field.¶
A NEW_CONNECTION_ID frame might cause an endpoint to add some active connection IDs and retire others based on the value of the Retire Prior To field. After processing a NEW_CONNECTION_ID frame and adding and retiring active connection IDs, if the number of active connection IDs exceeds the value advertised in its active_connection_id_limit transport parameter, an endpoint MUST close the connection with an error of type CONNECTION_ID_LIMIT_ERROR.¶
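The Retire Prior To processing and limit check described above might be sketched as follows, with hypothetical names and connection IDs represented only by their sequence numbers:

```python
class ConnectionIdLimitError(Exception):
    """Corresponds to closing with CONNECTION_ID_LIMIT_ERROR."""

def process_new_connection_id(active, seq, retire_prior_to, limit):
    """Process one NEW_CONNECTION_ID frame.

    active: set of currently active sequence numbers (mutated in place).
    Returns the sequence numbers to retire with RETIRE_CONNECTION_ID frames.
    """
    to_retire = {s for s in active if s < retire_prior_to}
    active -= to_retire          # retire first ...
    active.add(seq)              # ... then add the newly provided connection ID
    if len(active) > limit:      # enforce active_connection_id_limit
        raise ConnectionIdLimitError
    return sorted(to_retire)
```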
An endpoint SHOULD supply a new connection ID when the peer retires a connection ID. If an endpoint provided fewer connection IDs than the peer's active_connection_id_limit, it MAY supply a new connection ID when it receives a packet with a previously unused connection ID. An endpoint MAY limit the total number of connection IDs issued for each connection to avoid the risk of running out of connection IDs; see Section 10.3.2. An endpoint MAY also limit the issuance of connection IDs to reduce the amount of per-path state it maintains, such as path validation status, as its peer might interact with it over as many paths as there are issued connection IDs.¶
An endpoint that initiates migration and requires non-zero-length connection IDs SHOULD ensure that the pool of connection IDs available to its peer allows the peer to use a new connection ID on migration, as the peer will be unable to respond if the pool is exhausted.¶
An endpoint that selects a zero-length connection ID during the handshake cannot issue a new connection ID. A zero-length Destination Connection ID field is used in all packets sent toward such an endpoint over any network path.¶
An endpoint can change the connection ID it uses for a peer to another available one at any time during the connection. An endpoint consumes connection IDs in response to a migrating peer; see Section 9.5 for more.¶
An endpoint maintains a set of connection IDs received from its peer, any of which it can use when sending packets. When the endpoint wishes to remove a connection ID from use, it sends a RETIRE_CONNECTION_ID frame to its peer. Sending a RETIRE_CONNECTION_ID frame indicates that the connection ID will not be used again and requests that the peer replace it with a new connection ID using a NEW_CONNECTION_ID frame.¶
As discussed in Section 9.5, endpoints limit the use of a connection ID to packets sent from a single local address to a single destination address. Endpoints SHOULD retire connection IDs when they are no longer actively using either the local or destination address for which the connection ID was used.¶
An endpoint might need to stop accepting previously issued connection IDs in certain circumstances. Such an endpoint can cause its peer to retire connection IDs by sending a NEW_CONNECTION_ID frame with an increased Retire Prior To field. The endpoint SHOULD continue to accept the previously issued connection IDs until they are retired by the peer. If the endpoint can no longer process the indicated connection IDs, it MAY close the connection.¶
Upon receipt of an increased Retire Prior To field, the peer MUST stop using the corresponding connection IDs and retire them with RETIRE_CONNECTION_ID frames before adding the newly provided connection ID to the set of active connection IDs. This ordering allows an endpoint to replace all active connection IDs without the possibility of a peer having no available connection IDs and without exceeding the limit the peer sets in the active_connection_id_limit transport parameter; see Section 18.2. Failure to cease using the connection IDs when requested can result in connection failures, as the issuing endpoint might be unable to continue using the connection IDs with the active connection.¶
An endpoint SHOULD limit the number of connection IDs it has retired locally for which RETIRE_CONNECTION_ID frames have not yet been acknowledged. An endpoint SHOULD allow for sending and tracking a number of RETIRE_CONNECTION_ID frames of at least twice the active_connection_id_limit. An endpoint MUST NOT forget a connection ID without retiring it, though it MAY choose to treat having connection IDs in need of retirement that exceed this limit as a connection error of type CONNECTION_ID_LIMIT_ERROR.¶
Endpoints SHOULD NOT issue updates of the Retire Prior To field before receiving RETIRE_CONNECTION_ID frames that retire all connection IDs indicated by the previous Retire Prior To value.¶
Incoming packets are classified on receipt. Packets can either be associated with an existing connection, or - for servers - potentially create a new connection.¶
Endpoints try to associate a packet with an existing connection. If the packet has a non-zero-length Destination Connection ID corresponding to an existing connection, QUIC processes that packet accordingly. Note that more than one connection ID can be associated with a connection; see Section 5.1.¶
If the Destination Connection ID is zero length and the addressing information in the packet matches the addressing information the endpoint uses to identify a connection with a zero-length connection ID, QUIC processes the packet as part of that connection. An endpoint can use just destination IP and port or both source and destination addresses for identification, though this makes connections fragile as described in Section 5.1.¶
Endpoints can send a Stateless Reset (Section 10.3) for any packets that cannot be attributed to an existing connection. A stateless reset allows a peer to more quickly identify when a connection becomes unusable.¶
Packets that are matched to an existing connection are discarded if the packets are inconsistent with the state of that connection. For example, packets are discarded if they indicate a different protocol version than that of the connection, or if the removal of packet protection is unsuccessful once the expected keys are available.¶
Invalid packets that lack strong integrity protection, such as Initial, Retry, or Version Negotiation packets, MAY be discarded. An endpoint MUST generate a connection error if processing the contents of these packets prior to discovering an error, unless it fully reverts any changes made during that processing.¶
Valid packets sent to clients always include a Destination Connection ID that matches a value the client selects. Clients that choose to receive zero-length connection IDs can use the local address and port to identify a connection. Packets that do not match an existing connection, based on Destination Connection ID or, if this value is zero-length, local IP address and port, are discarded.¶
Due to packet reordering or loss, a client might receive packets for a connection that are encrypted with a key it has not yet computed. The client MAY drop these packets, or MAY buffer them in anticipation of later packets that allow it to compute the key.¶
If a client receives a packet that uses a different version than it initially selected, it MUST discard that packet.¶
If a server receives a packet that indicates an unsupported version but is large enough to initiate a new connection for any supported version, the server SHOULD send a Version Negotiation packet as described in Section 6.1. A server MAY limit the number of packets to which it responds with a Version Negotiation packet. Servers MUST drop smaller packets that specify unsupported versions.¶
The first packet for an unsupported version can use different semantics and encodings for any version-specific field. In particular, different packet protection keys might be used for different versions. Servers that do not support a particular version are unlikely to be able to decrypt the payload of the packet or properly interpret the result. Servers SHOULD respond with a Version Negotiation packet, provided that the datagram is sufficiently long.¶
Packets with a supported version, or no version field, are matched to a connection using the connection ID or - for packets with zero-length connection IDs - the local address and port. These packets are processed using the selected connection; otherwise, the server continues below.¶
If the packet is an Initial packet fully conforming with the specification, the server proceeds with the handshake (Section 7). This commits the server to the version that the client selected.¶
If a server refuses to accept a new connection, it SHOULD send an Initial packet containing a CONNECTION_CLOSE frame with error code CONNECTION_REFUSED.¶
If the packet is a 0-RTT packet, the server MAY buffer a limited number of these packets in anticipation of a late-arriving Initial packet. Clients are not able to send Handshake packets prior to receiving a server response, so servers SHOULD ignore any such packets.¶
Servers MUST drop incoming packets under all other circumstances.¶
A server deployment could load balance among servers using only source and destination IP addresses and ports. Changes to the client's IP address or port could result in packets being forwarded to the wrong server. Such a server deployment could use one of the following methods for connection continuity when a client's address changes.¶
A server in a deployment that does not implement a solution to maintain connection continuity when the client address changes SHOULD indicate that migration is not supported using the disable_active_migration transport parameter. The disable_active_migration transport parameter does not prohibit connection migration after a client has acted on a preferred_address transport parameter.¶
Server deployments that use this simple form of load balancing MUST avoid the creation of a stateless reset oracle; see Section 21.11.¶
This document does not define an API for QUIC, but instead defines a set of functions for QUIC connections that application protocols can rely upon. An application protocol can assume that an implementation of QUIC provides an interface that includes the operations described in this section. An implementation designed for use with a specific application protocol might provide only those operations that are used by that protocol.¶
When implementing the client role, an application protocol can:¶
When implementing the server role, an application protocol can:¶
In either role, an application protocol can:¶
Version negotiation allows a server to indicate that it does not support the version the client used. A server sends a Version Negotiation packet in response to each packet that might initiate a new connection; see Section 5.2 for details.¶
The size of the first packet sent by a client will determine whether a server sends a Version Negotiation packet. Clients that support multiple QUIC versions SHOULD ensure that the first UDP datagram they send is sized to the largest of the minimum datagram sizes from all versions they support, using PADDING frames (Section 19.1) as necessary. This ensures that the server responds if there is a mutually supported version. A server might not send a Version Negotiation packet if the datagram it receives is smaller than the minimum size specified in a different version; see Section 14.1.¶
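The sizing rule above can be sketched as a one-line helper. The 1200-byte value is the minimum datagram size for this version (Section 14.1); minimum sizes for other versions are hypothetical inputs here, as are the function and parameter names:

```python
def first_datagram_size(payload_len, min_sizes=(1200,)):
    """Size of the client's first UDP datagram: pad (with PADDING frames)
    up to the largest minimum datagram size among supported versions."""
    return max(payload_len, max(min_sizes))
```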
If the version selected by the client is not acceptable to the server, the server responds with a Version Negotiation packet; see Section 17.2.1. This includes a list of versions that the server will accept. An endpoint MUST NOT send a Version Negotiation packet in response to receiving a Version Negotiation packet.¶
This system allows a server to process packets with unsupported versions without retaining state. Though either the Initial packet or the Version Negotiation packet that is sent in response could be lost, the client will send new packets until it successfully receives a response or it abandons the connection attempt. If the client abandons the attempt, it discards all state for the connection and does not send any more packets on the connection.¶
A server MAY limit the number of Version Negotiation packets it sends. For instance, a server that is able to recognize packets as 0-RTT might choose not to send Version Negotiation packets in response to 0-RTT packets with the expectation that it will eventually receive an Initial packet.¶
Version Negotiation packets are designed to allow future versions of QUIC to negotiate the version in use between endpoints. Future versions of QUIC might change how implementations that support multiple versions of QUIC react to Version Negotiation packets when attempting to establish a connection using this version.¶
A client that supports only this version of QUIC MUST abandon the current connection attempt if it receives a Version Negotiation packet, with the following two exceptions. A client MUST discard any Version Negotiation packet if it has received and successfully processed any other packet, including an earlier Version Negotiation packet. A client MUST discard a Version Negotiation packet that lists the QUIC version selected by the client.¶
How to perform version negotiation is left as future work defined by future versions of QUIC. In particular, that future work will ensure robustness against version downgrade attacks; see Section 21.12.¶
[[RFC editor: please remove this section before publication.]]¶
When a draft implementation receives a Version Negotiation packet, it MAY use it to attempt a new connection with one of the versions listed in the packet, instead of abandoning the current connection attempt; see Section 6.2.¶
The client MUST check that the Destination and Source Connection ID fields match the Source and Destination Connection ID fields in a packet that the client sent. If this check fails, the packet MUST be discarded.¶
Once the Version Negotiation packet is determined to be valid, the client then selects an acceptable protocol version from the list provided by the server. The client then attempts to create a new connection using that version. The new connection MUST use a new random Destination Connection ID different from the one it had previously sent.¶
Note that this mechanism does not protect against downgrade attacks and MUST NOT be used outside of draft implementations.¶
For a server to use a new version in the future, clients need to correctly handle unsupported versions. Some version numbers (0x?a?a?a?a as defined in Section 15) are reserved for inclusion in fields that contain version numbers.¶
Endpoints MAY add reserved versions to any field where unknown or unsupported versions are ignored to test that a peer correctly ignores the value. For instance, an endpoint could include a reserved version in a Version Negotiation packet; see Section 17.2.1. Endpoints MAY send packets with a reserved version to test that a peer correctly discards the packet.¶
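A version number matches the reserved 0x?a?a?a?a pattern exactly when the low nibble of each of its four bytes is 0xa, which can be tested with a mask:

```python
def is_reserved_version(version):
    """True if the 32-bit version number has the form 0x?a?a?a?a."""
    return version & 0x0F0F0F0F == 0x0A0A0A0A
```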
QUIC relies on a combined cryptographic and transport handshake to minimize connection establishment latency. QUIC uses the CRYPTO frame (Section 19.6) to transmit the cryptographic handshake. Version 0x00000001 of QUIC uses TLS as described in [QUIC-TLS]; a different QUIC version number could indicate that a different cryptographic handshake protocol is in use.¶
QUIC provides reliable, ordered delivery of the cryptographic handshake data. QUIC packet protection is used to encrypt as much of the handshake protocol as possible. The cryptographic handshake MUST provide the following properties:¶
authenticated key exchange, where¶
Endpoints can use packets sent during the handshake to test for Explicit Congestion Notification (ECN) support; see Section 13.4. An endpoint verifies support for ECN by observing whether the ACK frames acknowledging the first packets it sends carry ECN counts, as described in Section 13.4.2.¶
The CRYPTO frame can be sent in different packet number spaces (Section 12.3). The offsets used by CRYPTO frames to ensure ordered delivery of cryptographic handshake data start from zero in each packet number space.¶
Figure 4 shows a simplified handshake and the exchange of packets and frames that are used to advance the handshake. Exchange of application data during the handshake is enabled where possible, shown with a '*'. Once completed, endpoints are able to exchange application data.¶
Client                                               Server

Initial (CRYPTO)
0-RTT (*)              ---------->
                                           Initial (CRYPTO)
                                         Handshake (CRYPTO)
                       <----------                 1-RTT (*)
Handshake (CRYPTO)
1-RTT (*)              ---------->
                       <----------  1-RTT (HANDSHAKE_DONE,*)

1-RTT (*)              <=========>                 1-RTT (*)

                Figure 4: Simplified QUIC Handshake
Endpoints MUST explicitly negotiate an application protocol. This avoids situations where there is a disagreement about the protocol that is in use.¶
Details of how TLS is integrated with QUIC are provided in [QUIC-TLS], but some examples are provided here. An extension of this exchange to support client address validation is shown in Section 8.1.2.¶
Once any address validation exchanges are complete, the cryptographic handshake is used to agree on cryptographic keys. The cryptographic handshake is carried in Initial (Section 17.2.2) and Handshake (Section 17.2.4) packets.¶
Figure 5 provides an overview of the 1-RTT handshake. Each line shows a QUIC packet with the packet type and packet number shown first, followed by the frames that are typically contained in those packets. So, for instance, the first packet is of type Initial, with packet number 0, and contains a CRYPTO frame carrying the ClientHello.¶
Multiple QUIC packets -- even of different packet types -- can be coalesced into a single UDP datagram; see Section 12.2. As a result, this handshake could consist of as few as 4 UDP datagrams, or any number more (subject to limits inherent to the protocol, such as congestion control and anti-amplification). For instance, the server's first flight contains Initial packets, Handshake packets, and "0.5-RTT data" in 1-RTT packets.¶
Client                                                  Server

Initial[0]: CRYPTO[CH] ->

                                 Initial[0]: CRYPTO[SH] ACK[0]
                       Handshake[0]: CRYPTO[EE, CERT, CV, FIN]
                                 <- 1-RTT[0]: STREAM[1, "..."]

Initial[1]: ACK[0]
Handshake[0]: CRYPTO[FIN], ACK[0]
1-RTT[0]: STREAM[0, "..."], ACK[0] ->

                                          Handshake[1]: ACK[0]
         <- 1-RTT[1]: HANDSHAKE_DONE, STREAM[3, "..."], ACK[0]

                     Figure 5: Example 1-RTT Handshake
Figure 6 shows an example of a connection with a 0-RTT handshake and a single packet of 0-RTT data. Note that as described in Section 12.3, the server acknowledges 0-RTT data in 1-RTT packets, and the client sends 1-RTT packets in the same packet number space.¶
Client                                                  Server

Initial[0]: CRYPTO[CH]
0-RTT[0]: STREAM[0, "..."] ->

                                 Initial[0]: CRYPTO[SH] ACK[0]
                                  Handshake[0] CRYPTO[EE, FIN]
                          <- 1-RTT[0]: STREAM[1, "..."] ACK[0]

Initial[1]: ACK[0]
Handshake[0]: CRYPTO[FIN], ACK[0]
1-RTT[1]: STREAM[0, "..."] ACK[0] ->

                                          Handshake[1]: ACK[0]
         <- 1-RTT[1]: HANDSHAKE_DONE, STREAM[3, "..."], ACK[1]

                     Figure 6: Example 0-RTT Handshake
A connection ID is used to ensure consistent routing of packets, as described in Section 5.1. The long header contains two connection IDs: the Destination Connection ID is chosen by the recipient of the packet and is used to provide consistent routing; the Source Connection ID is used to set the Destination Connection ID used by the peer.¶
During the handshake, packets with the long header (Section 17.2) are usedto establish the connection IDs used by both endpoints. Each endpoint uses theSource Connection ID field to specify the connection ID that is used in theDestination Connection ID field of packets being sent to them. After processingthe first Initial packet, each endpoint sets the Destination Connection IDfield in subsequent packets it sends to the value of the Source Connection IDfield that it received.¶
When an Initial packet is sent by a client that has not previously received anInitial or Retry packet from the server, the client populates the DestinationConnection ID field with an unpredictable value. This Destination Connection IDMUST be at least 8 bytes in length. Until a packet is received from the server,the client MUST use the same Destination Connection ID value on all packets inthis connection.¶
The Destination Connection ID field from the first Initial packet sent by aclient is used to determine packet protection keys for Initial packets. Thesekeys change after receiving a Retry packet; see Section 5.2 of[QUIC-TLS].¶
The client populates the Source Connection ID field with a value of its choosingand sets the Source Connection ID Length field to indicate the length.¶
The first flight of 0-RTT packets uses the same Destination Connection ID and Source Connection ID values as the client's first Initial packet.¶
Upon first receiving an Initial or Retry packet from the server, the client uses the Source Connection ID supplied by the server as the Destination Connection ID for subsequent packets, including any 0-RTT packets. This means that a client might have to change the connection ID it sets in the Destination Connection ID field twice during connection establishment: once in response to a Retry, and once in response to an Initial packet from the server. Once a client has received a valid Initial packet from the server, it MUST discard any subsequent packet it receives with a different Source Connection ID.¶
A client MUST change the Destination Connection ID it uses for sending packets in response to only the first received Initial or Retry packet. A server MUST set the Destination Connection ID it uses for sending packets based on the first received Initial packet. Any further changes to the Destination Connection ID are only permitted if the values are taken from NEW_CONNECTION_ID frames; if subsequent Initial packets include a different Source Connection ID, they MUST be discarded. This avoids unpredictable outcomes that might otherwise result from stateless processing of multiple Initial packets with different Source Connection IDs.¶
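The client-side rules above can be sketched as the following hypothetical Python fragment. It is illustrative only, not part of this specification: all names are invented, and it simplifies the full rules (for instance, it does not enforce that at most one Retry packet is acted on).

```python
# Illustrative sketch (not normative): a client adopts the server's Source
# Connection ID from the first Initial or Retry packet as its Destination
# Connection ID, and discards later packets carrying a different value.

class ClientConnectionIDs:
    def __init__(self, initial_dcid: bytes, scid: bytes):
        assert len(initial_dcid) >= 8  # first Initial DCID MUST be >= 8 bytes
        self.dcid = initial_dcid       # unpredictable, client-chosen value
        self.scid = scid
        self.locked = False            # True once a server Initial fixes the DCID

    def on_server_packet(self, packet_type: str, server_scid: bytes) -> bool:
        """Return True if the packet may be processed, False if discarded."""
        if self.locked:
            # After a valid Initial, any packet with a different Source
            # Connection ID MUST be discarded.
            return server_scid == self.dcid
        # First Initial or Retry from the server: adopt its Source
        # Connection ID as our Destination Connection ID.
        self.dcid = server_scid
        if packet_type == "initial":
            self.locked = True  # a Retry can still be followed by an Initial
        return True
```

Note how this permits the Destination Connection ID to change twice, once for a Retry and once for the server's Initial, matching the handshake in Figure 8.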
The Destination Connection ID that an endpoint sends can change over the lifetime of a connection, especially in response to connection migration (Section 9); see Section 5.1.1 for details.¶
The choice each endpoint makes about connection IDs during the handshake is authenticated by including all values in transport parameters; see Section 7.4. This ensures that all connection IDs used for the handshake are also authenticated by the cryptographic handshake.¶
Each endpoint includes the value of the Source Connection ID field from the first Initial packet it sent in the initial_source_connection_id transport parameter; see Section 18.2. A server includes the Destination Connection ID field from the first Initial packet it received from the client in the original_destination_connection_id transport parameter; if the server sent a Retry packet, this refers to the first Initial packet received before sending the Retry packet. If it sends a Retry packet, a server also includes the Source Connection ID field from the Retry packet in the retry_source_connection_id transport parameter.¶
The values provided by a peer for these transport parameters MUST match the values that an endpoint used in the Destination and Source Connection ID fields of Initial packets that it sent. Including connection ID values in transport parameters and verifying them ensures that an attacker cannot influence the choice of connection ID for a successful connection by injecting packets carrying attacker-chosen connection IDs during the handshake.¶
An endpoint MUST treat absence of the initial_source_connection_id transport parameter from either endpoint or absence of the original_destination_connection_id transport parameter from the server as a connection error of type TRANSPORT_PARAMETER_ERROR.¶
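As a rough illustration of the client-side checks described in this section, consider the following hypothetical sketch. It is not normative; the function name, argument names, and dictionary representation of transport parameters are all invented for this example.

```python
# Illustrative sketch (not normative): a client verifies that the server's
# authenticated transport parameters match the connection IDs actually used
# in Initial (and any Retry) packets. Returns False where the draft requires
# a connection error (TRANSPORT_PARAMETER_ERROR or PROTOCOL_VIOLATION).

def validate_server_cid_params(params: dict, first_dcid: bytes,
                               retry_scid, final_server_scid: bytes) -> bool:
    if "initial_source_connection_id" not in params:
        return False  # absence is a TRANSPORT_PARAMETER_ERROR
    if "original_destination_connection_id" not in params:
        return False  # servers must always send this parameter
    if params["original_destination_connection_id"] != first_dcid:
        return False  # must match the client's first Initial DCID
    if params["initial_source_connection_id"] != final_server_scid:
        return False  # must match the server's Initial SCID
    # retry_source_connection_id must be present exactly when a Retry was
    # received, and must match the Retry packet's Source Connection ID.
    if (retry_scid is None) != ("retry_source_connection_id" not in params):
        return False
    if retry_scid is not None and params["retry_source_connection_id"] != retry_scid:
        return False
    return True
```

Using the Figure 8 handshake, the server would declare original_destination_connection_id=S1, retry_source_connection_id=S2, and initial_source_connection_id=S3.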
An endpoint MUST treat the following as a connection error of type TRANSPORT_PARAMETER_ERROR or PROTOCOL_VIOLATION: absence of the retry_source_connection_id transport parameter from the server after receiving a Retry packet, presence of the retry_source_connection_id transport parameter when no Retry packet was received, or a mismatch between values received from a peer in these transport parameters and the value sent in the corresponding Destination or Source Connection ID fields of Initial packets.¶
If a zero-length connection ID is selected, the corresponding transport parameter is included with a zero-length value.¶
Figure 7 shows the connection IDs (with DCID=Destination Connection ID, SCID=Source Connection ID) that are used in a complete handshake. The exchange of Initial packets is shown, plus the later exchange of 1-RTT packets that includes the connection ID established during the handshake.¶
Client                                                  Server

Initial: DCID=S1, SCID=C1 ->
                                  <- Initial: DCID=C1, SCID=S3
                             ...
1-RTT: DCID=S3 ->
                                           <- 1-RTT: DCID=C1

Figure 7: Use of Connection IDs in a Handshake
Figure 8 shows a similar handshake that includes a Retry packet.¶
Client                                                  Server

Initial: DCID=S1, SCID=C1 ->
                                    <- Retry: DCID=C1, SCID=S2
Initial: DCID=S2, SCID=C1 ->
                                  <- Initial: DCID=C1, SCID=S3
                             ...
1-RTT: DCID=S3 ->
                                           <- 1-RTT: DCID=C1

Figure 8: Use of Connection IDs in a Handshake with Retry
In both cases (Figure 7 and Figure 8), the client sets the value of the initial_source_connection_id transport parameter to C1.¶
When the handshake does not include a Retry (Figure 7), the server sets original_destination_connection_id to S1 and initial_source_connection_id to S3. In this case, the server does not include a retry_source_connection_id transport parameter.¶
When the handshake includes a Retry (Figure 8), the server sets original_destination_connection_id to S1, retry_source_connection_id to S2, and initial_source_connection_id to S3.¶
During connection establishment, both endpoints make authenticated declarations of their transport parameters. Endpoints are required to comply with the restrictions that each parameter defines; the description of each parameter includes rules for its handling.¶
Transport parameters are declarations that are made unilaterally by each endpoint. Each endpoint can choose values for transport parameters independent of the values chosen by its peer.¶
The encoding of the transport parameters is detailed in Section 18.¶
QUIC includes the encoded transport parameters in the cryptographic handshake. Once the handshake completes, the transport parameters declared by the peer are available. Each endpoint validates the values provided by its peer.¶
Definitions for each of the defined transport parameters are included in Section 18.2.¶
An endpoint MUST treat receipt of a transport parameter with an invalid value as a connection error of type TRANSPORT_PARAMETER_ERROR.¶
An endpoint MUST NOT send a parameter more than once in a given transport parameters extension. An endpoint SHOULD treat receipt of duplicate transport parameters as a connection error of type TRANSPORT_PARAMETER_ERROR.¶
Endpoints use transport parameters to authenticate the negotiation of connection IDs during the handshake; see Section 7.3.¶
Application Layer Protocol Negotiation (ALPN; see [ALPN]) allows clients to offer multiple application protocols during connection establishment. The transport parameters that a client includes during the handshake apply to all application protocols that the client offers. Application protocols can recommend values for transport parameters, such as the initial flow control limits. However, application protocols that set constraints on values for transport parameters could make it impossible for a client to offer multiple application protocols if these constraints conflict.¶
Using 0-RTT depends on both client and server using protocol parameters that were negotiated from a previous connection. To enable 0-RTT, endpoints store the value of the server transport parameters from a connection and apply them to any 0-RTT packets that are sent in subsequent connections to that peer. This information is stored with any information required by the application protocol or cryptographic handshake; see Section 4.6 of [QUIC-TLS].¶
Remembered transport parameters apply to the new connection until the handshake completes and the client starts sending 1-RTT packets. Once the handshake completes, the client uses the transport parameters established in the handshake. Not all transport parameters are remembered, as some do not apply to future connections or they have no effect on use of 0-RTT.¶
The definition of a new transport parameter (Section 7.4.2) MUST specify whether storing the transport parameter for 0-RTT is mandatory, optional, or prohibited. A client need not store a transport parameter it cannot process.¶
A client MUST NOT use remembered values for the following parameters: ack_delay_exponent, max_ack_delay, initial_source_connection_id, original_destination_connection_id, preferred_address, retry_source_connection_id, and stateless_reset_token. The client MUST use the server's new values in the handshake instead; if the server does not provide new values, the default value is used.¶
A client that attempts to send 0-RTT data MUST remember all other transport parameters used by the server that it is able to process. The server can remember these transport parameters, or store an integrity-protected copy of the values in the ticket and recover the information when accepting 0-RTT data. A server uses the transport parameters in determining whether to accept 0-RTT data.¶
If 0-RTT data is accepted by the server, the server MUST NOT reduce any limits or alter any values that might be violated by the client with its 0-RTT data. In particular, a server that accepts 0-RTT data MUST NOT set values for the following parameters (Section 18.2) that are smaller than the remembered value of the parameters: active_connection_id_limit, initial_max_data, initial_max_stream_data_bidi_local, initial_max_stream_data_bidi_remote, initial_max_stream_data_uni, initial_max_streams_bidi, and initial_max_streams_uni.¶
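A server's check of this rule could be sketched as follows. This is a hypothetical fragment, not normative text; the function and tuple names are invented, and the parameter names are those defined in Section 18.2.

```python
# Illustrative sketch (not normative): before accepting 0-RTT, a server
# confirms that none of the limit-related parameters it now offers are
# smaller than the values the client remembered from the prior connection.

ZERO_RTT_FLOOR_PARAMS = (
    "active_connection_id_limit",
    "initial_max_data",
    "initial_max_stream_data_bidi_local",
    "initial_max_stream_data_bidi_remote",
    "initial_max_stream_data_uni",
    "initial_max_streams_bidi",
    "initial_max_streams_uni",
)

def can_accept_0rtt(remembered: dict, current: dict) -> bool:
    """True only if no limit the client might rely on has been reduced."""
    for name in ZERO_RTT_FLOOR_PARAMS:
        if current.get(name, 0) < remembered.get(name, 0):
            return False
    return True
```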
Omitting or setting a zero value for certain transport parameters can result in 0-RTT data being enabled, but not usable. The applicable subset of transport parameters that permit sending of application data SHOULD be set to non-zero values for 0-RTT. This includes initial_max_data and either initial_max_streams_bidi and initial_max_stream_data_bidi_remote, or initial_max_streams_uni and initial_max_stream_data_uni.¶
A server MAY store and recover the previously sent values of the max_idle_timeout, max_udp_payload_size, and disable_active_migration parameters and reject 0-RTT if it selects smaller values. Lowering the values of these parameters while also accepting 0-RTT data could degrade the performance of the connection. Specifically, lowering the max_udp_payload_size could result in dropped packets leading to worse performance compared to rejecting 0-RTT data outright.¶
A server MUST reject 0-RTT data if the restored values for transport parameters cannot be supported.¶
When sending frames in 0-RTT packets, a client MUST only use remembered transport parameters; importantly, it MUST NOT use updated values that it learns from the server's updated transport parameters or from frames received in 1-RTT packets. Updated values of transport parameters from the handshake apply only to 1-RTT packets. For instance, flow control limits from remembered transport parameters apply to all 0-RTT packets even if those values are increased by the handshake or by frames sent in 1-RTT packets. A server MAY treat use of updated transport parameters in 0-RTT as a connection error of type PROTOCOL_VIOLATION.¶
New transport parameters can be used to negotiate new protocol behavior. An endpoint MUST ignore transport parameters that it does not support. Absence of a transport parameter therefore disables any optional protocol feature that is negotiated using the parameter. As described in Section 18.1, some identifiers are reserved in order to exercise this requirement.¶
A client that does not understand a transport parameter can discard it and attempt 0-RTT on subsequent connections. However, if the client adds support for a discarded transport parameter, it risks violating the constraints that the transport parameter establishes if it attempts 0-RTT. New transport parameters can avoid this problem by setting a default of the most conservative value.¶
New transport parameters can be registered according to the rules in Section 22.3.¶
Implementations need to maintain a buffer of CRYPTO data received out of order. Because there is no flow control of CRYPTO frames, an endpoint could potentially force its peer to buffer an unbounded amount of data.¶
Implementations MUST support buffering at least 4096 bytes of data received in out-of-order CRYPTO frames. Endpoints MAY choose to allow more data to be buffered during the handshake. A larger limit during the handshake could allow for larger keys or credentials to be exchanged. An endpoint's buffer size does not need to remain constant during the life of the connection.¶
Being unable to buffer CRYPTO frames during the handshake can lead to a connection failure. If an endpoint's buffer is exceeded during the handshake, it can expand its buffer temporarily to complete the handshake. If an endpoint does not expand its buffer, it MUST close the connection with a CRYPTO_BUFFER_EXCEEDED error code.¶
Once the handshake completes, if an endpoint is unable to buffer all data in a CRYPTO frame, it MAY discard that CRYPTO frame and all CRYPTO frames received in the future, or it MAY close the connection with a CRYPTO_BUFFER_EXCEEDED error code. Packets containing discarded CRYPTO frames MUST be acknowledged because the packet has been received and processed by the transport even though the CRYPTO frame was discarded.¶
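The buffering decisions above can be summarized in a small, hypothetical state sketch. This is illustrative only; it models the size decision and ignores reassembly by offset, and all names and the returned action strings are invented.

```python
# Illustrative sketch (not normative) of the CRYPTO buffering rules:
# at least 4096 bytes MUST be supported; over-limit data during the
# handshake means expand-or-close, and after the handshake means the
# endpoint may discard (while still acknowledging the packet) or close.

CRYPTO_BUFFER_MIN = 4096

class CryptoBuffer:
    def __init__(self, limit: int = CRYPTO_BUFFER_MIN):
        assert limit >= CRYPTO_BUFFER_MIN  # minimum the spec requires
        self.limit = limit
        self.buffered = 0
        self.handshake_done = False

    def on_crypto_frame(self, size: int) -> str:
        if self.buffered + size <= self.limit:
            self.buffered += size
            return "buffer"
        if not self.handshake_done:
            # During the handshake: expand the buffer temporarily, or
            # close with CRYPTO_BUFFER_EXCEEDED.
            return "expand-or-close:CRYPTO_BUFFER_EXCEEDED"
        # After the handshake: discarding is allowed, but the packet
        # carrying the frame is still acknowledged.
        return "discard-and-ack-or-close"
```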
Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own.¶
The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit.¶
Address validation is performed both during connection establishment (see Section 8.1) and during connection migration (see Section 8.2).¶
Connection establishment implicitly provides address validation for both endpoints. In particular, receipt of a packet protected with Handshake keys confirms that the peer successfully processed an Initial packet. Once an endpoint has successfully processed a Handshake packet from the peer, it can consider the peer address to have been validated.¶
Additionally, an endpoint MAY consider the peer address validated if the peer uses a connection ID chosen by the endpoint and the connection ID contains at least 64 bits of entropy.¶
For the client, the value of the Destination Connection ID field in its first Initial packet allows it to validate the server address as a part of successfully processing any packet. Initial packets from the server are protected with keys that are derived from this value (see Section 5.2 of [QUIC-TLS]). Alternatively, the value is echoed by the server in Version Negotiation packets (Section 6) or included in the Integrity Tag in Retry packets (Section 5.8 of [QUIC-TLS]).¶
Prior to validating the client address, servers MUST NOT send more than three times as many bytes as the number of bytes they have received. This limits the magnitude of any amplification attack that can be mounted using spoofed source addresses. For the purposes of avoiding amplification prior to address validation, servers MUST count all of the payload bytes received in datagrams that are uniquely attributed to a single connection. This includes datagrams that contain packets that are successfully processed and datagrams that contain packets that are all discarded.¶
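The three-times accounting rule lends itself to a simple per-connection counter, sketched below. This fragment is hypothetical and not normative; names are invented.

```python
# Illustrative sketch (not normative) of the anti-amplification limit:
# until the client address is validated, the server may send at most
# three times the payload bytes it has received on the connection.
# All received datagram payload bytes count, even if every packet in
# the datagram is discarded.

class AmplificationLimiter:
    FACTOR = 3

    def __init__(self):
        self.received = 0
        self.sent = 0
        self.validated = False  # address validation lifts the limit

    def on_datagram_received(self, nbytes: int) -> None:
        self.received += nbytes

    def sendable(self) -> float:
        """Bytes that may still be sent before the address is validated."""
        if self.validated:
            return float("inf")
        return max(0, self.FACTOR * self.received - self.sent)

    def on_datagram_sent(self, nbytes: int) -> None:
        assert self.validated or nbytes <= self.sendable()
        self.sent += nbytes
```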
Clients MUST ensure that UDP datagrams containing Initial packets have UDP payloads of at least 1200 bytes, adding PADDING frames as necessary. A client that sends padded datagrams allows the server to send more data prior to completing address validation.¶
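As a minimal sketch of this padding rule: a PADDING frame is a single 0x00 byte, so reaching the minimum size amounts to appending zero bytes. This is a simplification (a real implementation places PADDING frames inside a packet before packet protection is applied); the names are invented.

```python
# Illustrative sketch (not normative): pad the payload carrying an Initial
# packet up to the 1200-byte minimum UDP payload size using PADDING frames,
# each of which is a single 0x00 byte.

INITIAL_MIN_DATAGRAM = 1200

def pad_initial_datagram(payload: bytes) -> bytes:
    if len(payload) >= INITIAL_MIN_DATAGRAM:
        return payload
    return payload + b"\x00" * (INITIAL_MIN_DATAGRAM - len(payload))
```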
Loss of an Initial or Handshake packet from the server can cause a deadlock if the client does not send additional Initial or Handshake packets. A deadlock could occur when the server reaches its anti-amplification limit and the client has received acknowledgements for all the data it has sent. In this case, when the client has no reason to send additional packets, the server will be unable to send more data because it has not validated the client's address. To prevent this deadlock, clients MUST send a packet on a probe timeout (PTO; see Section 6.2 of [QUIC-RECOVERY]). Specifically, the client MUST send an Initial packet in a UDP datagram that contains at least 1200 bytes if it does not have Handshake keys, and otherwise send a Handshake packet.¶
A server might wish to validate the client address before starting the cryptographic handshake. QUIC uses a token in the Initial packet to provide address validation prior to completing the handshake. This token is delivered to the client during connection establishment with a Retry packet (see Section 8.1.2) or in a previous connection using the NEW_TOKEN frame (see Section 8.1.3).¶
In addition to sending limits imposed prior to address validation, servers are also constrained in what they can send by the limits set by the congestion controller. Clients are only constrained by the congestion controller.¶
A token sent in a NEW_TOKEN frame or a Retry packet MUST be constructed in a way that allows the server to identify how it was provided to a client. These tokens are carried in the same field, but require different handling from servers.¶
Upon receiving the client's Initial packet, the server can request address validation by sending a Retry packet (Section 17.2.5) containing a token. This token MUST be repeated by the client in all Initial packets it sends for that connection after it receives the Retry packet.¶
In response to processing an Initial containing a token that was provided in a Retry packet, a server cannot send another Retry packet; it can only refuse the connection or permit it to proceed.¶
As long as it is not possible for an attacker to generate a valid token for its own address (see Section 8.1.4) and the client is able to return that token, it proves to the server that it received the token.¶
A server can also use a Retry packet to defer the state and processing costs of connection establishment. Requiring the server to provide a different connection ID, along with the original_destination_connection_id transport parameter defined in Section 18.2, forces the server to demonstrate that it, or an entity it cooperates with, received the original Initial packet from the client. Providing a different connection ID also grants a server some control over how subsequent packets are routed. This can be used to direct connections to a different server instance.¶
If a server receives a client Initial that can be unprotected but contains an invalid Retry token, it knows the client will not accept another Retry token. The server can discard such a packet and allow the client to time out to detect handshake failure, but that could impose a significant latency penalty on the client. Instead, the server SHOULD immediately close (Section 10.2) the connection with an INVALID_TOKEN error. Note that a server has not established any state for the connection at this point and so does not enter the closing period.¶
A flow showing the use of a Retry packet is shown in Figure 9.¶
Client                                                  Server

Initial[0]: CRYPTO[CH] ->
                                                <- Retry+Token
Initial+Token[1]: CRYPTO[CH] ->
                                 Initial[0]: CRYPTO[SH] ACK[1]
                       Handshake[0]: CRYPTO[EE, CERT, CV, FIN]
                                 <- 1-RTT[0]: STREAM[1, "..."]

Figure 9: Example Handshake with Retry
A server MAY provide clients with an address validation token during one connection that can be used on a subsequent connection. Address validation is especially important with 0-RTT because a server potentially sends a significant amount of data to a client in response to 0-RTT data.¶
The server uses the NEW_TOKEN frame (Section 19.7) to provide the client with an address validation token that can be used to validate future connections. In a future connection, the client includes this token in Initial packets to provide address validation. The client MUST include the token in all Initial packets it sends, unless a Retry replaces the token with a newer one. The client MUST NOT use the token provided in a Retry for future connections. Servers MAY discard any Initial packet that does not carry the expected token.¶
Unlike the token that is created for a Retry packet, which is used immediately, the token sent in the NEW_TOKEN frame can be used after some period of time has passed. Thus, a token SHOULD have an expiration time, which could be either an explicit expiration time or an issued timestamp that can be used to dynamically calculate the expiration time. A server can store the expiration time or include it in an encrypted form in the token.¶
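One way to realize the issued-timestamp option is sketched below. This is hypothetical: the draft does not specify a token format or lifetime, the seven-day lifetime is an arbitrary example, and a real token would additionally be encrypted and integrity protected (Section 8.1.4).

```python
# Illustrative sketch (not normative): embed an issue timestamp in a
# NEW_TOKEN token and derive the expiration time from it on receipt.
# Encryption/integrity protection of the token is elided here.

NEW_TOKEN_LIFETIME = 7 * 24 * 3600  # seconds; arbitrary example value

def make_token(now: float) -> dict:
    # A real token would also bind the client IP address (Section 8.1.4).
    return {"issued_at": now}

def token_expired(token: dict, now: float,
                  lifetime: float = NEW_TOKEN_LIFETIME) -> bool:
    return now - token["issued_at"] > lifetime
```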
A token issued with NEW_TOKEN MUST NOT include information that would allow values to be linked by an observer to the connection on which it was issued. For example, it cannot include the previous connection ID or addressing information, unless the values are encrypted. A server MUST ensure that every NEW_TOKEN frame it sends is unique across all clients, with the exception of those sent to repair losses of previously sent NEW_TOKEN frames. Information that allows the server to distinguish between tokens from Retry and NEW_TOKEN MAY be accessible to entities other than the server.¶
It is unlikely that the client port number is the same on two different connections; validating the port is therefore unlikely to be successful.¶
A token received in a NEW_TOKEN frame is applicable to any server that the connection is considered authoritative for (e.g., server names included in the certificate). When connecting to a server for which the client retains an applicable and unused token, it SHOULD include that token in the Token field of its Initial packet. Including a token might allow the server to validate the client address without an additional round trip. A client MUST NOT include a token that is not applicable to the server that it is connecting to, unless the client has the knowledge that the server that issued the token and the server the client is connecting to are jointly managing the tokens. A client MAY use a token from any previous connection to that server.¶
A token allows a server to correlate activity between the connection where the token was issued and any connection where it is used. Clients that want to break continuity of identity with a server MAY discard tokens provided using the NEW_TOKEN frame. In comparison, a token obtained in a Retry packet MUST be used immediately during the connection attempt and cannot be used in subsequent connection attempts.¶
A client SHOULD NOT reuse a NEW_TOKEN token for different connection attempts. Reusing a token allows connections to be linked by entities on the network path; see Section 9.5.¶
Clients might receive multiple tokens on a single connection. Aside from preventing linkability, any token can be used in any connection attempt. Servers can send additional tokens to either enable address validation for multiple connection attempts or to replace older tokens that might become invalid. For a client, this ambiguity means that sending the most recent unused token is most likely to be effective. Though saving and using older tokens has no negative consequences, clients can regard older tokens as being less likely to be useful to the server for address validation.¶
When a server receives an Initial packet with an address validation token, it MUST attempt to validate the token, unless it has already completed address validation. If the token is invalid, then the server SHOULD proceed as if the client did not have a validated address, including potentially sending a Retry packet. A server SHOULD encode tokens provided with NEW_TOKEN frames and Retry packets differently, and validate the latter more strictly. If the validation succeeds, the server SHOULD then allow the handshake to proceed.¶
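The server's decision flow, combined with the stricter handling of Retry tokens from Section 8.1.2, could be sketched as follows. This is a hypothetical outline, not normative; the function, the callbacks, and the returned action strings are invented for illustration.

```python
# Illustrative sketch (not normative) of the server's handling of a token
# in a client Initial packet:
#  - no token, or an invalid NEW_TOKEN token: treat the address as
#    unvalidated (a Retry may follow);
#  - an invalid Retry token: close immediately with INVALID_TOKEN, since
#    the client would not accept another Retry;
#  - a valid token: treat the address as validated and proceed.

def handle_initial_token(token, validate, is_retry_token) -> str:
    if token is None:
        return "unvalidated"              # server may respond with a Retry
    if not validate(token):
        if is_retry_token(token):
            return "close:INVALID_TOKEN"  # client won't accept another Retry
        return "unvalidated"              # proceed as if no token was sent
    return "validated"                    # allow the handshake to proceed
```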
The rationale for treating the client as unvalidated rather than discarding the packet is that the client might have received the token in a previous connection using the NEW_TOKEN frame, and if the server has lost state, it might be unable to validate the token at all, leading to connection failure if the packet is discarded.¶
In a stateless design, a server can use encrypted and authenticated tokens to pass information to clients that the server can later recover and use to validate a client address. Tokens are not integrated into the cryptographic handshake and so they are not authenticated. For instance, a client might be able to reuse a token. To avoid attacks that exploit this property, a server can limit its use of tokens to only the information needed to validate client addresses.¶
Clients MAY use tokens obtained on one connection for any connection attempt using the same version. When selecting a token to use, clients do not need to consider other properties of the connection that is being attempted, including the choice of possible application protocols, session tickets, or other connection properties.¶
An address validation token MUST be difficult to guess. Including a large enough random value in the token would be sufficient, but this depends on the server remembering the value it sends to clients.¶
A token-based scheme allows the server to offload any state associated with validation to the client. For this design to work, the token MUST be covered by integrity protection against modification or falsification by clients. Without integrity protection, malicious clients could generate or guess values for tokens that would be accepted by the server. Only the server requires access to the integrity protection key for tokens.¶
There is no need for a single well-defined format for the token because the server that generates the token also consumes it. Tokens sent in Retry packets SHOULD include information that allows the server to verify that the source IP address and port in client packets remain constant.¶
Tokens sent in NEW_TOKEN frames MUST include information that allows the server to verify that the client IP address has not changed from when the token was issued. Servers can use tokens from NEW_TOKEN in deciding not to send a Retry packet, even if the client address has changed. If the client IP address has changed, the server MUST adhere to the anti-amplification limit; see Section 8. Note that in the presence of NAT, this requirement might be insufficient to protect other hosts that share the NAT from amplification attacks.¶
Attackers could replay tokens to use servers as amplifiers in DDoS attacks. To protect against such attacks, servers MUST ensure that replay of tokens is prevented or limited. Servers SHOULD ensure that tokens sent in Retry packets are only accepted for a short time. Tokens that are provided in NEW_TOKEN frames (Section 19.7) need to be valid for longer, but SHOULD NOT be accepted multiple times in a short period. Servers are encouraged to allow tokens to be used only once, if possible; tokens MAY include additional information about clients to further narrow applicability or reuse.¶
Path validation is used by both peers during connection migration (see Section 9) to verify reachability after a change of address. In path validation, endpoints test reachability between a specific local address and a specific peer address, where an address is the two-tuple of IP address and port.¶
Path validation tests that packets sent on a path to a peer are received by that peer. Path validation is used to ensure that packets received from a migrating peer do not carry a spoofed source address.¶
Path validation does not validate that a peer can send in the return direction. Acknowledgments cannot be used for return path validation because they contain insufficient entropy and might be spoofed. Endpoints independently determine reachability on each direction of a path, and therefore return reachability can only be established by the peer.¶
Path validation can be used at any time by either endpoint. For instance, an endpoint might check that a peer is still in possession of its address after a period of quiescence.¶
Path validation is not designed as a NAT traversal mechanism. Though the mechanism described here might be effective for the creation of NAT bindings that support NAT traversal, the expectation is that one or other peer is able to receive packets without first having sent a packet on that path. Effective NAT traversal needs additional synchronization mechanisms that are not provided here.¶
An endpoint MAY include other frames with the PATH_CHALLENGE and PATH_RESPONSE frames used for path validation. In particular, an endpoint can include PADDING frames with a PATH_CHALLENGE frame for Path Maximum Transmission Unit Discovery (PMTUD; see Section 14.2.1); it can also include its own PATH_CHALLENGE frame with a PATH_RESPONSE frame.¶
An endpoint uses a new connection ID for probes sent from a new local address; see Section 9.5. When probing a new path, an endpoint can ensure that its peer has an unused connection ID available for responses. Sending NEW_CONNECTION_ID and PATH_CHALLENGE frames in the same packet, if the peer's active_connection_id_limit permits, ensures that an unused connection ID will be available to the peer when sending a response.¶
An endpoint can choose to simultaneously probe multiple paths. The number of simultaneous paths used for probes is limited by the number of extra connection IDs its peer has previously supplied, since each new local address used for a probe requires a previously unused connection ID.¶
To initiate path validation, an endpoint sends a PATH_CHALLENGE frame containing an unpredictable payload on the path to be validated.¶
An endpoint MAY send multiple PATH_CHALLENGE frames to guard against packet loss. However, an endpoint SHOULD NOT send multiple PATH_CHALLENGE frames in a single packet.¶
An endpoint SHOULD NOT probe a new path with packets containing a PATH_CHALLENGE frame more frequently than it would send an Initial packet. This ensures that connection migration places no more load on a new path than establishing a new connection does.¶
The endpoint MUST use unpredictable data in every PATH_CHALLENGE frame so that it can associate the peer's response with the corresponding PATH_CHALLENGE.¶
An endpoint MUST expand datagrams that contain a PATH_CHALLENGE frame to at least the smallest allowed maximum datagram size of 1200 bytes, unless the anti-amplification limit for the path does not permit sending a datagram of this size. Sending UDP datagrams of this size ensures that the network path from the endpoint to the peer can be used for QUIC; see Section 14.¶
When an endpoint is unable to expand the datagram size to 1200 bytes due to the anti-amplification limit, the path MTU will not be validated. To ensure that the path MTU is large enough, the endpoint MUST perform a second path validation by sending a PATH_CHALLENGE frame in a datagram of at least 1200 bytes. This additional validation can be performed after a PATH_RESPONSE is successfully received or when enough bytes have been received on the path that sending the larger datagram will not result in exceeding the anti-amplification limit.¶
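Constructing a probe under these rules could be sketched as follows. This is hypothetical and not normative; the names are invented, and the 8-byte payload reflects the size of the Data field in a PATH_CHALLENGE frame (Section 19.17).

```python
# Illustrative sketch (not normative): build a path validation probe with
# an unpredictable 8-byte PATH_CHALLENGE payload. The datagram is expanded
# to 1200 bytes when the anti-amplification budget permits; otherwise a
# second, expanded validation is needed later to confirm the path MTU.

import os

MIN_DATAGRAM = 1200

def build_path_challenge(amp_budget: int):
    data = os.urandom(8)  # unpredictable data, kept to match the response
    expanded = amp_budget >= MIN_DATAGRAM
    size = MIN_DATAGRAM if expanded else amp_budget
    needs_mtu_revalidation = not expanded
    return data, size, needs_mtu_revalidation
```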
Unlike other cases where datagrams are expanded, endpoints MUST NOT discard datagrams that appear to be too small when they contain PATH_CHALLENGE or PATH_RESPONSE.¶
On receiving a PATH_CHALLENGE frame, an endpoint MUST respond by echoing the data contained in the PATH_CHALLENGE frame in a PATH_RESPONSE frame. An endpoint MUST NOT delay transmission of a packet containing a PATH_RESPONSE frame unless constrained by congestion control.¶
A PATH_RESPONSE frame MUST be sent on the network path where the PATH_CHALLENGE was received. This ensures that path validation by a peer only succeeds if the path is functional in both directions. This requirement MUST NOT be enforced by the endpoint that initiates path validation as that would enable an attack on migration; see Section 9.3.3.¶
An endpoint MUST expand datagrams that contain a PATH_RESPONSE frame to at least the smallest allowed maximum datagram size of 1200 bytes. This verifies that the path is able to carry datagrams of this size in both directions. However, an endpoint MUST NOT expand the datagram containing the PATH_RESPONSE if the resulting data exceeds the anti-amplification limit. This is expected to only occur if the received PATH_CHALLENGE was not sent in an expanded datagram.¶
An endpoint MUST NOT send more than one PATH_RESPONSE frame in response to one PATH_CHALLENGE frame; see Section 13.3. The peer is expected to send more PATH_CHALLENGE frames as necessary to evoke additional PATH_RESPONSE frames.¶
Path validation succeeds when a PATH_RESPONSE frame is received that containsthe data that was sent in a previous PATH_CHALLENGE frame. A PATH_RESPONSEframe received on any network path validates the path on which thePATH_CHALLENGE was sent.¶
If the PATH_CHALLENGE frame that resulted in successful path validation was sentin a datagram that was not expanded to at least 1200 bytes, the endpoint canregard the address as valid. The endpoint is then able to send more than threetimes the amount of data that has been received. However, the endpoint MUSTinitiate another path validation with an expanded datagram to verify that thepath supports required MTU.¶
Receipt of an acknowledgment for a packet containing a PATH_CHALLENGE frame is not adequate validation, since the acknowledgment can be spoofed by a malicious peer.¶
Path validation only fails when the endpoint attempting to validate the path abandons its attempt to validate the path.¶
Endpoints SHOULD abandon path validation based on a timer. When setting this timer, implementations are cautioned that the new path could have a longer round-trip time than the original. A value of three times the larger of the current Probe Timeout (PTO) or the PTO for the new path (that is, using kInitialRtt as defined in [QUIC-RECOVERY]) is RECOMMENDED.¶
This timeout allows for multiple PTOs to expire prior to failing path validation, so that loss of a single PATH_CHALLENGE or PATH_RESPONSE frame does not cause path validation failure.¶
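The RECOMMENDED timer value can be computed as in the following non-normative sketch. The PTO formula and kInitialRtt follow [QUIC-RECOVERY]; the granularity constant and the inclusion of max_ack_delay here are assumptions of this example:

```python
K_INITIAL_RTT = 0.333  # seconds, per QUIC-RECOVERY

def pto(smoothed_rtt, rttvar, max_ack_delay, granularity=0.001):
    # PTO = smoothed_rtt + max(4 * rttvar, kGranularity) + max_ack_delay
    return smoothed_rtt + max(4 * rttvar, granularity) + max_ack_delay

def path_validation_timeout(current_pto, max_ack_delay):
    # The new path has no RTT sample yet, so its PTO is derived from
    # kInitialRtt (with rttvar initialized to kInitialRtt / 2).
    new_path_pto = pto(K_INITIAL_RTT, K_INITIAL_RTT / 2, max_ack_delay)
    # RECOMMENDED: three times the larger of the two PTO values.
    return 3 * max(current_pto, new_path_pto)
```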
Note that the endpoint might receive packets containing other frames on the new path, but a PATH_RESPONSE frame with appropriate data is required for path validation to succeed.¶
When an endpoint abandons path validation, it determines that the path is unusable. This does not necessarily imply a failure of the connection; endpoints can continue sending packets over other paths as appropriate. If no paths are available, an endpoint can wait for a new path to become available or close the connection. An endpoint that has no valid network path to its peer MAY signal this using the NO_VIABLE_PATH connection error, noting that this is only possible if the network path exists but does not support the required MTU (Section 14).¶
A path validation might be abandoned for other reasons besides failure. Primarily, this happens if a connection migration to a new path is initiated while a path validation on the old path is in progress.¶
The use of a connection ID allows connections to survive changes to endpoint addresses (IP address and port), such as those caused by an endpoint migrating to a new network. This section describes the process by which an endpoint migrates to a new address.¶
The design of QUIC relies on endpoints retaining a stable address for the duration of the handshake. An endpoint MUST NOT initiate connection migration before the handshake is confirmed, as defined in Section 4.1.2 of [QUIC-TLS].¶
If the peer sent the disable_active_migration transport parameter, an endpoint also MUST NOT send packets (including probing packets; see Section 9.1) from a different local address to the address the peer used during the handshake, unless the endpoint has acted on a preferred_address transport parameter from the peer. If the peer violates this requirement, the endpoint MUST either drop the incoming packets on that path without generating a stateless reset or proceed with path validation and allow the peer to migrate. Generating a stateless reset or closing the connection would allow third parties in the network to cause connections to close by spoofing or otherwise manipulating observed traffic.¶
Not all changes of peer address are intentional, or active, migrations. The peer could experience NAT rebinding: a change of address due to a middlebox, usually a NAT, allocating a new outgoing port or even a new outgoing IP address for a flow. An endpoint MUST perform path validation (Section 8.2) if it detects any change to a peer's address, unless it has previously validated that address.¶
When an endpoint has no validated path on which to send packets, it MAY discard connection state. An endpoint capable of connection migration MAY wait for a new path to become available before discarding connection state.¶
This document limits migration of connections to new client addresses, except as described in Section 9.6. Clients are responsible for initiating all migrations. Servers do not send non-probing packets (see Section 9.1) toward a client address until they see a non-probing packet from that address. If a client receives packets from an unknown server address, the client MUST discard these packets.¶
An endpoint MAY probe for peer reachability from a new local address using path validation (Section 8.2) prior to migrating the connection to the new local address. Failure of path validation simply means that the new path is not usable for this connection. Failure to validate a path does not cause the connection to end unless there are no valid alternative paths available.¶
PATH_CHALLENGE, PATH_RESPONSE, NEW_CONNECTION_ID, and PADDING frames are "probing frames", and all other frames are "non-probing frames". A packet containing only probing frames is a "probing packet", and a packet containing any other frame is a "non-probing packet".¶
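This classification is simple enough to state directly in code. The following non-normative sketch represents a packet as a list of frame-type names, which is an assumption of this example:

```python
# Frames that do not, by themselves, indicate migration.
PROBING_FRAMES = {"PATH_CHALLENGE", "PATH_RESPONSE",
                  "NEW_CONNECTION_ID", "PADDING"}

def is_probing_packet(frames) -> bool:
    # A packet is a probing packet only if every frame it contains is
    # a probing frame; any other frame makes it non-probing.
    return all(f in PROBING_FRAMES for f in frames)
```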
An endpoint can migrate a connection to a new local address by sending packets containing non-probing frames from that address.¶
Each endpoint validates its peer's address during connection establishment. Therefore, a migrating endpoint can send to its peer knowing that the peer is willing to receive at the peer's current address. Thus an endpoint can migrate to a new local address without first validating the peer's address.¶
To establish reachability on the new path, an endpoint initiates path validation (Section 8.2) on the new path. An endpoint MAY defer path validation until after a peer sends the next non-probing frame to its new address.¶
When migrating, the new path might not support the endpoint's current sending rate. Therefore, the endpoint resets its congestion controller and RTT estimate, as described in Section 9.4.¶
The new path might not have the same ECN capability. Therefore, the endpoint validates ECN capability as described in Section 13.4.¶
Receiving a packet from a new peer address containing a non-probing frame indicates that the peer has migrated to that address.¶
If the recipient permits the migration, it MUST send subsequent packets to the new peer address and MUST initiate path validation (Section 8.2) to verify the peer's ownership of the address if validation is not already underway.¶
An endpoint only changes the address to which it sends packets in response to the highest-numbered non-probing packet. This ensures that an endpoint does not send packets to an old peer address in the case that it receives reordered packets.¶
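The highest-packet-number rule can be sketched as follows; this non-normative example uses invented class and field names, and elides the probing/non-probing check performed before this logic runs:

```python
class PeerAddressTracker:
    """Non-normative sketch: only a non-probing packet with a packet
    number higher than any seen so far moves the peer's address, so a
    reordered packet cannot move traffic back to an old address."""

    def __init__(self, address):
        self.address = address
        self.highest_non_probing = -1

    def on_non_probing_packet(self, packet_number, source_address):
        if packet_number > self.highest_non_probing:
            self.highest_non_probing = packet_number
            self.address = source_address  # apparent migration
```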
An endpoint MAY send data to an unvalidated peer address, but it MUST protect against potential attacks as described in Section 9.3.1 and Section 9.3.2. An endpoint MAY skip validation of a peer address if that address has been seen recently. In particular, if an endpoint returns to a previously validated path after detecting some form of spurious migration, skipping address validation and restoring loss detection and congestion state can reduce the performance impact of the attack.¶
After changing the address to which it sends non-probing packets, an endpoint can abandon any path validation for other addresses.¶
Receiving a packet from a new peer address could be the result of a NAT rebinding at the peer.¶
After verifying a new client address, the server SHOULD send new address validation tokens (Section 8) to the client.¶
It is possible that a peer is spoofing its source address to cause an endpoint to send excessive amounts of data to an unwilling host. If the endpoint sends significantly more data than the spoofing peer, connection migration might be used to amplify the volume of data that an attacker can generate toward a victim.¶
As described in Section 9.3, an endpoint is required to validate a peer's new address to confirm the peer's possession of the new address. Until a peer's address is deemed valid, an endpoint limits the amount of data it sends to that address; see Section 8. In the absence of this limit, an endpoint risks being used for a denial-of-service attack against an unsuspecting victim.¶
If an endpoint skips validation of a peer address as described above, it does not need to limit its sending rate.¶
An on-path attacker could cause a spurious connection migration by copying and forwarding a packet with a spoofed address such that it arrives before the original packet. The packet with the spoofed address will be seen to come from a migrating connection, and the original packet will be seen as a duplicate and dropped. After a spurious migration, validation of the source address will fail because the entity at the source address does not have the necessary cryptographic keys to read or respond to the PATH_CHALLENGE frame that is sent to it, even if it wanted to.¶
To protect the connection from failing due to such a spurious migration, an endpoint MUST revert to using the last validated peer address when validation of a new peer address fails. Additionally, receipt of packets with higher packet numbers from the legitimate peer address will trigger another connection migration. This will cause the validation of the address of the spurious migration to be abandoned, thus containing migrations initiated by the attacker injecting a single packet.¶
If an endpoint has no state about the last validated peer address, it MUST close the connection silently by discarding all connection state. This results in new packets on the connection being handled generically. For instance, an endpoint MAY send a stateless reset in response to any further incoming packets.¶
An off-path attacker that can observe packets might forward copies of genuine packets to endpoints. If the copied packet arrives before the genuine packet, this will appear as a NAT rebinding. Any genuine packet will be discarded as a duplicate. If the attacker is able to continue forwarding packets, it might be able to cause migration to a path via the attacker. This places the attacker on path, giving it the ability to observe or drop all subsequent packets.¶
This style of attack relies on the attacker using a path that has approximately the same characteristics as the direct path between endpoints. The attack is more reliable if relatively few packets are sent or if packet loss coincides with the attempted attack.¶
A non-probing packet received on the original path that increases the maximum received packet number will cause the endpoint to move back to that path. Eliciting packets on this path increases the likelihood that the attack is unsuccessful. Therefore, mitigation of this attack relies on triggering the exchange of packets.¶
In response to an apparent migration, endpoints MUST validate the previously active path using a PATH_CHALLENGE frame. This induces the sending of new packets on that path. If the path is no longer viable, the validation attempt will time out and fail; if the path is viable but no longer desired, the validation will succeed but only result in probing packets being sent on the path.¶
An endpoint that receives a PATH_CHALLENGE on an active path SHOULD send a non-probing packet in response. If the non-probing packet arrives before any copy made by an attacker, this results in the connection being migrated back to the original path. Any subsequent migration to another path restarts this entire process.¶
This defense is imperfect, but this is not considered a serious problem. If the path via the attacker is reliably faster than the original path despite multiple attempts to use that original path, it is not possible to distinguish between an attack and an improvement in routing.¶
An endpoint could also use heuristics to improve detection of this style of attack. For instance, NAT rebinding is improbable if packets were recently received on the old path; similarly, rebinding is rare on IPv6 paths. Endpoints can also look for duplicated packets. Conversely, a change in connection ID is more likely to indicate an intentional migration rather than an attack.¶
The capacity available on the new path might not be the same as the old path. Packets sent on the old path MUST NOT contribute to congestion control or RTT estimation for the new path.¶
On confirming a peer's ownership of its new address, an endpoint MUST immediately reset the congestion controller and round-trip time estimator for the new path to initial values (see Appendices A.3 and B.3 in [QUIC-RECOVERY]) unless the only change in the peer's address is its port number. Because port-only changes are commonly the result of NAT rebinding or other middlebox activity, the endpoint MAY retain its congestion control state and round-trip estimate in those cases instead of reverting to initial values. In cases where congestion control state retained from an old path is used on a new path with substantially different characteristics, a sender could transmit too aggressively until the congestion controller and the RTT estimator have adapted. Generally, implementations are advised to be cautious when using previous values on a new path.¶
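The port-only exception can be sketched as follows; the address tuples, field names, and the example initial window value are assumptions of this non-normative illustration:

```python
from dataclasses import dataclass

@dataclass
class CongestionState:
    cwnd: int            # congestion window, bytes
    smoothed_rtt: float  # seconds

INITIAL_CWND = 12000   # example initial window for this sketch
K_INITIAL_RTT = 0.333  # seconds, per QUIC-RECOVERY

def state_for_new_path(old_addr, new_addr, state: CongestionState):
    """Addresses are (ip, port) tuples. If only the port changed
    (likely NAT rebinding or other middlebox activity), the state
    MAY be kept; otherwise, reset to initial values."""
    ip_changed = old_addr[0] != new_addr[0]
    if not ip_changed:
        return state
    return CongestionState(cwnd=INITIAL_CWND, smoothed_rtt=K_INITIAL_RTT)
```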
There could be apparent reordering at the receiver when an endpoint sends data and probes from/to multiple addresses during the migration period, since the two resulting paths could have different round-trip times. A receiver of packets on multiple paths will still send ACK frames covering all received packets.¶
While multiple paths might be used during connection migration, a single congestion control context and a single loss recovery context (as described in [QUIC-RECOVERY]) could be adequate. For instance, an endpoint might delay switching to a new congestion control context until it is confirmed that an old path is no longer needed (such as the case in Section 9.3.3).¶
A sender can make exceptions for probe packets so that their loss detection is independent and does not unduly cause the congestion controller to reduce its sending rate. An endpoint might set a separate timer when a PATH_CHALLENGE is sent, which is cancelled if the corresponding PATH_RESPONSE is received. If the timer fires before the PATH_RESPONSE is received, the endpoint might send a new PATH_CHALLENGE and restart the timer for a longer period of time. This timer SHOULD be set as described in Section 6.2.1 of [QUIC-RECOVERY] and MUST NOT be more aggressive.¶
Using a stable connection ID on multiple network paths would allow a passive observer to correlate activity between those paths. An endpoint that moves between networks might not wish to have their activity correlated by any entity other than their peer, so different connection IDs are used when sending from different local addresses, as discussed in Section 5.1. For this to be effective, endpoints need to ensure that connection IDs they provide cannot be linked by any other entity.¶
At any time, endpoints MAY change the Destination Connection ID they transmit with to a value that has not been used on another path.¶
An endpoint MUST NOT reuse a connection ID when sending from more than one local address, for example, when initiating connection migration as described in Section 9.2 or when probing a new network path as described in Section 9.1.¶
Similarly, an endpoint MUST NOT reuse a connection ID when sending to more than one destination address. Due to network changes outside the control of its peer, an endpoint might receive packets from a new source address with the same destination connection ID, in which case it MAY continue to use the current connection ID with the new remote address while still sending from the same local address.¶
These requirements regarding connection ID reuse apply only to the sending of packets, as unintentional changes in path without a change in connection ID are possible. For example, after a period of network inactivity, NAT rebinding might cause packets to be sent on a new path when the client resumes sending. An endpoint responds to such an event as described in Section 9.3.¶
Using different connection IDs for packets sent in both directions on each new network path eliminates the use of the connection ID for linking packets from the same connection across different network paths. Header protection ensures that packet numbers cannot be used to correlate activity. This does not prevent other properties of packets, such as timing and size, from being used to correlate activity.¶
An endpoint SHOULD NOT initiate migration with a peer that has requested a zero-length connection ID, because traffic over the new path might be trivially linkable to traffic over the old one. If the server is able to associate packets with a zero-length connection ID to the right connection, it means that the server is using other information to demultiplex packets. For example, a server might provide a unique address to every client, for instance, using HTTP alternative services [ALTSVC]. Information that might allow correct routing of packets across multiple network paths will also allow activity on those paths to be linked by entities other than the peer.¶
A client might wish to reduce linkability by employing a new connection ID and source UDP port when sending traffic after a period of inactivity. Changing the UDP port from which it sends packets at the same time might cause the packet to appear as a connection migration. This ensures that the mechanisms that support migration are exercised even for clients that do not experience NAT rebindings or genuine migrations. Changing the port number can cause a peer to reset its congestion state (see Section 9.4), so the port SHOULD only be changed infrequently.¶
An endpoint that exhausts available connection IDs cannot probe new paths or initiate migration, nor can it respond to probes or attempts by its peer to migrate. To ensure that migration is possible and packets sent on different paths cannot be correlated, endpoints SHOULD provide new connection IDs before peers migrate; see Section 5.1.1. If a peer might have exhausted available connection IDs, a migrating endpoint could include a NEW_CONNECTION_ID frame in all packets sent on a new network path.¶
QUIC allows servers to accept connections on one IP address and attempt to transfer these connections to a more preferred address shortly after the handshake. This is particularly useful when clients initially connect to an address shared by multiple servers but would prefer to use a unicast address to ensure connection stability. This section describes the protocol for migrating a connection to a preferred server address.¶
Migrating a connection to a new server address mid-connection is not supported by the version of QUIC specified in this document. If a client receives packets from a new server address when the client has not initiated a migration to that address, the client SHOULD discard these packets.¶
A server conveys a preferred address by including the preferred_address transport parameter in the TLS handshake.¶
Servers MAY communicate a preferred address of each address family (IPv4 and IPv6) to allow clients to pick the one most suited to their network attachment.¶
Once the handshake is confirmed, the client SHOULD select one of the two addresses provided by the server and initiate path validation (see Section 8.2). A client constructs packets using any previously unused active connection ID, taken from either the preferred_address transport parameter or a NEW_CONNECTION_ID frame.¶
As soon as path validation succeeds, the client SHOULD begin sending all future packets to the new server address using the new connection ID and discontinue use of the old server address. If path validation fails, the client MUST continue sending all future packets to the server's original IP address.¶
A client that migrates to a preferred address MUST validate the address it chooses before migrating; see Section 21.5.3.¶
A server might receive a packet addressed to its preferred IP address at any time after it accepts a connection. If this packet contains a PATH_CHALLENGE frame, the server sends a packet containing a PATH_RESPONSE frame as per Section 8.2. The server MUST send non-probing packets from its original address until it receives a non-probing packet from the client at its preferred address and until the server has validated the new path.¶
The server MUST probe on the path toward the client from its preferred address. This helps to guard against spurious migration initiated by an attacker.¶
Once the server has completed its path validation and has received a non-probing packet with a new largest packet number on its preferred address, the server begins sending non-probing packets to the client exclusively from its preferred IP address. It SHOULD drop packets for this connection received on the old IP address but MAY continue to process delayed packets.¶
The addresses that a server provides in the preferred_address transport parameter are only valid for the connection in which they are provided. A client MUST NOT use these for other connections, including connections that are resumed from the current connection.¶
A client might need to perform a connection migration before it has migrated to the server's preferred address. In this case, the client SHOULD perform path validation to both the original and preferred server address from the client's new address concurrently.¶
If path validation of the server's preferred address succeeds, the client MUST abandon validation of the original address and migrate to using the server's preferred address. If path validation of the server's preferred address fails but validation of the server's original address succeeds, the client MAY migrate to its new address and continue sending to the server's original address.¶
If packets received at the server's preferred address have a different source address than observed from the client during the handshake, the server MUST protect against potential attacks as described in Section 9.3.1 and Section 9.3.2. In addition to intentional simultaneous migration, this might also occur because the client's access network used a different NAT binding for the server's preferred address.¶
Servers SHOULD initiate path validation to the client's new address upon receiving a probe packet from a different address; see Section 8.¶
A client that migrates to a new address SHOULD use a preferred address from the same address family for the server.¶
The connection ID provided in the preferred_address transport parameter is not specific to the addresses that are provided. This connection ID is provided to ensure that the client has a connection ID available for migration, but the client MAY use this connection ID on any path.¶
Endpoints that send data using IPv6 SHOULD apply an IPv6 flow label in compliance with [RFC6437], unless the local API does not allow setting IPv6 flow labels.¶
The IPv6 flow label SHOULD be a pseudo-random function of the source and destination addresses, source and destination UDP ports, and the Destination Connection ID field. The flow label generation MUST be designed to minimize the chances of linkability with a previously used flow label, as this would enable correlating activity on multiple paths; see Section 9.5.¶
A possible implementation is to compute the flow label as a cryptographic hash function of the source and destination addresses, source and destination UDP ports, Destination Connection ID field, and a local secret.¶
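One way to realize this suggestion is a keyed hash truncated to the 20-bit flow label field, as in the following non-normative sketch; the choice of HMAC-SHA-256 and the parameter encoding are assumptions of this example:

```python
import hmac
import hashlib

def ipv6_flow_label(secret: bytes, src_ip: bytes, dst_ip: bytes,
                    src_port: int, dst_port: int, dcid: bytes) -> int:
    # Hash the path tuple and Destination Connection ID under a
    # local secret so flow labels on different paths are unlinkable.
    msg = (src_ip + dst_ip
           + src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big")
           + dcid)
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    # The IPv6 flow label field is 20 bits wide.
    return int.from_bytes(digest[:3], "big") & 0xFFFFF
```

Because the secret is local and the hash is keyed, an observer cannot recompute or correlate labels across paths.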
An established QUIC connection can be terminated in one of three ways: idle timeout (Section 10.1), immediate close (Section 10.2), or stateless reset (Section 10.3).¶
An endpoint MAY discard connection state if it does not have a validated path on which it can send packets; see Section 8.2.¶
If a max_idle_timeout is specified by either peer in its transport parameters (Section 18.2), the connection is silently closed and its state is discarded when it remains idle for longer than the minimum of both peers' max_idle_timeout values.¶
Each endpoint advertises a max_idle_timeout, but the effective value at an endpoint is computed as the minimum of the two advertised values. By announcing a max_idle_timeout, an endpoint commits to initiating an immediate close (Section 10.2) if it abandons the connection prior to the effective value.¶
An endpoint restarts its idle timer when a packet from its peer is received and processed successfully. An endpoint also restarts its idle timer when sending an ack-eliciting packet if no other ack-eliciting packets have been sent since last receiving and processing a packet. Restarting this timer when sending a packet ensures that connections are not closed after new activity is initiated.¶
To avoid excessively small idle timeout periods, endpoints MUST increase the idle timeout period to be at least three times the current Probe Timeout (PTO). This allows for multiple PTOs to expire, and therefore multiple probes to be sent and lost, prior to idle timeout.¶
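The effective idle timeout can be computed as in this non-normative sketch, which treats a value of 0 as an omitted (disabled) max_idle_timeout and applies the three-PTO floor:

```python
def effective_idle_timeout(local_max, peer_max, current_pto):
    """Non-normative sketch: minimum of the advertised values
    (0 meaning no timeout advertised), raised to at least three
    times the current PTO. All values in seconds."""
    advertised = [t for t in (local_max, peer_max) if t > 0]
    if not advertised:
        return None  # idle timeout disabled by both endpoints
    return max(min(advertised), 3 * current_pto)
```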
An endpoint that sends packets close to the effective timeout risks having them be discarded at the peer, since the idle timeout period might have expired at the peer before these packets arrive.¶
An endpoint can send a PING or another ack-eliciting frame to test the connection for liveness if the peer could time out soon, such as within a PTO; see Section 6.2 of [QUIC-RECOVERY]. This is especially useful if any available application data cannot be safely retried. Note that the application determines what data is safe to retry.¶
An endpoint might need to send ack-eliciting packets to avoid an idle timeout if it is expecting response data but does not have or is unable to send application data.¶
An implementation of QUIC might provide applications with an option to defer an idle timeout. This facility could be used when the application wishes to avoid losing state that has been associated with an open connection but does not expect to exchange application data for some time. With this option, an endpoint could send a PING frame (Section 19.2) periodically, which will cause the peer to restart its idle timeout period. Sending a packet containing a PING frame restarts the idle timeout for this endpoint also if this is the first ack-eliciting packet sent since receiving a packet. Sending a PING frame causes the peer to respond with an acknowledgment, which also restarts the idle timeout for the endpoint.¶
Application protocols that use QUIC SHOULD provide guidance on when deferring an idle timeout is appropriate. Unnecessary sending of PING frames could have a detrimental effect on performance.¶
A connection will time out if no packets are sent or received for a period longer than the time negotiated using the max_idle_timeout transport parameter; see Section 10. However, state in middleboxes might time out earlier than that. Though REQ-5 in [RFC4787] recommends a 2-minute timeout interval, experience shows that sending packets every 30 seconds is necessary to prevent the majority of middleboxes from losing state for UDP flows [GATEWAY].¶
An endpoint sends a CONNECTION_CLOSE frame (Section 19.19) to terminate the connection immediately. A CONNECTION_CLOSE frame causes all streams to immediately become closed; open streams can be assumed to be implicitly reset.¶
After sending a CONNECTION_CLOSE frame, an endpoint immediately enters the closing state; see Section 10.2.1. After receiving a CONNECTION_CLOSE frame, endpoints enter the draining state; see Section 10.2.2.¶
Violations of the protocol lead to an immediate close.¶
An immediate close can be used after an application protocol has arranged to close a connection. This might be after the application protocol negotiates a graceful shutdown. The application protocol can exchange messages that are needed for both application endpoints to agree that the connection can be closed, after which the application requests that QUIC close the connection. When QUIC consequently closes the connection, a CONNECTION_CLOSE frame with an application-supplied error code will be used to signal closure to the peer.¶
The closing and draining connection states exist to ensure that connections close cleanly and that delayed or reordered packets are properly discarded. These states SHOULD persist for at least three times the current Probe Timeout (PTO) interval as defined in [QUIC-RECOVERY].¶
Disposing of connection state prior to exiting the closing or draining state could result in an endpoint generating a stateless reset unnecessarily when it receives a late-arriving packet. Endpoints that have some alternative means to ensure that late-arriving packets do not induce a response, such as those that are able to close the UDP socket, MAY end these states earlier to allow for faster resource recovery. Servers that retain an open socket for accepting new connections SHOULD NOT end the closing or draining states early.¶
Once its closing or draining state ends, an endpoint SHOULD discard all connection state. The endpoint MAY send a stateless reset in response to any further incoming packets belonging to this connection.¶
An endpoint enters the closing state after initiating an immediate close.¶
In the closing state, an endpoint retains only enough information to generate a packet containing a CONNECTION_CLOSE frame and to identify packets as belonging to the connection. An endpoint in the closing state sends a packet containing a CONNECTION_CLOSE frame in response to any incoming packet that it attributes to the connection.¶
An endpoint SHOULD limit the rate at which it generates packets in the closing state. For instance, an endpoint could wait for a progressively increasing number of received packets or amount of time before responding to received packets.¶
An endpoint's selected connection ID and the QUIC version are sufficient information to identify packets for a closing connection; the endpoint MAY discard all other connection state. An endpoint that is closing is not required to process any received frame. An endpoint MAY retain packet protection keys for incoming packets to allow it to read and process a CONNECTION_CLOSE frame.¶
An endpoint MAY drop packet protection keys when entering the closing state and send a packet containing a CONNECTION_CLOSE frame in response to any UDP datagram that is received. However, an endpoint that discards packet protection keys cannot identify and discard invalid packets. To avoid being used for an amplification attack, such endpoints MUST limit the cumulative size of packets they send to three times the cumulative size of the packets that are received and attributed to the connection. To minimize the state that an endpoint maintains for a closing connection, endpoints MAY send the exact same packet in response to any received packet.¶
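The closing-state limit can be sketched as follows; this non-normative example uses an invented class and resends the same pre-built closing packet, as the text permits:

```python
class ClosingState:
    """Non-normative sketch: a closing endpoint that has discarded
    packet protection keys responds to a datagram only if doing so
    keeps cumulative bytes sent within three times cumulative bytes
    received for the connection."""

    def __init__(self, close_packet: bytes):
        self.close_packet = close_packet  # the same packet may be resent
        self.bytes_received = 0
        self.bytes_sent = 0

    def on_datagram(self, datagram: bytes):
        self.bytes_received += len(datagram)
        if self.bytes_sent + len(self.close_packet) <= 3 * self.bytes_received:
            self.bytes_sent += len(self.close_packet)
            return self.close_packet
        return None  # responding would exceed the amplification limit
```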
Allowing retransmission of a closing packet is an exception to the requirement that a new packet number be used for each packet in Section 12.3. Sending new packet numbers is primarily of advantage to loss recovery and congestion control, which are not expected to be relevant for a closed connection. Retransmitting the final packet requires less state.¶
While in the closing state, an endpoint could receive packets from a new source address, possibly indicating a connection migration; see Section 9. An endpoint in the closing state MUST either discard packets received from an unvalidated address or limit the cumulative size of packets it sends to an unvalidated address to three times the size of packets it receives from that address.¶
An endpoint is not expected to handle key updates when it is closing (Section 6 of [QUIC-TLS]). A key update might prevent the endpoint from moving from the closing state to the draining state, as the endpoint will not be able to process subsequently received packets, but it otherwise has no impact.¶
The draining state is entered once an endpoint receives a CONNECTION_CLOSE frame, which indicates that its peer is closing or draining. While otherwise identical to the closing state, an endpoint in the draining state MUST NOT send any packets. Retaining packet protection keys is unnecessary once a connection is in the draining state.¶
An endpoint that receives a CONNECTION_CLOSE frame MAY send a single packet containing a CONNECTION_CLOSE frame before entering the draining state, using a NO_ERROR code if appropriate. An endpoint MUST NOT send further packets. Doing so could result in a constant exchange of CONNECTION_CLOSE frames until one of the endpoints exits the closing state.¶
An endpoint MAY enter the draining state from the closing state if it receives a CONNECTION_CLOSE frame, which indicates that the peer is also closing or draining. In this case, the draining state SHOULD end when the closing state would have ended. In other words, the endpoint uses the same end time, but ceases transmission of any packets on this connection.¶
When sending CONNECTION_CLOSE, the goal is to ensure that the peer will process the frame. Generally, this means sending the frame in a packet with the highest level of packet protection to avoid the packet being discarded. After the handshake is confirmed (see Section 4.1.2 of [QUIC-TLS]), an endpoint MUST send any CONNECTION_CLOSE frames in a 1-RTT packet. However, prior to confirming the handshake, it is possible that more advanced packet protection keys are not available to the peer, so another CONNECTION_CLOSE frame MAY be sent in a packet that uses a lower packet protection level. More specifically:¶
Sending a CONNECTION_CLOSE of type 0x1d in an Initial or Handshake packet could expose application state or be used to alter application state. A CONNECTION_CLOSE of type 0x1d MUST be replaced by a CONNECTION_CLOSE of type 0x1c when sending the frame in Initial or Handshake packets. Otherwise, information about the application state might be revealed. Endpoints MUST clear the value of the Reason Phrase field and SHOULD use the APPLICATION_ERROR code when converting to a CONNECTION_CLOSE of type 0x1c.¶
CONNECTION_CLOSE frames sent in multiple packet types can be coalesced into a single UDP datagram; see Section 12.2.¶
An endpoint can send a CONNECTION_CLOSE frame in an Initial packet. This might be in response to unauthenticated information received in Initial or Handshake packets. Such an immediate close might expose legitimate connections to a denial of service. QUIC does not include defensive measures for on-path attacks during the handshake; see Section 21.2. However, at the cost of reducing feedback about errors for legitimate peers, some forms of denial of service can be made more difficult for an attacker if endpoints discard illegal packets rather than terminating a connection with CONNECTION_CLOSE. For this reason, endpoints MAY discard packets rather than immediately close if errors are detected in packets that lack authentication.¶
An endpoint that has not established state, such as a server that detects an error in an Initial packet, does not enter the closing state. An endpoint that has no state for the connection does not enter a closing or draining period on sending a CONNECTION_CLOSE frame.¶
A stateless reset is provided as an option of last resort for an endpoint that does not have access to the state of a connection. A crash or outage might result in peers continuing to send data to an endpoint that is unable to properly continue the connection. An endpoint MAY send a stateless reset in response to receiving a packet that it cannot associate with an active connection.¶
A stateless reset is not appropriate for indicating errors in active connections. An endpoint that wishes to communicate a fatal connection error MUST use a CONNECTION_CLOSE frame if it is able.¶
To support this process, an endpoint issues a stateless reset token, which is a 16-byte value that is hard to guess. If the peer subsequently receives a stateless reset, which is a UDP datagram that ends in that stateless reset token, the peer will immediately end the connection.¶
A stateless reset token is specific to a connection ID. An endpoint issues a stateless reset token by including the value in the Stateless Reset Token field of a NEW_CONNECTION_ID frame. Servers can also issue a stateless_reset_token transport parameter during the handshake that applies to the connection ID that it selected during the handshake. These exchanges are protected by encryption, so only client and server know their value. Note that clients cannot use the stateless_reset_token transport parameter because their transport parameters do not have confidentiality protection.¶
Tokens are invalidated when their associated connection ID is retired via a RETIRE_CONNECTION_ID frame (Section 19.16).¶
An endpoint that receives packets that it cannot process sends a packet in the following layout (see Section 1.3):¶
Stateless Reset {
  Fixed Bits (2) = 1,
  Unpredictable Bits (38..),
  Stateless Reset Token (128),
}
This design ensures that a stateless reset packet is, to the extent possible, indistinguishable from a regular packet with a short header.¶
A stateless reset uses an entire UDP datagram, starting with the first two bits of the packet header. The remainder of the first byte and an arbitrary number of bytes following it are set to values that SHOULD be indistinguishable from random. The last 16 bytes of the datagram contain a Stateless Reset Token.¶
To entities other than its intended recipient, a stateless reset will appear to be a packet with a short header. For the stateless reset to appear as a valid QUIC packet, the Unpredictable Bits field needs to include at least 38 bits of data (or 5 bytes, less the two fixed bits).¶
The resulting minimum size of 21 bytes does not guarantee that a stateless reset is difficult to distinguish from other packets if the recipient requires the use of a connection ID. To achieve that end, the endpoint SHOULD ensure that all packets it sends are at least 22 bytes longer than the minimum connection ID length that it requests the peer to include in its packets, adding PADDING frames as necessary. This ensures that any stateless reset sent by the peer is indistinguishable from a valid packet sent to the endpoint. An endpoint that sends a stateless reset in response to a packet that is 43 bytes or shorter SHOULD send a stateless reset that is one byte shorter than the packet it responds to.¶
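Putting the layout and sizing rules together, construction can be sketched as below. This is a hedged illustration: the 43-byte cap for responses to larger packets is an assumption (chosen to stay at or above the size threshold discussed in Section 10.3.3), not a rule from this document, and the function name is hypothetical.

```python
import os
from typing import Optional

MIN_RESET_LEN = 21  # 5 bytes of header/unpredictable bits + 16-byte token

def build_stateless_reset(token: bytes, trigger_len: int) -> Optional[bytes]:
    """Build a stateless reset datagram: random bytes with the short-header
    fixed-bit pattern (0b01) in the first byte, ending in the 16-byte token."""
    assert len(token) == 16
    if trigger_len <= MIN_RESET_LEN:
        return None  # cannot respond with something both valid and smaller
    # One byte shorter than small triggering packets; cap larger responses
    # at an assumed 43 bytes so the reset stays smaller than its trigger.
    length = min(trigger_len - 1, 43)
    body = bytearray(os.urandom(length - 16))
    body[0] = (body[0] & 0x3F) | 0x40  # set the top two bits to 0b01
    return bytes(body) + token
```

Returning `None` for tiny triggers reflects the anti-looping rule in Section 10.3.3: a reset must be smaller than the packet that triggered it.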
These values assume that the Stateless Reset Token is the same length as the minimum expansion of the packet protection AEAD. Additional unpredictable bytes are necessary if the endpoint could have negotiated a packet protection scheme with a larger minimum expansion.¶
An endpoint MUST NOT send a stateless reset that is three times or more larger than the packet it receives to avoid being used for amplification. Section 10.3.3 describes additional limits on stateless reset size.¶
Endpoints MUST discard packets that are too small to be valid QUIC packets. To give an example, with the set of AEAD functions defined in [QUIC-TLS], short header packets that are smaller than 21 bytes are never valid.¶
Endpoints MUST send stateless reset packets formatted as a packet with a short header. However, endpoints MUST treat any packet ending in a valid stateless reset token as a stateless reset, as other QUIC versions might allow the use of a long header.¶
An endpoint MAY send a stateless reset in response to a packet with a long header. Sending a stateless reset is not effective prior to the stateless reset token being available to a peer. In this QUIC version, packets with a long header are only used during connection establishment. Because the stateless reset token is not available until connection establishment is complete or near completion, ignoring an unknown packet with a long header might be as effective as sending a stateless reset.¶
An endpoint cannot determine the Source Connection ID from a packet with a short header; therefore, it cannot set the Destination Connection ID in the stateless reset packet. The Destination Connection ID will therefore differ from the value used in previous packets. A random Destination Connection ID makes the connection ID appear to be the result of moving to a new connection ID that was provided using a NEW_CONNECTION_ID frame (Section 19.15).¶
Using a randomized connection ID results in two problems:¶
This stateless reset design is specific to QUIC version 1. An endpoint that supports multiple versions of QUIC needs to generate a stateless reset that will be accepted by peers that support any version that the endpoint might support (or might have supported prior to losing state). Designers of new versions of QUIC need to be aware of this and either reuse this design, or use a portion of the packet other than the last 16 bytes for carrying data.¶
An endpoint detects a potential stateless reset using the trailing 16 bytes of the UDP datagram. An endpoint remembers all Stateless Reset Tokens associated with the connection IDs and remote addresses for datagrams it has recently sent. This includes Stateless Reset Tokens from NEW_CONNECTION_ID frames and the server's transport parameters but excludes Stateless Reset Tokens associated with connection IDs that are either unused or retired. The endpoint identifies a received datagram as a stateless reset by comparing the last 16 bytes of the datagram with all Stateless Reset Tokens associated with the remote address on which the datagram was received.¶
This comparison can be performed for every inbound datagram. Endpoints MAY skip this check if any packet from a datagram is successfully processed. However, the comparison MUST be performed when the first packet in an incoming datagram either cannot be associated with a connection or cannot be decrypted.¶
An endpoint MUST NOT check for any Stateless Reset Tokens associated with connection IDs it has not used or for connection IDs that have been retired.¶
When comparing a datagram to Stateless Reset Token values, endpoints MUST perform the comparison without leaking information about the value of the token. For example, performing this comparison in constant time protects the value of individual Stateless Reset Tokens from information leakage through timing side channels. Another approach would be to store and compare the transformed values of Stateless Reset Tokens instead of the raw token values, where the transformation is defined as a cryptographically secure pseudorandom function using a secret key (e.g., block cipher, HMAC [RFC2104]). An endpoint is not expected to protect information about whether a packet was successfully decrypted, or the number of valid Stateless Reset Tokens.¶
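The constant-time comparison described above can be sketched with Python's `hmac.compare_digest`; the loop deliberately avoids an early exit so the amount of work does not reveal which token, if any, matched. The function name is illustrative:

```python
import hmac

def is_stateless_reset(datagram: bytes, candidate_tokens: list) -> bool:
    """Check whether the datagram's trailing 16 bytes equal any candidate
    Stateless Reset Token, without leaking token values via timing."""
    if len(datagram) < 21:  # too short to be a valid stateless reset
        return False
    trailer = datagram[-16:]
    match = False
    for token in candidate_tokens:
        # compare_digest is constant-time in the token contents; OR-ing
        # results instead of returning early keeps the loop length fixed.
        match |= hmac.compare_digest(trailer, token)
    return match
```

Per the text, this check only needs to run against tokens associated with the remote address of the datagram, and may be skipped once a packet from the datagram is successfully processed.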
If the last 16 bytes of the datagram are identical in value to a Stateless Reset Token, the endpoint MUST enter the draining period and not send any further packets on this connection.¶
The stateless reset token MUST be difficult to guess. In order to create a Stateless Reset Token, an endpoint could randomly generate ([RFC4086]) a secret for every connection that it creates. However, this presents a coordination problem when there are multiple instances in a cluster or a storage problem for an endpoint that might lose state. Stateless reset specifically exists to handle the case where state is lost, so this approach is suboptimal.¶
A single static key can be used across all connections to the same endpoint by generating the proof using a second iteration of a preimage-resistant function that takes a static key and the connection ID chosen by the endpoint (see Section 5.1) as input. An endpoint could use HMAC [RFC2104] (for example, HMAC(static_key, connection_id)) or HKDF [RFC5869] (for example, using the static key as input keying material, with the connection ID as salt). The output of this function is truncated to 16 bytes to produce the Stateless Reset Token for that connection.¶
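As a concrete sketch of the HMAC option, assuming HMAC-SHA-256 as the preimage-resistant function (this document does not mandate a particular hash):

```python
import hashlib
import hmac

def stateless_reset_token(static_key: bytes, connection_id: bytes) -> bytes:
    """Derive a 16-byte Stateless Reset Token as the truncated output of
    HMAC(static_key, connection_id), per the construction in the text."""
    mac = hmac.new(static_key, connection_id, hashlib.sha256)
    return mac.digest()[:16]
```

Because the derivation is deterministic, any instance holding the static key, including one that has lost all connection state, can regenerate the token from the connection ID in a received packet.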
An endpoint that loses state can use the same method to generate a valid Stateless Reset Token. The connection ID comes from the packet that the endpoint receives.¶
This design relies on the peer always sending a connection ID in its packets so that the endpoint can use the connection ID from a packet to reset the connection. An endpoint that uses this design MUST either use the same connection ID length for all connections or encode the length of the connection ID such that it can be recovered without state. In addition, it cannot provide a zero-length connection ID.¶
Revealing the Stateless Reset Token allows any entity to terminate the connection, so a value can only be used once. This method for choosing the Stateless Reset Token means that the combination of connection ID and static key MUST NOT be used for another connection. A denial of service attack is possible if the same connection ID is used by instances that share a static key, or if an attacker can cause a packet to be routed to an instance that has no state but the same static key; see Section 21.11. A connection ID from a connection that is reset by revealing the Stateless Reset Token MUST NOT be reused for new connections at nodes that share a static key.¶
The same Stateless Reset Token MUST NOT be used for multiple connection IDs. Endpoints are not required to compare new values against all previous values, but a duplicate value MAY be treated as a connection error of type PROTOCOL_VIOLATION.¶
Note that Stateless Reset packets do not have any cryptographic protection.¶
The design of a Stateless Reset is such that without knowing the stateless reset token it is indistinguishable from a valid packet. For instance, if a server sends a Stateless Reset to another server it might receive another Stateless Reset in response, which could lead to an infinite exchange.¶
An endpoint MUST ensure that every Stateless Reset that it sends is smaller than the packet that triggered it, unless it maintains state sufficient to prevent looping. In the event of a loop, this results in packets eventually being too small to trigger a response.¶
An endpoint can remember the number of Stateless Reset packets that it has sent and stop generating new Stateless Reset packets once a limit is reached. Using separate limits for different remote addresses will ensure that Stateless Reset packets can be used to close connections when other peers or connections have exhausted limits.¶
Reducing the size of a Stateless Reset below 41 bytes means that the packet could reveal to an observer that it is a Stateless Reset, depending upon the length of the peer's connection IDs. Conversely, refusing to send a Stateless Reset in response to a small packet might result in Stateless Reset not being useful in detecting cases of broken connections where only very small packets are sent; such failures might only be detected by other means, such as timers.¶
An endpoint that detects an error SHOULD signal the existence of that error to its peer. Both transport-level and application-level errors can affect an entire connection; see Section 11.1. Only application-level errors can be isolated to a single stream; see Section 11.2.¶
The most appropriate error code (Section 20) SHOULD be included in the frame that signals the error. Where this specification identifies error conditions, it also identifies the error code that is used; though these are worded as requirements, different implementation strategies might lead to different errors being reported. In particular, an endpoint MAY use any applicable error code when it detects an error condition; a generic error code (such as PROTOCOL_VIOLATION or INTERNAL_ERROR) can always be used in place of specific error codes.¶
A stateless reset (Section 10.3) is not suitable for any error that can be signaled with a CONNECTION_CLOSE or RESET_STREAM frame. A stateless reset MUST NOT be used by an endpoint that has the state necessary to send a frame on the connection.¶
Errors that result in the connection being unusable, such as an obvious violation of protocol semantics or corruption of state that affects an entire connection, MUST be signaled using a CONNECTION_CLOSE frame (Section 19.19).¶
Application-specific protocol errors are signaled using the CONNECTION_CLOSE frame with a frame type of 0x1d. Errors that are specific to the transport, including all those described in this document, are carried in the CONNECTION_CLOSE frame with a frame type of 0x1c.¶
A CONNECTION_CLOSE frame could be sent in a packet that is lost. An endpoint SHOULD be prepared to retransmit a packet containing a CONNECTION_CLOSE frame if it receives more packets on a terminated connection. Limiting the number of retransmissions and the time over which this final packet is sent limits the effort expended on terminated connections.¶
An endpoint that chooses not to retransmit packets containing a CONNECTION_CLOSE frame risks a peer missing the first such packet. The only mechanism available to an endpoint that continues to receive data for a terminated connection is to use the stateless reset process (Section 10.3).¶
As the AEAD on Initial packets does not provide strong authentication, an endpoint MAY discard an invalid Initial packet. Discarding an Initial packet is permitted even where this specification otherwise mandates a connection error. An endpoint can only discard a packet if it does not process the frames in the packet or reverts the effects of any processing. Discarding invalid Initial packets might be used to reduce exposure to denial of service; see Section 21.2.¶
If an application-level error affects a single stream, but otherwise leaves the connection in a recoverable state, the endpoint can send a RESET_STREAM frame (Section 19.4) with an appropriate error code to terminate just the affected stream.¶
Resetting a stream without the involvement of the application protocol could cause the application protocol to enter an unrecoverable state. RESET_STREAM MUST only be instigated by the application protocol that uses QUIC.¶
The semantics of the application error code carried in RESET_STREAM are defined by the application protocol. Only the application protocol is able to cause a stream to be terminated. A local instance of the application protocol uses a direct API call, and a remote instance uses the STOP_SENDING frame, which triggers an automatic RESET_STREAM.¶
Application protocols SHOULD define rules for handling streams that are prematurely cancelled by either endpoint.¶
QUIC endpoints communicate by exchanging packets. Packets have confidentiality and integrity protection; see Section 12.1. Packets are carried in UDP datagrams; see Section 12.2.¶
This version of QUIC uses the long packet header during connection establishment; see Section 17.2. Packets with the long header are Initial (Section 17.2.2), 0-RTT (Section 17.2.3), Handshake (Section 17.2.4), and Retry (Section 17.2.5). Version negotiation uses a version-independent packet with a long header; see Section 17.2.1.¶
Packets with the short header are designed for minimal overhead and are used after a connection is established and 1-RTT keys are available; see Section 17.3.¶
QUIC packets have different levels of cryptographic protection based on the type of packet. Details of packet protection are found in [QUIC-TLS]; this section includes an overview of the protections that are provided.¶
Version Negotiation packets have no cryptographic protection; see [QUIC-INVARIANTS].¶
Retry packets use an authenticated encryption with associated data function (AEAD; [AEAD]) to protect against accidental modification.¶
Initial packets use an AEAD, the keys for which are derived using a value that is visible on the wire. Initial packets therefore do not have effective confidentiality protection. Initial protection exists to ensure that the sender of the packet is on the network path. Any entity that receives an Initial packet from a client can recover the keys that will allow them to both read the contents of the packet and generate Initial packets that will be successfully authenticated at either endpoint. The AEAD also protects Initial packets against accidental modification.¶
All other packets are protected with keys derived from the cryptographic handshake. The cryptographic handshake ensures that only the communicating endpoints receive the corresponding keys for Handshake, 0-RTT, and 1-RTT packets. Packets protected with 0-RTT and 1-RTT keys have strong confidentiality and integrity protection.¶
The Packet Number field that appears in some packet types has alternative confidentiality protection that is applied as part of header protection; see Section 5.4 of [QUIC-TLS] for details. The underlying packet number increases with each packet sent in a given packet number space; see Section 12.3 for details.¶
Initial (Section 17.2.2), 0-RTT (Section 17.2.3), and Handshake (Section 17.2.4) packets contain a Length field that determines the end of the packet. The length includes both the Packet Number and Payload fields, both of which are confidentiality protected and initially of unknown length. The length of the Payload field is learned once header protection is removed.¶
Using the Length field, a sender can coalesce multiple QUIC packets into one UDP datagram. This can reduce the number of UDP datagrams needed to complete the cryptographic handshake and start sending data. This can also be used to construct PMTU probes; see Section 14.4.1. Receivers MUST be able to process coalesced packets.¶
Coalescing packets in order of increasing encryption levels (Initial, 0-RTT, Handshake, 1-RTT; see Section 4.1.4 of [QUIC-TLS]) makes it more likely the receiver will be able to process all the packets in a single pass. A packet with a short header does not include a length, so it can only be the last packet included in a UDP datagram. An endpoint SHOULD include multiple frames in a single packet if they are to be sent at the same encryption level, instead of coalescing multiple packets at the same encryption level.¶
Receivers MAY route based on the information in the first packet contained in a UDP datagram. Senders MUST NOT coalesce QUIC packets with different connection IDs into a single UDP datagram. Receivers SHOULD ignore any subsequent packets with a different Destination Connection ID than the first packet in the datagram.¶
Every QUIC packet that is coalesced into a single UDP datagram is separate and complete. The receiver of coalesced QUIC packets MUST individually process each QUIC packet and separately acknowledge them, as if they were received as the payload of different UDP datagrams. For example, if decryption fails (because the keys are not available or any other reason), the receiver MAY either discard or buffer the packet for later processing and MUST attempt to process the remaining packets.¶
Retry packets (Section 17.2.5), Version Negotiation packets (Section 17.2.1), and packets with a short header (Section 17.3) do not contain a Length field and so cannot be followed by other packets in the same UDP datagram. Note also that there is no situation where a Retry or Version Negotiation packet is coalesced with another packet.¶
The packet number is an integer in the range 0 to 2^62-1. This number is used in determining the cryptographic nonce for packet protection. Each endpoint maintains a separate packet number for sending and receiving.¶
Packet numbers are limited to this range because they need to be representable in whole in the Largest Acknowledged field of an ACK frame (Section 19.3). When present in a long or short header, however, packet numbers are reduced and encoded in 1 to 4 bytes; see Section 17.1.¶
Version Negotiation (Section 17.2.1) and Retry (Section 17.2.5) packets do not include a packet number.¶
Packet numbers are divided into 3 spaces in QUIC:¶
As described in [QUIC-TLS], each packet type uses different protection keys.¶
Conceptually, a packet number space is the context in which a packet can be processed and acknowledged. Initial packets can only be sent with Initial packet protection keys and acknowledged in packets that are also Initial packets. Similarly, Handshake packets are sent at the Handshake encryption level and can only be acknowledged in Handshake packets.¶
This enforces cryptographic separation between the data sent in the different packet number spaces. Packet numbers in each space start at packet number 0. Subsequent packets sent in the same packet number space MUST increase the packet number by at least one.¶
0-RTT and 1-RTT data exist in the same packet number space to make loss recovery algorithms easier to implement between the two packet types.¶
A QUIC endpoint MUST NOT reuse a packet number within the same packet number space in one connection. If the packet number for sending reaches 2^62 - 1, the sender MUST close the connection without sending a CONNECTION_CLOSE frame or any further packets; an endpoint MAY send a Stateless Reset (Section 10.3) in response to further packets that it receives.¶
A receiver MUST discard a newly unprotected packet unless it is certain that it has not processed another packet with the same packet number from the same packet number space. Duplicate suppression MUST happen after removing packet protection for the reasons described in Section 9.5 of [QUIC-TLS].¶
Endpoints that track all individual packets for the purposes of detecting duplicates are at risk of accumulating excessive state. The data required for detecting duplicates can be limited by maintaining a minimum packet number below which all packets are immediately dropped. Any minimum needs to account for large variations in round-trip time, which includes the possibility that a peer might probe network paths with much larger round-trip times; see Section 9.¶
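One way to bound that state is a moving floor below which packet numbers are rejected outright. The sketch below is illustrative; the window size is an arbitrary assumption and, per the caveat above, a real implementation would size it to tolerate large round-trip-time variation:

```python
# Illustrative per-packet-number-space duplicate tracker: remember packet
# numbers above a moving floor and drop anything at or below the floor,
# so stored state stays bounded.
class DuplicateTracker:
    def __init__(self, window: int = 1024):
        self.window = window
        self.floor = -1   # all pn <= floor are dropped without lookup
        self.seen = set()

    def accept(self, pn: int) -> bool:
        """Return True if this packet number is new and should be processed."""
        if pn <= self.floor or pn in self.seen:
            return False
        self.seen.add(pn)
        # Advance the floor relative to the highest packet number seen.
        new_floor = max(self.seen) - self.window
        if new_floor > self.floor:
            self.floor = new_floor
            self.seen = {p for p in self.seen if p > self.floor}
        return True
```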
Packet number encoding at a sender and decoding at a receiver are described in Section 17.1.¶
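As a sketch of the receiver side, the sample decoding algorithm from this draft's appendix can be transcribed as follows: it reconstructs a full packet number from its 1- to 4-byte truncated encoding and the largest packet number processed so far.

```python
def decode_packet_number(largest_pn: int, truncated_pn: int, pn_nbits: int) -> int:
    """Recover a full packet number from its truncated wire encoding.
    pn_nbits is the number of bits in the truncated encoding (8..32)."""
    expected_pn = largest_pn + 1
    pn_win = 1 << pn_nbits
    pn_hwin = pn_win // 2
    pn_mask = pn_win - 1
    # The candidate keeps the high bits of the expected value and takes the
    # low bits from the wire; it is then nudged into the window centered on
    # expected_pn, staying within 0 .. 2^62 - 1.
    candidate_pn = (expected_pn & ~pn_mask) | truncated_pn
    if candidate_pn <= expected_pn - pn_hwin and candidate_pn < (1 << 62) - pn_win:
        return candidate_pn + pn_win
    if candidate_pn > expected_pn + pn_hwin and candidate_pn >= pn_win:
        return candidate_pn - pn_win
    return candidate_pn
```

For example, with a largest acknowledged packet number of 0xa82f30ea, the 16-bit truncated value 0x9b32 decodes to 0xa82f9b32.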
The payload of QUIC packets, after removing packet protection, consists of a sequence of complete frames, as shown in Figure 11. Version Negotiation, Stateless Reset, and Retry packets do not contain frames.¶
Packet Payload {
  Frame (8..) ...,
}
The payload of a packet that contains frames MUST contain at least one frame, and MAY contain multiple frames and multiple frame types. An endpoint MUST treat receipt of a packet containing no frames as a connection error of type PROTOCOL_VIOLATION. Frames always fit within a single QUIC packet and cannot span multiple packets.¶
Each frame begins with a Frame Type, indicating its type, followed by additional type-dependent fields:¶
Frame {
  Frame Type (i),
  Type-Dependent Fields (..),
}
Table 3 lists and summarizes information about each frame type that is defined in this specification. A description of this summary is included after the table.¶
| Type Value | Frame Type Name | Definition | Pkts | Spec |
|---|---|---|---|---|
| 0x00 | PADDING | Section 19.1 | IH01 | NP |
| 0x01 | PING | Section 19.2 | IH01 | |
| 0x02 - 0x03 | ACK | Section 19.3 | IH_1 | NC |
| 0x04 | RESET_STREAM | Section 19.4 | __01 | |
| 0x05 | STOP_SENDING | Section 19.5 | __01 | |
| 0x06 | CRYPTO | Section 19.6 | IH_1 | |
| 0x07 | NEW_TOKEN | Section 19.7 | ___1 | |
| 0x08 - 0x0f | STREAM | Section 19.8 | __01 | F |
| 0x10 | MAX_DATA | Section 19.9 | __01 | |
| 0x11 | MAX_STREAM_DATA | Section 19.10 | __01 | |
| 0x12 - 0x13 | MAX_STREAMS | Section 19.11 | __01 | |
| 0x14 | DATA_BLOCKED | Section 19.12 | __01 | |
| 0x15 | STREAM_DATA_BLOCKED | Section 19.13 | __01 | |
| 0x16 - 0x17 | STREAMS_BLOCKED | Section 19.14 | __01 | |
| 0x18 | NEW_CONNECTION_ID | Section 19.15 | __01 | P |
| 0x19 | RETIRE_CONNECTION_ID | Section 19.16 | __01 | |
| 0x1a | PATH_CHALLENGE | Section 19.17 | __01 | P |
| 0x1b | PATH_RESPONSE | Section 19.18 | __01 | P |
| 0x1c - 0x1d | CONNECTION_CLOSE | Section 19.19 | ih01 | N |
| 0x1e | HANDSHAKE_DONE | Section 19.20 | ___1 | |
The format and semantics of each frame type are explained in more detail in Section 19. The remainder of this section provides a summary of important and general information.¶
The Frame Type in ACK, STREAM, MAX_STREAMS, STREAMS_BLOCKED, and CONNECTION_CLOSE frames is used to carry other frame-specific flags. For all other frames, the Frame Type field simply identifies the frame.¶
The "Pkts" column in Table 3 lists the types of packets that each frame type could appear in, indicated by the following characters:¶
I: Initial (Section 17.2.2)¶
H: Handshake (Section 17.2.4)¶
0: 0-RTT (Section 17.2.3)¶
1: 1-RTT (Section 17.3.1)¶
Only a CONNECTION_CLOSE frame of type 0x1c can appear in Initial or Handshake packets.¶
For more detail about these restrictions, see Section 12.5. Note that all frames can appear in 1-RTT packets. An endpoint MUST treat receipt of a frame in a packet type that is not permitted as a connection error of type PROTOCOL_VIOLATION.¶
The "Spec" column in Table 3 summarizes any special rules governing the processing or generation of the frame type, as indicated by the following characters:¶
N: Packets containing only frames with this marking are not ack-eliciting; see Section 13.2.¶
C: Packets containing only frames with this marking do not count toward bytes in flight for congestion control purposes; see [QUIC-RECOVERY].¶
P: Packets containing only frames with this marking can be used to probe new network paths during connection migration; see Section 9.1.¶
F: The contents of frames with this marking are flow controlled; see Section 4.¶
The "Pkts" and "Spec" columns inTable 3 do not form part of the IANAregistry; seeSection 22.4.¶
An endpoint MUST treat the receipt of a frame of unknown type as a connection error of type FRAME_ENCODING_ERROR.¶
All frames are idempotent in this version of QUIC. That is, a valid frame does not cause undesirable side effects or errors when received more than once.¶
The Frame Type field uses a variable-length integer encoding (see Section 16) with one exception. To ensure simple and efficient implementations of frame parsing, a frame type MUST use the shortest possible encoding. For frame types defined in this document, this means a single-byte encoding, even though it is possible to encode these values as a two-, four-, or eight-byte variable-length integer. For instance, though 0x4001 is a legitimate two-byte encoding for a variable-length integer with a value of 1, PING frames are always encoded as a single byte with the value 0x01. This rule applies to all current and future QUIC frame types. An endpoint MAY treat the receipt of a frame type that uses a longer encoding than necessary as a connection error of type PROTOCOL_VIOLATION.¶
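The shortest-encoding rule can be checked during parsing. The sketch below decodes a variable-length integer, whose two most significant first-byte bits select a 1-, 2-, 4-, or 8-byte encoding per Section 16, and rejects non-minimal frame-type encodings; function names are illustrative:

```python
def decode_varint(data: bytes, offset: int = 0):
    """Decode a QUIC variable-length integer; return (value, bytes_consumed)."""
    first = data[offset]
    length = 1 << (first >> 6)   # prefix 00/01/10/11 -> 1/2/4/8 bytes
    value = first & 0x3F
    for i in range(1, length):
        value = (value << 8) | data[offset + i]
    return value, length

def shortest_varint_len(value: int) -> int:
    """Smallest encoding that can hold the value (6/14/30/62 usable bits)."""
    if value < 1 << 6:
        return 1
    if value < 1 << 14:
        return 2
    if value < 1 << 30:
        return 4
    return 8

def decode_frame_type(data: bytes) -> int:
    value, length = decode_varint(data)
    if length != shortest_varint_len(value):
        # e.g., 0x4001 encodes the value 1 in two bytes; PING must be 0x01.
        raise ValueError("PROTOCOL_VIOLATION: non-minimal frame type encoding")
    return value
```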
Some frames are prohibited in different packet number spaces. The rules here generalize those of TLS, in that frames associated with establishing the connection can usually appear in packets in any packet number space, whereas those associated with transferring data can only appear in the application data packet number space:¶
Note that it is not possible to send the following frames in 0-RTT packets for various reasons: ACK, CRYPTO, HANDSHAKE_DONE, NEW_TOKEN, PATH_RESPONSE, and RETIRE_CONNECTION_ID. A server MAY treat receipt of these frames in 0-RTT packets as a connection error of type PROTOCOL_VIOLATION.¶
A sender sends one or more frames in a QUIC packet; see Section 12.4.¶
A sender can minimize per-packet bandwidth and computational costs by including as many frames as possible in each QUIC packet. A sender MAY wait for a short period of time to collect multiple frames before sending a packet that is not maximally packed, to avoid sending out large numbers of small packets. An implementation MAY use knowledge about application sending behavior or heuristics to determine whether and for how long to wait. This waiting period is an implementation decision, and an implementation should be careful to delay conservatively, since any delay is likely to increase application-visible latency.¶
Stream multiplexing is achieved by interleaving STREAM frames from multiple streams into one or more QUIC packets. A single QUIC packet can include multiple STREAM frames from one or more streams.¶
One of the benefits of QUIC is avoidance of head-of-line blocking across multiple streams. When a packet loss occurs, only streams with data in that packet are blocked waiting for a retransmission to be received, while other streams can continue making progress. Note that when data from multiple streams is included in a single QUIC packet, loss of that packet blocks all those streams from making progress. Implementations are advised to include as few streams as necessary in outgoing packets without losing transmission efficiency to underfilled packets.¶
A packet MUST NOT be acknowledged until packet protection has been successfully removed and all frames contained in the packet have been processed. For STREAM frames, this means the data has been enqueued in preparation to be received by the application protocol, but it does not require that data is delivered and consumed.
Once the packet has been fully processed, a receiver acknowledges receipt by sending one or more ACK frames containing the packet number of the received packet.
An endpoint SHOULD treat receipt of an acknowledgment for a packet it did not send as a connection error of type PROTOCOL_VIOLATION, if it is able to detect the condition.
Endpoints acknowledge all packets they receive and process. However, only ack-eliciting packets cause an ACK frame to be sent within the maximum ack delay. Packets that are not ack-eliciting are only acknowledged when an ACK frame is sent for other reasons.
When sending a packet for any reason, an endpoint SHOULD attempt to include an ACK frame if one has not been sent recently. Doing so helps with timely loss detection at the peer.
In general, frequent feedback from a receiver improves loss and congestion response, but this has to be balanced against excessive load generated by a receiver that sends an ACK frame in response to every ack-eliciting packet. The guidance offered below seeks to strike this balance.
Every packet SHOULD be acknowledged at least once, and ack-eliciting packets MUST be acknowledged at least once within the maximum delay an endpoint communicated using the max_ack_delay transport parameter; see Section 18.2. max_ack_delay declares an explicit contract: an endpoint promises to never intentionally delay acknowledgments of an ack-eliciting packet by more than the indicated value. If it does, any excess accrues to the RTT estimate and could result in spurious or delayed retransmissions from the peer. A sender uses the receiver's max_ack_delay value in determining timeouts for timer-based retransmission, as detailed in Section 6.2 of [QUIC-RECOVERY].
An endpoint MUST acknowledge all ack-eliciting Initial and Handshake packets immediately and all ack-eliciting 0-RTT and 1-RTT packets within its advertised max_ack_delay, with the following exception. Prior to handshake confirmation, an endpoint might not have packet protection keys for decrypting Handshake, 0-RTT, or 1-RTT packets when they are received. It might therefore buffer them and acknowledge them when the requisite keys become available.
Since packets containing only ACK frames are not congestion controlled, an endpoint MUST NOT send more than one such packet in response to receiving an ack-eliciting packet.
An endpoint MUST NOT send a non-ack-eliciting packet in response to a non-ack-eliciting packet, even if there are packet gaps that precede the received packet. This avoids an infinite feedback loop of acknowledgments, which could prevent the connection from ever becoming idle. Non-ack-eliciting packets are eventually acknowledged when the endpoint sends an ACK frame in response to other events.
In order to assist loss detection at the sender, an endpoint SHOULD generate and send an ACK frame without delay when it receives an ack-eliciting packet either:
Similarly, packets marked with the ECN Congestion Experienced (CE) codepoint in the IP header SHOULD be acknowledged immediately, to reduce the peer's response time to congestion events.
The algorithms in [QUIC-RECOVERY] are expected to be resilient to receivers that do not follow the guidance offered above. However, an implementation should only deviate from these requirements after careful consideration of the performance implications of a change, for connections made by the endpoint and for other users of the network.
An endpoint that is only sending ACK frames will not receive acknowledgments from its peer unless those acknowledgments are included in packets with ack-eliciting frames. An endpoint SHOULD send an ACK frame with other frames when there are new ack-eliciting packets to acknowledge. When only non-ack-eliciting packets need to be acknowledged, an endpoint MAY wait until an ack-eliciting packet has been received to include an ACK frame with outgoing frames.
A receiver MUST NOT send an ack-eliciting frame in all packets that would otherwise be non-ack-eliciting, to avoid an infinite feedback loop of acknowledgments.
A receiver determines how frequently to send acknowledgments in response to ack-eliciting packets. This determination involves a trade-off.
Endpoints rely on timely acknowledgment to detect loss; see Section 6 of [QUIC-RECOVERY]. Window-based congestion controllers, such as the one in Section 7 of [QUIC-RECOVERY], rely on acknowledgments to manage their congestion window. In both cases, delaying acknowledgments can adversely affect performance.
On the other hand, reducing the frequency of packets that carry only acknowledgments reduces packet transmission and processing cost at both endpoints. It can improve connection throughput on severely asymmetric links and reduce the volume of acknowledgment traffic using return path capacity; see Section 3 of [RFC3449].
A receiver SHOULD send an ACK frame after receiving at least two ack-eliciting packets. This recommendation is general in nature and consistent with recommendations for TCP endpoint behavior [RFC5681]. Knowledge of network conditions, knowledge of the peer's congestion controller, or further research and experimentation might suggest alternative acknowledgment strategies with better performance characteristics.
A receiver MAY process multiple available packets before determining whether to send an ACK frame in response.
When an ACK frame is sent, one or more ranges of acknowledged packets are included. Including acknowledgments for older packets reduces the chance of spurious retransmissions caused by losing previously sent ACK frames, at the cost of larger ACK frames.
ACK frames SHOULD always acknowledge the most recently received packets, and the more out of order the packets are, the more important it is to send an updated ACK frame quickly, to prevent the peer from declaring a packet as lost and spuriously retransmitting the frames it contains. An ACK frame is expected to fit within a single QUIC packet. If it does not, then older ranges (those with the smallest packet numbers) are omitted.
A receiver limits the number of ACK Ranges (Section 19.3.1) it remembers and sends in ACK frames, both to limit the size of ACK frames and to avoid resource exhaustion. After receiving acknowledgments for an ACK frame, the receiver SHOULD stop tracking those acknowledged ACK Ranges. Senders can expect acknowledgments for most packets, but QUIC does not guarantee receipt of an acknowledgment for every packet that the receiver processes.
It is possible that retaining many ACK Ranges could cause an ACK frame to become too large. A receiver can discard unacknowledged ACK Ranges to limit ACK frame size, at the cost of increased retransmissions from the sender. This is necessary if an ACK frame would be too large to fit in a packet. Receivers MAY also limit ACK frame size further to preserve space for other frames or to limit the capacity that acknowledgments consume.
A receiver MUST retain an ACK Range unless it can ensure that it will not subsequently accept packets with numbers in that range. Maintaining a minimum packet number that increases as ranges are discarded is one way to achieve this with minimal state.
Receivers can discard all ACK Ranges, but they MUST retain the largest packet number that has been successfully processed, as that is used to recover packet numbers from subsequent packets; see Section 17.1.
A receiver SHOULD include an ACK Range containing the largest received packet number in every ACK frame. The Largest Acknowledged field is used in ECN validation at a sender, and including a lower value than what was included in a previous ACK frame could cause ECN to be unnecessarily disabled; see Section 13.4.2.
Section 13.2.4 describes an exemplary approach for determining what packets to acknowledge in each ACK frame. Though the goal of this algorithm is to generate an acknowledgment for every packet that is processed, it is still possible for acknowledgments to be lost.
When a packet containing an ACK frame is sent, the largest acknowledged in that frame can be saved. When a packet containing an ACK frame is acknowledged, the receiver can stop acknowledging packets less than or equal to the largest acknowledged in the sent ACK frame.
A receiver that sends only non-ack-eliciting packets, such as ACK frames, might not receive an acknowledgment for a long period of time. This could cause the receiver to maintain state for a large number of ACK frames for a long period of time, and ACK frames it sends could be unnecessarily large. In such a case, a receiver could send a PING or other small ack-eliciting frame occasionally, such as once per round trip, to elicit an ACK frame from the peer.
In cases without ACK frame loss, this algorithm allows for a minimum of 1 RTT of reordering. In cases with ACK frame loss and reordering, this approach does not guarantee that every acknowledgment is seen by the sender before it is no longer included in the ACK frame. Packets could be received out of order, and all subsequent ACK frames containing them could be lost. In this case, the loss recovery algorithm could cause spurious retransmissions, but the sender will continue making forward progress.
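A minimal sketch of this bookkeeping, assuming the receiver records received packet numbers in a set and remembers the largest acknowledged value placed in each ACK-carrying packet it sends (class and method names are illustrative, not from this document):

```python
class AckTracker:
    """Illustrative state for pruning acknowledged ACK Ranges."""

    def __init__(self):
        self.received = set()   # packet numbers still being reported in ACK frames
        self.sent_acks = {}     # pn of our ACK-carrying packet -> largest acked in it

    def on_ack_sent(self, ack_packet_pn: int, largest_acked: int) -> None:
        # Save the largest acknowledged value carried in the ACK frame we sent.
        self.sent_acks[ack_packet_pn] = largest_acked

    def on_ack_of_ack(self, ack_packet_pn: int) -> None:
        # The peer acknowledged our ACK-carrying packet: it has seen the
        # acknowledgments, so stop reporting packets at or below that value.
        largest = self.sent_acks.pop(ack_packet_pn, None)
        if largest is not None:
            self.received = {pn for pn in self.received if pn > largest}
```

A real implementation would store ranges rather than individual packet numbers, but the pruning trigger is the same: acknowledgment of a packet that carried an ACK frame.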
An endpoint measures the delays intentionally introduced between the time the packet with the largest packet number is received and the time an acknowledgment is sent. The endpoint encodes this acknowledgment delay in the ACK Delay field of an ACK frame; see Section 19.3. This allows the receiver of the ACK frame to adjust for any intentional delays, which is important for getting a better estimate of the path RTT when acknowledgments are delayed.
A packet might be held in the OS kernel or elsewhere on the host before being processed. An endpoint MUST NOT include delays that it does not control when populating the ACK Delay field in an ACK frame. However, endpoints SHOULD include buffering delays caused by unavailability of decryption keys, since these delays can be large and are likely to be non-repeating.
When the measured acknowledgment delay is larger than its max_ack_delay, an endpoint SHOULD report the measured delay. This information is especially useful during the handshake when delays might be large; see Section 13.2.1.
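For reference, the ACK Delay field carries this delay in microseconds, scaled down by 2 to the power of the ack_delay_exponent transport parameter (default 3; see Section 18.2 and Section 19.3). A sketch of that scaling, with illustrative function names:

```python
DEFAULT_ACK_DELAY_EXPONENT = 3  # transport parameter default (Section 18.2)


def encode_ack_delay(delay_us: int,
                     exponent: int = DEFAULT_ACK_DELAY_EXPONENT) -> int:
    """Scale an intentional delay in microseconds into the ACK Delay field."""
    return delay_us >> exponent


def decode_ack_delay(field_value: int,
                     exponent: int = DEFAULT_ACK_DELAY_EXPONENT) -> int:
    """Recover the (truncated) delay in microseconds from the field value."""
    return field_value << exponent
```

The shift discards up to 2^exponent - 1 microseconds of precision, which the sender of the ACK frame accepts in exchange for a smaller varint.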
ACK frames MUST only be carried in a packet that has the same packet number space as the packet being acknowledged; see Section 12.1. For instance, packets that are protected with 1-RTT keys MUST be acknowledged in packets that are also protected with 1-RTT keys.
Packets that a client sends with 0-RTT packet protection MUST be acknowledged by the server in packets protected by 1-RTT keys. This can mean that the client is unable to use these acknowledgments if the server's cryptographic handshake messages are delayed or lost. Note that the same limitation applies to other data sent by the server protected by the 1-RTT keys.
Packets containing PADDING frames are considered to be in flight for congestion control purposes [QUIC-RECOVERY]. Packets containing only PADDING frames therefore consume congestion window but do not generate acknowledgments that will open the congestion window. To avoid a deadlock, a sender SHOULD ensure that other frames are sent periodically in addition to PADDING frames to elicit acknowledgments from the receiver.
QUIC packets that are determined to be lost are not retransmitted whole. The same applies to the frames that are contained within lost packets. Instead, the information that might be carried in frames is sent again in new frames as needed.
New frames and packets are used to carry information that is determined to have been lost. In general, information is sent again when a packet containing that information is determined to be lost, and sending ceases when a packet containing that information is acknowledged.
Endpoints SHOULD prioritize retransmission of data over sending new data, unless priorities specified by the application indicate otherwise; see Section 2.3.
Even though a sender is encouraged to assemble frames containing up-to-date information every time it sends a packet, it is not forbidden to retransmit copies of frames from lost packets. A sender that retransmits copies of frames needs to handle decreases in available payload size due to changes in packet number length, connection ID length, and path MTU. A receiver MUST accept packets containing an outdated frame, such as a MAX_DATA frame carrying a smaller maximum data than one found in an older packet.
A sender SHOULD avoid retransmitting information from packets once they are acknowledged. This includes packets that are acknowledged after being declared lost, which can happen in the presence of network reordering. Doing so requires senders to retain information about packets after they are declared lost. A sender can discard this information after a period of time elapses that adequately allows for reordering, such as a PTO (Section 6.2 of [QUIC-RECOVERY]), or on other events, such as reaching a memory limit.
Upon detecting losses, a sender MUST take appropriate congestion control action. The details of loss detection and congestion control are described in [QUIC-RECOVERY].
QUIC endpoints can use Explicit Congestion Notification (ECN) [RFC3168] to detect and respond to network congestion. ECN allows an endpoint to set an ECT codepoint in the ECN field of an IP packet. A network node can then indicate congestion by setting the CE codepoint in the ECN field instead of dropping the packet [RFC8087]. Endpoints react to reported congestion by reducing their sending rate in response, as described in [QUIC-RECOVERY].
To enable ECN, a sending QUIC endpoint first determines whether a path supports ECN marking and whether the peer reports the ECN values in received IP headers; see Section 13.4.2.
Use of ECN requires the receiving endpoint to read the ECN field from an IP packet, which is not possible on all platforms. If an endpoint does not implement ECN support or does not have access to received ECN fields, it does not report ECN counts for packets it receives.
Even if an endpoint does not set an ECT field on packets it sends, the endpoint MUST provide feedback about ECN markings it receives, if these are accessible. Failing to report the ECN counts will cause the sender to disable use of ECN for this connection.
On receiving an IP packet with an ECT(0), ECT(1), or CE codepoint, an ECN-enabled endpoint accesses the ECN field and increases the corresponding ECT(0), ECT(1), or CE count. These ECN counts are included in subsequent ACK frames; see Section 13.2 and Section 19.3.
Each packet number space maintains separate acknowledgment state and separate ECN counts. Coalesced QUIC packets (see Section 12.2) share the same IP header, so the ECN counts are incremented once for each coalesced QUIC packet.
For example, if one each of an Initial, Handshake, and 1-RTT QUIC packet are coalesced into a single UDP datagram, the ECN counts for all three packet number spaces will be incremented by one each, based on the ECN field of the single IP header.
ECN counts are only incremented when QUIC packets from the received IP packet are processed. As such, duplicate QUIC packets are not processed and do not increase ECN counts; see Section 21.10 for relevant security concerns.
It is possible for faulty network devices to corrupt or erroneously drop packets that carry a non-zero ECN codepoint. To ensure connectivity in the presence of such devices, an endpoint validates the ECN counts for each network path and disables use of ECN on that path if errors are detected.
To perform ECN validation for a new path:
If an endpoint has cause to expect that IP packets with an ECT codepoint might be dropped by a faulty network element, the endpoint could set an ECT codepoint for only the first ten outgoing packets on a path, or for a period of three PTOs (see Section 6.2 of [QUIC-RECOVERY]). If all packets marked with non-zero ECN codepoints are subsequently lost, it can disable marking on the assumption that the marking caused the loss.
An endpoint thus attempts to use ECN and validates this for each new connection, when switching to a server's preferred address, and on active connection migration to a new path. Appendix A.4 describes one possible algorithm.
Other methods of probing paths for ECN support are possible, as are different marking strategies. Implementations MAY use other methods defined in RFCs; see [RFC8311]. Implementations that use the ECT(1) codepoint need to perform ECN validation using the reported ECT(1) counts.
Erroneous application of CE markings by the network can result in degraded connection performance. An endpoint that receives an ACK frame with ECN counts therefore validates the counts before using them. It performs this validation by comparing newly received counts against those from the last successfully processed ACK frame. Any increase in the ECN counts is validated based on the ECN markings that were applied to packets that are newly acknowledged in the ACK frame.
If an ACK frame newly acknowledges a packet that the endpoint sent with either the ECT(0) or ECT(1) codepoint set, ECN validation fails if the corresponding ECN counts are not present in the ACK frame. This check detects a network element that zeroes the ECN field or a peer that does not report ECN markings.
ECN validation also fails if the sum of the increase in ECT(0) and ECN-CE counts is less than the number of newly acknowledged packets that were originally sent with an ECT(0) marking. Similarly, ECN validation fails if the sum of the increases to ECT(1) and ECN-CE counts is less than the number of newly acknowledged packets sent with an ECT(1) marking. These checks can detect remarking of ECN-CE markings by the network.
An endpoint could miss acknowledgments for a packet when ACK frames are lost. It is therefore possible for the total increase in ECT(0), ECT(1), and ECN-CE counts to be greater than the number of packets that are newly acknowledged by an ACK frame. This is why ECN counts are permitted to be larger than the total number of packets that are acknowledged.
Validating ECN counts from reordered ACK frames can result in failure. An endpoint MUST NOT fail ECN validation as a result of processing an ACK frame that does not increase the largest acknowledged packet number.
ECN validation can fail if the received total count for either ECT(0) or ECT(1) exceeds the total number of packets sent with each corresponding ECT codepoint. In particular, validation will fail when an endpoint receives a non-zero ECN count corresponding to an ECT codepoint that it never applied. This check detects when packets are remarked to ECT(0) or ECT(1) in the network.
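The count-based checks above can be sketched as follows, assuming the endpoint has already computed the increase in each reported count since the last successfully processed ACK frame (the function and parameter names are illustrative, and the "counts not present" and total-count checks are omitted for brevity):

```python
def ecn_increase_valid(acked_sent_ect0: int, acked_sent_ect1: int,
                       d_ect0: int, d_ect1: int, d_ce: int) -> bool:
    """Validate increases in peer-reported ECN counts against the number of
    newly acknowledged packets sent with each ECT codepoint."""
    if min(d_ect0, d_ect1, d_ce) < 0:
        return False  # reported counts must never decrease
    if d_ect0 + d_ce < acked_sent_ect0:
        return False  # some ECT(0) markings were zeroed or lost in the network
    if d_ect1 + d_ce < acked_sent_ect1:
        return False  # some ECT(1) markings were zeroed or lost in the network
    return True
```

Note that only lower bounds are checked here: as the text explains, lost ACK frames mean the increases can legitimately exceed the number of newly acknowledged packets.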
If validation fails, then the endpoint MUST disable ECN. It stops setting the ECT codepoint in IP packets that it sends, assuming that either the network path or the peer does not support ECN.
Even if validation fails, an endpoint MAY revalidate ECN for the same path at any later time in the connection. An endpoint could continue to periodically attempt validation.
Upon successful validation, an endpoint MAY continue to set an ECT codepoint in subsequent packets it sends, with the expectation that the path is ECN-capable. Network routing and path elements can, however, change mid-connection; an endpoint MUST disable ECN if validation later fails.
A UDP datagram can include one or more QUIC packets. The datagram size refers to the total UDP payload size of a single UDP datagram carrying QUIC packets. The datagram size includes one or more QUIC packet headers and protected payloads, but not the UDP or IP headers.
The maximum datagram size is defined as the largest size of UDP payload that can be sent across a network path using a single UDP datagram. QUIC MUST NOT be used if the network path cannot support a maximum datagram size of at least 1200 bytes.
QUIC assumes a minimum IP packet size of at least 1280 bytes. This is the IPv6 minimum size ([IPv6]) and is also supported by most modern IPv4 networks. Assuming the minimum IP header size of 40 bytes for IPv6 and 20 bytes for IPv4 and a UDP header size of 8 bytes, this results in a maximum datagram size of 1232 bytes for IPv6 and 1252 bytes for IPv4. Thus, modern IPv4 and all IPv6 network paths will be able to support QUIC.
Any maximum datagram size larger than 1200 bytes can be discovered using Path Maximum Transmission Unit Discovery (PMTUD; see Section 14.2.1) or Datagram Packetization Layer PMTU Discovery (DPLPMTUD; see Section 14.3).
Enforcement of the max_udp_payload_size transport parameter (Section 18.2) might act as an additional limit on the maximum datagram size. A sender can avoid exceeding this limit, once the value is known. However, prior to learning the value of the transport parameter, endpoints risk datagrams being lost if they send datagrams larger than the smallest allowed maximum datagram size of 1200 bytes.
UDP datagrams MUST NOT be fragmented at the IP layer. In IPv4 ([IPv4]), the DF bit MUST be set if possible, to prevent fragmentation on the path.
QUIC sometimes requires datagrams to be no smaller than a certain size; see Section 8.1 as an example. However, the size of a datagram is not authenticated. That is, if an endpoint receives a datagram of a certain size, it cannot know that the sender sent the datagram at the same size. Therefore, an endpoint MUST NOT close a connection when it receives a datagram that does not meet size constraints; the endpoint MAY however discard such datagrams.
A client MUST expand the payload of all UDP datagrams carrying Initial packets to at least the smallest allowed maximum datagram size of 1200 bytes by adding PADDING frames to the Initial packet or by coalescing the Initial packet; see Section 12.2. Similarly, a server MUST expand the payload of all UDP datagrams carrying ack-eliciting Initial packets to at least the smallest allowed maximum datagram size of 1200 bytes. Sending UDP datagrams of this size ensures that the network path supports a reasonable Path Maximum Transmission Unit (PMTU), in both directions. Additionally, a client that expands Initial packets helps reduce the amplitude of amplification attacks caused by server responses toward an unverified client address; see Section 8.
Datagrams containing Initial packets MAY exceed 1200 bytes if the sender believes that the network path and peer both support the size that it chooses.
A server MUST discard an Initial packet that is carried in a UDP datagram with a payload that is smaller than the smallest allowed maximum datagram size of 1200 bytes. A server MAY also immediately close the connection by sending a CONNECTION_CLOSE frame with an error code of PROTOCOL_VIOLATION; see Section 10.2.3.
The server MUST also limit the number of bytes it sends before validating the address of the client; see Section 8.
The Path Maximum Transmission Unit (PMTU) is the maximum size of the entire IP packet, including the IP header, UDP header, and UDP payload. The UDP payload includes one or more QUIC packet headers and protected payloads. The PMTU can depend on path characteristics and can therefore change over time. The largest UDP payload an endpoint sends at any given time is referred to as the endpoint's maximum datagram size.
An endpoint SHOULD use DPLPMTUD (Section 14.3) or PMTUD (Section 14.2.1) to determine whether the path to a destination will support a desired maximum datagram size without fragmentation. In the absence of these mechanisms, QUIC endpoints SHOULD NOT send datagrams larger than the smallest allowed maximum datagram size.
Both DPLPMTUD and PMTUD send datagrams that are larger than the current maximum datagram size, referred to as PMTU probes. All QUIC packets that are not sent in a PMTU probe SHOULD be sized to fit within the maximum datagram size to avoid the datagram being fragmented or dropped ([RFC8085]).
If a QUIC endpoint determines that the PMTU between any pair of local and remote IP addresses has fallen below the smallest allowed maximum datagram size of 1200 bytes, it MUST immediately cease sending QUIC packets, except for those in PMTU probes or those containing CONNECTION_CLOSE frames, on the affected path. An endpoint MAY terminate the connection if an alternative path cannot be found.
Each pair of local and remote addresses could have a different PMTU. QUIC implementations that implement any kind of PMTU discovery therefore SHOULD maintain a maximum datagram size for each combination of local and remote IP addresses.
A QUIC implementation MAY be more conservative in computing the maximum datagram size to allow for unknown tunnel overheads or IP header options/extensions.
Path Maximum Transmission Unit Discovery (PMTUD; [RFC1191], [RFC8201]) relies on reception of ICMP messages (e.g., IPv6 Packet Too Big messages) that indicate when an IP packet is dropped because it is larger than the local router MTU. DPLPMTUD can also optionally use these messages. This use of ICMP messages is potentially vulnerable to off-path attacks that successfully guess the addresses used on the path and reduce the PMTU to a bandwidth-inefficient value.
An endpoint MUST ignore an ICMP message that claims the PMTU has decreased below QUIC's smallest allowed maximum datagram size.
The requirements for generating ICMP ([RFC1812], [RFC4443]) state that the quoted packet should contain as much of the original packet as possible without exceeding the minimum MTU for the IP version. The size of the quoted packet can actually be smaller, or the information unintelligible, as described in Section 1.1 of [DPLPMTUD].
QUIC endpoints using PMTUD SHOULD validate ICMP messages to protect from off-path injection as specified in [RFC8201] and Section 5.2 of [RFC8085]. This validation SHOULD use the quoted packet supplied in the payload of an ICMP message to associate the message with a corresponding transport connection (see Section 4.6.1 of [DPLPMTUD]). ICMP message validation MUST include matching IP addresses and UDP ports ([RFC8085]) and, when possible, connection IDs to an active QUIC session. The endpoint SHOULD ignore all ICMP messages that fail validation.
An endpoint MUST NOT increase PMTU based on ICMP messages; see Section 3, clause 6 of [DPLPMTUD]. Any reduction in QUIC's maximum datagram size in response to ICMP messages MAY be provisional until QUIC's loss detection algorithm determines that the quoted packet has actually been lost.
Datagram Packetization Layer PMTU Discovery (DPLPMTUD; [DPLPMTUD]) relies on tracking loss or acknowledgment of QUIC packets that are carried in PMTU probes. PMTU probes for DPLPMTUD that use the PADDING frame implement "Probing using padding data", as defined in Section 4.1 of [DPLPMTUD].
Endpoints SHOULD set the initial value of BASE_PLPMTU (Section 5.1 of [DPLPMTUD]) to be consistent with QUIC's smallest allowed maximum datagram size. The MIN_PLPMTU is the same as the BASE_PLPMTU.
QUIC endpoints implementing DPLPMTUD maintain a DPLPMTUD Maximum Packet Size (MPS; Section 4.4 of [DPLPMTUD]) for each combination of local and remote IP addresses. This corresponds to the maximum datagram size.
From the perspective of DPLPMTUD, QUIC is an acknowledged Packetization Layer (PL). A QUIC sender can therefore enter the DPLPMTUD BASE state (Section 5.2 of [DPLPMTUD]) when the QUIC connection handshake has been completed.
QUIC is an acknowledged PL; therefore, a QUIC sender does not implement a DPLPMTUD CONFIRMATION_TIMER while in the SEARCH_COMPLETE state; see Section 5.2 of [DPLPMTUD].
An endpoint using DPLPMTUD requires the validation of any received ICMP Packet Too Big (PTB) message before using the PTB information, as defined in Section 4.6 of [DPLPMTUD]. In addition to UDP port validation, QUIC validates an ICMP message by using other PL information (e.g., validation of connection IDs in the quoted packet of any received ICMP message).
The considerations for processing ICMP messages described in Section 14.2.1 also apply if these messages are used by DPLPMTUD.
PMTU probes are ack-eliciting packets.
Endpoints could limit the content of PMTU probes to PING and PADDING frames, since packets that are larger than the current maximum datagram size are more likely to be dropped by the network. Loss of a QUIC packet that is carried in a PMTU probe is therefore not a reliable indication of congestion and SHOULD NOT trigger a congestion control reaction; see Section 3, Bullet 7 of [DPLPMTUD]. However, PMTU probes consume congestion window, which could delay subsequent transmission by an application.
Endpoints that rely on the destination connection ID for routing incoming QUIC packets are likely to require that the connection ID be included in PMTU probes to route any resulting ICMP messages (Section 14.2.1) back to the correct endpoint. However, only long header packets (Section 17.2) contain the Source Connection ID field, and long header packets are not decrypted or acknowledged by the peer once the handshake is complete.
One way to construct a PMTU probe is to coalesce (see Section 12.2) a packet with a long header, such as a Handshake or 0-RTT packet (Section 17.2), with a short header packet in a single UDP datagram. If the resulting PMTU probe reaches the endpoint, the packet with the long header will be ignored, but the short header packet will be acknowledged. If the PMTU probe causes an ICMP message to be sent, the first part of the probe will be quoted in that message. If the Source Connection ID field is within the quoted portion of the probe, that could be used for routing or validation of the ICMP message.
The purpose of using a packet with a long header is only to ensure that the quoted packet contained in the ICMP message contains a Source Connection ID field. This packet does not need to be a valid packet, and it can be sent even if there is no current use for packets of that type.
QUIC versions are identified using a 32-bit unsigned number.
The version 0x00000000 is reserved to represent version negotiation. This version of the specification is identified by the number 0x00000001.
Other versions of QUIC might have different properties from this version. The properties of QUIC that are guaranteed to be consistent across all versions of the protocol are described in [QUIC-INVARIANTS].
Version 0x00000001 of QUIC uses TLS as a cryptographic handshake protocol, as described in [QUIC-TLS].
Versions with the most significant 16 bits of the version number cleared are reserved for use in future IETF consensus documents.
Versions that follow the pattern 0x?a?a?a?a are reserved for use in forcing version negotiation to be exercised. That is, any version number where the low four bits of all bytes are 1010 (in binary). A client or server MAY advertise support for any of these reserved versions.
Reserved version numbers will never represent a real protocol; a client MAY use one of these version numbers with the expectation that the server will initiate version negotiation; a server MAY advertise support for one of these versions and can expect that clients ignore the value.
QUIC packets and frames commonly use a variable-length encoding for non-negative integer values. This encoding ensures that smaller integer values need fewer bytes to encode.
The QUIC variable-length integer encoding reserves the two most significant bits of the first byte to encode the base-2 logarithm of the integer encoding length in bytes. The integer value is encoded on the remaining bits, in network byte order.
This means that integers are encoded on 1, 2, 4, or 8 bytes and can encode 6-, 14-, 30-, or 62-bit values, respectively. Table 4 summarizes the encoding properties.
| 2MSB | Length | Usable Bits | Range |
|---|---|---|---|
| 00 | 1 | 6 | 0-63 |
| 01 | 2 | 14 | 0-16383 |
| 10 | 4 | 30 | 0-1073741823 |
| 11 | 8 | 62 | 0-4611686018427387903 |
Examples and a sample decoding algorithm are shown in Appendix A.1.¶
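As an informal companion to the appendix, the encoding in Table 4 can be sketched in a few lines; the function names are illustrative, not part of the specification:

```python
def encode_varint(v):
    """Encode a non-negative integer per Table 4 (illustrative sketch)."""
    for prefix, length in ((0x00, 1), (0x40, 2), (0x80, 4), (0xC0, 8)):
        if v < 1 << (8 * length - 2):            # fits in the usable bits?
            data = v.to_bytes(length, "big")     # network byte order
            return bytes([data[0] | prefix]) + data[1:]
    raise ValueError("value exceeds 2^62 - 1")

def decode_varint(data):
    """Return (value, bytes consumed); the 2 MSBs give log2 of the length."""
    length = 1 << (data[0] >> 6)
    mask = (1 << (8 * length - 2)) - 1           # strip the two length bits
    return int.from_bytes(data[:length], "big") & mask, length
```

For instance, 15293 encodes as the two-byte sequence 0x7bbd, and the eight-byte sequence 0xc2197c5eff14e88c decodes to 151288809941952652.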
Versions (Section 15) and packet numbers sent in the header (Section 17.1) are described using integers, but do not use this encoding.¶
All numeric values are encoded in network byte order (that is, big-endian) and all field sizes are in bits. Hexadecimal notation is used for describing the value of fields.¶
Packet numbers are integers in the range 0 to 2^62-1 (Section 12.3). When present in long or short packet headers, they are encoded in 1 to 4 bytes. The number of bits required to represent the packet number is reduced by including only the least significant bits of the packet number.¶
The encoded packet number is protected as described in Section 5.4 of [QUIC-TLS].¶
Prior to receiving an acknowledgement for a packet number space, the full packet number MUST be included; it is not to be truncated as described below.¶
After an acknowledgement is received for a packet number space, the sender MUST use a packet number size able to represent more than twice as large a range as the difference between the largest acknowledged packet number and the packet number being sent. A peer receiving the packet will then correctly decode the packet number, unless the packet is delayed in transit such that it arrives after many higher-numbered packets have been received. An endpoint SHOULD use a large enough packet number encoding to allow the packet number to be recovered even if the packet arrives after packets that are sent afterwards.¶
As a result, the size of the packet number encoding is at least one bit more than the base-2 logarithm of the number of contiguous unacknowledged packet numbers, including the new packet. Pseudocode and examples for packet number encoding can be found in Appendix A.2.¶
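The sizing rule above can be expressed as a short sketch, mirroring the pseudocode in Appendix A.2 (the function name is illustrative):

```python
def packet_number_length(full_pn, largest_acked):
    """Bytes needed to encode full_pn, per the rule above (sketch)."""
    if largest_acked is None:
        num_unacked = full_pn + 1            # nothing acknowledged yet
    else:
        num_unacked = full_pn - largest_acked
    min_bits = num_unacked.bit_length() + 1  # one bit more than log2
    return (min_bits + 7) // 8               # round up to whole bytes
```

With full_pn = 0xac5c02 and largest_acked = 0xabe8b3, 29519 packets are unacknowledged, requiring 16 bits and therefore a 2-byte encoding.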
At a receiver, protection of the packet number is removed prior to recovering the full packet number. The full packet number is then reconstructed based on the number of significant bits present, the value of those bits, and the largest packet number received in a successfully authenticated packet. Recovering the full packet number is necessary to successfully remove packet protection.¶
Once header protection is removed, the packet number is decoded by finding the packet number value that is closest to the next expected packet. The next expected packet is the highest received packet number plus one. Pseudocode and an example for packet number decoding can be found in Appendix A.3.¶
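The decoding procedure described above can be sketched as follows, consistent with the pseudocode in Appendix A.3 (illustrative, not normative):

```python
def decode_packet_number(largest_pn, truncated_pn, pn_nbits):
    """Recover the full packet number closest to largest_pn + 1 (sketch)."""
    expected_pn = largest_pn + 1
    pn_win = 1 << pn_nbits          # size of the truncated number's window
    pn_hwin = pn_win // 2
    pn_mask = pn_win - 1
    # Candidate: expected_pn with its low bits replaced by truncated_pn.
    candidate_pn = (expected_pn & ~pn_mask) | truncated_pn
    # Nudge the candidate into the window centered on expected_pn,
    # without under/overflowing the 62-bit packet number space.
    if candidate_pn <= expected_pn - pn_hwin and candidate_pn < (1 << 62) - pn_win:
        return candidate_pn + pn_win
    if candidate_pn > expected_pn + pn_hwin and candidate_pn >= pn_win:
        return candidate_pn - pn_win
    return candidate_pn
```

For example, after receiving packet 0xa82f30ea, a 16-bit truncated packet number of 0x9b32 decodes to 0xa82f9b32.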
Long Header Packet {
  Header Form (1) = 1,
  Fixed Bit (1) = 1,
  Long Packet Type (2),
  Type-Specific Bits (4),
  Version (32),
  Destination Connection ID Length (8),
  Destination Connection ID (0..160),
  Source Connection ID Length (8),
  Source Connection ID (0..160),
  Type-Specific Payload (..),
}

Long headers are used for packets that are sent prior to the establishment of 1-RTT keys. Once 1-RTT keys are available, a sender switches to sending packets using the short header (Section 17.3). The long form allows for special packets - such as the Version Negotiation packet - to be represented in this uniform fixed-length packet format. Packets that use the long header contain the following fields:¶
Header Form: The most significant bit (0x80) of byte 0 (the first byte) is set to 1 for long headers.¶
Fixed Bit: The next bit (0x40) of byte 0 is set to 1. Packets containing a zero value for this bit are not valid packets in this version and MUST be discarded.¶
Long Packet Type: The next two bits (those with a mask of 0x30) of byte 0 contain a packet type. Packet types are listed in Table 5.¶
Type-Specific Bits: The lower four bits (those with a mask of 0x0f) of byte 0 are type-specific.¶
Version: The QUIC Version is a 32-bit field that follows the first byte. This field indicates the version of QUIC that is in use and determines how the rest of the protocol fields are interpreted.¶
Destination Connection ID Length: The byte following the version contains the length in bytes of the Destination Connection ID field that follows it. This length is encoded as an 8-bit unsigned integer. In QUIC version 1, this value MUST NOT exceed 20. Endpoints that receive a version 1 long header with a value larger than 20 MUST drop the packet. In order to properly form a Version Negotiation packet, servers SHOULD be able to read longer connection IDs from other QUIC versions.¶
Destination Connection ID: The Destination Connection ID field follows the Destination Connection ID Length field, which indicates the length of this field. Section 7.2 describes the use of this field in more detail.¶
Source Connection ID Length: The byte following the Destination Connection ID contains the length in bytes of the Source Connection ID field that follows it. This length is encoded as an 8-bit unsigned integer. In QUIC version 1, this value MUST NOT exceed 20 bytes. Endpoints that receive a version 1 long header with a value larger than 20 MUST drop the packet. In order to properly form a Version Negotiation packet, servers SHOULD be able to read longer connection IDs from other QUIC versions.¶
Source Connection ID: The Source Connection ID field follows the Source Connection ID Length field, which indicates the length of this field. Section 7.2 describes the use of this field in more detail.¶
Type-Specific Payload: The remainder of the packet, if any, is type-specific.¶
In this version of QUIC, the following packet types with the long header are defined:¶
| Type | Name | Section |
|---|---|---|
| 0x0 | Initial | Section 17.2.2 |
| 0x1 | 0-RTT | Section 17.2.3 |
| 0x2 | Handshake | Section 17.2.4 |
| 0x3 | Retry | Section 17.2.5 |
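As a non-normative illustration, the first byte of a long header can be split into the fields described above, using the type values from Table 5:

```python
# Long packet type values for QUIC version 1 (Table 5).
LONG_PACKET_TYPES = {0x0: "Initial", 0x1: "0-RTT", 0x2: "Handshake", 0x3: "Retry"}

def parse_long_header_byte(b0):
    """Split byte 0 of a long header packet (sketch, version 1 only)."""
    if not b0 & 0x80:
        raise ValueError("Header Form is 0: not a long header")
    if not b0 & 0x40:
        raise ValueError("Fixed Bit is 0: not valid in this version; discard")
    packet_type = (b0 & 0x30) >> 4   # Long Packet Type
    type_specific = b0 & 0x0F        # Type-Specific Bits
    return LONG_PACKET_TYPES[packet_type], type_specific
```

For example, a first byte of 0xc0 parses as an Initial packet with all type-specific bits zero.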
The header form bit, Destination and Source Connection ID lengths, Destination and Source Connection ID fields, and Version fields of a long header packet are version-independent. The other fields in the first byte are version-specific. See [QUIC-INVARIANTS] for details on how packets from different versions of QUIC are interpreted.¶
The interpretation of the fields and the payload are specific to a version and packet type. While type-specific semantics for this version are described in the following sections, several long-header packets in this version of QUIC contain these additional fields:¶
Reserved Bits: Two bits (those with a mask of 0x0c) of byte 0 are reserved across multiple packet types. These bits are protected using header protection; see Section 5.4 of [QUIC-TLS]. The value included prior to protection MUST be set to 0. An endpoint MUST treat receipt of a packet that has a non-zero value for these bits after removing both packet and header protection as a connection error of type PROTOCOL_VIOLATION. Discarding such a packet after only removing header protection can expose the endpoint to attacks; see Section 9.5 of [QUIC-TLS].¶
Packet Number Length: In packet types that contain a Packet Number field, the least significant two bits (those with a mask of 0x03) of byte 0 contain the length of the packet number, encoded as an unsigned, two-bit integer that is one less than the length of the Packet Number field in bytes. That is, the length of the Packet Number field is the value of this field plus one. These bits are protected using header protection; see Section 5.4 of [QUIC-TLS].¶
Length: The length of the remainder of the packet (that is, the Packet Number and Payload fields) in bytes, encoded as a variable-length integer (Section 16).¶
Packet Number: The Packet Number field is 1 to 4 bytes long. The packet number is protected using header protection; see Section 5.4 of [QUIC-TLS]. The length of the Packet Number field is encoded in the Packet Number Length bits of byte 0; see above.¶
A Version Negotiation packet is inherently not version-specific. Upon receipt by a client, it will be identified as a Version Negotiation packet based on the Version field having a value of 0.¶
The Version Negotiation packet is a response to a client packet that contains a version that is not supported by the server, and is only sent by servers.¶
The layout of a Version Negotiation packet is:¶
Version Negotiation Packet {
  Header Form (1) = 1,
  Unused (7),
  Version (32) = 0,
  Destination Connection ID Length (8),
  Destination Connection ID (0..2040),
  Source Connection ID Length (8),
  Source Connection ID (0..2040),
  Supported Version (32) ...,
}

The value in the Unused field is selected randomly by the server. Clients MUST ignore the value of this field. Servers SHOULD set the most significant bit of this field (0x40) to 1 so that Version Negotiation packets appear to have the Fixed Bit field.¶
The Version field of a Version Negotiation packet MUST be set to 0x00000000.¶
The server MUST include the value from the Source Connection ID field of the packet it receives in the Destination Connection ID field. The value for Source Connection ID MUST be copied from the Destination Connection ID of the received packet, which is initially randomly selected by a client. Echoing both connection IDs gives clients some assurance that the server received the packet and that the Version Negotiation packet was not generated by an off-path attacker.¶
Future versions of QUIC could have different requirements for the lengths of connection IDs. In particular, connection IDs might have a smaller minimum length or a greater maximum length. Version-specific rules for the connection ID therefore MUST NOT influence a server's decision about whether to send a Version Negotiation packet.¶
The remainder of the Version Negotiation packet is a list of 32-bit versions that the server supports.¶
A Version Negotiation packet is not acknowledged. It is only sent in response to a packet that indicates an unsupported version; see Section 5.2.2.¶
The Version Negotiation packet does not include the Packet Number and Length fields present in other packets that use the long header form. Consequently, a Version Negotiation packet consumes an entire UDP datagram.¶
A server MUST NOT send more than one Version Negotiation packet in response to a single UDP datagram.¶
See Section 6 for a description of the version negotiation process.¶
An Initial packet uses long headers with a type value of 0x0. It carries the first CRYPTO frames sent by the client and server to perform key exchange, and carries ACKs in either direction.¶
Initial Packet {
  Header Form (1) = 1,
  Fixed Bit (1) = 1,
  Long Packet Type (2) = 0,
  Reserved Bits (2),
  Packet Number Length (2),
  Version (32),
  Destination Connection ID Length (8),
  Destination Connection ID (0..160),
  Source Connection ID Length (8),
  Source Connection ID (0..160),
  Token Length (i),
  Token (..),
  Length (i),
  Packet Number (8..32),
  Packet Payload (8..),
}

The Initial packet contains a long header as well as the Length and Packet Number fields; see Section 17.2. The first byte contains the Reserved and Packet Number Length bits; see also Section 17.2. Between the Source Connection ID and Length fields, there are two additional fields specific to the Initial packet.¶
Token Length: A variable-length integer specifying the length of the Token field, in bytes. This value is zero if no token is present. Initial packets sent by the server MUST set the Token Length field to zero; clients that receive an Initial packet with a non-zero Token Length field MUST either discard the packet or generate a connection error of type PROTOCOL_VIOLATION.¶
Token: The value of the token that was previously provided in a Retry packet or NEW_TOKEN frame; see Section 8.1.¶
Packet Payload: The payload of the packet.¶
In order to prevent tampering by version-unaware middleboxes, Initial packets are protected with connection- and version-specific keys (Initial keys) as described in [QUIC-TLS]. This protection does not provide confidentiality or integrity against on-path attackers, but provides some level of protection against off-path attackers.¶
The client and server use the Initial packet type for any packet that contains an initial cryptographic handshake message. This includes all cases where a new packet containing the initial cryptographic message needs to be created, such as the packets sent after receiving a Retry packet (Section 17.2.5).¶
A server sends its first Initial packet in response to a client Initial. A server MAY send multiple Initial packets. The cryptographic key exchange could require multiple round trips or retransmissions of this data.¶
The payload of an Initial packet includes a CRYPTO frame (or frames) containing a cryptographic handshake message, ACK frames, or both. PING, PADDING, and CONNECTION_CLOSE frames of type 0x1c are also permitted. An endpoint that receives an Initial packet containing other frames can either discard the packet as spurious or treat it as a connection error.¶
The first packet sent by a client always includes a CRYPTO frame that contains the start or all of the first cryptographic handshake message. The first CRYPTO frame sent always begins at an offset of 0; see Section 7.¶
Note that if the server sends a HelloRetryRequest, the client will send another series of Initial packets. These Initial packets will continue the cryptographic handshake and will contain CRYPTO frames starting at an offset matching the size of the CRYPTO frames sent in the first flight of Initial packets.¶
A client stops both sending and processing Initial packets when it sends its first Handshake packet. A server stops sending and processing Initial packets when it receives its first Handshake packet. Though packets might still be in flight or awaiting acknowledgment, no further Initial packets need to be exchanged beyond this point. Initial packet protection keys are discarded (see Section 4.9.1 of [QUIC-TLS]) along with any loss recovery and congestion control state; see Section 6.4 of [QUIC-RECOVERY].¶
Any data in CRYPTO frames is discarded - and no longer retransmitted - when Initial keys are discarded.¶
A 0-RTT packet uses long headers with a type value of 0x1, followed by the Length and Packet Number fields; see Section 17.2. The first byte contains the Reserved and Packet Number Length bits; see Section 17.2. A 0-RTT packet is used to carry "early" data from the client to the server as part of the first flight, prior to handshake completion. As part of the TLS handshake, the server can accept or reject this early data.¶
See Section 2.3 of [TLS13] for a discussion of 0-RTT data and its limitations.¶
0-RTT Packet {
  Header Form (1) = 1,
  Fixed Bit (1) = 1,
  Long Packet Type (2) = 1,
  Reserved Bits (2),
  Packet Number Length (2),
  Version (32),
  Destination Connection ID Length (8),
  Destination Connection ID (0..160),
  Source Connection ID Length (8),
  Source Connection ID (0..160),
  Length (i),
  Packet Number (8..32),
  Packet Payload (8..),
}

Packet numbers for 0-RTT protected packets use the same space as 1-RTT protected packets.¶
After a client receives a Retry packet, 0-RTT packets are likely to have been lost or discarded by the server. A client SHOULD attempt to resend data in 0-RTT packets after it sends a new Initial packet. New packet numbers MUST be used for any new packets that are sent; as described in Section 17.2.5.3, reusing packet numbers could compromise packet protection.¶
A client only receives acknowledgments for its 0-RTT packets once the handshake is complete, as defined in Section 4.1.1 of [QUIC-TLS].¶
A client MUST NOT send 0-RTT packets once it starts processing 1-RTT packets from the server. This means that 0-RTT packets cannot contain any response to frames from 1-RTT packets. For instance, a client cannot send an ACK frame in a 0-RTT packet, because that can only acknowledge a 1-RTT packet. An acknowledgment for a 1-RTT packet MUST be carried in a 1-RTT packet.¶
A server SHOULD treat a violation of remembered limits (Section 7.4.1) as a connection error of an appropriate type (for instance, a FLOW_CONTROL_ERROR for exceeding stream data limits).¶
A Handshake packet uses long headers with a type value of 0x2, followed by the Length and Packet Number fields; see Section 17.2. The first byte contains the Reserved and Packet Number Length bits; see Section 17.2. It is used to carry cryptographic handshake messages and acknowledgments from the server and client.¶
Handshake Packet {
  Header Form (1) = 1,
  Fixed Bit (1) = 1,
  Long Packet Type (2) = 2,
  Reserved Bits (2),
  Packet Number Length (2),
  Version (32),
  Destination Connection ID Length (8),
  Destination Connection ID (0..160),
  Source Connection ID Length (8),
  Source Connection ID (0..160),
  Length (i),
  Packet Number (8..32),
  Packet Payload (8..),
}

Once a client has received a Handshake packet from a server, it uses Handshake packets to send subsequent cryptographic handshake messages and acknowledgments to the server.¶
The Destination Connection ID field in a Handshake packet contains a connection ID that is chosen by the recipient of the packet; the Source Connection ID includes the connection ID that the sender of the packet wishes to use; see Section 7.2.¶
Handshake packets have their own packet number space, and thus the first Handshake packet sent by a server contains a packet number of 0.¶
The payload of this packet contains CRYPTO frames and could contain PING, PADDING, or ACK frames. Handshake packets MAY contain CONNECTION_CLOSE frames of type 0x1c. Endpoints MUST treat receipt of Handshake packets with other frames as a connection error of type PROTOCOL_VIOLATION.¶
Like Initial packets (see Section 17.2.2.1), data in CRYPTO frames for Handshake packets is discarded - and no longer retransmitted - when Handshake protection keys are discarded.¶
A Retry packet uses a long packet header with a type value of 0x3. It carries an address validation token created by the server. It is used by a server that wishes to perform a retry; see Section 8.1.¶
Retry Packet {
  Header Form (1) = 1,
  Fixed Bit (1) = 1,
  Long Packet Type (2) = 3,
  Unused (4),
  Version (32),
  Destination Connection ID Length (8),
  Destination Connection ID (0..160),
  Source Connection ID Length (8),
  Source Connection ID (0..160),
  Retry Token (..),
  Retry Integrity Tag (128),
}

A Retry packet (shown in Figure 18) does not contain any protected fields. The value in the Unused field is set to an arbitrary value by the server; a client MUST ignore these bits. In addition to the fields from the long header, it contains these additional fields:¶
Retry Token: An opaque token that the server can use to validate the client's address.¶
The server populates the Destination Connection ID with the connection ID that the client included in the Source Connection ID of the Initial packet.¶
The server includes a connection ID of its choice in the Source Connection ID field. This value MUST NOT be equal to the Destination Connection ID field of the packet sent by the client. A client MUST discard a Retry packet that contains a Source Connection ID field that is identical to the Destination Connection ID field of its Initial packet. The client MUST use the value from the Source Connection ID field of the Retry packet in the Destination Connection ID field of subsequent packets that it sends.¶
A server MAY send Retry packets in response to Initial and 0-RTT packets. A server can either discard or buffer 0-RTT packets that it receives. A server can send multiple Retry packets as it receives Initial or 0-RTT packets. A server MUST NOT send more than one Retry packet in response to a single UDP datagram.¶
A client MUST accept and process at most one Retry packet for each connection attempt. After the client has received and processed an Initial or Retry packet from the server, it MUST discard any subsequent Retry packets that it receives.¶
Clients MUST discard Retry packets that have a Retry Integrity Tag that cannot be validated; see the Retry Packet Integrity section of [QUIC-TLS]. This diminishes an off-path attacker's ability to inject a Retry packet and protects against accidental corruption of Retry packets. A client MUST discard a Retry packet with a zero-length Retry Token field.¶
The client responds to a Retry packet with an Initial packet that includes the provided Retry Token to continue connection establishment.¶
A client sets the Destination Connection ID field of this Initial packet to the value from the Source Connection ID in the Retry packet. Changing the Destination Connection ID also results in a change to the keys used to protect the Initial packet. It also sets the Token field to the token provided in the Retry. The client MUST NOT change the Source Connection ID because the server could include the connection ID as part of its token validation logic; see Section 8.1.4.¶
A Retry packet does not include a packet number and cannot be explicitly acknowledged by a client.¶
Subsequent Initial packets from the client include the connection ID and token values from the Retry packet. The client copies the Source Connection ID field from the Retry packet to the Destination Connection ID field and uses this value until an Initial packet with an updated value is received; see Section 7.2. The value of the Token field is copied to all subsequent Initial packets; see Section 8.1.2.¶
Other than updating the Destination Connection ID and Token fields, the Initial packet sent by the client is subject to the same restrictions as the first Initial packet. A client MUST use the same cryptographic handshake message it included in this packet. A server MAY treat a packet that contains a different cryptographic handshake message as a connection error or discard it.¶
A client MAY attempt 0-RTT after receiving a Retry packet by sending 0-RTT packets to the connection ID provided by the server. A client MUST NOT change the cryptographic handshake message it sends in response to receiving a Retry.¶
A client MUST NOT reset the packet number for any packet number space after processing a Retry packet. In particular, 0-RTT packets contain confidential information that will most likely be retransmitted on receiving a Retry packet. The keys used to protect these new 0-RTT packets will not change as a result of responding to a Retry packet. However, the data sent in these packets could be different than what was sent earlier. Sending these new packets with the same packet number is likely to compromise the packet protection for those packets because the same key and nonce could be used to protect different content. A server MAY abort the connection if it detects that the client reset the packet number.¶
The connection IDs used on Initial and Retry packets exchanged between client and server are copied to the transport parameters and validated as described in Section 7.3.¶
This version of QUIC defines a single packet type that uses the short packet header.¶
A 1-RTT packet uses a short packet header. It is used after the version and 1-RTT keys are negotiated.¶
1-RTT Packet {
  Header Form (1) = 0,
  Fixed Bit (1) = 1,
  Spin Bit (1),
  Reserved Bits (2),
  Key Phase (1),
  Packet Number Length (2),
  Destination Connection ID (0..160),
  Packet Number (8..32),
  Packet Payload (8..),
}

1-RTT packets contain the following fields:¶
Header Form: The most significant bit (0x80) of byte 0 is set to 0 for the short header.¶
Fixed Bit: The next bit (0x40) of byte 0 is set to 1. Packets containing a zero value for this bit are not valid packets in this version and MUST be discarded.¶
Spin Bit: The third most significant bit (0x20) of byte 0 is the latency spin bit, set as described in Section 17.4.¶
Reserved Bits: The next two bits (those with a mask of 0x18) of byte 0 are reserved. These bits are protected using header protection; see Section 5.4 of [QUIC-TLS]. The value included prior to protection MUST be set to 0. An endpoint MUST treat receipt of a packet that has a non-zero value for these bits, after removing both packet and header protection, as a connection error of type PROTOCOL_VIOLATION. Discarding such a packet after only removing header protection can expose the endpoint to attacks; see Section 9.5 of [QUIC-TLS].¶
Key Phase: The next bit (0x04) of byte 0 indicates the key phase, which allows a recipient of a packet to identify the packet protection keys that are used to protect the packet. See [QUIC-TLS] for details. This bit is protected using header protection; see Section 5.4 of [QUIC-TLS].¶
Packet Number Length: The least significant two bits (those with a mask of 0x03) of byte 0 contain the length of the packet number, encoded as an unsigned, two-bit integer that is one less than the length of the Packet Number field in bytes. That is, the length of the Packet Number field is the value of this field plus one. These bits are protected using header protection; see Section 5.4 of [QUIC-TLS].¶
Destination Connection ID: The Destination Connection ID is a connection ID that is chosen by the intended recipient of the packet. See Section 5.1 for more details.¶
Packet Number: The Packet Number field is 1 to 4 bytes long. The packet number has confidentiality protection separate from packet protection, as described in Section 5.4 of [QUIC-TLS]. The length of the Packet Number field is encoded in the Packet Number Length field. See Section 17.1 for details.¶
Packet Payload: 1-RTT packets always include a 1-RTT protected payload.¶
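As a non-normative illustration, the bit fields of byte 0 described above can be extracted as follows; the spin, reserved, key phase, and packet number length bits are only meaningful once header protection is removed:

```python
def parse_short_header_byte(b0):
    """Split byte 0 of a 1-RTT packet into its bit fields (sketch)."""
    if b0 & 0x80:
        raise ValueError("Header Form is 1: not a short header")
    if not b0 & 0x40:
        raise ValueError("Fixed Bit is 0: not valid in this version; discard")
    return {
        "spin": (b0 & 0x20) >> 5,
        "reserved": (b0 & 0x18) >> 3,  # MUST be 0 once unprotected
        "key_phase": (b0 & 0x04) >> 2,
        "pn_length": (b0 & 0x03) + 1,  # encoded as length minus one
    }
```

For example, a first byte of 0x43 indicates a 4-byte Packet Number field with spin, reserved, and key phase bits all zero.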
The header form bit and the connection ID field of a short header packet are version-independent. The remaining fields are specific to the selected QUIC version. See [QUIC-INVARIANTS] for details on how packets from different versions of QUIC are interpreted.¶
The latency spin bit, which is defined for 1-RTT packets (Section 17.3.1), enables passive latency monitoring from observation points on the network path throughout the duration of a connection. The server reflects the spin value received, while the client 'spins' it after one RTT. On-path observers can measure the time between two spin bit toggle events to estimate the end-to-end RTT of a connection.¶
The spin bit is only present in 1-RTT packets, since it is possible to measure the initial RTT of a connection by observing the handshake. Therefore, the spin bit is available after version negotiation and connection establishment are completed. On-path measurement and use of the latency spin bit is further discussed in [QUIC-MANAGEABILITY].¶
The spin bit is an OPTIONAL feature of this version of QUIC. A QUIC stack that chooses to support the spin bit MUST implement it as specified in this section.¶
Each endpoint unilaterally decides if the spin bit is enabled or disabled for a connection. Implementations MUST allow administrators of clients and servers to disable the spin bit either globally or on a per-connection basis. Even when the spin bit is not disabled by the administrator, endpoints MUST disable their use of the spin bit for a random selection of at least one in every 16 network paths, or for one in every 16 connection IDs. As each endpoint disables the spin bit independently, this ensures that the spin bit signal is disabled on approximately one in eight network paths.¶
When the spin bit is disabled, endpoints MAY set the spin bit to any value, and MUST ignore any incoming value. It is RECOMMENDED that endpoints set the spin bit to a random value either chosen independently for each packet or chosen independently for each connection ID.¶
If the spin bit is enabled for the connection, the endpoint maintains a spin value for each network path and sets the spin bit in the packet header to the currently stored value when a 1-RTT packet is sent on that path. The spin value is initialized to 0 in the endpoint for each network path. Each endpoint also remembers the highest packet number seen from its peer on each path.¶
When a server receives a 1-RTT packet that increases the highest packet number seen by the server from the client on a given network path, it sets the spin value for that path to be equal to the spin bit in the received packet.¶
When a client receives a 1-RTT packet that increases the highest packet number seen by the client from the server on a given network path, it sets the spin value for that path to the inverse of the spin bit in the received packet.¶
An endpoint resets the spin value for a network path to zero when changing the connection ID being used on that network path.¶
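The per-path spin value maintenance described above can be sketched as follows; this is illustrative only and assumes the spin bit is enabled on the path in question:

```python
class SpinState:
    """Spin value for one network path (non-normative sketch)."""

    def __init__(self, is_server):
        self.is_server = is_server
        self.spin_value = 0    # initialized to 0 for each network path
        self.highest_pn = -1   # highest packet number seen from the peer

    def on_1rtt_received(self, packet_number, spin_bit):
        # Only a packet that increases the highest seen packet number
        # updates the stored spin value.
        if packet_number > self.highest_pn:
            self.highest_pn = packet_number
            # The server reflects the received value;
            # the client stores its inverse ('spinning' it after one RTT).
            self.spin_value = spin_bit if self.is_server else 1 - spin_bit

    def outgoing_spin_bit(self):
        # Value placed in the Spin Bit of 1-RTT packets sent on this path.
        return self.spin_value
```

Resetting the state (e.g. on a connection ID change for the path) corresponds to constructing a fresh SpinState.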
The extension_data field of the quic_transport_parameters extension defined in [QUIC-TLS] contains the QUIC transport parameters. They are encoded as a sequence of transport parameters, as shown in Figure 20:¶
Transport Parameters {
  Transport Parameter (..) ...,
}

Each transport parameter is encoded as an (identifier, length, value) tuple, as shown in Figure 21:¶
Transport Parameter {
  Transport Parameter ID (i),
  Transport Parameter Length (i),
  Transport Parameter Value (..),
}

The Transport Parameter Length field contains the length of the Transport Parameter Value field in bytes.¶
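As a non-normative sketch, a transport parameter can be serialized as the (identifier, length, value) tuple of Figure 21, with the identifier and length framed using the variable-length integer encoding of Section 16 (`encode_varint` is an illustrative helper):

```python
def encode_varint(v):
    """Variable-length integer encoding per Section 16 (sketch)."""
    for prefix, length in ((0x00, 1), (0x40, 2), (0x80, 4), (0xC0, 8)):
        if v < 1 << (8 * length - 2):
            data = v.to_bytes(length, "big")
            return bytes([data[0] | prefix]) + data[1:]
    raise ValueError("value exceeds 2^62 - 1")

def encode_transport_parameter(param_id, value):
    """(identifier, length, value) tuple as shown in Figure 21 (sketch)."""
    return encode_varint(param_id) + encode_varint(len(value)) + value
```

Integer-valued parameters put a varint in the value as well; for example, a max_idle_timeout of 30000 ms would carry the four-byte varint 0x80007530 as its value.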
QUIC encodes transport parameters into a sequence of bytes, which is then included in the cryptographic handshake.¶
Transport parameters with an identifier of the form 31 * N + 27 for integer values of N are reserved to exercise the requirement that unknown transport parameters be ignored. These transport parameters have no semantics, and can carry arbitrary values.¶
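A check for these reserved identifiers is trivial (illustrative, not normative):

```python
def is_reserved_transport_parameter(tp_id):
    # Identifiers of the form 31 * N + 27 are reserved greasing values.
    return tp_id % 31 == 27
```

For example, 27 and 58 (31 + 27) are reserved, while 0x01 (max_idle_timeout) is not.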
This section details the transport parameters defined in this document.¶
Many transport parameters listed here have integer values. Those transport parameters that are identified as integers use a variable-length integer encoding; see Section 16. Transport parameters have a default value of 0 if the transport parameter is absent, unless otherwise stated.¶
The following transport parameters are defined:¶
original_destination_connection_id (0x00): The value of the Destination Connection ID field from the first Initial packet sent by the client; see Section 7.3. This transport parameter is only sent by a server.¶
max_idle_timeout (0x01): The maximum idle timeout is a value in milliseconds that is encoded as an integer; see Section 10.1. Idle timeout is disabled when both endpoints omit this transport parameter or specify a value of 0.¶
stateless_reset_token (0x02): A stateless reset token is used in verifying a stateless reset; see Section 10.3. This parameter is a sequence of 16 bytes. This transport parameter MUST NOT be sent by a client, but MAY be sent by a server. A server that does not send this transport parameter cannot use stateless reset (Section 10.3) for the connection ID negotiated during the handshake.¶
max_udp_payload_size (0x03): The maximum UDP payload size parameter is an integer value that limits the size of UDP payloads that the endpoint is willing to receive. UDP datagrams with payloads larger than this limit are not likely to be processed by the receiver.¶
The default for this parameter is the maximum permitted UDP payload of 65527. Values below 1200 are invalid.¶
This limit does act as an additional constraint on datagram size in the same way as the path MTU, but it is a property of the endpoint and not the path; see Section 14. It is expected that this is the space an endpoint dedicates to holding incoming packets.¶
The initial maximum data parameter is an integer value that contains theinitial value for the maximum amount of data that can be sent on theconnection. This is equivalent to sending a MAX_DATA (Section 19.9) forthe connection immediately after completing the handshake.¶
This parameter is an integer value specifying the initial flow control limitfor locally-initiated bidirectional streams. This limit applies to newlycreated bidirectional streams opened by the endpoint that sends the transportparameter. In client transport parameters, this applies to streams with anidentifier with the least significant two bits set to 0x0; in server transportparameters, this applies to streams with the least significant two bits set to0x1.¶
This parameter is an integer value specifying the initial flow control limitfor peer-initiated bidirectional streams. This limit applies to newly createdbidirectional streams opened by the endpoint that receives the transportparameter. In client transport parameters, this applies to streams with anidentifier with the least significant two bits set to 0x1; in server transportparameters, this applies to streams with the least significant two bits set to0x0.¶
This parameter is an integer value specifying the initial flow control limitfor unidirectional streams. This limit applies to newly createdunidirectional streams opened by the endpoint that receives the transportparameter. In client transport parameters, this applies to streams with anidentifier with the least significant two bits set to 0x3; in server transportparameters, this applies to streams with the least significant two bits set to0x2.¶
The initial maximum bidirectional streams parameter is an integer value that contains the initial maximum number of bidirectional streams the peer is permitted to initiate. If this parameter is absent or zero, the peer cannot open bidirectional streams until a MAX_STREAMS frame is sent. Setting this parameter is equivalent to sending a MAX_STREAMS (Section 19.11) of the corresponding type with the same value.¶
The initial maximum unidirectional streams parameter is an integer value that contains the initial maximum number of unidirectional streams the peer is permitted to initiate. If this parameter is absent or zero, the peer cannot open unidirectional streams until a MAX_STREAMS frame is sent. Setting this parameter is equivalent to sending a MAX_STREAMS (Section 19.11) of the corresponding type with the same value.¶
The acknowledgement delay exponent is an integer value indicating an exponent used to decode the ACK Delay field in the ACK frame (Section 19.3). If this value is absent, a default value of 3 is assumed (indicating a multiplier of 8). Values above 20 are invalid.¶
The maximum acknowledgement delay is an integer value indicating the maximum amount of time in milliseconds by which the endpoint will delay sending acknowledgments. This value SHOULD include the receiver's expected delays in alarms firing. For example, if a receiver sets a timer for 5ms and alarms commonly fire up to 1ms late, then it should send a max_ack_delay of 6ms. If this value is absent, a default of 25 milliseconds is assumed. Values of 2^14 or greater are invalid.¶
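The interplay of these two parameters can be sketched as follows (illustrative Python; the function names are hypothetical). The ACK Delay field carries the delay in microseconds right-shifted by ack_delay_exponent, so decoding is a left shift:

```python
# Illustrative sketch of ACK Delay encoding/decoding. The field value is
# multiplied by 2^ack_delay_exponent (default 3, i.e. a multiplier of 8)
# to recover the delay in microseconds; encoding truncates, which is the
# "lower resolution" trade-off the encoding accepts.
def encode_ack_delay(delay_us: int, ack_delay_exponent: int = 3) -> int:
    return delay_us >> ack_delay_exponent

def decode_ack_delay(field_value: int, ack_delay_exponent: int = 3) -> int:
    return field_value << ack_delay_exponent
```

With the default exponent, a receiver that delayed an acknowledgment by 25 ms (the default max_ack_delay) would encode 25000 microseconds as the field value 3125.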
The disable active migration transport parameter is included if the endpoint does not support active connection migration (Section 9) on the address being used during the handshake. When a peer sets this transport parameter, an endpoint MUST NOT use a new local address when sending to the address that the peer used during the handshake. This transport parameter does not prohibit connection migration after a client has acted on a preferred_address transport parameter. This parameter is a zero-length value.¶
The server's preferred address is used to effect a change in server address at the end of the handshake, as described in Section 9.6. This transport parameter is only sent by a server. Servers MAY choose to only send a preferred address of one address family by sending an all-zero address and port (0.0.0.0:0 or [::]:0) for the other family. IP addresses are encoded in network byte order.¶
The preferred_address transport parameter contains an address and port for both IP version 4 and 6. The four-byte IPv4 Address field is followed by the associated two-byte IPv4 Port field. This is followed by a 16-byte IPv6 Address field and two-byte IPv6 Port field. After address and port pairs, a Connection ID Length field describes the length of the following Connection ID field. Finally, a 16-byte Stateless Reset Token field includes the stateless reset token associated with the connection ID. The format of this transport parameter is shown in Figure 22.¶
The Connection ID field and the Stateless Reset Token field contain an alternative connection ID that has a sequence number of 1; see Section 5.1.1. Having these values sent alongside the preferred address ensures that there will be at least one unused active connection ID when the client initiates migration to the preferred address.¶
The Connection ID and Stateless Reset Token fields of a preferred address are identical in syntax and semantics to the corresponding fields of a NEW_CONNECTION_ID frame (Section 19.15). A server that chooses a zero-length connection ID MUST NOT provide a preferred address. Similarly, a server MUST NOT include a zero-length connection ID in this transport parameter. A client MUST treat violation of these requirements as a connection error of type TRANSPORT_PARAMETER_ERROR.¶
Preferred Address {
  IPv4 Address (32),
  IPv4 Port (16),
  IPv6 Address (128),
  IPv6 Port (16),
  Connection ID Length (8),
  Connection ID (..),
  Stateless Reset Token (128),
}¶
The active connection ID limit is an integer value specifying the maximum number of connection IDs from the peer that an endpoint is willing to store. This value includes the connection ID received during the handshake, that received in the preferred_address transport parameter, and those received in NEW_CONNECTION_ID frames. The value of the active_connection_id_limit parameter MUST be at least 2. An endpoint that receives a value less than 2 MUST close the connection with an error of type TRANSPORT_PARAMETER_ERROR. If this transport parameter is absent, a default of 2 is assumed. If an endpoint issues a zero-length connection ID, it will never send a NEW_CONNECTION_ID frame and therefore ignores the active_connection_id_limit value received from its peer.¶
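A parser for this layout might look like the following sketch (illustrative Python using only the standard library; the function name, return shape, and error handling are assumptions, not part of the specification):

```python
import ipaddress
import struct

def parse_preferred_address(buf: bytes):
    # Fixed-size fields per the Preferred Address layout: IPv4 address and
    # port, then IPv6 address and port, all in network byte order.
    ipv4 = ipaddress.IPv4Address(buf[0:4])
    (ipv4_port,) = struct.unpack("!H", buf[4:6])
    ipv6 = ipaddress.IPv6Address(buf[6:22])
    (ipv6_port,) = struct.unpack("!H", buf[22:24])
    # Variable-length connection ID preceded by a one-byte length.
    cid_len = buf[24]
    if cid_len == 0:
        # A zero-length connection ID is not allowed in this parameter.
        raise ValueError("TRANSPORT_PARAMETER_ERROR")
    cid = buf[25:25 + cid_len]
    token = buf[25 + cid_len:25 + cid_len + 16]  # 16-byte stateless reset token
    return ipv4, ipv4_port, ipv6, ipv6_port, cid, token
```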
The value that the endpoint included in the Source Connection ID field of the first Initial packet it sends for the connection; see Section 7.3.¶
The value that the server included in the Source Connection ID field of a Retry packet; see Section 7.3. This transport parameter is only sent by a server.¶
If present, transport parameters that set initial flow control limits (initial_max_stream_data_bidi_local, initial_max_stream_data_bidi_remote, and initial_max_stream_data_uni) are equivalent to sending a MAX_STREAM_DATA frame (Section 19.10) on every stream of the corresponding type immediately after opening. If the transport parameter is absent, streams of that type start with a flow control limit of 0.¶
A client MUST NOT include any server-only transport parameter: original_destination_connection_id, preferred_address, retry_source_connection_id, or stateless_reset_token. A server MUST treat receipt of any of these transport parameters as a connection error of type TRANSPORT_PARAMETER_ERROR.¶
As described in Section 12.4, packets contain one or more frames. This section describes the format and semantics of the core QUIC frame types.¶
A PADDING frame (type=0x00) has no semantic value. PADDING frames can be used to increase the size of a packet. Padding can be used to increase an initial client packet to the minimum required size, or to provide protection against traffic analysis for protected packets.¶
PADDING frames are formatted as shown in Figure 23, which shows that PADDING frames have no content. That is, a PADDING frame consists of the single byte that identifies the frame as a PADDING frame.¶
PADDING Frame {
  Type (i) = 0x00,
}¶
Endpoints can use PING frames (type=0x01) to verify that their peers are still alive or to check reachability to the peer.¶
PING frames are formatted as shown in Figure 24, which shows that PING frames have no content.¶
PING Frame {
  Type (i) = 0x01,
}¶
The receiver of a PING frame simply needs to acknowledge the packet containing this frame.¶
The PING frame can be used to keep a connection alive when an application or application protocol wishes to prevent the connection from timing out; see Section 10.1.2.¶
Receivers send ACK frames (types 0x02 and 0x03) to inform senders of packets they have received and processed. The ACK frame contains one or more ACK Ranges. ACK Ranges identify acknowledged packets. If the frame type is 0x03, ACK frames also contain the sum of QUIC packets with associated ECN marks received on the connection up until this point. QUIC implementations MUST properly handle both types and, if they have enabled ECN for packets they send, they SHOULD use the information in the ECN section to manage their congestion state.¶
QUIC acknowledgements are irrevocable. Once acknowledged, a packet remains acknowledged, even if it does not appear in a future ACK frame. This is unlike reneging for TCP SACKs ([RFC2018]).¶
Packets from different packet number spaces can be identified using the same numeric value. An acknowledgment for a packet needs to indicate both a packet number and a packet number space. This is accomplished by having each ACK frame only acknowledge packet numbers in the same space as the packet in which the ACK frame is contained.¶
Version Negotiation and Retry packets cannot be acknowledged because they do not contain a packet number. Rather than relying on ACK frames, these packets are implicitly acknowledged by the next Initial packet sent by the client.¶
ACK frames are formatted as shown in Figure 25.¶
ACK Frame {
  Type (i) = 0x02..0x03,
  Largest Acknowledged (i),
  ACK Delay (i),
  ACK Range Count (i),
  First ACK Range (i),
  ACK Range (..) ...,
  [ECN Counts (..)],
}¶
ACK frames contain the following fields:¶
A variable-length integer representing the largest packet number the peer is acknowledging; this is usually the largest packet number that the peer has received prior to generating the ACK frame. Unlike the packet number in the QUIC long or short header, the value in an ACK frame is not truncated.¶
A variable-length integer encoding the acknowledgement delay in microseconds; see Section 13.2.5. It is decoded by multiplying the value in the field by 2 to the power of the ack_delay_exponent transport parameter sent by the sender of the ACK frame; see Section 18.2. Compared to simply expressing the delay as an integer, this encoding allows for a larger range of values within the same number of bytes, at the cost of lower resolution.¶
A variable-length integer specifying the number of Gap and ACK Range fields in the frame.¶
A variable-length integer indicating the number of contiguous packets preceding the Largest Acknowledged that are being acknowledged. The First ACK Range is encoded as an ACK Range (see Section 19.3.1) starting from the Largest Acknowledged. That is, the smallest packet acknowledged in the range is determined by subtracting the First ACK Range value from the Largest Acknowledged.¶
Contains additional ranges of packets that are alternately not acknowledged (Gap) and acknowledged (ACK Range); see Section 19.3.1.¶
The three ECN Counts; see Section 19.3.2.¶
Each ACK Range consists of alternating Gap and ACK Range values in descending packet number order. ACK Ranges can be repeated. The number of Gap and ACK Range values is determined by the ACK Range Count field; one of each value is present for each value in the ACK Range Count field.¶
ACK Ranges are structured as shown in Figure 26.¶
ACK Range {
  Gap (i),
  ACK Range Length (i),
}¶
The fields that form each ACK Range are:¶
A variable-length integer indicating the number of contiguous unacknowledged packets preceding the packet number one lower than the smallest in the preceding ACK Range.¶
A variable-length integer indicating the number of contiguous acknowledged packets preceding the largest packet number, as determined by the preceding Gap.¶
Gap and ACK Range values use a relative integer encoding for efficiency. Though each encoded value is positive, the values are subtracted, so that each ACK Range describes progressively lower-numbered packets.¶
Each ACK Range acknowledges a contiguous range of packets by indicating the number of acknowledged packets that precede the largest packet number in that range. A value of zero indicates that only the largest packet number is acknowledged. Larger ACK Range values indicate a larger range, with corresponding lower values for the smallest packet number in the range. Thus, given a largest packet number for the range, the smallest value is determined by the formula:¶
smallest = largest - ack_range¶
An ACK Range acknowledges all packets between the smallest packet number and the largest, inclusive.¶
The largest value for an ACK Range is determined by cumulatively subtracting the size of all preceding ACK Ranges and Gaps.¶
Each Gap indicates a range of packets that are not being acknowledged. The number of packets in the gap is one higher than the encoded value of the Gap field.¶
The value of the Gap field establishes the largest packet number value for the subsequent ACK Range using the following formula:¶
largest = previous_smallest - gap - 2¶
If any computed packet number is negative, an endpoint MUST generate a connection error of type FRAME_ENCODING_ERROR.¶
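The two formulas can be combined into a small decoding routine (an illustrative Python sketch; the function name and the representation of the frame's Gap/ACK Range Length pairs as a list of tuples are assumptions):

```python
def decode_ack_ranges(largest_ack, first_ack_range, gaps_and_lengths):
    """Expand an ACK frame into (smallest, largest) pairs of acknowledged
    packet numbers, highest range first. gaps_and_lengths holds the frame's
    (Gap, ACK Range Length) values in order of appearance."""
    largest = largest_ack
    smallest = largest - first_ack_range          # smallest = largest - ack_range
    ranges = [(smallest, largest)]
    for gap, length in gaps_and_lengths:
        largest = smallest - gap - 2              # largest = previous_smallest - gap - 2
        smallest = largest - length
        ranges.append((smallest, largest))
    if any(s < 0 for s, _ in ranges):
        raise ValueError("FRAME_ENCODING_ERROR")  # negative packet number
    return ranges
```

For example, a Largest Acknowledged of 10 with a First ACK Range of 2 and one (Gap=0, Length=1) pair acknowledges packets 8-10 and 5-6, leaving packet 7 unacknowledged (a Gap value of 0 skips one packet).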
The ACK frame uses the least significant bit (that is, type 0x03) to indicate ECN feedback and report receipt of QUIC packets with associated ECN codepoints of ECT(0), ECT(1), or CE in the packet's IP header. ECN Counts are only present when the ACK frame type is 0x03.¶
When present, there are 3 ECN counts, as shown in Figure 27.¶
ECN Counts {
  ECT0 Count (i),
  ECT1 Count (i),
  ECN-CE Count (i),
}¶
The three ECN Counts are:¶
A variable-length integer representing the total number of packets received with the ECT(0) codepoint in the packet number space of the ACK frame.¶
A variable-length integer representing the total number of packets received with the ECT(1) codepoint in the packet number space of the ACK frame.¶
A variable-length integer representing the total number of packets received with the CE codepoint in the packet number space of the ACK frame.¶
ECN counts are maintained separately for each packet number space.¶
An endpoint uses a RESET_STREAM frame (type=0x04) to abruptly terminate the sending part of a stream.¶
After sending a RESET_STREAM, an endpoint ceases transmission and retransmission of STREAM frames on the identified stream. A receiver of RESET_STREAM can discard any data that it already received on that stream.¶
An endpoint that receives a RESET_STREAM frame for a send-only stream MUST terminate the connection with error STREAM_STATE_ERROR.¶
RESET_STREAM frames are formatted as shown in Figure 28.¶
RESET_STREAM Frame {
  Type (i) = 0x04,
  Stream ID (i),
  Application Protocol Error Code (i),
  Final Size (i),
}¶
RESET_STREAM frames contain the following fields:¶
A variable-length integer encoding of the Stream ID of the stream being terminated.¶
A variable-length integer containing the application protocol error code (see Section 20.2) that indicates why the stream is being closed.¶
A variable-length integer indicating the final size of the stream by the RESET_STREAM sender, in units of bytes; see Section 4.5.¶
An endpoint uses a STOP_SENDING frame (type=0x05) to communicate that incoming data is being discarded on receipt at application request. STOP_SENDING requests that a peer cease transmission on a stream.¶
A STOP_SENDING frame can be sent for streams in the Recv or Size Known states; see Section 3.1. Receiving a STOP_SENDING frame for a locally-initiated stream that has not yet been created MUST be treated as a connection error of type STREAM_STATE_ERROR. An endpoint that receives a STOP_SENDING frame for a receive-only stream MUST terminate the connection with error STREAM_STATE_ERROR.¶
STOP_SENDING frames are formatted as shown in Figure 29.¶
STOP_SENDING Frame {
  Type (i) = 0x05,
  Stream ID (i),
  Application Protocol Error Code (i),
}¶
STOP_SENDING frames contain the following fields:¶
A variable-length integer carrying the Stream ID of the stream being ignored.¶
A variable-length integer containing the application-specified reason the sender is ignoring the stream; see Section 20.2.¶
A CRYPTO frame (type=0x06) is used to transmit cryptographic handshake messages. It can be sent in all packet types except 0-RTT. The CRYPTO frame offers the cryptographic protocol an in-order stream of bytes. CRYPTO frames are functionally identical to STREAM frames, except that they do not bear a stream identifier; they are not flow controlled; and they do not carry markers for optional offset, optional length, and the end of the stream.¶
CRYPTO frames are formatted as shown in Figure 30.¶
CRYPTO Frame {
  Type (i) = 0x06,
  Offset (i),
  Length (i),
  Crypto Data (..),
}¶
CRYPTO frames contain the following fields:¶
A variable-length integer specifying the byte offset in the stream for the data in this CRYPTO frame.¶
A variable-length integer specifying the length of the Crypto Data field in this CRYPTO frame.¶
The cryptographic message data.¶
There is a separate flow of cryptographic handshake data in each encryption level, each of which starts at an offset of 0. This implies that each encryption level is treated as a separate CRYPTO stream of data.¶
The largest offset delivered on a stream - the sum of the offset and data length - cannot exceed 2^62-1. Receipt of a frame that exceeds this limit MUST be treated as a connection error of type FRAME_ENCODING_ERROR or CRYPTO_BUFFER_EXCEEDED.¶
Unlike STREAM frames, which include a Stream ID indicating to which stream the data belongs, the CRYPTO frame carries data for a single stream per encryption level. The stream does not have an explicit end, so CRYPTO frames do not have a FIN bit.¶
A server sends a NEW_TOKEN frame (type=0x07) to provide the client with a token to send in the header of an Initial packet for a future connection.¶
NEW_TOKEN frames are formatted as shown in Figure 31.¶
NEW_TOKEN Frame {
  Type (i) = 0x07,
  Token Length (i),
  Token (..),
}¶
NEW_TOKEN frames contain the following fields:¶
A variable-length integer specifying the length of the token in bytes.¶
An opaque blob that the client can use with a future Initial packet. The token MUST NOT be empty. A client MUST treat receipt of a NEW_TOKEN frame with an empty Token field as a connection error of type FRAME_ENCODING_ERROR.¶
A client might receive multiple NEW_TOKEN frames that contain the same token value if packets containing the frame are incorrectly determined to be lost. Clients are responsible for discarding duplicate values, which might be used to link connection attempts; see Section 8.1.3.¶
Clients MUST NOT send NEW_TOKEN frames. A server MUST treat receipt of a NEW_TOKEN frame as a connection error of type PROTOCOL_VIOLATION.¶
STREAM frames implicitly create a stream and carry stream data. The STREAM frame Type field takes the form 0b00001XXX (or the set of values from 0x08 to 0x0f). The three low-order bits of the frame type determine the fields that are present in the frame:¶
The OFF bit (0x04) in the frame type is set to indicate that the Offset field is present. The LEN bit (0x02) is set to indicate that the Length field is present. The FIN bit (0x01) indicates that the frame marks the end of the stream; the final size of the stream is the sum of the offset and the length of this frame.¶
An endpoint MUST terminate the connection with error STREAM_STATE_ERROR if it receives a STREAM frame for a locally-initiated stream that has not yet been created, or for a send-only stream.¶
STREAM frames are formatted as shown in Figure 32.¶
STREAM Frame {
  Type (i) = 0x08..0x0f,
  Stream ID (i),
  [Offset (i)],
  [Length (i)],
  Stream Data (..),
}¶
STREAM frames contain the following fields:¶
A variable-length integer indicating the stream ID of the stream; see Section 2.1.¶
A variable-length integer specifying the byte offset in the stream for the data in this STREAM frame. This field is present when the OFF bit is set to 1. When the Offset field is absent, the offset is 0.¶
A variable-length integer specifying the length of the Stream Data field in this STREAM frame. This field is present when the LEN bit is set to 1. When the LEN bit is set to 0, the Stream Data field consumes all the remaining bytes in the packet.¶
The bytes from the designated stream to be delivered.¶
When a Stream Data field has a length of 0, the offset in the STREAM frame is the offset of the next byte that would be sent.¶
The first byte in the stream has an offset of 0. The largest offset delivered on a stream - the sum of the offset and data length - cannot exceed 2^62-1, as it is not possible to provide flow control credit for that data. Receipt of a frame that exceeds this limit MUST be treated as a connection error of type FRAME_ENCODING_ERROR or FLOW_CONTROL_ERROR.¶
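To make the field-presence rules concrete, the following sketch parses a STREAM frame (illustrative Python; read_varint follows the variable-length integer encoding of Section 16, in which the two most significant bits of the first byte give the encoded length, and all names here are hypothetical):

```python
def read_varint(buf: bytes, pos: int):
    # QUIC variable-length integer: top two bits of the first byte select
    # a 1-, 2-, 4-, or 8-byte encoding; the remaining bits carry the value.
    length = 1 << (buf[pos] >> 6)
    value = int.from_bytes(buf[pos:pos + length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, pos + length

def parse_stream_frame(buf: bytes):
    frame_type, pos = read_varint(buf, 0)
    assert 0x08 <= frame_type <= 0x0f, "not a STREAM frame"
    stream_id, pos = read_varint(buf, pos)
    offset = 0
    if frame_type & 0x04:                 # OFF bit: Offset field is present
        offset, pos = read_varint(buf, pos)
    if frame_type & 0x02:                 # LEN bit: Length field is present
        length, pos = read_varint(buf, pos)
        data = buf[pos:pos + length]
    else:
        data = buf[pos:]                  # consumes the rest of the packet
    fin = bool(frame_type & 0x01)         # FIN bit: end of the stream
    return stream_id, offset, data, fin
```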
A MAX_DATA frame (type=0x10) is used in flow control to inform the peer of the maximum amount of data that can be sent on the connection as a whole.¶
MAX_DATA frames are formatted as shown in Figure 33.¶
MAX_DATA Frame {
  Type (i) = 0x10,
  Maximum Data (i),
}¶
MAX_DATA frames contain the following field:¶
A variable-length integer indicating the maximum amount of data that can be sent on the entire connection, in units of bytes.¶
All data sent in STREAM frames counts toward this limit. The sum of the final sizes on all streams - including streams in terminal states - MUST NOT exceed the value advertised by a receiver. An endpoint MUST terminate a connection with a FLOW_CONTROL_ERROR error if it receives more data than the maximum data value that it has sent. This includes violations of remembered limits in Early Data; see Section 7.4.1.¶
A MAX_STREAM_DATA frame (type=0x11) is used in flow control to inform a peer of the maximum amount of data that can be sent on a stream.¶
A MAX_STREAM_DATA frame can be sent for streams in the Recv state; see Section 3.1. Receiving a MAX_STREAM_DATA frame for a locally-initiated stream that has not yet been created MUST be treated as a connection error of type STREAM_STATE_ERROR. An endpoint that receives a MAX_STREAM_DATA frame for a receive-only stream MUST terminate the connection with error STREAM_STATE_ERROR.¶
MAX_STREAM_DATA frames are formatted as shown in Figure 34.¶
MAX_STREAM_DATA Frame {
  Type (i) = 0x11,
  Stream ID (i),
  Maximum Stream Data (i),
}¶
MAX_STREAM_DATA frames contain the following fields:¶
The stream ID of the stream that is affected, encoded as a variable-length integer.¶
A variable-length integer indicating the maximum amount of data that can be sent on the identified stream, in units of bytes.¶
When counting data toward this limit, an endpoint accounts for the largest received offset of data that is sent or received on the stream. Loss or reordering can mean that the largest received offset on a stream can be greater than the total size of data received on that stream. Receiving STREAM frames might not increase the largest received offset.¶
The data sent on a stream MUST NOT exceed the largest maximum stream data value advertised by the receiver. An endpoint MUST terminate a connection with a FLOW_CONTROL_ERROR error if it receives more data than the largest maximum stream data that it has sent for the affected stream. This includes violations of remembered limits in Early Data; see Section 7.4.1.¶
A MAX_STREAMS frame (type=0x12 or 0x13) informs the peer of the cumulative number of streams of a given type it is permitted to open. A MAX_STREAMS frame with a type of 0x12 applies to bidirectional streams, and a MAX_STREAMS frame with a type of 0x13 applies to unidirectional streams.¶
MAX_STREAMS frames are formatted as shown in Figure 35.¶
MAX_STREAMS Frame {
  Type (i) = 0x12..0x13,
  Maximum Streams (i),
}¶
MAX_STREAMS frames contain the following field:¶
A count of the cumulative number of streams of the corresponding type that can be opened over the lifetime of the connection. This value cannot exceed 2^60, as it is not possible to encode stream IDs larger than 2^62-1. Receipt of a frame that permits opening of a stream larger than this limit MUST be treated as a connection error of type FRAME_ENCODING_ERROR.¶
Loss or reordering can cause a MAX_STREAMS frame to be received that states a lower stream limit than an endpoint has previously received. MAX_STREAMS frames that do not increase the stream limit MUST be ignored.¶
An endpoint MUST NOT open more streams than permitted by the current stream limit set by its peer. For instance, a server that receives a unidirectional stream limit of 3 is permitted to open streams 3, 7, and 11, but not stream 15. An endpoint MUST terminate a connection with a STREAM_LIMIT_ERROR error if a peer opens more streams than was permitted. This includes violations of remembered limits in Early Data; see Section 7.4.1.¶
Note that these frames (and the corresponding transport parameters) do not describe the number of streams that can be opened concurrently. The limit includes streams that have been closed as well as those that are open.¶
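The cumulative-limit arithmetic above can be sketched as follows (illustrative Python; the function name is hypothetical). Because consecutive streams of one type differ by 4, a limit of N permits stream IDs 4*n + type_bits for n below N:

```python
def permitted_stream_ids(max_streams: int, type_bits: int):
    """Stream IDs an endpoint may open for one stream type, given the
    cumulative limit from MAX_STREAMS. type_bits is the two-bit stream
    type (e.g. 0x3 for server-initiated unidirectional streams)."""
    return [4 * n + type_bits for n in range(max_streams)]
```

A server with a unidirectional stream limit of 3 may open streams 3, 7, and 11, matching the example above; stream 15 would be a fourth such stream and is not permitted.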
A sender SHOULD send a DATA_BLOCKED frame (type=0x14) when it wishes to send data, but is unable to do so due to connection-level flow control; see Section 4. DATA_BLOCKED frames can be used as input to tuning of flow control algorithms; see Section 4.2.¶
DATA_BLOCKED frames are formatted as shown in Figure 36.¶
DATA_BLOCKED Frame {
  Type (i) = 0x14,
  Maximum Data (i),
}¶
DATA_BLOCKED frames contain the following field:¶
A variable-length integer indicating the connection-level limit at which blocking occurred.¶
A sender SHOULD send a STREAM_DATA_BLOCKED frame (type=0x15) when it wishes to send data, but is unable to do so due to stream-level flow control. This frame is analogous to DATA_BLOCKED (Section 19.12).¶
An endpoint that receives a STREAM_DATA_BLOCKED frame for a send-only stream MUST terminate the connection with error STREAM_STATE_ERROR.¶
STREAM_DATA_BLOCKED frames are formatted as shown in Figure 37.¶
STREAM_DATA_BLOCKED Frame {
  Type (i) = 0x15,
  Stream ID (i),
  Maximum Stream Data (i),
}¶
STREAM_DATA_BLOCKED frames contain the following fields:¶
Stream ID: A variable-length integer indicating the stream that is blocked due to flow control.¶
Maximum Stream Data: A variable-length integer indicating the offset of the stream at which the blocking occurred.¶
A sender SHOULD send a STREAMS_BLOCKED frame (type=0x16 or 0x17) when it wishes to open a stream, but is unable to due to the maximum stream limit set by its peer; see Section 19.11. A STREAMS_BLOCKED frame of type 0x16 is used to indicate reaching the bidirectional stream limit, and a STREAMS_BLOCKED frame of type 0x17 is used to indicate reaching the unidirectional stream limit.¶
A STREAMS_BLOCKED frame does not open the stream, but informs the peer that a new stream was needed and the stream limit prevented the creation of the stream.¶
STREAMS_BLOCKED frames are formatted as shown in Figure 38.¶
STREAMS_BLOCKED Frame {
  Type (i) = 0x16..0x17,
  Maximum Streams (i),
}¶
STREAMS_BLOCKED frames contain the following field:¶
A variable-length integer indicating the maximum number of streams allowed at the time the frame was sent. This value cannot exceed 2^60, as it is not possible to encode stream IDs larger than 2^62-1. Receipt of a frame that encodes a larger stream ID MUST be treated as a STREAM_LIMIT_ERROR or a FRAME_ENCODING_ERROR.¶
An endpoint sends a NEW_CONNECTION_ID frame (type=0x18) to provide its peer with alternative connection IDs that can be used to break linkability when migrating connections; see Section 9.5.¶
NEW_CONNECTION_ID frames are formatted as shown in Figure 39.¶
NEW_CONNECTION_ID Frame {
  Type (i) = 0x18,
  Sequence Number (i),
  Retire Prior To (i),
  Length (8),
  Connection ID (8..160),
  Stateless Reset Token (128),
}¶
NEW_CONNECTION_ID frames contain the following fields:¶
The sequence number assigned to the connection ID by the sender, encoded as a variable-length integer; see Section 5.1.1.¶
A variable-length integer indicating which connection IDs should be retired; see Section 5.1.2.¶
An 8-bit unsigned integer containing the length of the connection ID. Values less than 1 and greater than 20 are invalid and MUST be treated as a connection error of type FRAME_ENCODING_ERROR.¶
A connection ID of the specified length.¶
A 128-bit value that will be used for a stateless reset when the associated connection ID is used; see Section 10.3.¶
An endpoint MUST NOT send this frame if it currently requires that its peer send packets with a zero-length Destination Connection ID. Changing the length of a connection ID to or from zero-length makes it difficult to identify when the value of the connection ID changed. An endpoint that is sending packets with a zero-length Destination Connection ID MUST treat receipt of a NEW_CONNECTION_ID frame as a connection error of type PROTOCOL_VIOLATION.¶
Transmission errors, timeouts, and retransmissions might cause the same NEW_CONNECTION_ID frame to be received multiple times. Receipt of the same frame multiple times MUST NOT be treated as a connection error. A receiver can use the sequence number supplied in the NEW_CONNECTION_ID frame to handle receiving the same NEW_CONNECTION_ID frame multiple times.¶
If an endpoint receives a NEW_CONNECTION_ID frame that repeats a previously issued connection ID with a different Stateless Reset Token or a different sequence number, or if a sequence number is used for different connection IDs, the endpoint MAY treat that receipt as a connection error of type PROTOCOL_VIOLATION.¶
The Retire Prior To field applies to connection IDs established during connection setup and the preferred_address transport parameter; see Section 5.1.2. The Retire Prior To field MUST be less than or equal to the Sequence Number field. Receiving a value greater than the Sequence Number MUST be treated as a connection error of type FRAME_ENCODING_ERROR.¶
Once a sender indicates a Retire Prior To value, smaller values sent in subsequent NEW_CONNECTION_ID frames have no effect. A receiver MUST ignore any Retire Prior To fields that do not increase the largest received Retire Prior To value.¶
An endpoint that receives a NEW_CONNECTION_ID frame with a sequence number smaller than the Retire Prior To field of a previously received NEW_CONNECTION_ID frame MUST send a corresponding RETIRE_CONNECTION_ID frame that retires the newly received connection ID, unless it has already done so for that sequence number.¶
An endpoint sends a RETIRE_CONNECTION_ID frame (type=0x19) to indicate that it will no longer use a connection ID that was issued by its peer. This includes the connection ID provided during the handshake. Sending a RETIRE_CONNECTION_ID frame also serves as a request to the peer to send additional connection IDs for future use; see Section 5.1. New connection IDs can be delivered to a peer using the NEW_CONNECTION_ID frame (Section 19.15).¶
Retiring a connection ID invalidates the stateless reset token associated with that connection ID.¶
RETIRE_CONNECTION_ID frames are formatted as shown in Figure 40.¶
RETIRE_CONNECTION_ID Frame {
  Type (i) = 0x19,
  Sequence Number (i),
}¶
RETIRE_CONNECTION_ID frames contain the following field:¶
The sequence number of the connection ID being retired; see Section 5.1.2.¶
Receipt of a RETIRE_CONNECTION_ID frame containing a sequence number greater than any previously sent to the peer MUST be treated as a connection error of type PROTOCOL_VIOLATION.¶
The sequence number specified in a RETIRE_CONNECTION_ID frame MUST NOT refer to the Destination Connection ID field of the packet in which the frame is contained. The peer MAY treat this as a connection error of type PROTOCOL_VIOLATION.¶
An endpoint cannot send this frame if it was provided with a zero-length connection ID by its peer. An endpoint that provides a zero-length connection ID MUST treat receipt of a RETIRE_CONNECTION_ID frame as a connection error of type PROTOCOL_VIOLATION.¶
Endpoints can use PATH_CHALLENGE frames (type=0x1a) to check reachability to the peer and for path validation during connection migration.¶
PATH_CHALLENGE frames are formatted as shown in Figure 41.¶
PATH_CHALLENGE Frame {
  Type (i) = 0x1a,
  Data (64),
}¶
PATH_CHALLENGE frames contain the following field:¶
This 8-byte field contains arbitrary data.¶
Including 64 bits of entropy in a PATH_CHALLENGE frame ensures that it is easier to receive the packet than it is to guess the value correctly.¶
The recipient of this frame MUST generate a PATH_RESPONSE frame (Section 19.18) containing the same Data.¶
A PATH_RESPONSE frame (type=0x1b) is sent in response to a PATH_CHALLENGE frame.¶
PATH_RESPONSE frames are formatted as shown in Figure 42, which is identical to the PATH_CHALLENGE frame (Section 19.17).¶
PATH_RESPONSE Frame {
  Type (i) = 0x1b,
  Data (64),
}¶
If the content of a PATH_RESPONSE frame does not match the content of a PATH_CHALLENGE frame previously sent by the endpoint, the endpoint MAY generate a connection error of type PROTOCOL_VIOLATION.¶
An endpoint sends a CONNECTION_CLOSE frame (type=0x1c or 0x1d) to notify its peer that the connection is being closed. The CONNECTION_CLOSE frame with a type of 0x1c is used to signal errors at only the QUIC layer, or the absence of errors (with the NO_ERROR code). The CONNECTION_CLOSE frame with a type of 0x1d is used to signal an error with the application that uses QUIC.¶
If there are open streams that have not been explicitly closed, they are implicitly closed when the connection is closed.¶
CONNECTION_CLOSE frames are formatted as shown in Figure 43.¶
CONNECTION_CLOSE Frame {
  Type (i) = 0x1c..0x1d,
  Error Code (i),
  [Frame Type (i)],
  Reason Phrase Length (i),
  Reason Phrase (..),
}¶
CONNECTION_CLOSE frames contain the following fields:¶
Error Code: A variable-length integer error code that indicates the reason for closing this connection. A CONNECTION_CLOSE frame of type 0x1c uses codes from the space defined in Section 20.1. A CONNECTION_CLOSE frame of type 0x1d uses codes from the application protocol error code space; see Section 20.2.¶
Frame Type: A variable-length integer encoding the type of frame that triggered the error. A value of 0 (equivalent to the mention of the PADDING frame) is used when the frame type is unknown. The application-specific variant of CONNECTION_CLOSE (type 0x1d) does not include this field.¶
Reason Phrase Length: A variable-length integer specifying the length of the reason phrase in bytes. Because a CONNECTION_CLOSE frame cannot be split between packets, any limits on packet size will also limit the space available for a reason phrase.¶
Reason Phrase: A human-readable explanation for why the connection was closed. This can be zero length if the sender chooses not to give details beyond the Error Code. This SHOULD be a UTF-8 encoded string [RFC3629].¶
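The layout above can be made concrete with a short encoding sketch. This assumes the variable-length integer encoding of Section 16; the function names are illustrative, not part of the specification.

```python
from typing import Optional

def encode_varint(value: int) -> bytes:
    # QUIC variable-length integer (Section 16): a 2-bit length prefix
    # selects a 1-, 2-, 4-, or 8-byte big-endian encoding.
    if value < 1 << 6:
        return value.to_bytes(1, "big")
    if value < 1 << 14:
        return (value | (1 << 14)).to_bytes(2, "big")
    if value < 1 << 30:
        return (value | (2 << 30)).to_bytes(4, "big")
    if value < 1 << 62:
        return (value | (3 << 62)).to_bytes(8, "big")
    raise ValueError("value exceeds 62 bits")

def encode_connection_close(error_code: int, reason: str,
                            frame_type: Optional[int] = 0) -> bytes:
    # Pass frame_type=None for the application variant (type 0x1d),
    # which omits the Frame Type field.
    phrase = reason.encode("utf-8")  # Reason Phrase SHOULD be UTF-8
    out = encode_varint(0x1c if frame_type is not None else 0x1d)
    out += encode_varint(error_code)
    if frame_type is not None:
        out += encode_varint(frame_type)
    out += encode_varint(len(phrase))
    return out + phrase
```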
The application-specific variant of CONNECTION_CLOSE (type 0x1d) can only be sent using 0-RTT or 1-RTT packets; see Section 12.5. When an application wishes to abandon a connection during the handshake, an endpoint can send a CONNECTION_CLOSE frame (type 0x1c) with an error code of APPLICATION_ERROR in an Initial or a Handshake packet.¶
The server uses a HANDSHAKE_DONE frame (type=0x1e) to signal confirmation of the handshake to the client.¶
HANDSHAKE_DONE frames are formatted as shown in Figure 44, which shows that HANDSHAKE_DONE frames have no content.¶

HANDSHAKE_DONE Frame {
  Type (i) = 0x1e,
}

A HANDSHAKE_DONE frame can only be sent by the server. Servers MUST NOT send a HANDSHAKE_DONE frame before completing the handshake. A server MUST treat receipt of a HANDSHAKE_DONE frame as a connection error of type PROTOCOL_VIOLATION.¶
QUIC frames do not use a self-describing encoding. An endpoint therefore needs to understand the syntax of all frames before it can successfully process a packet. This allows for efficient encoding of frames, but it means that an endpoint cannot send a frame of a type that is unknown to its peer.¶
An extension to QUIC that wishes to use a new type of frame MUST first ensure that a peer is able to understand the frame. An endpoint can use a transport parameter to signal its willingness to receive extension frame types. One transport parameter can indicate support for one or more extension frame types.¶
Extensions that modify or replace core protocol functionality (including frame types) will be difficult to combine with other extensions that modify or replace the same functionality unless the behavior of the combination is explicitly defined. Such extensions SHOULD define their interaction with previously defined extensions modifying the same protocol components.¶
Extension frames MUST be congestion controlled and MUST cause an ACK frame to be sent. The exception is extension frames that replace or supplement the ACK frame. Extension frames are not included in flow control unless specified in the extension.¶
An IANA registry is used to manage the assignment of frame types; see Section 22.4.¶
QUIC transport error codes and application error codes are 62-bit unsigned integers.¶
This section lists the defined QUIC transport error codes that can be used in a CONNECTION_CLOSE frame with a type of 0x1c. These errors apply to the entire connection.¶
NO_ERROR (0x0): An endpoint uses this with CONNECTION_CLOSE to signal that the connection is being closed abruptly in the absence of any error.¶
INTERNAL_ERROR (0x1): The endpoint encountered an internal error and cannot continue with the connection.¶
CONNECTION_REFUSED (0x2): The server refused to accept a new connection.¶
FLOW_CONTROL_ERROR (0x3): An endpoint received more data than it permitted in its advertised data limits; see Section 4.¶
STREAM_LIMIT_ERROR (0x4): An endpoint received a frame for a stream identifier that exceeded its advertised stream limit for the corresponding stream type.¶
STREAM_STATE_ERROR (0x5): An endpoint received a frame for a stream that was not in a state that permitted that frame; see Section 3.¶
FINAL_SIZE_ERROR (0x6): An endpoint received a STREAM frame containing data that exceeded the previously established final size, a STREAM frame or a RESET_STREAM frame containing a final size that was lower than the size of stream data that was already received, or a STREAM frame or a RESET_STREAM frame containing a different final size to the one already established.¶
FRAME_ENCODING_ERROR (0x7): An endpoint received a frame that was badly formatted; for instance, a frame of an unknown type, or an ACK frame that has more acknowledgment ranges than the remainder of the packet could carry.¶
TRANSPORT_PARAMETER_ERROR (0x8): An endpoint received transport parameters that were badly formatted, included an invalid value, omitted a mandatory transport parameter, included a forbidden transport parameter, or were otherwise in error.¶
CONNECTION_ID_LIMIT_ERROR (0x9): The number of connection IDs provided by the peer exceeds the advertised active_connection_id_limit.¶
PROTOCOL_VIOLATION (0xa): An endpoint detected an error with protocol compliance that was not covered by more specific error codes.¶
INVALID_TOKEN (0xb): A server received a client Initial that contained an invalid Token field.¶
APPLICATION_ERROR (0xc): The application or application protocol caused the connection to be closed.¶
CRYPTO_BUFFER_EXCEEDED (0xd): An endpoint has received more data in CRYPTO frames than it can buffer.¶
KEY_UPDATE_ERROR (0xe): An endpoint detected errors in performing key updates; see Section 6 of [QUIC-TLS].¶
AEAD_LIMIT_REACHED (0xf): An endpoint has reached the confidentiality or integrity limit for the AEAD algorithm used by the given connection.¶
NO_VIABLE_PATH (0x10): An endpoint has determined that the network path is incapable of supporting QUIC. An endpoint is unlikely to receive CONNECTION_CLOSE carrying this code except when the path does not support a large enough MTU.¶
CRYPTO_ERROR (0x1XX): The cryptographic handshake failed. A range of 256 values is reserved for carrying error codes specific to the cryptographic handshake that is used. Codes for errors occurring when TLS is used for the crypto handshake are described in Section 4.8 of [QUIC-TLS].¶
See Section 22.5 for details of registering new error codes.¶
In defining these error codes, several principles are applied. Error conditions that might require specific action on the part of a recipient are given unique codes. Errors that represent common conditions are given specific codes. Absent either of these conditions, error codes are used to identify a general function of the stack, like flow control or transport parameter handling. Finally, generic errors are provided for conditions where implementations are unable or unwilling to use more specific codes.¶
The management of application error codes is left to application protocols. Application protocol error codes are used for the RESET_STREAM frame (Section 19.4), the STOP_SENDING frame (Section 19.5), and the CONNECTION_CLOSE frame with a type of 0x1d (Section 19.19).¶
The goal of QUIC is to provide a secure transport connection. Section 21.1 provides an overview of those properties; subsequent sections discuss constraints and caveats regarding these properties, including descriptions of known attacks and countermeasures.¶
A complete security analysis of QUIC is outside the scope of this document. This section provides an informal description of the desired security properties as an aid to implementors and to help guide protocol analysis.¶
QUIC assumes the threat model described in [SEC-CONS] and provides protections against many of the attacks that arise from that model.¶
For this purpose, attacks are divided into passive and active attacks. Passive attackers have the capability to read packets from the network, while active attackers also have the capability to write packets into the network. However, a passive attack could involve an attacker with the ability to cause a routing change or other modification in the path taken by packets that comprise a connection.¶
Attackers are additionally categorized as either on-path attackers or off-path attackers; see Section 3.5 of [SEC-CONS]. An on-path attacker can read, modify, or remove any packet it observes such that it no longer reaches its destination, while an off-path attacker observes the packets, but cannot prevent the original packet from reaching its intended destination. Both types of attackers can also transmit arbitrary packets.¶
Properties of the handshake, protected packets, and connection migration are considered separately.¶
The QUIC handshake incorporates the TLS 1.3 handshake and inherits the cryptographic properties described in Appendix E.1 of [TLS13]. Many of the security properties of QUIC depend on the TLS handshake providing these properties. Any attack on the TLS handshake could affect QUIC.¶
Any attack on the TLS handshake that compromises the secrecy or uniqueness of session keys affects other security guarantees provided by QUIC that depend on these keys. For instance, migration (Section 9) depends on the efficacy of confidentiality protections, both for the negotiation of keys using the TLS handshake and for QUIC packet protection, to avoid linkability across network paths.¶
An attack on the integrity of the TLS handshake might allow an attacker to affect the selection of application protocol or QUIC version.¶
In addition to the properties provided by TLS, the QUIC handshake provides some defense against DoS attacks on the handshake.¶
Address validation (Section 8) is used to verify that an entity that claims a given address is able to receive packets at that address. Address validation limits amplification attack targets to addresses for which an attacker can observe packets.¶
Prior to address validation, endpoints are limited in what they are able to send. Endpoints cannot send data toward an unvalidated address in excess of three times the data received from that address.¶
The anti-amplification limit only applies when an endpoint responds to packets received from an unvalidated address. The anti-amplification limit does not apply to clients when establishing a new connection or when initiating connection migration.¶
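The three-times limit can be enforced with a simple per-address byte counter. The following is a minimal sketch; the class and attribute names are illustrative, not taken from this document.

```python
class AmplificationLimiter:
    """Sketch of the 3x anti-amplification limit for one unvalidated
    address (Section 8); illustrative names, not from the specification."""

    LIMIT_FACTOR = 3

    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.address_validated = False

    def on_datagram_received(self, size: int) -> None:
        self.bytes_received += size

    def can_send(self, size: int) -> bool:
        if self.address_validated:
            # The limit no longer applies once the address is validated.
            return True
        return self.bytes_sent + size <= self.LIMIT_FACTOR * self.bytes_received

    def on_datagram_sent(self, size: int) -> None:
        self.bytes_sent += size
```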
Computing the server's first flight for a full handshake is potentially expensive, requiring both a signature and a key exchange computation. In order to prevent computational DoS attacks, the Retry packet provides a cheap token exchange mechanism that allows servers to validate a client's IP address prior to doing any expensive computations, at the cost of a single round trip. After a successful handshake, servers can issue new tokens to a client, which will allow new connection establishment without incurring this cost.¶
An on-path or off-path attacker can force a handshake to fail by replacing or racing Initial packets. Once valid Initial packets have been exchanged, subsequent Handshake packets are protected with the handshake keys, and an on-path attacker cannot force handshake failure other than by dropping packets to cause endpoints to abandon the attempt.¶
An on-path attacker can also replace the addresses of packets on either side and therefore cause the client or server to have an incorrect view of the remote addresses. Such an attack is indistinguishable from the functions performed by a NAT.¶
The entire handshake is cryptographically protected, with the Initial packets being encrypted with per-version keys and the Handshake and later packets being encrypted with keys derived from the TLS key exchange. Further, parameter negotiation is folded into the TLS transcript and thus provides the same integrity guarantees as ordinary TLS negotiation. An attacker can observe the client's transport parameters (as long as it knows the version-specific salt) but cannot observe the server's transport parameters and cannot influence parameter negotiation.¶
Connection IDs are unencrypted but integrity protected in all packets.¶
This version of QUIC does not incorporate a version negotiation mechanism; implementations of incompatible versions will simply fail to establish a connection.¶
Packet protection (Section 12.1) provides authentication and encryption of all packets except Version Negotiation packets, though Initial and Retry packets have limited encryption and authentication based on version-specific inputs; see [QUIC-TLS] for more details. This section considers passive and active attacks against protected packets.¶
Both on-path and off-path attackers can mount a passive attack in which they save observed packets for an offline attack against packet protection at a future time; this is true for any observer of any packet on any network.¶
A blind attacker, one who injects packets without being able to observe valid packets for a connection, is unlikely to be successful, since packet protection ensures that valid packets are only generated by endpoints that possess the key material established during the handshake; see Section 7 and Section 21.1.1. Similarly, any active attacker that observes packets and attempts to insert new data or modify existing data in those packets should not be able to generate packets deemed valid by the receiving endpoint.¶
A spoofing attack, in which an active attacker rewrites unprotected parts of a packet that it forwards or injects, such as the source or destination address, is only effective if the attacker can forward packets to the original endpoint. Packet protection ensures that the packet payloads can only be processed by the endpoints that completed the handshake, and invalid packets are ignored by those endpoints.¶
An attacker can also modify the boundaries between packets and UDP datagrams, causing multiple packets to be coalesced into a single datagram, or splitting coalesced packets into multiple datagrams. Aside from datagrams containing Initial packets, which require padding, modification of how packets are arranged in datagrams has no functional effect on a connection, although it might change some performance characteristics.¶
Connection Migration (Section 9) provides endpoints with the ability to transition between IP addresses and ports on multiple paths, using one path at a time for transmission and receipt of non-probing frames. Path validation (Section 8.2) establishes that a peer is both willing and able to receive packets sent on a particular path. This helps reduce the effects of address spoofing by limiting the number of packets sent to a spoofed address.¶
This section describes the intended security properties of connection migration when under various types of DoS attacks.¶
An attacker that can cause a packet it observes to no longer reach its intended destination is considered an on-path attacker. When an attacker is present between a client and server, endpoints are required to send packets through the attacker to establish connectivity on a given path.¶
An on-path attacker can:¶
An on-path attacker cannot:¶
An on-path attacker has the opportunity to modify the packets that it observes; however, any modifications to an authenticated portion of a packet will cause it to be dropped by the receiving endpoint as invalid, as packet payloads are both authenticated and encrypted.¶
In the presence of an on-path attacker, QUIC aims to provide the following properties:¶
An off-path attacker is not directly on the path between a client and server, but could be able to obtain copies of some or all packets sent between the client and the server. It is also able to send copies of those packets to either endpoint.¶
An off-path attacker can:¶
An off-path attacker cannot:¶
An off-path attacker can modify packets that it has observed and inject them back into the network, potentially with spoofed source and destination addresses.¶
For the purposes of this discussion, it is assumed that an off-path attacker has the ability to observe, modify, and re-inject a packet into the network that will reach the destination endpoint prior to the arrival of the original packet observed by the attacker. In other words, an attacker has the ability to consistently "win" a race with the legitimate packets between the endpoints, potentially causing the original packet to be ignored by the recipient.¶
It is also assumed that an attacker has the resources necessary to affect NAT state, potentially both causing an endpoint to lose its NAT binding, and an attacker to obtain the same port for use with its traffic.¶
In the presence of an off-path attacker, QUIC aims to provide the following properties:¶
A limited on-path attacker is an off-path attacker that has offered improved routing of packets by duplicating and forwarding original packets between the server and the client, causing those packets to arrive before the original copies such that the original packets are dropped by the destination endpoint.¶
A limited on-path attacker differs from an on-path attacker in that it is not on the original path between endpoints, and therefore the original packets sent by an endpoint are still reaching their destination. This means that a future failure to route copied packets to the destination faster than their original path will not prevent the original packets from reaching the destination.¶
A limited on-path attacker can:¶
A limited on-path attacker cannot:¶
A limited on-path attacker can only delay packets up to the point that the original packets arrive before the duplicate packets, meaning that it cannot offer routing with worse latency than the original path. If a limited on-path attacker drops packets, the original copy will still arrive at the destination endpoint.¶
In the presence of a limited on-path attacker, QUIC aims to provide the following properties:¶
Note that these guarantees are the same guarantees provided for any NAT, for the same reasons.¶
As an encrypted and authenticated transport, QUIC provides a range of protections against denial of service. Once the cryptographic handshake is complete, QUIC endpoints discard most packets that are not authenticated, greatly limiting the ability of an attacker to interfere with existing connections.¶
Once a connection is established, QUIC endpoints might accept some unauthenticated ICMP packets (see Section 14.2.1), but the use of these packets is extremely limited. The only other type of packet that an endpoint might accept is a stateless reset (Section 10.3), which relies on the token being kept secret until it is used.¶
During the creation of a connection, QUIC only provides protection against attack from off the network path. All QUIC packets contain proof that the recipient saw a preceding packet from its peer.¶
Addresses cannot change during the handshake, so endpoints can discard packets that are received on a different network path.¶
The Source and Destination Connection ID fields are the primary means of protection against off-path attack during the handshake. These are required to match those set by a peer. Except for Initial and stateless reset packets, an endpoint only accepts packets that include a Destination Connection ID field that matches a value the endpoint previously chose. This is the only protection offered for Version Negotiation packets.¶
The Destination Connection ID field in an Initial packet is selected by a client to be unpredictable, which serves an additional purpose. The packets that carry the cryptographic handshake are protected with a key that is derived from this connection ID and a salt specific to the QUIC version. This allows endpoints to use the same process for authenticating packets that they receive as they use after the cryptographic handshake completes. Packets that cannot be authenticated are discarded. Protecting packets in this fashion provides a strong assurance that the sender of the packet saw the Initial packet and understood it.¶
These protections are not intended to be effective against an attacker that is able to receive QUIC packets prior to the connection being established. Such an attacker can potentially send packets that will be accepted by QUIC endpoints. This version of QUIC attempts to detect this sort of attack, but it expects that endpoints will fail to establish a connection rather than recovering. For the most part, the cryptographic handshake protocol [QUIC-TLS] is responsible for detecting tampering during the handshake.¶
Endpoints are permitted to use other methods to detect and attempt to recover from interference with the handshake. Invalid packets can be identified and discarded using other methods, but no specific method is mandated in this document.¶
An attacker might be able to receive an address validation token (Section 8) from a server and then release the IP address it used to acquire that token. At a later time, the attacker can initiate a 0-RTT connection with a server by spoofing this same address, which might now address a different (victim) endpoint. The attacker can thus potentially cause the server to send an initial congestion window's worth of data towards the victim.¶
Servers SHOULD provide mitigations for this attack by limiting the usage and lifetime of address validation tokens; see Section 8.1.3.¶
An endpoint that acknowledges packets it has not received might cause a congestion controller to permit sending at rates beyond what the network supports. An endpoint MAY skip packet numbers when sending packets to detect this behavior. An endpoint can then immediately close the connection with a connection error of type PROTOCOL_VIOLATION; see Section 10.2.¶
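The packet-number-skipping detection can be sketched as follows. This is an illustrative design under assumed names (`OptimisticAckDetector`, the skip probability, and the injectable `rng` are all hypothetical choices, not requirements of this document).

```python
import random

class ProtocolViolation(Exception):
    """Stand-in for a connection error of type PROTOCOL_VIOLATION."""

class OptimisticAckDetector:
    """Sketch: the sender occasionally skips a packet number; an ACK of a
    skipped number proves the peer acknowledged a packet it never received."""

    def __init__(self, skip_probability=0.01, rng=random.random):
        self.skipped = set()   # packet numbers deliberately never sent
        self.next_pn = 0
        self.skip_probability = skip_probability
        self.rng = rng

    def next_packet_number(self) -> int:
        if self.rng() < self.skip_probability:
            self.skipped.add(self.next_pn)  # reserve but never send
            self.next_pn += 1
        pn = self.next_pn
        self.next_pn += 1
        return pn

    def on_ack(self, acked_pn: int) -> None:
        if acked_pn in self.skipped:
            # Close immediately with PROTOCOL_VIOLATION (Section 10.2).
            raise ProtocolViolation("peer acknowledged an unsent packet number")
```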
A request forgery attack occurs where an endpoint causes its peer to issue a request towards a victim, with the request controlled by the endpoint. Request forgery attacks aim to provide an attacker with access to capabilities of its peer that might otherwise be unavailable to the attacker. For a networking protocol, a request forgery attack is often used to exploit any implicit authorization conferred on the peer by the victim due to the peer's location in the network.¶
For request forgery to be effective, an attacker needs to be able to influence what packets the peer sends and where these packets are sent. If an attacker can target a vulnerable service with a controlled payload, that service might perform actions that are attributed to the attacker's peer, but decided by the attacker.¶
For example, cross-site request forgery [CSRF] exploits on the Web cause a client to issue requests that include authorization cookies [COOKIE], allowing one site access to information and actions that are intended to be restricted to a different site.¶
As QUIC runs over UDP, the primary attack modality of concern is one where an attacker can select the address to which its peer sends UDP datagrams and can control some of the unprotected content of those packets. As much of the data sent by QUIC endpoints is protected, this includes control over ciphertext. An attack is successful if an attacker can cause a peer to send a UDP datagram to a host that will perform some action based on content in the datagram.¶
This section discusses ways in which QUIC might be used for request forgery attacks.¶
This section also describes limited countermeasures that can be implemented by QUIC endpoints. These mitigations can be employed unilaterally by a QUIC implementation or deployment, without potential targets for request forgery attacks taking action. However, these countermeasures could be insufficient if UDP-based services do not properly authorize requests.¶
Because the migration attack described in Section 21.5.4 is quite powerful and does not have adequate countermeasures, QUIC server implementations should assume that attackers can cause them to generate arbitrary UDP payloads to arbitrary destinations. QUIC servers SHOULD NOT be deployed in networks that also have inadequately secured UDP endpoints.¶
Although it is not generally possible to ensure that clients are not co-located with vulnerable endpoints, this version of QUIC does not allow servers to migrate, thus preventing spoofed migration attacks on clients. Any future extension which allows server migration MUST also define countermeasures for forgery attacks.¶
QUIC offers some opportunities for an attacker to influence or control where its peer sends UDP datagrams:¶
In all cases, the attacker can cause its peer to send datagrams to a victim that might not understand QUIC. That is, these packets are sent by the peer prior to address validation; see Section 8.¶
Outside of the encrypted portion of packets, QUIC offers an endpoint several options for controlling the content of UDP datagrams that its peer sends. The Destination Connection ID field offers direct control over bytes that appear early in packets sent by the peer; see Section 5.1. The Token field in Initial packets offers a server control over other bytes of Initial packets; see Section 17.2.2.¶
There are no measures in this version of QUIC to prevent indirect control over the encrypted portions of packets. It is necessary to assume that endpoints are able to control the contents of frames that a peer sends, especially those frames that convey application data, such as STREAM frames. Though this depends to some degree on details of the application protocol, some control is possible in many protocol usage contexts. As the attacker has access to packet protection keys, they are likely to be capable of predicting how a peer will encrypt future packets. Successful control over datagram content then only requires that the attacker be able to predict the packet number and placement of frames in packets with some amount of reliability.¶
This section assumes that limiting control over datagram content is not feasible. The focus of the mitigations in subsequent sections is on limiting the ways in which datagrams that are sent prior to address validation can be used for request forgery.¶
An attacker acting as a server can choose the IP address and port on which it advertises its availability, so Initial packets from clients are assumed to be available for use in this sort of attack. The address validation implicit in the handshake ensures that, for a new connection, a client will not send other types of packet to a destination that does not understand QUIC or is not willing to accept a QUIC connection.¶
Initial packet protection (Section 5.2 of [QUIC-TLS]) makes it difficult for servers to control the content of Initial packets sent by clients. A client choosing an unpredictable Destination Connection ID ensures that servers are unable to control any of the encrypted portion of Initial packets from clients.¶
However, the Token field is open to server control and does allow a server to use clients to mount request forgery attacks. Use of tokens provided with the NEW_TOKEN frame (Section 8.1.3) offers the only option for request forgery during connection establishment.¶
Clients, however, are not obligated to use the NEW_TOKEN frame. Request forgery attacks that rely on the Token field can be avoided if clients send an empty Token field when the server address has changed from when the NEW_TOKEN frame was received.¶
Clients could avoid using NEW_TOKEN if the server address changes. However, not including a Token field could adversely affect performance. Servers could rely on NEW_TOKEN to enable sending of data in excess of the three times limit on sending data; see Section 8.1. In particular, this affects cases where clients use 0-RTT to request data from servers.¶
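The mitigation of sending an empty Token field when the server address changes can be sketched in a few lines. The function name and the (address, port) tuple representation are assumptions for illustration.

```python
from typing import Optional, Tuple

Addr = Tuple[str, int]  # hypothetical (host, port) representation

def token_for_initial(saved_token: Optional[bytes],
                      token_server_addr: Addr,
                      current_server_addr: Addr) -> bytes:
    """Only reuse a token received via NEW_TOKEN when the server address
    matches the address the token was received from; otherwise send an
    empty Token field to avoid aiding request forgery."""
    if saved_token is not None and token_server_addr == current_server_addr:
        return saved_token
    return b""  # empty Token field
```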
Sending a Retry packet (Section 17.2.5) offers a server the option to change the Token field. After sending a Retry, the server can also control the Destination Connection ID field of subsequent Initial packets from the client. This also might allow indirect control over the encrypted content of Initial packets. However, the exchange of a Retry packet validates the server's address, thereby preventing the use of subsequent Initial packets for request forgery.¶
Servers can specify a preferred address, which clients then migrate to after confirming the handshake; see Section 9.6. The Destination Connection ID field of packets that the client sends to a preferred address can be used for request forgery.¶
A client MUST NOT send non-probing frames to a preferred address prior to validating that address; see Section 8. This greatly reduces the options that a server has to control the encrypted portion of datagrams.¶
This document does not offer any additional countermeasures that are specific to use of preferred addresses and can be implemented by endpoints. The generic measures described in Section 21.5.6 could be used as further mitigation.¶
Clients are able to present a spoofed source address as part of an apparent connection migration to cause a server to send datagrams to that address.¶
The Destination Connection ID field in any packets that a server subsequently sends to this spoofed address can be used for request forgery. A client might also be able to influence the ciphertext.¶
A server that only sends probing packets (Section 9.1) to an address prior to address validation provides an attacker with only limited control over the encrypted portion of datagrams. However, particularly for NAT rebinding, this can adversely affect performance. If the server sends frames carrying application data, an attacker might be able to control most of the content of datagrams.¶
This document does not offer specific countermeasures that can be implemented by endpoints aside from the generic measures described in Section 21.5.6. However, countermeasures for address spoofing at the network level, in particular ingress filtering [BCP38], are especially effective against attacks that use spoofing and originate from an external network.¶
Clients that are able to present a spoofed source address on a packet can cause a server to send a Version Negotiation packet (Section 17.2.1) to that address.¶
The absence of size restrictions on the connection ID fields for packets of an unknown version increases the amount of data that the client controls from the resulting datagram. The first byte of this packet is not under client control and the next four bytes are zero, but the client is able to control up to 512 bytes starting from the fifth byte.¶
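The 512-byte figure follows from the packet layout: each echoed connection ID can be up to 255 bytes and is preceded by a 1-byte length, so 1 + 255 + 1 + 255 = 512 client-controlled bytes. A sketch of a server constructing such a packet (the single advertised version and the arbitrary first-byte value are illustrative choices):

```python
def build_version_negotiation(client_dcid: bytes, client_scid: bytes) -> bytes:
    """Sketch of a Version Negotiation packet (Section 17.2.1).
    The client controls both echoed connection IDs, i.e. up to
    1 + 255 + 1 + 255 = 512 bytes following the first five bytes."""
    assert len(client_dcid) <= 255 and len(client_scid) <= 255
    pkt = bytearray([0x80])                  # long header form bit set
    pkt += (0).to_bytes(4, "big")            # Version field of zero
    pkt += bytes([len(client_scid)]) + client_scid  # DCID echoes client SCID
    pkt += bytes([len(client_dcid)]) + client_dcid  # SCID echoes client DCID
    pkt += (1).to_bytes(4, "big")            # one supported version (example)
    return bytes(pkt)
```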
No specific countermeasures are provided for this attack, though generic protections (Section 21.5.6) could apply. In this case, ingress filtering [BCP38] is also effective.¶
The most effective defense against request forgery attacks is to modify vulnerable services to use strong authentication. However, this is not always something that is within the control of a QUIC deployment. This section outlines some other steps that QUIC endpoints could take unilaterally. These additional steps are all discretionary as, depending on circumstances, they could interfere with or prevent legitimate uses.¶
Services offered over loopback interfaces often lack proper authentication. Endpoints MAY prevent connection attempts or migration to a loopback address. Endpoints SHOULD NOT allow connections or migration to a loopback address if the same service was previously available at a different interface or if the address was provided by a service at a non-loopback address. Endpoints that depend on these capabilities could offer an option to disable these protections.¶
Similarly, endpoints could regard a change in address to a link-local address [RFC4291] or an address in a private use range [RFC1918] from a global, unique-local [RFC4193], or non-private address as a potential attempt at request forgery. Endpoints could refuse to use these addresses entirely, but that carries a significant risk of interfering with legitimate uses. Endpoints SHOULD NOT refuse to use an address unless they have specific knowledge about the network indicating that sending datagrams to unvalidated addresses in a given range is not safe.¶
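This address-class heuristic can be sketched with the standard library's `ipaddress` module. The function name and the specific policy (flagging only, never refusing outright) are assumptions; as the text notes, refusing an address should depend on network-specific knowledge.

```python
import ipaddress

def migration_looks_suspicious(old_addr: str, new_addr: str) -> bool:
    """Heuristic sketch: flag a migration from a global address to a
    loopback, link-local, or private-use address as a possible request
    forgery attempt. This only flags; it does not refuse the address."""
    old = ipaddress.ip_address(old_addr)
    new = ipaddress.ip_address(new_addr)
    return old.is_global and (new.is_loopback
                              or new.is_link_local
                              or new.is_private)
```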
Endpoints MAY choose to reduce the risk of request forgery by not including values from NEW_TOKEN frames in Initial packets or by only sending probing frames in packets prior to completing address validation. Note that this does not prevent an attacker from using the Destination Connection ID field for an attack.¶
Endpoints are not expected to have specific information about the location of servers that could be vulnerable targets of a request forgery attack. However, it might be possible over time to identify specific UDP ports that are common targets of attacks or particular patterns in datagrams that are used for attacks. Endpoints MAY choose to avoid sending datagrams to these ports or not send datagrams that match these patterns prior to validating the destination address. Endpoints MAY retire connection IDs containing patterns known to be problematic without using them.¶
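A port-based version of this discretionary check might look like the following sketch. The blocklist contents are a hypothetical deployment policy, not a list from this document.

```python
# Hypothetical blocklist of UDP ports commonly targeted in reflection or
# forgery attacks; actual contents are deployment policy, not specification.
BLOCKED_PORTS = {53, 123, 161, 1900, 11211}

def may_send_unvalidated(dest_port: int, address_validated: bool) -> bool:
    """Sketch: suppress datagrams to known-problematic ports until the
    destination address has been validated (Section 8)."""
    return address_validated or dest_port not in BLOCKED_PORTS
```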
Modifying endpoints to apply these protections is more efficient than deploying network-based protections, as endpoints do not need to perform any additional processing when sending to an address that has been validated.¶
The attacks commonly known as Slowloris ([SLOWLORIS]) try to keep many connections to the target endpoint open and hold them open as long as possible. These attacks can be executed against a QUIC endpoint by generating the minimum amount of activity necessary to avoid being closed for inactivity. This might involve sending small amounts of data, gradually opening flow control windows in order to control the sender rate, or manufacturing ACK frames that simulate a high loss rate.¶
QUIC deployments SHOULD provide mitigations for the Slowloris attacks, such as increasing the maximum number of clients the server will allow, limiting the number of connections a single IP address is allowed to make, imposing restrictions on the minimum transfer speed a connection is allowed to have, and restricting the length of time an endpoint is allowed to stay connected.¶
An adversarial sender might intentionally not send portions of the stream data,causing the receiver to commit resources for the unsent data. This couldcause a disproportionate receive buffer memory commitment and/or the creation ofa large and inefficient data structure at the receiver.¶
An adversarial receiver might intentionally not acknowledge packets containingstream data in an attempt to force the sender to store the unacknowledged streamdata for retransmission.¶
The attack on receivers is mitigated if flow control windows correspond toavailable memory. However, some receivers will over-commit memory andadvertise flow control offsets in the aggregate that exceed actual availablememory. The over-commitment strategy can lead to better performance whenendpoints are well behaved, but renders endpoints vulnerable to the streamfragmentation attack.¶
QUIC deployments SHOULD provide mitigations against stream fragmentationattacks. Mitigations could consist of avoiding over-committing memory,limiting the size of tracking data structures, delaying reassemblyof STREAM frames, implementing heuristics based on the age andduration of reassembly holes, or some combination.¶
An adversarial endpoint can open a large number of streams, exhausting state onan endpoint. The adversarial endpoint could repeat the process on a largenumber of connections, in a manner similar to SYN flooding attacks in TCP.¶
Normally, clients will open streams sequentially, as explained in Section 2.1. However, when several streams are initiated at short intervals, loss or reordering can cause STREAM frames that open streams to be received out of sequence. On receiving a higher-numbered stream ID, a receiver is required to open all intervening streams of the same type; see Section 3.2. Thus, on a new connection, opening stream 4000000 opens 1 million and 1 client-initiated bidirectional streams.¶
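The relationship between a stream ID and the number of streams it implicitly opens can be sketched as follows; the helper name is illustrative, not from the specification:

```python
def implied_stream_count(stream_id: int) -> int:
    # Streams of one type use every fourth stream ID (the two
    # low-order bits of the ID select the type), so receiving
    # stream_id on a fresh connection opens streams with IDs
    # stream_id % 4, + 4, + 8, ..., up to and including stream_id.
    return stream_id // 4 + 1

# Stream 4000000 is client-initiated bidirectional (4000000 % 4 == 0);
# opening it opens 1,000,001 streams of that type.
print(implied_stream_count(4_000_000))
```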
The number of active streams is limited by the initial_max_streams_bidi and initial_max_streams_uni transport parameters, as explained in Section 4.6. If chosen judiciously, these limits mitigate the effect of the stream commitment attack. However, setting the limit too low could affect performance when applications expect to open a large number of streams.¶
QUIC and TLS both contain frames or messages that have legitimate uses in some contexts, but that can be abused to cause a peer to expend processing resources without having any observable impact on the state of the connection.¶
Messages can also be used to change and revert state in small or inconsequential ways, such as by sending small increments to flow control limits.¶
If processing costs are disproportionately large in comparison to bandwidth consumption or effect on state, then this could allow a malicious peer to exhaust processing capacity.¶
While there are legitimate uses for all messages, implementations SHOULD track cost of processing relative to progress and treat excessive quantities of any non-productive packets as indicative of an attack. Endpoints MAY respond to this condition with a connection error, or by dropping packets.¶
An on-path attacker could manipulate the value of ECN fields in the IP header to influence the sender's rate. [RFC3168] discusses manipulations and their effects in more detail.¶
An on-the-side attacker can duplicate and send packets with modified ECN fields to affect the sender's rate. If duplicate packets are discarded by a receiver, an off-path attacker will need to race the duplicate packet against the original to be successful in this attack. Therefore, QUIC endpoints ignore the ECN field on an IP packet unless at least one QUIC packet in that IP packet is successfully processed; see Section 13.4.¶
Stateless resets create a possible denial of service attack analogous to a TCP reset injection. This attack is possible if an attacker is able to cause a stateless reset token to be generated for a connection with a selected connection ID. An attacker that can cause this token to be generated can reset an active connection with the same connection ID.¶
If a packet can be routed to different instances that share a static key, for example by changing an IP address or port, then an attacker can cause the server to send a stateless reset. To defend against this style of denial of service, endpoints that share a static key for stateless reset (see Section 10.3.2) MUST be arranged so that packets with a given connection ID always arrive at an instance that has connection state, unless that connection is no longer active.¶
More generally, servers MUST NOT generate a stateless reset if a connection with the corresponding connection ID could be active on any endpoint using the same static key.¶
In the case of a cluster that uses dynamic load balancing, it is possible that a change in load balancer configuration could occur while an active instance retains connection state. Even if an instance retains connection state, the change in routing and resulting stateless reset will result in the connection being terminated. If there is no chance of the packet being routed to the correct instance, it is better to send a stateless reset than wait for the connection to time out. However, this is acceptable only if the routing cannot be influenced by an attacker.¶
This document defines QUIC Version Negotiation packets in Section 6 that can be used to negotiate the QUIC version used between two endpoints. However, this document does not specify how this negotiation will be performed between this version and subsequent future versions. In particular, Version Negotiation packets do not contain any mechanism to prevent version downgrade attacks. Future versions of QUIC that use Version Negotiation packets MUST define a mechanism that is robust against version downgrade attacks.¶
Deployments should limit the ability of an attacker to target a new connection to a particular server instance. This means that client-controlled fields, such as the initial Destination Connection ID used on Initial and 0-RTT packets, SHOULD NOT be used by themselves to make routing decisions. Ideally, routing decisions are made independently of client-selected values; a Source Connection ID can be selected to route later packets to the same server.¶
The length of QUIC packets can reveal information about the length of the content of those packets. The PADDING frame is provided so that endpoints have some ability to obscure the length of packet content; see Section 19.1.¶
Note however that defeating traffic analysis is challenging and the subject of active research. Length is not the only way that information might leak. Endpoints might also reveal sensitive information through other side channels, such as the timing of packets.¶
This document establishes several registries for the management of codepoints in QUIC. These registries operate on a common set of policies as defined in Section 22.1.¶
All QUIC registries allow for both provisional and permanent registration of codepoints. This section documents policies that are common to these registries.¶
Provisional registrations of codepoints are intended to allow for private use and experimentation with extensions to QUIC. Provisional registrations only require the inclusion of the codepoint value and contact information. However, provisional registrations could be reclaimed and reassigned for another purpose.¶
Provisional registrations require Expert Review, as defined in Section 4.5 of [RFC8126]. Designated expert(s) are advised that only registrations for an excessive proportion of remaining codepoint space or the very first unassigned value (see Section 22.1.2) can be rejected.¶
Provisional registrations will include a date field that indicates when the registration was last updated. A request to update the date on any provisional registration can be made without review from the designated expert(s).¶
All QUIC registries include the following fields to support provisional registration:¶
Value: The assigned codepoint.¶
Status: "Permanent" or "Provisional".¶
Specification: A reference to a publicly available specification for the value.¶
Date: The date of last update to the registration.¶
Change Controller: The entity that is responsible for the definition of the registration.¶
Contact: Contact details for the registrant.¶
Notes: Supplementary notes about the registration.¶
Provisional registrations MAY omit the Specification and Notes fields, plus any additional fields that might be required for a permanent registration. The Date field is not required as part of requesting a registration as it is set to the date the registration is created or updated.¶
New uses of codepoints from QUIC registries SHOULD use a randomly selected codepoint that excludes both existing allocations and the first unallocated codepoint in the selected space. Requests for multiple codepoints MAY use a contiguous range. This minimizes the risk that differing semantics are attributed to the same codepoint by different implementations.¶
Use of the first available codepoint in a range is reserved for allocation using the Standards Action policy; see Section 4.9 of [RFC8126]. The early codepoint assignment process [EARLY-ASSIGN] can be used for these values.¶
For codepoints that are encoded in variable-length integers (Section 16), such as frame types, codepoints that encode to four or eight bytes (that is, values 2^14 and above) SHOULD be used unless the usage is especially sensitive to having a longer encoding.¶
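The size boundaries for variable-length integer encodings (Section 16) can be sketched as follows; the function name is illustrative, not from the specification:

```python
def varint_encoded_length(value: int) -> int:
    # QUIC varints spend 2 bits of the first byte on a length
    # prefix, leaving 6, 14, 30, or 62 bits of usable value space
    # for the 1-, 2-, 4-, and 8-byte encodings respectively.
    if value < (1 << 6):
        return 1
    if value < (1 << 14):
        return 2
    if value < (1 << 30):
        return 4
    if value < (1 << 62):
        return 8
    raise ValueError("value too large for a variable-length integer")

# Codepoints of 2^14 and above encode to four or eight bytes.
print(varint_encoded_length(2 ** 14))
```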
Applications to register codepoints in QUIC registries MAY include a codepoint as part of the registration. IANA MUST allocate the selected codepoint if the codepoint is unassigned and the requirements of the registration policy are met.¶
A request might be made to remove an unused provisional registration from the registry to reclaim space in a registry, or a portion of the registry (such as the 64-16383 range for codepoints that use variable-length encodings). This SHOULD be done only for the codepoints with the earliest recorded date, and entries that have been updated less than a year prior SHOULD NOT be reclaimed.¶
A request to remove a codepoint MUST be reviewed by the designated expert(s). The expert(s) MUST attempt to determine whether the codepoint is still in use. Experts are advised to contact the listed contacts for the registration, plus as wide a set of protocol implementers as possible, in order to determine whether any use of the codepoint is known. The expert(s) are advised to allow at least four weeks for responses.¶
If any use of the codepoints is identified by this search or a request to update the registration is made, the codepoint MUST NOT be reclaimed. Instead, the date on the registration is updated. A note might be added for the registration recording relevant information that was learned.¶
If no use of the codepoint was identified and no request was made to update the registration, the codepoint MAY be removed from the registry.¶
This process also applies to requests to change a provisional registration into a permanent registration, except that the goal is not to determine whether there is no use of the codepoint, but to determine that the registration is an accurate representation of any deployed usage.¶
Permanent registrations in QUIC registries use the Specification Required policy ([RFC8126]), unless otherwise specified. The designated expert(s) verify that a specification exists and is readily accessible. Expert(s) are encouraged to be biased towards approving registrations unless they are abusive, frivolous, or actively harmful (not merely aesthetically displeasing, or architecturally dubious). The creation of a registry MAY specify additional constraints on permanent registrations.¶
The creation of a registry MAY identify a range of codepoints where registrations are governed by a different registration policy. For instance, the frame type registry in Section 22.4 has a stricter policy for codepoints in the range from 0 to 63.¶
Any stricter requirements for permanent registrations do not prevent provisional registrations for affected codepoints. For instance, a provisional registration for a frame type of 61 could be requested.¶
All registrations made by Standards Track publications MUST be permanent.¶
All registrations in this document are assigned a permanent status and list a change controller of the IETF and a contact of the QUIC working group (quic@ietf.org).¶
IANA [SHALL add/has added] a registry for "QUIC Versions" under a "QUIC" heading.¶
The "QUIC Versions" registry governs a 32-bit space; see Section 15. This registry follows the registration policy from Section 22.1. Permanent registrations in this registry are assigned using the Specification Required policy ([RFC8126]).¶
The codepoint of 0x00000001 is assigned with permanent status to the protocol defined in this document. The codepoint of 0x00000000 is permanently reserved; the note for this codepoint [shall] indicate[s] that this version is reserved for Version Negotiation.¶
All codepoints that follow the pattern 0x?a?a?a?a are reserved and MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values.¶
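A check for the reserved version pattern might look like this sketch (the function name is illustrative, not from the specification):

```python
def is_reserved_version(version: int) -> bool:
    # 0x?a?a?a?a: the low nibble of each of the four bytes of the
    # 32-bit version number is 0xa.
    return all((version >> shift) & 0x0F == 0x0A
               for shift in (0, 8, 16, 24))

print(is_reserved_version(0x1A2A3A4A))  # a reserved version
print(is_reserved_version(0x00000001))  # this document's version
```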
[[RFC editor: please remove the following note before publication.]]¶
Several pre-standardization versions will likely be in use at the time of publication. There is no need to document these in an RFC, but recording information about these versions will ensure that the information in the registry is accurate. The document editors or working group chairs can facilitate getting the necessary information.¶
IANA [SHALL add/has added] a registry for "QUIC Transport Parameters" under a "QUIC" heading.¶
The "QUIC Transport Parameters" registry governs a 62-bit space. This registry follows the registration policy from Section 22.1. Permanent registrations in this registry are assigned using the Specification Required policy ([RFC8126]).¶
In addition to the fields in Section 22.1.1, permanent registrations in this registry MUST include the following field:¶
Parameter Name: A short mnemonic for the parameter.¶
The initial contents of this registry are shown in Table 6.¶
| Value | Parameter Name | Specification |
|---|---|---|
| 0x00 | original_destination_connection_id | Section 18.2 |
| 0x01 | max_idle_timeout | Section 18.2 |
| 0x02 | stateless_reset_token | Section 18.2 |
| 0x03 | max_udp_payload_size | Section 18.2 |
| 0x04 | initial_max_data | Section 18.2 |
| 0x05 | initial_max_stream_data_bidi_local | Section 18.2 |
| 0x06 | initial_max_stream_data_bidi_remote | Section 18.2 |
| 0x07 | initial_max_stream_data_uni | Section 18.2 |
| 0x08 | initial_max_streams_bidi | Section 18.2 |
| 0x09 | initial_max_streams_uni | Section 18.2 |
| 0x0a | ack_delay_exponent | Section 18.2 |
| 0x0b | max_ack_delay | Section 18.2 |
| 0x0c | disable_active_migration | Section 18.2 |
| 0x0d | preferred_address | Section 18.2 |
| 0x0e | active_connection_id_limit | Section 18.2 |
| 0x0f | initial_source_connection_id | Section 18.2 |
| 0x10 | retry_source_connection_id | Section 18.2 |
Each value of the format 31 * N + 27 for integer values of N (that is, 27, 58, 89, ...) is reserved; these values MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values.¶
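The reserved identifiers can be recognized with a simple check; this sketch assumes the identifier has already been decoded from its variable-length integer encoding, and the function name is illustrative:

```python
def is_reserved_transport_parameter(identifier: int) -> bool:
    # Reserved identifiers have the form 31 * N + 27 for integer
    # N >= 0, i.e. 27, 58, 89, and so on; equivalently, the
    # identifier is congruent to 27 modulo 31.
    return identifier % 31 == 27

# The first few reserved identifiers below 100.
print([v for v in range(100) if is_reserved_transport_parameter(v)])
```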
IANA [SHALL add/has added] a registry for "QUIC Frame Types" under a "QUIC" heading.¶
The "QUIC Frame Types" registry governs a 62-bit space. This registry follows the registration policy from Section 22.1. Permanent registrations in this registry are assigned using the Specification Required policy ([RFC8126]), except for values between 0x00 and 0x3f (in hexadecimal; inclusive), which are assigned using Standards Action or IESG Approval as defined in Sections 4.9 and 4.10 of [RFC8126].¶
In addition to the fields in Section 22.1.1, permanent registrations in this registry MUST include the following field:¶
Frame Type Name: A short mnemonic for the frame type.¶
In addition to the advice in Section 22.1, specifications for new permanent registrations SHOULD describe the means by which an endpoint might determine that it can send the identified type of frame. An accompanying transport parameter registration is expected for most registrations; see Section 22.3. Specifications for permanent registrations also need to describe the format and assigned semantics of any fields in the frame.¶
The initial contents of this registry are tabulated in Table 3. Note that the registry does not include the "Pkts" and "Spec" columns from Table 3.¶
IANA [SHALL add/has added] a registry for "QUIC Transport Error Codes" under a "QUIC" heading.¶
The "QUIC Transport Error Codes" registry governs a 62-bit space. This space is split into three regions that are governed by different policies. Permanent registrations in this registry are assigned using the Specification Required policy ([RFC8126]), except for values between 0x00 and 0x3f (in hexadecimal; inclusive), which are assigned using Standards Action or IESG Approval as defined in Sections 4.9 and 4.10 of [RFC8126].¶
In addition to the fields in Section 22.1.1, permanent registrations in this registry MUST include the following fields:¶
Code: A short mnemonic for the parameter.¶
Description: A brief description of the error code semantics, which MAY be a summary if a specification reference is provided.¶
The initial contents of this registry are shown in Table 7.¶
| Value | Error | Description | Specification |
|---|---|---|---|
| 0x0 | NO_ERROR | No error | Section 20 |
| 0x1 | INTERNAL_ERROR | Implementation error | Section 20 |
| 0x2 | CONNECTION_REFUSED | Server refuses a connection | Section 20 |
| 0x3 | FLOW_CONTROL_ERROR | Flow control error | Section 20 |
| 0x4 | STREAM_LIMIT_ERROR | Too many streams opened | Section 20 |
| 0x5 | STREAM_STATE_ERROR | Frame received in invalid stream state | Section 20 |
| 0x6 | FINAL_SIZE_ERROR | Change to final size | Section 20 |
| 0x7 | FRAME_ENCODING_ERROR | Frame encoding error | Section 20 |
| 0x8 | TRANSPORT_PARAMETER_ERROR | Error in transport parameters | Section 20 |
| 0x9 | CONNECTION_ID_LIMIT_ERROR | Too many connection IDs received | Section 20 |
| 0xa | PROTOCOL_VIOLATION | Generic protocol violation | Section 20 |
| 0xb | INVALID_TOKEN | Invalid Token Received | Section 20 |
| 0xc | APPLICATION_ERROR | Application error | Section 20 |
| 0xd | CRYPTO_BUFFER_EXCEEDED | CRYPTO data buffer overflowed | Section 20 |
| 0xe | KEY_UPDATE_ERROR | Invalid packet protection update | Section 20 |
| 0xf | AEAD_LIMIT_REACHED | Excessive use of packet protection keys | Section 20 |
| 0x10 | NO_VIABLE_PATH | No viable network path exists | Section 20 |
The pseudocode in this section describes sample algorithms. These algorithms are intended to be correct and clear, rather than being optimally performant.¶
The pseudocode segments in this section are licensed as Code Components; see the copyright notice.¶
The pseudocode in Figure 45 shows how a variable-length integer can be read from a stream of bytes. The function ReadVarint takes a single argument: a sequence of bytes, which can be read in network byte order.¶
ReadVarint(data):
  // The length of variable-length integers is encoded in the
  // first two bits of the first byte.
  v = data.next_byte()
  prefix = v >> 6
  length = 1 << prefix

  // Once the length is known, remove these bits and read any
  // remaining bytes.
  v = v & 0x3f
  repeat length-1 times:
    v = (v << 8) + data.next_byte()
  return v
For example, the eight-byte sequence 0xc2197c5eff14e88c decodes to the decimal value 151,288,809,941,952,652; the four-byte sequence 0x9d7f3e7d decodes to 494,878,333; the two-byte sequence 0x7bbd decodes to 15,293; and the single byte 0x25 decodes to 37 (as does the two-byte sequence 0x4025).¶
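The ReadVarint pseudocode translates directly into Python; this sketch checks the example values above:

```python
def read_varint(data: bytes) -> int:
    # The 2-bit prefix of the first byte gives the encoded length:
    # 1, 2, 4, or 8 bytes.
    it = iter(data)
    v = next(it)
    prefix = v >> 6
    length = 1 << prefix
    # Mask off the prefix bits, then accumulate any remaining
    # bytes in network byte order.
    v &= 0x3F
    for _ in range(length - 1):
        v = (v << 8) + next(it)
    return v

assert read_varint(bytes.fromhex("c2197c5eff14e88c")) == 151288809941952652
assert read_varint(bytes.fromhex("9d7f3e7d")) == 494878333
assert read_varint(bytes.fromhex("7bbd")) == 15293
assert read_varint(bytes.fromhex("25")) == read_varint(bytes.fromhex("4025")) == 37
```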
The pseudocode in Figure 46 shows how an implementation can select an appropriate size for packet number encodings.¶
The EncodePacketNumber function takes two arguments:¶
full_pn is the full packet number of the packet being sent.¶
largest_acked is the largest packet number that has been acknowledged by the peer in the current packet number space, if any.¶
EncodePacketNumber(full_pn, largest_acked):

  // The number of bits must be at least one more
  // than the base-2 logarithm of the number of contiguous
  // unacknowledged packet numbers, including the new packet.
  if largest_acked is None:
    num_unacked = full_pn + 1
  else:
    num_unacked = full_pn - largest_acked

  min_bits = log(num_unacked, 2) + 1
  num_bytes = ceil(min_bits / 8)

  // Encode the integer value and truncate to
  // the num_bytes least-significant bytes.
  return encode(full_pn, num_bytes)
For example, if an endpoint has received an acknowledgment for packet 0xabe8b3 and is sending a packet with a number of 0xac5c02, there are 29,519 (0x734f) outstanding packets. In order to represent at least twice this range (59,038 packets, or 0xe69e), 16 bits are required.¶
In the same state, sending a packet with a number of 0xace8fe uses the 24-bit encoding, because at least 18 bits are required to represent twice the range (131,222 packets, or 0x20096).¶
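The length selection in EncodePacketNumber can be transcribed into Python; this sketch returns just the byte count and uses the example values above (with largest acknowledged packet 0xabe8b3):

```python
from math import ceil, log2

def packet_number_length(full_pn: int, largest_acked=None) -> int:
    # Bits needed: one more than the base-2 logarithm of the number
    # of contiguous unacknowledged packet numbers, including the
    # new packet.
    if largest_acked is None:
        num_unacked = full_pn + 1
    else:
        num_unacked = full_pn - largest_acked
    min_bits = log2(num_unacked) + 1
    return ceil(min_bits / 8)

assert packet_number_length(0xac5c02, 0xabe8b3) == 2  # 16-bit encoding
assert packet_number_length(0xace8fe, 0xabe8b3) == 3  # 24-bit encoding
```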
The pseudocode in Figure 47 includes an example algorithm for decoding packet numbers after header protection has been removed.¶
The DecodePacketNumber function takes three arguments:¶
largest_pn is the largest packet number that has been successfully processed in the current packet number space.¶
truncated_pn is the value of the Packet Number field.¶
pn_nbits is the number of bits in the Packet Number field (8, 16, 24, or 32).¶
DecodePacketNumber(largest_pn, truncated_pn, pn_nbits):
  expected_pn  = largest_pn + 1
  pn_win       = 1 << pn_nbits
  pn_hwin      = pn_win / 2
  pn_mask      = pn_win - 1
  // The incoming packet number should be greater than
  // expected_pn - pn_hwin and less than or equal to
  // expected_pn + pn_hwin
  //
  // This means we cannot just strip the trailing bits from
  // expected_pn and add the truncated_pn because that might
  // yield a value outside the window.
  //
  // The following code calculates a candidate value and
  // makes sure it's within the packet number window.
  // Note the extra checks to prevent overflow and underflow.
  candidate_pn = (expected_pn & ~pn_mask) | truncated_pn
  if candidate_pn <= expected_pn - pn_hwin and
     candidate_pn < (1 << 62) - pn_win:
    return candidate_pn + pn_win
  if candidate_pn > expected_pn + pn_hwin and
     candidate_pn >= pn_win:
    return candidate_pn - pn_win
  return candidate_pn
For example, if the highest successfully authenticated packet had a packet number of 0xa82f30ea, then a packet containing a 16-bit value of 0x9b32 will be decoded as 0xa82f9b32.¶
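DecodePacketNumber also transcribes directly into Python; this sketch reproduces the example above:

```python
def decode_packet_number(largest_pn: int, truncated_pn: int,
                         pn_nbits: int) -> int:
    expected_pn = largest_pn + 1
    pn_win = 1 << pn_nbits
    pn_hwin = pn_win // 2
    pn_mask = pn_win - 1
    # Combine the high bits of the expected packet number with the
    # truncated value, then shift the candidate into the window
    # centered on expected_pn, guarding against over/underflow.
    candidate_pn = (expected_pn & ~pn_mask) | truncated_pn
    if (candidate_pn <= expected_pn - pn_hwin and
            candidate_pn < (1 << 62) - pn_win):
        return candidate_pn + pn_win
    if candidate_pn > expected_pn + pn_hwin and candidate_pn >= pn_win:
        return candidate_pn - pn_win
    return candidate_pn

assert decode_packet_number(0xa82f30ea, 0x9b32, 16) == 0xa82f9b32
```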
Each time an endpoint commences sending on a new network path, it determines whether the path supports ECN; see Section 13.4. If the path supports ECN, the goal is to use ECN. Endpoints might also periodically reassess a path that was determined to not support ECN.¶
This section describes one method for testing new paths. This algorithm is intended to show how a path might be tested for ECN support. Endpoints can implement different methods.¶
The path is assigned an ECN state that is one of "testing", "unknown", "failed", or "capable". On paths with a "testing" or "capable" state, the endpoint sends packets with an ECT marking, by default ECT(0); otherwise, the endpoint sends unmarked packets.¶
To start testing a path, the ECN state is set to "testing" and existing ECN counts are remembered as a baseline.¶
The testing period runs for a number of packets or a limited time, as determined by the endpoint. The goal is not to limit the duration of the testing period, but to ensure that enough marked packets are sent for received ECN counts to provide a clear indication of how the path treats marked packets. Section 13.4.2 suggests limiting this to 10 packets or 3 times the probe timeout.¶
After the testing period ends, the ECN state for the path becomes "unknown". From the "unknown" state, successful validation of the ECN counts in an ACK frame (see Section 13.4.2.1) causes the ECN state for the path to become "capable", unless no marked packet has been acknowledged.¶
If validation of ECN counts fails at any time, the ECN state for the affected path becomes "failed". An endpoint can also mark the ECN state for a path as "failed" if marked packets are all declared lost or if they are all CE marked.¶
Following this algorithm ensures that ECN is rarely disabled for paths that properly support ECN. Any path that incorrectly modifies markings will cause ECN to be disabled. For those rare cases where marked packets are discarded by the path, the short duration of the testing period limits the number of losses incurred.¶
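The per-path transitions described above can be sketched as a small state machine; the event names are illustrative, not from the specification:

```python
from enum import Enum

class EcnState(Enum):
    TESTING = "testing"
    UNKNOWN = "unknown"
    CAPABLE = "capable"
    FAILED = "failed"

def next_ecn_state(state: EcnState, event: str) -> EcnState:
    # "validation_failed" stands in for any of: ECN count validation
    # failing, all marked packets declared lost, or all marked
    # packets arriving CE-marked.
    if event == "validation_failed":
        return EcnState.FAILED
    if state is EcnState.TESTING and event == "testing_period_ended":
        return EcnState.UNKNOWN
    if (state is EcnState.UNKNOWN and
            event == "counts_validated_with_marked_ack"):
        return EcnState.CAPABLE
    return state

path = EcnState.TESTING
path = next_ecn_state(path, "testing_period_ended")
path = next_ecn_state(path, "counts_validated_with_marked_ack")
print(path)
```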
Issue and pull request numbers are listed with a leading octothorp.¶
A number of improvements to IANA considerations:¶
Stateless reset changes (#2152, #2993)¶
Rework the first byte (#2006)¶
Substantial editorial reorganization; no technical changes.¶
Changes to integration of the TLS handshake (#829, #1018, #1094, #1165, #1190, #1233, #1242, #1252, #1450, #1458)¶
Streams are split into unidirectional and bidirectional (#643, #656, #720, #872, #175, #885)¶
Improvements to connection close¶
Split some frames into separate connection- and stream-level frames (#443)¶
Transport parameters for 0-RTT are retained from a previous connection (#405, #513, #512)¶
The original design and rationale behind this protocol draw significantly from work by Jim Roskind [EARLY-DESIGN].¶
The IETF QUIC Working Group received an enormous amount of support from many people. The following people provided substantive contributions to this document:¶
奥 一穂 (Kazuho Oku)¶
Mikkel Fahnøe Jørgensen¶
Mirja Kühlewind¶
draft-ietf-quic-transport-33
This is an older version of an Internet-Draft that was ultimately published as RFC 9000.
Authors: Jana Iyengar, Martin Thomson
Replaces: draft-hamilton-quic-transport-protocol, draft-ietf-quic-spin-exp