RFC 9328: RTP Payload Format for VVC (December 2022)
This memo describes an RTP payload format for the Versatile Video Coding (VVC) specification, which was published as both ITU-T Recommendation H.266 and ISO/IEC International Standard 23090-3. VVC was developed by the Joint Video Experts Team (JVET). The RTP payload format allows for packetization of one or more Network Abstraction Layer (NAL) units in each RTP packet payload, as well as fragmentation of a NAL unit into multiple RTP packets. The payload format has wide applicability in videoconferencing, Internet video streaming, and high-bitrate entertainment-quality video, among other applications.¶
This is an Internet Standards Track document.¶
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841.¶
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9328.¶
Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The Versatile Video Coding specification was formally published as both ITU-T Recommendation H.266 [VVC] and ISO/IEC International Standard 23090-3 [ISO23090-3]. VVC is reported to provide significant coding efficiency gains over High Efficiency Video Coding [HEVC], also known as H.265, and other earlier video codecs.¶
This memo specifies an RTP payload format for VVC. It shares its basic design with the NAL-unit-based RTP payload formats of Advanced Video Coding (AVC) [RFC6184], Scalable Video Coding (SVC) [RFC6190], and High Efficiency Video Coding (HEVC) [RFC7798], as well as their respective predecessors. With respect to design philosophy, security, congestion control, and overall implementation complexity, it has similar properties to those earlier payload format specifications. This is a conscious choice, as at least [RFC6184] is widely deployed and generally known in the relevant implementer communities. Certain scalability-related mechanisms known from [RFC6190] were incorporated into this document, as VVC version 1 supports temporal, spatial, and signal-to-noise ratio (SNR) scalability.¶
VVC and HEVC share a similar hybrid video codec design. In this memo, we provide a very brief overview of those features of VVC that are, in some form, addressed by the payload format specified herein. Implementers have to read, understand, and apply the ITU-T/ISO/IEC specifications pertaining to VVC to arrive at interoperable, well-performing implementations.¶
Conceptually, both VVC and HEVC include a Video Coding Layer (VCL), which is often used to refer to the coding-tool features, and a NAL, which is often used to refer to the systems and transport interface aspects of the codecs.¶
Coding-tool features are described below with occasional reference to the coding-tool set of HEVC, which is well known in the community.¶
Similar to earlier hybrid-video-coding-based standards, including HEVC, the following basic video coding design is employed by VVC. A prediction signal is first formed by either intra- or motion-compensated prediction, and the residual (the difference between the original and the prediction) is then coded. The gains in coding efficiency are achieved by redesigning and improving almost all parts of the codec over earlier designs. In addition, VVC includes several tools to make the implementation on parallel architectures easier.¶
Finally, VVC includes temporal, spatial, and SNR scalability, as well as multiview coding support.¶
VVC inherits the basic systems and transport interface designs from HEVC and AVC. These include the NAL-unit-based syntax structure, the hierarchical syntax and data unit structure, the supplemental enhancement information (SEI) message mechanism, and the video buffering model based on the hypothetical reference decoder (HRD). The scalability features of VVC are conceptually similar to the scalable extension of HEVC, known as SHVC. The hierarchical syntax and data unit structure consists of parameter sets at various levels (i.e., decoder, sequence (pertaining to all layers), sequence (pertaining to a single layer), and picture), picture-level header parameters, slice-level header parameters, and lower-level parameters.¶
A number of key components that influenced the network abstraction layer design of VVC, as well as this memo, are described below.¶
VVC includes support for spatial, SNR, and multiview scalability. Scalable video coding is widely considered to have technical benefits and to enrich services for various video applications. Until recently, however, such functionality had not been included in the first version of video codec specifications. In VVC, all those forms of scalability are supported natively in the first version through the signaling of nuh_layer_id in the NAL unit header, the VPS (which associates the layers identified by their nuh_layer_id values with each other), reference picture selection, reference picture resampling for spatial scalability, and a number of other mechanisms not relevant for this memo.¶
VVC inherited the concept of tiles and wavefront parallel processing (WPP) from HEVC, with some minor to moderate differences. The basic concept of slices was kept in VVC but designed in an essentially different form. VVC is the first video coding standard that includes subpictures as a feature, which provides the same functionality as HEVC motion-constrained tile sets (MCTSs) but designed differently to have better coding efficiency and to be friendlier for usage in application systems. More details of these differences are described below.¶
In VVC, the conventional slices based on CTUs (as in HEVC) or macroblocks (as in AVC) have been removed. The main reasoning behind this architectural change is as follows. The advances in video coding since 2003 (the publication year of AVC v1) have been such that slice-based error concealment has become practically impossible due to the ever-increasing number and efficiency of in-picture and inter-picture prediction mechanisms. An error-concealed picture is the decoding result of a transmitted coded picture for which some data (e.g., some slices) of the coded picture or of a reference picture has been lost, or for which at least some part of a reference picture is not error-free (e.g., because that reference picture was itself an error-concealed picture). For example, when one of the multiple slices of a picture is lost, it may be error-concealed using an interpolation of the neighboring slices. While advanced video coding prediction mechanisms provide significantly higher coding efficiency, they also make it harder for machines to estimate the quality of an error-concealed picture, which was already a hard problem with the use of simpler prediction mechanisms. Advanced in-picture prediction mechanisms also cause the coding efficiency loss due to splitting a picture into multiple slices to be more significant. Furthermore, network conditions have become significantly better while, at the same time, techniques for dealing with packet losses have improved significantly. As a result, very few implementations have recently used slices for maximum-transmission-unit-size matching. Instead, substantially all applications where low-delay error resilience is required (e.g., video telephony and video conferencing) rely on system/transport-level error resilience (e.g., retransmission or forward error correction) and/or picture-based error resilience tools (e.g., feedback-based error resilience, insertion of intra random access point (IRAP) pictures, scalability with a higher protection level of the base layer, and so on). Considering all the above, nowadays it is very rare that a picture that cannot be correctly decoded is passed to the decoder, and when such a rare case occurs, the system can afford to wait for an error-free picture to be decoded and available for display without resulting in frequent and long periods of picture freezing seen by end users.¶
Slices in VVC have two modes: rectangular slices and raster-scan slices. The rectangular slice, as indicated by its name, covers a rectangular region of the picture. Typically, a rectangular slice consists of several complete tiles. However, it is also possible that a rectangular slice is a subset of a tile and consists of one or more consecutive, complete CTU rows within a tile. A raster-scan slice consists of one or more complete tiles in tile raster-scan order; hence, the region covered by a raster-scan slice need not be rectangular, although it may happen to have the shape of a rectangle. The concept of slices in VVC is therefore strongly linked to or based on tiles instead of CTUs (as in HEVC) or macroblocks (as in AVC).¶
VVC is the first video coding standard that includes the support of subpictures as a feature. Each subpicture consists of one or more complete rectangular slices that collectively cover a rectangular region of the picture. A subpicture may be either specified to be extractable (i.e., coded independently of other subpictures of the same picture and of earlier pictures in decoding order) or not extractable. Regardless of whether a subpicture is extractable or not, the encoder can control whether in-loop filtering (including deblocking, SAO, and ALF) is applied across the subpicture boundaries individually for each subpicture.¶
Functionally, subpictures are similar to the motion-constrained tile sets (MCTSs) in HEVC. They both allow independent coding and extraction of a rectangular subset of a sequence of coded pictures for use cases like viewport-dependent 360-degree video streaming optimization and region of interest (ROI) applications.¶
There are several important design differences between subpictures and MCTSs. First, the subpicture feature in VVC allows motion vectors of a coding block to point outside of the subpicture, even when the subpicture is extractable; in that case, sample padding is applied at the subpicture boundaries, similarly as at picture boundaries. Second, additional changes were introduced for the selection and derivation of motion vectors in the merge mode and in the decoder-side motion vector refinement process of VVC. This allows higher coding efficiency compared to the non-normative motion constraints applied at the encoder side for MCTSs. Third, rewriting of slice headers (SHs) (and PH NAL units, when present) is not needed when extracting one or more extractable subpictures from a sequence of pictures to create a sub-bitstream that is a conforming bitstream. In sub-bitstream extractions based on HEVC MCTSs, rewriting of SHs is needed. Note that, in both HEVC MCTS extraction and VVC subpicture extraction, rewriting of SPSs and PPSs is needed. However, typically, there are only a few parameter sets in a bitstream, whereas each picture has at least one slice; therefore, rewriting of SHs can be a significant burden for application systems. Fourth, slices of different subpictures within a picture are allowed to have different NAL unit types. Fifth, VVC specifies HRD and level definitions for subpicture sequences; thus, the conformance of the sub-bitstream of each extractable subpicture sequence can be ensured by encoders.¶
VVC maintains the NAL unit concept of HEVC with modifications. VVC uses a two-byte NAL unit header, as shown in Figure 1. The payload of a NAL unit refers to the NAL unit excluding the NAL unit header.¶
+---------------+---------------+
|0|1|2|3|4|5|6|7|0|1|2|3|4|5|6|7|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|Z|  LayerID  |  Type   | TID |
+---------------+---------------+
The semantics of the fields in the NAL unit header are as specified in VVC and described briefly below for convenience. In addition to the name and size of each field, the corresponding syntax element name in VVC is also provided.¶
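Informative note: The following Python sketch (not part of this specification and provided for illustration only; the function and variable names are assumptions) shows how a receiver could extract the fields of the two-byte NAL unit header shown in Figure 1:¶

   def parse_nal_unit_header(nal_unit: bytes) -> dict:
       """Extract F, Z, LayerID, Type, and TID from the first two octets."""
       if len(nal_unit) < 2:
           raise ValueError("need at least two octets for the NAL unit header")
       b0, b1 = nal_unit[0], nal_unit[1]
       return {
           "F":       (b0 >> 7) & 0x01,  # forbidden_zero_bit (1 bit)
           "Z":       (b0 >> 6) & 0x01,  # nuh_reserved_zero_bit (1 bit)
           "LayerID": b0 & 0x3F,         # nuh_layer_id (6 bits)
           "Type":    (b1 >> 3) & 0x1F,  # nal_unit_type (5 bits)
           "TID":     b1 & 0x07,         # nuh_temporal_id_plus1 (3 bits)
       }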
This payload format defines the following processes required for transport of VVC coded data over RTP [RFC3550]:¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This document uses the terms and definitions of VVC. Section 3.1.1 lists relevant definitions from [VVC] for convenience. Section 3.1.2 provides definitions specific to this memo. The terms and definitions listed in Section 3.1.1 are verbatim copies from the [VVC] specification.¶
Media-Aware Network Element (MANE): A network element, such as a middlebox, selective forwarding unit, or application-layer gateway that is capable of parsing certain aspects of the RTP payload headers or the RTP payload and reacting to their contents.¶
Informative note: The concept of a MANE goes beyond normal routers or gateways in that a MANE has to be aware of the signaling (e.g., to learn about the payload type mappings of the media streams), and in that it has to be trusted when working with Secure RTP (SRTP). The advantage of using MANEs is that they allow packets to be dropped according to the needs of the media coding. For example, if a MANE has to drop packets due to congestion on a certain link, it can identify and remove those packets whose elimination produces the least adverse effect on the user experience. After dropping packets, MANEs must rewrite RTCP packets to match the changes to the RTP stream, as specified in Section 7 of [RFC3550].¶
The format of the RTP header is specified in [RFC3550] (reprinted as Figure 2 for convenience). This payload format uses the fields of the header in a manner consistent with that specification.¶
The RTP payload (and the settings for some RTP header bits) for aggregation packets and fragmentation units are specified in Sections 4.3.2 and 4.3.3, respectively.¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|V=2|P|X|  CC   |M|     PT      |       sequence number         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           timestamp                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           synchronization source (SSRC) identifier            |
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
|            contributing source (CSRC) identifiers             |
|                             ....                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The RTP header information to be set according to this RTP payload format is set as follows:¶
The RTP timestamp is set to the sampling timestamp of the content. A 90 kHz clock rate MUST be used. If the NAL unit has no timing properties of its own (e.g., parameter set and SEI NAL units), the RTP timestamp MUST be set to the RTP timestamp of the coded pictures of the access unit in which the NAL unit (according to Section 7.4.2.4 of [VVC]) is included. Receivers MUST use the RTP timestamp for the display process, even when the bitstream contains picture timing SEI messages or decoding unit information SEI messages, as specified in [VVC].¶
Informative note: When picture timing SEI messages are present, the RTP sender is responsible for ensuring that the RTP timestamps are consistent with the timing information carried in the picture timing SEI messages.¶
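Informative note: As an illustration of the 90 kHz clock-rate requirement, a sender deriving the RTP timestamp from a sampling instant given in seconds could proceed as in the following Python sketch (not part of this specification; the random initial offset follows [RFC3550], and the names are assumptions):¶

   import random

   RTP_CLOCK_RATE = 90000  # 90 kHz clock rate required by this payload format

   ts_offset = random.randint(0, 2**32 - 1)  # random initial timestamp per RFC 3550

   def rtp_timestamp(sampling_time_seconds: float) -> int:
       """Map a sampling instant to a 32-bit RTP timestamp at 90 kHz."""
       return (ts_offset + round(sampling_time_seconds * RTP_CLOCK_RATE)) % 2**32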
The first two bytes of the payload of an RTP packet are referred to as the payload header. The payload header consists of the same fields (F, Z, LayerId, Type, and TID) as the NAL unit header shown in Section 1.1.4, irrespective of the type of the payload structure.¶
The TID value indicates (among other things) the relative importance of an RTP packet, for example, because NAL units belonging to higher temporal sublayers are not used for the decoding of lower temporal sublayers. A lower value of TID indicates a higher importance. More important NAL units MAY be better protected against transmission losses than less-important NAL units.¶
Three different types of RTP packet payload structures are specified. A receiver can identify the type of an RTP packet payload through the Type field in the payload header.¶
The three different payload structures are as follows:¶
A single NAL unit packet contains exactly one NAL unit and consists of a payload header, as defined in Table 5 of [VVC] (denoted here as PayloadHdr), followed by a conditional 16-bit DONL field (in network byte order) and the NAL unit payload data (the NAL unit excluding its NAL unit header) of the contained NAL unit, as shown in Figure 3.¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          PayloadHdr           |      DONL (conditional)       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
|                  NAL unit payload data                        |
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :...OPTIONAL RTP padding        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The DONL field, when present, specifies the value of the 16 least significant bits of the decoding order number of the contained NAL unit. If sprop-max-don-diff (defined in Section 7.2) is greater than 0, the DONL field MUST be present, and the variable DON for the contained NAL unit is derived as equal to the value of the DONL field. Otherwise (sprop-max-don-diff is equal to 0), the DONL field MUST NOT be present.¶
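Informative note: A de-packetizer handling the single NAL unit packet of Figure 3 could recover the NAL unit and the DONL value as in the following Python sketch (not part of this specification; sprop_max_don_diff is assumed to be known from signaling):¶

   def parse_single_nal_unit_packet(rtp_payload: bytes, sprop_max_don_diff: int):
       """Return (nal_unit, donl) for a single NAL unit packet (Figure 3)."""
       if sprop_max_don_diff > 0:
           # The DONL field occupies the third and fourth bytes of the payload.
           donl = int.from_bytes(rtp_payload[2:4], "big")
           nal_unit = rtp_payload[0:2] + rtp_payload[4:]
       else:
           donl = None               # the DONL field MUST NOT be present
           nal_unit = rtp_payload    # payload header doubles as the NAL unit header
       return nal_unit, donl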
Aggregation packets (APs) can reduce packetization overhead for small NAL units, such as most of the non-VCL NAL units, which are often only a few octets in size.¶
An AP aggregates NAL units of one access unit, and it MUST NOT contain NAL units from more than one AU. Each NAL unit to be carried in an AP is encapsulated in an aggregation unit. NAL units aggregated in one AP are included in NAL-unit-decoding order.¶
An AP consists of a payload header, as defined in Table 5 of [VVC] (denoted here as PayloadHdr with Type=28), followed by two or more aggregation units, as shown in Figure 4.¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     PayloadHdr (Type=28)      |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               |
|                                                               |
|             two or more aggregation units                     |
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :...OPTIONAL RTP padding        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The fields in the payload header of an AP are set as follows. The F bit MUST be equal to 0 if the F bit of each aggregated NAL unit is equal to zero; otherwise, it MUST be equal to 1. The Type field MUST be equal to 28.¶
The value of LayerId MUST be equal to the lowest value of LayerId of all the aggregated NAL units. The value of TID MUST be the lowest value of TID of all the aggregated NAL units.¶
Informative note: All VCL NAL units in an AP have the same TID value since they belong to the same access unit. However, an AP may contain non-VCL NAL units for which the TID value in the NAL unit header may be different than the TID value of the VCL NAL units in the same AP.¶
Informative note: If a system envisions subpicture-level or picture-level modifications, for example, by removing subpictures or pictures of a particular layer, a good design choice on the sender's side would be to aggregate NAL units belonging to only the same subpicture or picture of a particular layer.¶
An AP MUST carry at least two aggregation units and can carry as many aggregation units as necessary; however, the total amount of data in an AP obviously MUST fit into an IP packet, and the size SHOULD be chosen so that the resulting IP packet is smaller than the MTU size in order to avoid IP layer fragmentation. An AP MUST NOT contain the FUs specified in Section 4.3.3. APs MUST NOT be nested, i.e., an AP cannot contain another AP.¶
The first aggregation unit in an AP consists of a conditional 16-bit DONL field (in network byte order), followed by 16 bits of unsigned size information (in network byte order) that indicate the size of the NAL unit in bytes (excluding these two octets but including the NAL unit header), followed by the NAL unit itself, including its NAL unit header, as shown in Figure 5.¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|               :      DONL (conditional)       |   NALU size   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|   NALU size   |                                               |
+-+-+-+-+-+-+-+-+          NAL unit                             |
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Informative note: The first octet of Figure 5 (indicated by the first colon) belongs to a previous aggregation unit. It is depicted to emphasize that aggregation units are octet-aligned only. Similarly, the NAL unit carried in the aggregation unit can terminate at any octet boundary.¶
The DONL field, when present, specifies the value of the 16 least significant bits of the decoding order number of the aggregated NAL unit.¶
If sprop-max-don-diff is greater than 0, the DONL field MUST be present in an aggregation unit that is the first aggregation unit in an AP, the variable DON for that aggregated NAL unit is derived as equal to the value of the DONL field, and the variable DON for the aggregated NAL unit of any aggregation unit that is not the first aggregation unit in the AP is derived as equal to the DON of the preceding aggregated NAL unit in the same AP plus 1 modulo 65536. Otherwise (sprop-max-don-diff is equal to 0), the DONL field MUST NOT be present in an aggregation unit that is the first aggregation unit in an AP.¶
An aggregation unit that is not the first aggregation unit in an AP consists of 16 bits of unsigned size information (in network byte order) that indicate the size of the NAL unit in bytes (excluding these two octets but including the NAL unit header), followed by the NAL unit itself, including its NAL unit header, as shown in Figure 6.¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|               :   NALU size   |           NAL unit            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
|                                                               |
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Informative note: The first octet of Figure 6 (indicated by the first colon) belongs to a previous aggregation unit. It is depicted to emphasize that aggregation units are octet-aligned only. Similarly, the NAL unit carried in the aggregation unit can terminate at any octet boundary.¶
Figure 7 presents an example of an AP that contains two aggregation units, labeled as 1 and 2 in the figure, without the DONL field being present.¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                          RTP Header                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     PayloadHdr (Type=28)      |          NALU 1 Size          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          NALU 1 HDR           |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+          NALU 1 Data          |
|                   . . .                                       |
|                                                               |
+               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     . . .     |          NALU 2 Size          |  NALU 2 HDR   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  NALU 2 HDR   |                                               |
+-+-+-+-+-+-+-+-+              NALU 2 Data                      |
|                   . . .                                       |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :...OPTIONAL RTP padding        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 8 presents an example of an AP that contains two aggregation units, labeled as 1 and 2 in the figure, with the DONL field being present.¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                          RTP Header                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     PayloadHdr (Type=28)      |          NALU 1 DONL          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          NALU 1 Size          |          NALU 1 HDR           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
|                 NALU 1 Data   . . .                           |
|                                                               |
+          . . .                +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :          NALU 2 Size          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          NALU 2 HDR           |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+          NALU 2 Data          |
|                                                               |
|                   . . .       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :...OPTIONAL RTP padding        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
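Informative note: The structure shown in Figures 5 through 8 can be summarized by the following receiver-side Python sketch (not part of this specification; error handling is omitted and the names are assumptions), which yields the aggregated NAL units of an AP together with their derived DON values:¶

   def parse_aggregation_packet(rtp_payload: bytes, sprop_max_don_diff: int):
       """Yield (nal_unit, don) pairs from an AP payload (PayloadHdr with Type=28)."""
       pos, first, don = 2, True, None            # skip the two-octet PayloadHdr
       while pos < len(rtp_payload):
           if sprop_max_don_diff > 0:
               if first:
                   don = int.from_bytes(rtp_payload[pos:pos + 2], "big")  # DONL
                   pos += 2
               else:
                   don = (don + 1) % 65536        # preceding DON plus 1 modulo 65536
           nalu_size = int.from_bytes(rtp_payload[pos:pos + 2], "big")
           pos += 2
           nal_unit = rtp_payload[pos:pos + nalu_size]  # includes its NAL unit header
           pos += nalu_size
           yield nal_unit, don
           first = False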
Fragmentation Units (FUs) are introduced to enable fragmenting a single NAL unit into multiple RTP packets, possibly without cooperation or knowledge of the [VVC] encoder. A fragment of a NAL unit consists of an integer number of consecutive octets of that NAL unit. Fragments of the same NAL unit MUST be sent in consecutive order with ascending RTP sequence numbers (with no other RTP packets within the same RTP stream being sent between the first and last fragment).¶
When a NAL unit is fragmented and conveyed within FUs, it is referred to as a fragmented NAL unit. APs MUST NOT be fragmented. FUs MUST NOT be nested, i.e., an FU cannot contain a subset of another FU.¶
The RTP timestamp of an RTP packet carrying an FU is set to the NALU-time of the fragmented NAL unit.¶
An FU consists of a payload header as defined in Table 5 of [VVC] (denoted here as PayloadHdr with Type=29), an FU header of one octet, a conditional 16-bit DONL field (in network byte order), and an FU payload (as shown in Figure 9).¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     PayloadHdr (Type=29)      |   FU header   |  DONL (cond)  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-|
|  DONL (cond)  |                                               |
|-+-+-+-+-+-+-+-+                                               |
|                         FU payload                            |
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :...OPTIONAL RTP padding        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The fields in the payload header are set as follows. The Type field MUST be equal to 29. The fields F, LayerId, and TID MUST be equal to the fields F, LayerId, and TID, respectively, of the fragmented NAL unit.¶
The FU header consists of an S bit, an E bit, a P bit, and a 5-bit FuType field, as shown in Figure 10.¶
+---------------+
|0|1|2|3|4|5|6|7|
+-+-+-+-+-+-+-+-+
|S|E|P|  FuType |
+---------------+
The semantics of the FU header fields are as follows:¶
The DONL field, when present, specifies the value of the 16 least significant bits of the decoding order number of the fragmented NAL unit.¶
If sprop-max-don-diff is greater than 0, and the S bit is equal to 1, the DONL field MUST be present in the FU, and the variable DON for the fragmented NAL unit is derived as equal to the value of the DONL field. Otherwise (sprop-max-don-diff is equal to 0, or the S bit is equal to 0), the DONL field MUST NOT be present in the FU.¶
A non-fragmented NAL unit MUST NOT be transmitted in one FU, i.e., the Start bit and End bit must not both be set to 1 in the same FU header.¶
The FU payload consists of fragments of the payload of the fragmented NAL unit so that, if the FU payloads of consecutive FUs, starting with an FU with the S bit equal to 1 and ending with an FU with the E bit equal to 1, are sequentially concatenated, the payload of the fragmented NAL unit can be reconstructed. The NAL unit header of the fragmented NAL unit is not included as such in the FU payload, but rather the information of the NAL unit header of the fragmented NAL unit is conveyed in the F, LayerId, and TID fields of the FU payload headers of the FUs and the FuType field of the FU header of the FUs. An FU payload MUST NOT be empty.¶
If an FU is lost, the receiver SHOULD discard all following fragmentation units in transmission order corresponding to the same fragmented NAL unit, unless the decoder in the receiver is known to be prepared to gracefully handle incomplete NAL units.¶
A receiver in an endpoint or in a MANE MAY aggregate the first n-1 fragments of a NAL unit to an (incomplete) NAL unit, even if fragment n of that NAL unit is not received. In this case, the forbidden_zero_bit of the NAL unit MUST be set to 1 to indicate a syntax violation.¶
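Informative note: A sender-side fragmentation sketch in Python (not part of this specification; the handling of the P bit and of the maximum payload size are simplified assumptions) that follows the FU structure of Figures 9 and 10 might look as follows:¶

   def fragment_nal_unit(nal_unit: bytes, max_payload: int, donl=None):
       """Split one NAL unit into FU payloads (PayloadHdr with Type=29)."""
       # A NAL unit that fits into a single packet MUST NOT be sent as one FU;
       # this sketch assumes the caller has already checked that it does not fit.
       f_z_layerid = nal_unit[0]              # F, Z, and LayerId of the fragmented NAL unit
       tid = nal_unit[1] & 0x07
       fu_type = (nal_unit[1] >> 3) & 0x1F    # original nal_unit_type becomes FuType
       payload_hdr = bytes([f_z_layerid, (29 << 3) | tid])   # Type field set to 29
       data = nal_unit[2:]                    # the NAL unit header is not carried as such
       chunk = max_payload - 3 - (2 if donl is not None else 0)
       fragments = [data[i:i + chunk] for i in range(0, len(data), chunk)]
       packets = []
       for i, frag in enumerate(fragments):
           s = 0x80 if i == 0 else 0                    # S bit: first FU of the NAL unit
           e = 0x40 if i == len(fragments) - 1 else 0   # E bit: last FU of the NAL unit
           fu_header = bytes([s | e | fu_type])         # P bit left as 0 in this sketch
           donl_bytes = donl.to_bytes(2, "big") if (donl is not None and i == 0) else b""
           packets.append(payload_hdr + fu_header + donl_bytes + frag)
       return packets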
For each NAL unit, the variable AbsDon is derived, representing the decoding order number that is indicative of the NAL unit decoding order.¶
Let NAL unit n be the n-th NAL unit in transmission order within an RTP stream.¶
If sprop-max-don-diff is equal to 0, AbsDon[n], the value of AbsDon for NAL unit n, is derived as equal to n.¶
Otherwise (sprop-max-don-diff is greater than 0), AbsDon[n] is derived as follows, where DON[n] is the value of the variable DON for NAL unit n:¶
If DON[n] == DON[n-1],
    AbsDon[n] = AbsDon[n-1]

If (DON[n] > DON[n-1] and DON[n] - DON[n-1] < 32768),
    AbsDon[n] = AbsDon[n-1] + DON[n] - DON[n-1]

If (DON[n] < DON[n-1] and DON[n-1] - DON[n] >= 32768),
    AbsDon[n] = AbsDon[n-1] + 65536 - DON[n-1] + DON[n]

If (DON[n] > DON[n-1] and DON[n] - DON[n-1] >= 32768),
    AbsDon[n] = AbsDon[n-1] - (DON[n-1] + 65536 - DON[n])

If (DON[n] < DON[n-1] and DON[n-1] - DON[n] < 32768),
    AbsDon[n] = AbsDon[n-1] - (DON[n-1] - DON[n])¶
For any two NAL units (m and n), the following applies:¶
Informative note: When two consecutive NAL units in the NAL unit decoding order have different values of AbsDon, the absolute difference between the two AbsDon values may be greater than or equal to 1.¶
Informative note: There are multiple reasons to allow for the absolute difference of the values of AbsDon for two consecutive NAL units in the NAL unit decoding order to be greater than one. An increment by one is not required, as at the time of associating values of AbsDon to NAL units, it may not be known whether all NAL units are to be delivered to the receiver. For example, a gateway might not forward VCL NAL units of higher sublayers or some SEI NAL units when there is congestion in the network. In another example, the first intra-coded picture of a pre-encoded clip is transmitted in advance to ensure that it is readily available in the receiver, and when transmitting the first intra-coded picture, the originator does not exactly know how many NAL units will be encoded before the first intra-coded picture of the pre-encoded clip follows in decoding order. Thus, the values of AbsDon for the NAL units of the first intra-coded picture of the pre-encoded clip have to be estimated when they are transmitted, and gaps in values of AbsDon may occur.¶
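Informative note: The five cases of the AbsDon derivation above can be transcribed directly into code; the following Python sketch (not part of this specification; the function name is an assumption) derives AbsDon[n] from AbsDon[n-1], DON[n-1], and DON[n]:¶

   def next_abs_don(prev_abs_don: int, prev_don: int, don: int) -> int:
       """Derive AbsDon[n] from AbsDon[n-1], DON[n-1], and DON[n]."""
       if don == prev_don:
           return prev_abs_don
       if don > prev_don and don - prev_don < 32768:
           return prev_abs_don + (don - prev_don)
       if don < prev_don and prev_don - don >= 32768:
           return prev_abs_don + 65536 - prev_don + don
       if don > prev_don and don - prev_don >= 32768:
           return prev_abs_don - (prev_don + 65536 - don)
       # remaining case: don < prev_don and prev_don - don < 32768
       return prev_abs_don - (prev_don - don)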
The following packetization rules apply:¶
The general concept behind de-packetization is to get the NAL units out of the RTP packets in an RTP stream and pass them to the decoder in the NAL unit decoding order.¶
The de-packetization process is implementation dependent. Therefore, the following description should be seen as an example of a suitable implementation. Other schemes may be used as well, as long as the output for the same input is the same as the process described below. The output is the same when the set of output NAL units and their order are both identical. Optimizations relative to the described algorithms are possible.¶
All normal RTP mechanisms related to buffer management apply. In particular, duplicated or outdated RTP packets (as indicated by the RTP sequence number and the RTP timestamp) are removed. To determine the exact time for decoding, factors such as a possible intentional delay to allow for proper inter-stream synchronization MUST be factored in.¶
NAL units with NAL unit type values in the range of 0 to 27, inclusive, may be passed to the decoder. NAL-unit-like structures with NAL unit type values in the range of 28 to 31, inclusive, MUST NOT be passed to the decoder.¶
The receiver includes a receiver buffer, which is used to compensate for transmission delay jitter within individual RTP streams and to reorder NAL units from transmission order to the NAL unit decoding order. In this section, the receiver operation is described under the assumption that there is no transmission delay jitter within an RTP stream. To distinguish it from a practical receiver buffer that is also used for compensation of transmission delay jitter, the receiver buffer is hereafter called the de-packetization buffer in this section. Receivers should also prepare for transmission delay jitter, that is, either reserve separate buffers for transmission delay jitter buffering and de-packetization buffering or use a receiver buffer for both transmission delay jitter and de-packetization. Moreover, receivers should take transmission delay jitter into account in the buffering operation, e.g., by additional initial buffering before starting of decoding and playback.¶
The de-packetization process extracts the NAL units from the RTP packets in an RTP stream as follows. When an RTP packet carries a single NAL unit packet, the payload of the RTP packet is extracted as a single NAL unit, excluding the DONL field, i.e., the third and fourth bytes, when sprop-max-don-diff is greater than 0. When an RTP packet carries an aggregation packet, several NAL units are extracted from the payload of the RTP packet. In this case, each NAL unit corresponds to the part of the payload of each aggregation unit that follows the NALU size field, as described in Section 4.3.2. When an RTP packet carries a Fragmentation Unit (FU), all RTP packets from the first FU (with the S field equal to 1) of the fragmented NAL unit up to the last FU (with the E field equal to 1) of the fragmented NAL unit are collected. The NAL unit is extracted from these RTP packets by concatenating all FU payloads in the same order as the corresponding RTP packets and appending the NAL unit header with the fields F, LayerId, and TID set to equal the values of the fields F, LayerId, and TID in the payload header of the FUs, respectively, and with the NAL unit type set equal to the value of the field FuType in the FU header of the FUs, as described in Section 4.3.3.¶
When sprop-max-don-diff is equal to 0, the de-packetization buffer size is zero bytes, and the NAL units carried in the single RTP stream are directly passed to the decoder in their transmission order, which is identical to their decoding order.¶
When sprop-max-don-diff is greater than 0, the process described in the remainder of this section applies.¶
There are two buffering states in the receiver: initial buffering and buffering while playing. Initial buffering starts when the reception is initialized. After initial buffering, decoding and playback are started, and the buffering-while-playing mode is used.¶
Regardless of the buffering state, the receiver stores incoming NAL units in reception order into the de-packetization buffer. NAL units carried in RTP packets are stored in the de-packetization buffer individually, and the value of AbsDon is calculated and stored for each NAL unit.¶
Initial buffering lasts until the difference between the greatest and smallest AbsDon values of the NAL units in the de-packetization buffer is greater than or equal to the value of sprop-max-don-diff.¶
After initial buffering, whenever the difference between the greatest and smallest AbsDon values of the NAL units in the de-packetization buffer is greater than or equal to the value of sprop-max-don-diff, the following operation is repeatedly applied until this difference is smaller than sprop-max-don-diff:¶
The NAL unit in the de-packetization buffer with the smallest value of AbsDon is removed from the de-packetization buffer and passed to the decoder.¶
When no more NAL units are flowing into the de-packetization buffer, all NAL units remaining in the de-packetization buffer are removed from the buffer and passed to the decoder in the order of increasing AbsDon values.¶
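Informative note: The buffering operation described above can be sketched in Python as follows (not part of this specification; a min-heap keyed on AbsDon stands in for the de-packetization buffer, and transmission delay jitter is ignored as in the description above):¶

   import heapq

   class DepacketizationBuffer:
       """Reorder NAL units from transmission order to decoding order using AbsDon."""

       def __init__(self, sprop_max_don_diff: int):
           self.max_don_diff = sprop_max_don_diff
           self.buffer = []                     # min-heap of (abs_don, nal_unit)

       def push(self, abs_don: int, nal_unit: bytes):
           heapq.heappush(self.buffer, (abs_don, nal_unit))

       def pop_ready(self):
           """Yield NAL units that may be passed to the decoder while playing."""
           while self.buffer and (max(d for d, _ in self.buffer)
                                  - self.buffer[0][0]) >= self.max_don_diff:
               yield heapq.heappop(self.buffer)[1]

       def flush(self):
           """Pass all remaining NAL units to the decoder in increasing AbsDon order."""
           while self.buffer:
               yield heapq.heappop(self.buffer)[1]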
This section specifies the optional parameters. A mapping of the parameters with the Session Description Protocol (SDP) [RFC8866] is also provided for applications that use SDP.¶
Parameters starting with the string "sprop" for stream properties can be used by a sender to provide a receiver with the properties of the stream that is or will be sent. The media sender (and not the receiver) selects whether, and with what values, "sprop" parameters are being sent. This uncommon characteristic of the "sprop" parameters may not be intuitive in the context of some signaling protocol concepts, especially with offer/answer. Please see Section 7.3.2 for guidance specific to the use of sprop parameters in the offer/answer case.¶
The receiver MUST ignore any parameter unspecified in this memo.¶
These parameters indicate the profile, the tier, the default level, the sub-profile, and some constraints of the bitstream carried by the RTP stream, or a specific set of the profile, the tier, the default level, the sub-profile, and some constraints the receiver supports.¶
The subset of coding tools that may have been used to generate the bitstream or that the receiver supports, as well as some additional constraints, are indicated collectively by profile-id, sub-profile-id, and interop-constraints.¶
Informative note: There are 128 values of profile-id. The subset of coding tools identified by profile-id can be further constrained with up to 255 instances of sub-profile-id. In addition, 68 bits included in interop-constraints, which can be extended up to 324 bits, provide means to further restrict tools from existing profiles. To be able to support this fine-granular signaling of coding-tool subsets with profile-id, sub-profile-id, and interop-constraints, it would be safe to require symmetric use of these parameters in SDP offer/answer unless recv-ols-id is included in the SDP answer for choosing one of the layers offered.¶
The tier is indicated by tier-flag. The default level is indicated by level-id. The tier and the default level specify the limits on values of syntax elements or arithmetic combinations of values of syntax elements that are followed when generating the bitstream or that the receiver supports.¶
In SDP offer/answer, when the SDP answer does not include the recv-ols-id parameter that is less than the sprop-ols-id parameter in the SDP offer, the following applies:¶
In SDP offer/answer, when the SDP answer does include the recv-ols-id parameter that is less than the sprop-ols-id parameter in the SDP offer, the set of tier-flag, profile-id, sub-profile-id, interop-constraints, and level-id parameters included in the answer MUST be consistent with that for the chosen output layer set as indicated in the SDP offer, with the exception that the level-id parameter in the SDP answer is changeable as long as the highest level indicated by the answer is either lower than or equal to that in the offer.¶
More specifications of these parameters, including how they relate to syntax elements specified in [VVC], are provided below.¶
When profile-id is not present, a value of 1 (i.e., the Main 10 profile) MUST be inferred.¶
When used to indicate properties of a bitstream, profile-id is derived from the general_profile_idc syntax element that applies to the bitstream in an instance of the profile_tier_level( ) syntax structure.¶
VVC bitstreams transported over RTP using the technologies of this memo SHOULD contain only a single profile_tier_level( ) structure in the DCI, unless the sender can assure that a receiver can correctly decode the VVC bitstream, regardless of which profile_tier_level( ) structure contained in the DCI was used for deriving profile-id and other parameters for the SDP offer/answer exchange.¶
As specified in [VVC], a profile_tier_level( ) syntax structure may be contained in an SPS NAL unit, and one or more profile_tier_level( ) syntax structures may be contained in a VPS NAL unit and in a DCI NAL unit. One of the following three cases applies to the container NAL unit of the profile_tier_level( ) syntax structure containing syntax elements used to derive the values of profile-id, tier-flag, level-id, sub-profile-id, or interop-constraints:¶
[VVC] allows for multiple profile_tier_level( ) structures in a DCI NAL unit, which may contain different values for the syntax elements used to derive the values of profile-id, tier-flag, level-id, sub-profile-id, or interop-constraints in the different entries. However, only a single profile-id, tier-flag, level-id, sub-profile-id, or interop-constraints is defined herein. When signaling these parameters and a DCI NAL unit is present with multiple profile_tier_level( ) structures, these values SHOULD be the same as in the first profile_tier_level( ) structure in the DCI, unless the sender has ensured that the receiver can decode the bitstream when a different value is chosen.¶
The value of tier-flag MUST be in the range of 0 to 1, inclusive. The value of level-id MUST be in the range of 0 to 255, inclusive.¶
If the tier-flag and level-id parameters are used to indicate properties of a bitstream, they indicate the tier and the highest level the bitstream complies with.¶
If the tier-flag and level-id parameters are used for capability exchange, the following applies. If max-recv-level-id is not present, the default level defined by level-id indicates the highest level the codec wishes to support. Otherwise, max-recv-level-id indicates the highest level the codec supports for receiving. For either receiving or sending, all levels that are lower than the highest level supported MUST also be supported.¶
If no tier-flag is present, a value of 0 MUST be inferred; if no level-id is present, a value of 51 (i.e., level 3.1) MUST be inferred.¶
Informative note: The level values currently defined in the VVC specification are in the form of "majorNum.minorNum", and the value of the level-id for each of the levels is equal to majorNum * 16 + minorNum * 3. It is expected that, if any levels are defined in the future, the same convention will be used, but this cannot be guaranteed.¶
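Informative note: As a worked example of the convention above, level 3.1 maps to 3 * 16 + 1 * 3 = 51 (the default level) and level 5.1 maps to 5 * 16 + 1 * 3 = 83. The following Python sketch (not part of this specification; the function names are assumptions, and the inverse mapping assumes the current convention continues to hold) illustrates the conversion:¶

   def level_id(major: int, minor: int) -> int:
       """Map a level of the form majorNum.minorNum to its level-id value."""
       return major * 16 + minor * 3

   def level_name(level_id_value: int) -> str:
       """Inverse mapping, assuming the currently defined convention."""
       return "{}.{}".format(level_id_value // 16, (level_id_value % 16) // 3)

   assert level_id(3, 1) == 51     # the default level 3.1
   assert level_id(5, 1) == 83     # level 5.1
   assert level_name(67) == "4.1"  # level 4.1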
When used to indicate properties of a bitstream, the tier-flag and level-id parameters are derived respectively from the syntax element general_tier_flag, and the syntax element general_level_idc or sub_layer_level_idc[j], that apply to the bitstream in an instance of the profile_tier_level( ) syntax structure.¶
If the tier-flag and level-id are derived from the profile_tier_level( ) syntax structure in a DCI NAL unit, the following applies:¶
Otherwise, if the tier-flag and level-id are derived from the profile_tier_level( ) syntax structure in an SPS or VPS NAL unit, and the bitstream contains the highest sublayer representation in the OLS corresponding to the bitstream, the following applies:¶
Otherwise, if the tier-flag and level-id are derived from the profile_tier_level( ) syntax structure in an SPS or VPS NAL unit, and the bitstream does not contain the highest sublayer representation in the OLS corresponding to the bitstream, the following applies, with j being the value of the sprop-sublayer-id parameter:¶
The value of the parameter is a comma-separated (',') list of data using base64 encoding (Section 4 of [RFC4648]) representation without "==" padding.¶
When used to indicate properties of a bitstream, sub-profile-id is derived from each of the ptl_num_sub_profiles general_sub_profile_idc[i] syntax elements that apply to the bitstream in a profile_tier_level( ) syntax structure.¶
A base64 encoding (Section 4 of [RFC4648]) representation of the data that includes the ptl_frame_only_constraint_flag syntax element, the ptl_multilayer_enabled_flag syntax element, and the general_constraints_info( ) syntax structure that apply to the bitstream in an instance of the profile_tier_level( ) syntax structure.¶
If the interop-constraints parameter is not present, the following MUST be inferred:¶
Using interop-constraints for capability exchange results in a requirement on any bitstream to be compliant with the interop-constraints.¶
This parameter MAY be used to indicate the highest allowed value of TID in the bitstream. When not present, the value of sprop-sublayer-id is inferred to be equal to 6.¶
The value of sprop-sublayer-id MUST be in the range of 0 to 6, inclusive.¶
This parameter MAY be used to indicate the OLS that the bitstream applies to. When not present, the value of sprop-ols-id is inferred to be equal to TargetOlsIdx, as specified in Section 8.1.1 of [VVC]. If this optional parameter is present, sprop-vps MUST also be present or its content MUST be known a priori at the receiver.¶
The value of sprop-ols-id MUST be in the range of 0 to 256, inclusive.¶
Informative note: VVC allows having up to 257 output layer sets indicated in the VPS, as the number of output layer sets minus 2 is indicated with a field of 8 bits.¶
This parameter MAY be used to signal a receiver's choice of the offered or declared sublayer representations in sprop-vps and sprop-sps. The value of recv-sublayer-id indicates the TID of the highest sublayer that a receiver supports. When not present, the value of recv-sublayer-id is inferred to be equal to the value of the sprop-sublayer-id parameter in the SDP offer.¶
The value of recv-sublayer-id MUST be in the range of 0 to 6, inclusive.¶
This parameter MAY be used to signal a receiver's choice of the offered or declared output layer sets in sprop-vps. The value of recv-ols-id indicates the OLS index of the bitstream that a receiver supports. When not present, the value of recv-ols-id is inferred to be equal to the value of the sprop-ols-id parameter inferred from or indicated in the SDP offer. When present, the value of recv-ols-id must be included only when sprop-ols-id was received and must refer to an output layer set in the VPS that includes no layers other than all or a subset of the layers of the OLS referred to by sprop-ols-id. If this optional parameter is present, sprop-vps must have been received or its content must be known a priori at the receiver.¶
The value of recv-ols-id MUST be in the range of 0 to 256, inclusive.¶
This parameter MAY be used to indicate the highest level a receiver supports.¶
The value of max-recv-level-id MUST be in the range of 0 to 255, inclusive.¶
When max-recv-level-id is not present, the value is inferred to be equal to level-id.¶
max-recv-level-id MUST NOT be present when the highest level the receiver supports is not higher than the default level.¶
This parameter MAY be used to convey any video parameter set NAL unit of the bitstream for out-of-band transmission of video parameter sets. The parameter MAY also be used for capability exchange and to indicate substream characteristics (i.e., properties of output layer sets and sublayer representations, as defined in [VVC]). The value of the parameter is a comma-separated (',') list of base64 encoding (Section 4 of [RFC4648]) representations of the video parameter set NAL units, as specified in Section 7.3.2.3 of [VVC].¶
The sprop-vps parameter MAY contain one or more video parameter set NAL units. However, all other video parameter sets contained in the sprop-vps parameter MUST be consistent with the first video parameter set in the sprop-vps parameter. A video parameter set vpsB is said to be consistent with another video parameter set vpsA if the number of OLSs in vpsA and vpsB is the same and any decoder that conforms to the profile, tier, level, and constraints indicated by the data starting from the syntax element general_profile_idc to the syntax structure general_constraints_info( ), inclusive, in the profile_tier_level( ) syntax structure corresponding to any OLS with index olsIdx in vpsA can decode any CVS(s) referencing vpsB when TargetOlsIdx is equal to olsIdx that conforms to the profile, tier, level, and constraints indicated by the data starting from the syntax element general_profile_idc to the syntax structure general_constraints_info( ), inclusive, in the profile_tier_level( ) syntax structure corresponding to the OLS with index TargetOlsIdx in vpsB.¶
This parameter MAY be used to convey sequence parameter set NAL units of the bitstream for out-of-band transmission of sequence parameter sets. The value of the parameter is a comma-separated (',') list of base64 encoding (Section 4 of [RFC4648]) representations of the sequence parameter set NAL units, as specified in Section 7.3.2.4 of [VVC].¶
A sequence parameter set spsB is said to be consistent with another sequence parameter set spsA if any decoder that conforms to the profile, tier, level, and constraints indicated by the data starting from the syntax element general_profile_idc to the syntax structure general_constraints_info(), inclusive, in the profile_tier_level( ) syntax structure in spsA can decode any CLVS(s) referencing spsB that conforms to the profile, tier, level, and constraints indicated by the data starting from the syntax element general_profile_idc to the syntax structure general_constraints_info(), inclusive, in the profile_tier_level( ) syntax structure in spsB.¶
This parameter MAY be used to convey picture parameter set NAL units of the bitstream for out-of-band transmission of picture parameter sets. The value of the parameter is a comma-separated (',') list of base64 encoding (Section 4 of [RFC4648]) representations of the picture parameter set NAL units, as specified in Section 7.3.2.5 of [VVC].¶
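Informative note: The sprop-vps, sprop-sps, and sprop-pps values are comma-separated base64 representations of complete parameter set NAL units. The following Python sketch (not part of this specification; the NAL unit contents shown are placeholders, and the helper names are assumptions) shows how an "a=fmtp" line carrying them could be assembled:¶

   import base64

   def b64(nal_unit: bytes) -> str:
       return base64.b64encode(nal_unit).decode("ascii")

   def build_fmtp(payload_type: int, vps_list, sps_list, pps_list) -> str:
       """Assemble an a=fmtp line carrying parameter sets out of band."""
       return ("a=fmtp:{} profile-id=1; sprop-vps={}; sprop-sps={}; "
               "sprop-pps={}".format(payload_type,
                                     ",".join(b64(v) for v in vps_list),
                                     ",".join(b64(s) for s in sps_list),
                                     ",".join(b64(p) for p in pps_list)))

   # vps_nalu, sps_nalu, and pps_nalu stand for complete NAL units taken from
   # the encoder; the byte strings below are placeholders, not valid bitstream data.
   vps_nalu, sps_nalu, pps_nalu = b"<vps>", b"<sps>", b"<pps>"
   print(build_fmtp(98, [vps_nalu], [sps_nalu], [pps_nalu]))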
This parameter MAY be used to convey one or more SEI messages that describe bitstream characteristics. When present, a decoder can rely on the bitstream characteristics that are described in the SEI messages for the entire duration of the session, independently from the persistence scopes of the SEI messages, as specified in [VSEI].¶
The value of the parameter is a comma-separated (',') list of base64 encoding (Section 4 of [RFC4648]) representations of SEI NAL units, as specified in [VSEI].¶
Informative note: Intentionally, no list of applicable or inapplicable SEI messages is specified here. Conveying certain SEI messages in sprop-sei may be sensible in some application scenarios and meaningless in others. However, a few examples are described below:¶
In an environment where the bitstream was created from film-based source material, and no splicing is going to occur during the lifetime of the session, the film grain characteristics SEI message is likely meaningful, and sending it in sprop-sei, rather than in the bitstream at each entry point, may help with saving bits and allows one to configure the renderer only once, avoiding unwanted artifacts.¶
Examples of SEI messages that would be meaningless to convey in sprop-sei include the decoded picture hash SEI message (it is close to impossible that all decoded pictures have the same hash) or the filler payload SEI message (as there is no point in just having more bits in SDP).¶
The max-lsr MAY be used to signal the capabilities of a receiver implementation and MUST NOT be used for any other purpose. The value of max-lsr is an integer indicating the maximum processing rate in units of luma samples per second. The max-lsr parameter signals that the receiver is capable of decoding video at a higher rate than is required by the highest level.¶
Informative note: When the OPTIONAL media type parameters are used to signal the properties of a bitstream, and max-lsr is not present, the values of tier-flag, profile-id, sub-profile-id, interop-constraints, and level-id must always be such that the bitstream complies fully with the specified profile, sub-profile, tier, level, and interop-constraints.¶
When max-lsr is signaled, the receiver MUST be able to decode bitstreams that conform to the highest level, with the exception that the MaxLumaSr value in Table A.3 of [VVC] for the highest level is replaced with the value of max-lsr. Senders MAY use this knowledge to send pictures of a given size at a higher picture rate than is indicated in the highest level.¶
When not present, the value of max-lsr is inferred to be equal to the value of MaxLumaSr given in Table A.3 of [VVC] for the highest level.¶
The value of max-lsr MUST be in the range of MaxLumaSr to 16 * MaxLumaSr, inclusive, where MaxLumaSr is given in Table A.3 of [VVC] for the highest level.¶
The value of max-fps is an integer indicating the maximum picture rate in units of pictures per 100 seconds that can be effectively processed by the receiver. The max-fps parameter MAY be used to signal that the receiver has a constraint in that it is not capable of processing video effectively at the full picture rate that is implied by the highest level and, when present, max-lsr.¶
The value of max-fps is not necessarily the picture rate at which the maximum picture size can be sent; it constitutes a constraint on maximum picture rate for all resolutions.¶
Informative note: The max-fps parameter is semantically different from max-lsr in that max-fps is used to signal a constraint, lowering the maximum picture rate from what is implied by other parameters.¶
The encoder MUST use a picture rate equal to or less than this value. In cases where the max-fps parameter is absent, the encoder is free to choose any picture rate according to the highest level and any signaled optional parameters.¶
The value of max-fps MUST be smaller than or equal to the full picture rate that is implied by the highest level and, when present, max-lsr.¶
If there is no NAL unit naluA that is followed in transmission order by any NAL unit preceding naluA in decoding order (i.e., the transmission order of the NAL units is the same as the decoding order), the value of this parameter MUST be equal to 0.¶
Otherwise, this parameter specifies the maximum absolute difference between the decoding order number (i.e., AbsDon) values of any two NAL units naluA and naluB, where naluA follows naluB in decoding order and precedes naluB in transmission order.¶
The value of sprop-max-don-diff MUST be an integer in the range of 0 to 32767, inclusive.¶
When not present, the value of sprop-max-don-diff is inferred to be equal to 0.¶
This parameter signals the required size of the de-packetization buffer in units of bytes. The value of the parameter MUST be greater than or equal to the maximum buffer occupancy (in units of bytes) of the de-packetization buffer, as specified in Section 6.¶
The value of sprop-depack-buf-bytes MUST be an integer in the range of 0 to 4294967295, inclusive.¶
When sprop-max-don-diff is present and greater than 0, this parameter MUST be present and the value MUST be greater than 0. When not present, the value of sprop-depack-buf-bytes is inferred to be equal to 0.¶
Informative note: The value of sprop-depack-buf-bytes indicates the required size of the de-packetization buffer only. When network jitter can occur, an appropriately sized jitter buffer has to be available as well.¶
This parameter signals the capabilities of a receiver implementation and indicates the amount of de-packetization buffer space in units of bytes that the receiver has available for reconstructing the NAL unit decoding order from NAL units carried in the RTP stream. A receiver is able to handle any RTP stream for which the value of the sprop-depack-buf-bytes parameter is smaller than or equal to this parameter.¶
When not present, the value of depack-buf-cap is inferred to be equal to 4294967295. The value of depack-buf-cap MUST be an integer in the range of 1 to 4294967295, inclusive.¶
Informative note: depack-buf-cap indicates the maximum possible size of the de-packetization buffer of the receiver only, without allowing for network jitter.¶
The receiver MUST ignore any parameter unspecified in this memo.¶
The media type video/H266 string is mapped to fields in the Session Description Protocol (SDP) [RFC8866] as follows:¶
The OPTIONAL parameters sprop-vps, sprop-sps, sprop-pps, sprop-sei, and sprop-dci, when present, MUST be included in the "a=fmtp" line of SDP or conveyed using the "fmtp" source attribute as specified in Section 6.3 of [RFC5576]. For a particular media format (i.e., RTP payload type), sprop-vps, sprop-sps, sprop-pps, sprop-sei, or sprop-dci MUST NOT be both included in the "a=fmtp" line of SDP and conveyed using the "fmtp" source attribute. When included in the "a=fmtp" line of SDP, those parameters are expressed as a media type string, in the form of a semicolon-separated list of parameter=value pairs. When conveyed in the "a=fmtp" line of SDP for a particular payload type, the parameters sprop-vps, sprop-sps, sprop-pps, sprop-sei, and sprop-dci MUST be applied to each SSRC with the payload type. When conveyed using the "fmtp" source attribute, these parameters are only associated with the given source and payload type as parts of the "fmtp" source attribute.¶
Informative note: Conveyance of sprop-vps, sprop-sps, and sprop-pps using the "fmtp" source attribute allows for out-of-band transport of parameter sets in topologies like Topo-Video-switch-MCU, as specified in [RFC7667].¶
A general usage of media representation in SDP is as follows:¶
m=video 49170 RTP/AVP 98
a=rtpmap:98 H266/90000
a=fmtp:98 profile-id=1;
          sprop-vps=<video parameter sets data>;
          sprop-sps=<sequence parameter set data>;
          sprop-pps=<picture parameter set data>;¶
A SIP offer/answer exchange wherein both parties are expected to both send and receive could look like the following. Only the media codec-specific parts of the SDP are shown. Some lines are wrapped due to text constraints.¶
Offerer->Answerer:

m=video 49170 RTP/AVP 98
a=rtpmap:98 H266/90000
a=fmtp:98 profile-id=1; level-id=83;¶
The above represents an offer for symmetric video communication using [VVC] and its payload specification at the Main 10 profile and level 5.1 (and, as levels are downgradable, all lower levels). Informally speaking, this offer tells the receiver of the offer that the sender is willing to receive up to 4Kp60 resolution at the maximum bitrates specified in [VVC]. At the same time, if this offer were accepted "as is", the offerer could expect the answerer to be able to receive and properly decode H.266 media up to and including level 5.1.¶
Answerer->Offerer:

m=video 49170 RTP/AVP 98
a=rtpmap:98 H266/90000
a=fmtp:98 profile-id=1; level-id=67¶
With this answer to the offer above, the system receiving the offer advises the offerer that it is incapable of handling H.266 at level 5.1 but is capable of decoding 1080p60. As H.266 video codecs must support decoding at all levels below the maximum level they implement, the resulting user experience would likely be that both systems send video at 1080p60. However, nothing prevents an encoder from further downgrading what it sends to, for example, 720p30 if it were short of cycles or bandwidth or for other reasons.¶
This section describes the negotiation of unicast messages using the offer/answer model as described in [RFC3264] and its updates. The section is split into subsections, covering a) media format configurations not involving non-temporal scalability; b) scalable media format configurations; c) the description of the use of those parameters not involving the media configuration itself but rather the parameters of the payload format design; and d) multicast.¶
A non-scalable VVC media configuration is such a configuration where no non-temporal scalability mechanisms are allowed. In [VVC] version 1, it is implied that general_profile_idc indicates one of the following profiles: Main 10, Main 10 Still Picture, Main 10 4:4:4, or Main 10 4:4:4 Still Picture, with general_profile_idc values of 1, 65, 33, and 97, respectively. Note that non-scalable media configurations include temporal scalability, in line with VVC's design philosophy and profile structure.¶
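For illustration only, the profile identification above can be captured as a simple lookup (Python; the names are not defined by this memo):¶
   # Non-normative illustration of the general_profile_idc values that
   # identify a non-scalable [VVC] version 1 media configuration.
   NON_SCALABLE_PROFILES = {
       1:  "Main 10",
       65: "Main 10 Still Picture",
       33: "Main 10 4:4:4",
       97: "Main 10 4:4:4 Still Picture",
   }

   def is_non_scalable(general_profile_idc):
       return general_profile_idc in NON_SCALABLE_PROFILES¶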
The following limitations and rules pertaining to the media configuration apply:¶
The parameters identifying a media format configuration for VVC are profile-id, tier-flag, sub-profile-id, level-id, and interop-constraints. These media configuration parameters, except level-id, MUST be used symmetrically.¶
The answerer MUST structure its answer according to one of the following three options:¶
Informative note: The above requirement for symmetric use does not apply for level-id and does not apply for the other bitstream or RTP stream properties and capability parameters, as described in Section 7.3.2.3 below.¶
The same RTP payload type number used in the offer for the media subtype H266 MUST be used in the answer when the answer includes recv-sublayer-id. When the answer does not include recv-sublayer-id, the answer MUST NOT contain a payload type number used in the offer for the media subtype H266 unless the configuration is exactly the same as in the offer or the configuration in the answer only differs from that in the offer with a different value of level-id. The answer MAY contain the recv-sublayer-id parameter if a VVC bitstream contains multiple operation points (using temporal scalability and sublayers) and sprop-sps or sprop-vps is included in the offer where information about sublayers is present in the first sequence parameter set or video parameter set contained in sprop-sps or sprop-vps, respectively. If sprop-sps or sprop-vps is provided in an offer, an answerer MAY select a particular operation point indicated in the first sequence parameter set or video parameter set contained in sprop-sps or sprop-vps, respectively. When the answer includes a recv-sublayer-id that is less than a sprop-sublayer-id in the offer, the following applies:¶
Informative note: When an offerer receives an answer that does not include recv-sublayer-id, it has to compare payload types not declared in the offer based on the media type (i.e., video/H266) and the above media configuration parameters with any payload types it has already declared. This will enable it to determine whether the configuration in question is new or whether it is equivalent to a configuration already offered, since a different payload type number may be used in the answer. The ability to perform operation point selection enables a receiver to utilize the temporal scalable nature of a VVC bitstream.¶
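A hypothetical exchange (all values are chosen for illustration only, and it assumes the offered sprop-vps/sprop-sps expose the sublayer information) in which the offered bitstream carries two temporal sublayers and the answerer selects the lower operation point could look like this:¶
   Offer:   a=fmtp:98 profile-id=1; sprop-sublayer-id=1;
                      sprop-vps=<video parameter sets data>;
                      sprop-sps=<sequence parameter set data>
   Answer:  a=fmtp:98 profile-id=1; recv-sublayer-id=0¶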
A scalable VVC media configuration is such a configuration where non-temporal scalability mechanisms are allowed. In [VVC] version 1, it is implied that general_profile_idc indicates one of the following profiles: Multilayer Main 10 and Multilayer Main 10 4:4:4, with general_profile_idc values of 17 and 49, respectively.¶
The following limitations and rules pertaining to the media configuration apply. They are listed in an order that would be logical for an implementation to follow:¶
The answerer MUST NOT include recv-ols-id unless the offer includes sprop-ols-id. When present, recv-ols-id MUST indicate a supported output layer set in the VPS that includes no layers other than all or a subset of the layers of the OLS referred to by sprop-ols-id. If unable, the answerer MUST remove the media format.¶
Informative note: If an offerer wants to offer more than one output layer set, it can do so by offering multiple VVC media with different payload types.¶
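A hypothetical scalable exchange (all values are illustrative; it assumes the VPS carried in sprop-vps defines the referenced output layer sets) could look like this:¶
   Offer:   a=fmtp:98 profile-id=17; sprop-ols-id=2;
                      sprop-vps=<video parameter sets data>
   Answer:  a=fmtp:98 profile-id=17; recv-ols-id=1¶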
The following limitations and rules pertain mostly to the configuration of the payload format's buffer management and apply to both scalable and non-scalable VVC.¶
The parameters sprop-max-don-diff and sprop-depack-buf-bytes describe the properties of an RTP stream that the offerer or the answerer is sending for the media format configuration. This differs from the normal usage of the offer/answer parameters; normally, such parameters declare the properties of the bitstream or RTP stream that the offerer or the answerer is able to receive. When dealing with VVC, the offerer assumes that the answerer will be able to receive media encoded using the configuration being offered.¶
Informative note: The above parameters, when present, apply to any RTP stream sent by a declaring entity with the same configuration. In other words, the applicability of the above parameters to RTP streams depends on the source endpoint. Rather than being bound to the payload type, the values may have to be applied to another payload type when being sent, as they apply to the configuration.¶
The following rules apply to transport of parameter sets in the offerer-to-answerer direction.¶
The following rules apply to transport of parameter sets in the answerer-to-offerer direction.¶
Figure 11 lists the interpretation of all the parameters that MAY be used for the various combinations of offer, answer, and direction attributes.¶
                           sendonly --+
    answer: recvonly, recv-ols-id --+ |
       recvonly w/o recv-ols-id --+ | |
answer: sendrecv, recv-ols-id --+ | | |
   sendrecv w/o recv-ols-id --+ | | | |
                              | | | | |
profile-id                    C D C D P
tier-flag                     C D C D P
level-id                      D D D D P
sub-profile-id                C D C D P
interop-constraints           C D C D P
max-recv-level-id             R R R R -
sprop-max-don-diff            P P - - P
sprop-depack-buf-bytes        P P - - P
depack-buf-cap                R R R R -
max-lsr                       R R R R -
max-fps                       R R R R -
sprop-dci                     P P - - P
sprop-sei                     P P - - P
sprop-vps                     P P - - P
sprop-sps                     P P - - P
sprop-pps                     P P - - P
sprop-sublayer-id             P P - - P
recv-sublayer-id              O O O O -
sprop-ols-id                  P P - - P
recv-ols-id                   X O X O -

Legend:
  C: configuration for sending and receiving bitstreams
  D: changeable configuration, same as C, except possible to answer
     with a different but consistent value (see the semantics of the
     six parameters related to profile, tier, and level on these
     parameters being consistent)
  P: properties of the bitstream to be sent
  R: receiver capabilities
  O: operation point selection
  X: MUST NOT be present
  -: not usable, when present MUST be ignored¶
Parameters used for declaring receiver capabilities are, in general, downgradable, i.e., they express the upper limit for a sender's possible behavior. Thus, a sender MAY select to set its encoder using only lower/lesser or equal values of these parameters.¶
When the answer does not include a recv-ols-id that is less than the sprop-ols-id in the offer, parameters declaring a configuration point are not changeable, with the exception of the level-id parameter for unicast usage, and these parameters express values a receiver expects to be used and MUST be used verbatim in the answer as in the offer.¶
When a sender's capabilities are declared with the configuration parameters, these parameters express a configuration that is acceptable for the sender to receive bitstreams. In order to achieve high interoperability levels, it is often advisable to offer multiple alternative configurations. It is impossible to offer multiple configurations in a single payload type. Thus, when multiple configuration offers are made, each offer requires its own RTP payload type associated with the offer. However, it is possible to offer multiple operation points using one configuration in a single payload type by including sprop-vps in the offer and recv-ols-id in the answer.¶
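As an illustration of offering multiple alternative configurations (the payload type numbers and parameter values are hypothetical), an offerer could list one payload type per configuration:¶
   m=video 49170 RTP/AVP 98 99
   a=rtpmap:98 H266/90000
   a=fmtp:98 profile-id=1; level-id=83
   a=rtpmap:99 H266/90000
   a=fmtp:99 profile-id=33; level-id=83¶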
An implementation SHOULD be able to understand all media type parameters (including all optional media type parameters), even if it does not support the functionality related to a parameter. This, in conjunction with proper application logic in the implementation, allows the implementation, after having received an offer, to create an answer by potentially downgrading one or more of the optional parameters to the point where the implementation can cope, leading to higher chances of interoperability beyond the most basic interop points (for which, as described above, no optional parameters are necessary).¶
Informative note: In implementations of previous H.26x payload formats, it was occasionally observed that implementations were incapable of parsing most (or all) of the optional parameters. As a result, the offer/answer exchange resulted in a baseline performance (using the default values for the optional parameters) with the resulting suboptimal user experience. However, there are valid reasons to forego the implementation complexity of implementing the parsing of some or all of the optional parameters, for example, when there is predetermined knowledge, not negotiated by an SDP-based offer/answer process, of the capabilities of the involved systems (walled gardens, baseline requirements defined in application standards higher up in the stack, and similar).¶
An answerer MAY extend the offer with additional media format configurations. However, to enable their usage, in most cases, a second offer is required from the offerer to provide the bitstream property parameters that the media sender will use. This also has the effect that the offerer has to be able to receive this media format configuration, not only to send it.¶
For bitstreams being delivered over multicast, the following rules apply:¶
When VVC over RTP is offered with SDP in a declarative style, as in Real Time Streaming Protocol (RTSP) [RFC7826] or Session Announcement Protocol (SAP) [RFC2974], the following considerations are necessary.¶
All parameters capable of indicating both bitstream properties and receiver capabilities are used to indicate only bitstream properties. For example, in this case, the parameters profile-id, tier-flag, and level-id declare the values used by the bitstream, not the capabilities for receiving bitstreams. As a result, the following interpretation of the parameters MUST be used:¶
Declaring actual configuration or bitstream properties:¶
Not usable (when present, they MUST be ignored):¶
When out-of-band transport of parameter sets is used, parameter sets MAY still be additionally transported in-band unless explicitly disallowed by an application, and some of these additional parameter sets may update some of the out-of-band transported parameter sets. An update of a parameter set refers to the sending of a parameter set of the same type using the same parameter set ID but with different values for at least one other parameter of the parameter set.¶
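A non-normative sketch (Python; the data structures are illustrative) of how a receiver might track such updates, keyed by parameter set type and ID:¶
   # Non-normative illustration: a parameter set "update" is a parameter
   # set of the same type and ID whose content differs from the one
   # previously received (in-band or out-of-band).
   active_parameter_sets = {}  # (ps_type, ps_id) -> raw payload bytes

   def receive_parameter_set(ps_type, ps_id, payload):
       key = (ps_type, ps_id)
       previous = active_parameter_sets.get(key)
       active_parameter_sets[key] = payload
       if previous is None:
           return "new"
       return "update" if previous != payload else "repeat"¶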
The following subsections define the use of the Picture Loss Indication (PLI) and Full Intra Request (FIR) feedback messages with [VVC]. The PLI is defined in [RFC4585], and the FIR message is defined in [RFC5104]. In accordance with this memo, unlike [HEVC], a sender MUST NOT send Slice Loss Indication (SLI) or Reference Picture Selection Indication (RPSI), and a receiver SHOULD ignore RPSI and treat a received SLI as a PLI.¶
As specified in Section 6.3.1 of [RFC4585], the reception of a PLI by a media sender indicates "the loss of an undefined amount of coded video data belonging to one or more pictures". Without having any specific knowledge of the setup of the bitstream (such as the use and location of in-band parameter sets, non-IRAP decoder refresh points, picture structures, and so forth), a reaction to the reception of a PLI by a VVC sender SHOULD be to send an IRAP picture and relevant parameter sets, potentially with sufficient redundancy so as to ensure correct reception. However, sometimes information about the bitstream structure is known. For example, such information can be parameter sets that have been conveyed out of band through mechanisms not defined in this document and that are known to stay static for the duration of the session. In that case, it is obviously unnecessary to send them in-band as a result of the reception of a PLI. Other examples could be devised based on a priori knowledge of different aspects of the bitstream structure. In all cases, the timing and congestion control mechanisms of [RFC4585] MUST be observed.¶
The purpose of the FIR message is to force an encoder to send an independent decoder refresh point as soon as possible while observing applicable congestion-control-related constraints, such as those set out in [RFC8082].¶
Upon reception of a FIR, a sender MUST send an IDR picture. Parameter sets MUST also be sent, except when there is a priori knowledge that the parameter sets have been correctly established. A typical example for that is an understanding between the sender and receiver, established by means outside this document, that parameter sets are exclusively sent out of band.¶
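The sender-side reactions discussed above can be summarized in the following non-normative sketch (Python; the encoder interface force_irap, force_idr, and send_parameter_sets is hypothetical):¶
   # Non-normative illustration of a media sender's reaction to feedback.
   def on_feedback(message, encoder, parameter_sets_out_of_band):
       if message == "SLI":
           message = "PLI"   # treat a received SLI as a PLI
       if message == "RPSI":
           return            # ignore RPSI
       if message == "PLI":
           # Without further knowledge of the bitstream structure, send
           # an IRAP picture.
           encoder.force_irap()
       elif message == "FIR":
           encoder.force_idr()
       if message in ("PLI", "FIR") and not parameter_sets_out_of_band:
           # Parameter sets accompany the refresh unless they are known
           # to be established already (e.g., conveyed out of band).
           encoder.send_parameter_sets()
       # The timing and congestion control rules of [RFC4585] still
       # govern when the resulting data may actually be sent.¶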
The scope of this section is limited to the payload format itself and to one feature of [VVC] that may pose a particularly serious security risk if implemented naively. The payload format, in isolation, does not form a complete system. Implementers are advised to read and understand relevant security-related documents, especially those pertaining to RTP (see the Security Considerations section in [RFC3550]) and the security of the call-control stack chosen (that may make use of the media type registration of this memo). Implementers should also consider known security vulnerabilities of video coding and decoding implementations in general and avoid those.¶
Within this RTP payload format, and with the exception of the user data SEI message as described below, no security threats other than those common to RTP payload formats are known. In other words, neither the various media-plane-based mechanisms nor the signaling part of this memo seem to pose a security risk beyond those common to all RTP-based systems.¶
RTP packets using the payload format defined in this specification are subject to the security considerations discussed in the RTP specification [RFC3550] and in any applicable RTP profile, such as RTP/AVP [RFC3551], RTP/AVPF [RFC4585], RTP/SAVP [RFC3711], or RTP/SAVPF [RFC5124]. However, as "Securing the RTP Framework: Why RTP Does Not Mandate a Single Media Security Solution" [RFC7202] discusses, it is not an RTP payload format's responsibility to discuss or mandate what solutions are used to meet the basic security goals, like confidentiality, integrity, and source authenticity for RTP in general. This responsibility lies with anyone using RTP in an application. They can find guidance on available security mechanisms and important considerations in "Options for Securing RTP Sessions" [RFC7201]. The rest of this section discusses the security-impacting properties of the payload format itself.¶
Because the data compression used with this payload format is applied end to end, any encryption needs to be performed after compression. A potential denial-of-service threat exists for data encodings using compression techniques that have non-uniform receiver-end computational load. The attacker can inject pathological datagrams into the bitstream that are complex to decode and that cause the receiver to be overloaded. [VVC] is particularly vulnerable to such attacks, as it is extremely simple to generate datagrams containing NAL units that affect the decoding process of many future NAL units. Therefore, the usage of data origin authentication and data integrity protection of at least the RTP packet is RECOMMENDED but NOT REQUIRED based on the thoughts of [RFC7202].¶
Like HEVC [RFC7798], [VVC] includes a user data Supplemental Enhancement Information (SEI) message. This SEI message allows inclusion of an arbitrary bitstring into the video bitstream. Such a bitstring could include JavaScript, machine code, and other active content. [VVC] leaves the handling of this SEI message to the receiving system. In order to avoid harmful side effects of the user data SEI message, decoder implementations cannot naively trust its content. For example, it would be a bad and insecure implementation practice to forward any JavaScript a decoder implementation detects to a web browser. The safest way to deal with user data SEI messages is to simply discard them, but that can have negative side effects on the quality of experience by the user.¶
End-to-end security with authentication, integrity, or confidentiality protection will prevent a MANE from performing media-aware operations other than discarding complete packets. In the case of confidentiality protection, it will even be prevented from discarding packets in a media-aware way. To be allowed to perform such operations, a MANE is required to be a trusted entity that is included in the security context establishment. This on-path inclusion of the MANE forgoes end-to-end security guarantees for the endpoints.¶
Congestion control for RTP SHALL be used in accordance with RTP [RFC3550] and with any applicable RTP profile, e.g., AVP [RFC3551] or AVPF [RFC4585]. If best-effort service is being used, an additional requirement is that users of this payload format MUST monitor packet loss to ensure that the packet loss rate is within an acceptable range. Packet loss is considered acceptable if a TCP flow across the same network path and experiencing the same network conditions would achieve an average throughput, measured on a reasonable timescale, that is not less than what all the RTP streams combined are achieving. This condition can be satisfied by implementing congestion-control mechanisms to adapt the transmission rate, by limiting the number of layers subscribed to for a layered multicast session, or by arranging for a receiver to leave the session if the loss rate is unacceptably high.¶
The bitrate adaptation necessary for obeying the congestion control principle is easily achievable when real-time encoding is used, for example, by adequately tuning the quantization parameter. However, when pre-encoded content is being transmitted, bandwidth adaptation requires the pre-coded bitstream to be tailored for such adaptivity. The key mechanisms available in [VVC] are temporal scalability and spatial/SNR scalability. A media sender can remove NAL units belonging to higher temporal sublayers (i.e., those NAL units with a high value of TID) or higher spatial/SNR layers until the sending bitrate drops to an acceptable range.¶
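A minimal, non-normative sketch of such stream thinning (Python; the NAL unit representation with a tid attribute is hypothetical):¶
   # Non-normative illustration: drop NAL units whose temporal sublayer
   # identifier (TID) exceeds the highest sublayer that still fits the
   # available bitrate.  Lowering max_tid step by step reduces the sent
   # bitrate without renegotiating the session.
   def thin_to_sublayer(nal_units, max_tid):
       return [nalu for nalu in nal_units if nalu.tid <= max_tid]¶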
The mechanisms mentioned above generally work within a defined profile and level; therefore, no renegotiation of the channel is required. Only when non-downgradable parameters (such as profile) are required to be changed does it become necessary to terminate and restart the RTP stream(s). This may be accomplished by using different RTP payload types.¶
MANEs MAY remove certain unusable packets from the RTP stream when that RTP stream was damaged due to previous packet losses. This can help reduce the network load in certain special cases. For example, MANEs can remove those FUs where the leading FUs belonging to the same NAL unit have been lost or those dependent slice segments when the leading slice segments belonging to the same slice have been lost, because the trailing FUs or dependent slice segments are meaningless to most decoders. MANEs can also remove higher temporal scalable layers if the outbound transmission (from the MANE's viewpoint) experiences congestion.¶
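A non-normative sketch of the FU-related cleanup described above (Python; the packet representation with is_fu, is_leading_fu, and nalu_id attributes is hypothetical):¶
   # Non-normative illustration: once the leading FU of a fragmented NAL
   # unit is lost, the remaining FUs of that NAL unit are useless to most
   # decoders and can be discarded by a MANE.
   def drop_orphan_fus(packets, nalu_ids_with_lost_leading_fu):
       kept = []
       for pkt in packets:
           if (pkt.is_fu and not pkt.is_leading_fu
                   and pkt.nalu_id in nalu_ids_with_lost_leading_fu):
               continue  # trailing FU of a damaged NAL unit
           kept.append(pkt)
       return kept¶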
A new media type has been registered with IANA; see Section 7.1.¶
Dr. Byeongdoo Choi is thanked for the video-codec-related technical discussion and other aspects in this memo. Xin Zhao and Dr. Xiang Li are thanked for their contributions on [VVC] specification descriptive content. Spencer Dawkins is thanked for his valuable review comments that led to great improvements of this memo. Some parts of this specification share text with the RTP payload format for HEVC [RFC7798]. We thank the authors of that specification for their excellent work.¶