This document describes the supported formats (muxers and demuxers) provided by the libavformat library.
The libavformat library provides some generic global options, which can be set on all the muxers and demuxers. In addition each muxer or demuxer may support so-called private options, which are specific for that component.
Options may be set by specifying -option value in the FFmpeg tools, by setting the value explicitly in the AVFormatContext options, or by using the libavutil/opt.h API for programmatic use.
The list of supported options follows:
Possible values:
Reduce buffering.
Set probing size in bytes, i.e. the size of the data to analyze to get stream information. A higher value will enable detecting more information in case it is dispersed into the stream, but will increase latency. Must be an integer no less than 32. It is 5000000 by default.
Set the maximum number of buffered packets when probing a codec. Default is 2500 packets.
Set packet size.
Set format flags. Some are implemented for a limited number of formats.
Possible values for input files:
Discard corrupted packets.
Enable fast, but inaccurate seeks for some formats.
Generate missing PTS if DTS is present.
Ignore DTS if PTS is also set. In case the PTS is set, the DTS value is set to NOPTS. This is ignored when the nofillin flag is set.
Ignore index.
Reduce the latency introduced by buffering during initial input streams analysis.
Do not fill in missing values in packet fields that can be exactly calculated.
Disable AVParsers; this needs +nofillin too.
Try to interleave output packets by DTS. At present, available only for AVIs with an index.
Possible values for output files:
Automatically apply bitstream filters as required by the output format. Enabled by default.
Only write platform-, build- and time-independent data. This ensures that file and data checksums are reproducible and match between platforms. Its primary use is for regression testing.
Write out packets immediately.
Stop muxing at the end of the shortest stream. It may be needed to increase max_interleave_delta to avoid flushing the longer streams before EOF.
Allow seeking to non-keyframes on demuxer level when supported if set to 1. Default is 0.
Specify how many microseconds are analyzed to probe the input. A higher value will enable detecting more accurate information, but will increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
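As a hedged illustration (input.ts and output.mp4 are placeholder names), raising both the analysis duration and the probing size can help detect streams that appear late in a transport stream, at the cost of startup latency:

ffmpeg -analyzeduration 10M -probesize 10M -i input.ts output.mp4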
Set decryption key.
Set max memory used for timestamp index (per stream).
Set max memory used for buffering real-time frames.
Print specific debug info.
Possible values:
Set maximum muxing or demuxing delay in microseconds.
Set number of frames used to probe fps.
Set microseconds by which audio packets should be interleaved earlier.
Set microseconds for each chunk.
Set size in bytes for each chunk.
Set error detection flags. f_err_detect is deprecated and should be used only via the ffmpeg tool.
Possible values:
Verify embedded CRCs.
Detect bitstream specification deviations.
Detect improper bitstream length.
Abort decoding on minor error detection.
Consider things that violate the spec and have not been seen in the wild as errors.
Consider all spec non-compliances as errors.
Consider things that a sane encoder should not do as an error.
Set maximum buffering duration for interleaving. The duration is expressed in microseconds, and defaults to 10000000 (10 seconds).
To ensure all the streams are interleaved correctly, libavformat will wait until it has at least one packet for each stream before actually writing any packets to the output file. When some streams are "sparse" (i.e. there are large gaps between successive packets), this can result in excessive buffering.
This field specifies the maximum difference between the timestamps of the first and the last packet in the muxing queue, above which libavformat will output a packet regardless of whether it has queued a packet for all the streams.
If set to 0, libavformat will continue buffering packets until it has a packet for each stream, regardless of the maximum timestamp difference between the buffered packets.
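As a sketch (file names are placeholders), setting the delta to 0 forces libavformat to keep buffering until every stream has a packet, which can help when remuxing files with sparse subtitle streams:

ffmpeg -i input.mkv -c copy -max_interleave_delta 0 output.mkv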
Use wallclock as timestamps if set to 1. Default is 0.
Possible values:
Shift timestamps to make them non-negative. Also note that this affects only leading negative timestamps, and not non-monotonic negative timestamps.
Shift timestamps so that the first timestamp is 0.
Enables shifting when required by the target format.
Disables shifting of timestamps.
When shifting is enabled, all output timestamps are shifted by the same amount. Audio, video, and subtitles desynching and relative timestamp differences are preserved compared to how they would have been without shifting.
Set number of bytes to skip before reading header and frames if set to 1. Default is 0.
Correct single timestamp overflows if set to 1. Default is 1.
Flush the underlying I/O stream after each packet. Default is -1 (auto), which means that the underlying protocol will decide; 1 enables it, and has the effect of reducing the latency; 0 disables it, and may increase I/O throughput in some cases.
Set the output time offset.
offset must be a time duration specification, see (ffmpeg-utils)the Time duration section in the ffmpeg-utils(1) manual.
The offset is added by the muxer to the output timestamps.
Specifying a positive offset means that the corresponding streams are delayed by the time duration specified in offset. Default value is 0 (meaning that no offset is applied).
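For example (file names are placeholders), to shift all output timestamps forward by 30 seconds while remuxing:

ffmpeg -i input.mp4 -c copy -output_ts_offset 30 output.mp4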
"," separated list of allowed demuxers. By default all are allowed.
Separator used to separate the fields printed on the command line about the Stream parameters. For example, to separate the fields with newlines and indentation:
ffprobe -dump_separator " " -i ~/videos/matrixbench_mpeg2.mpg
Specifies the maximum number of streams. This can be used to reject files that would require too many resources due to a large number of streams.
Skip estimation of input duration if it requires an additional probing for PTS at end of file. At present, applicable for MPEG-PS and MPEG-TS.
Set probing size, in bytes, for input duration estimation when it actually requires an additional probing for PTS at end of file (at present: MPEG-PS and MPEG-TS). It is aimed at users interested in better duration probing for itself, or indirectly, for example when using the concat demuxer.

The typical use case is an MPEG-TS CBR with a high bitrate, high video buffering, and a clean ending with similar PTS for video and audio: in such a scenario, the large physical gap between the last video packet and the last audio packet makes it necessary to read many bytes in order to get the video stream duration. Another use case is where the default probing behaviour only reaches a single video frame which is not the last one of the stream due to frame reordering, so the duration is not accurate.

Setting this option has a performance impact even for small files because the probing size is fixed. Default behaviour is a general purpose trade-off, largely adaptive, but the probing size will not be extended to get stream durations at all costs. Must be an integer no less than 1, or 0 for default behaviour.
Specify how strictly to follow the standards. f_strict is deprecated and should be used only via the ffmpeg tool.
Possible values:
strictly conform to an older, more strict version of the spec or reference software
strictly conform to all the things in the spec no matter what consequences
allow unofficial extensions
allow non standardized experimental things, experimental (unfinished/work in progress/not well tested) decoders and encoders. Note: experimental decoders can pose a security risk, do not use this for decoding untrusted input.
Format stream specifiers allow selection of one or more streams thatmatch specific properties.
The exact semantics of stream specifiers is defined by the avformat_match_stream_specifier() function declared in the libavformat/avformat.h header and documented in the (ffmpeg)Stream specifiers section in the ffmpeg(1) manual.
Demuxers are configured elements in FFmpeg that can read the multimedia streams from a particular type of file.
When you configure your FFmpeg build, all the supported demuxers are enabled by default. You can list all available ones using the configure option --list-demuxers.
You can disable all the demuxers using the configure option --disable-demuxers, and selectively enable a single demuxer with the option --enable-demuxer=DEMUXER, or disable it with the option --disable-demuxer=DEMUXER.
The option -demuxers of the ff* tools will display the list of enabled demuxers. Use -formats to view a combined list of enabled demuxers and muxers.
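For instance, to check whether a given demuxer (here the concat demuxer, chosen only as an example) is enabled in your build:

ffmpeg -demuxers | grep concat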
The description of some of the currently available demuxers follows.
Audible Format 2, 3, and 4 demuxer.
This demuxer is used to demux Audible Format 2, 3, and 4 (.aa) files.
Raw Audio Data Transport Stream AAC demuxer.
This demuxer is used to demux an ADTS input containing a single AAC stream along with any ID3v1/2 or APE tags in it.
Animated Portable Network Graphics demuxer.
This demuxer is used to demux APNG files. All headers, except the PNG signature, up to (but not including) the first fcTL chunk are transmitted as extradata. Frames are then split as being all the chunks between two fcTL ones, or between the last fcTL and IEND chunks.
Ignore the loop variable in the file if set. Default is enabled.
Maximum framerate in frames per second. Default of 0 imposes no limit.
Default framerate in frames per second when none is specified in the file(0 meaning as fast as possible). Default is 15.
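A hedged example (input and output names are placeholders) combining these options to cap playback at 30 frames per second while falling back to 10 when the file specifies no framerate:

ffmpeg -max_fps 30 -default_fps 10 -i input.apng output.mp4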
Advanced Systems Format demuxer.
This demuxer is used to demux ASF files and MMS network streams.
Do not try to resynchronize by looking for a certain optional start code.
Virtual concatenation script demuxer.
This demuxer reads a list of files and other directives from a text file anddemuxes them one after the other, as if all their packets had been muxedtogether.
The timestamps in the files are adjusted so that the first file starts at 0 and each next file starts where the previous one finishes. Note that it is done globally and may cause gaps if all streams do not have exactly the same length.
All files must have the same streams (same codecs, same time base, etc.).
The duration of each file is used to adjust the timestamps of the next file: if the duration is incorrect (because it was computed using the bit-rate or because the file is truncated, for example), it can cause artifacts. The duration directive can be used to override the duration stored in each file.
The script is a text file in extended-ASCII, with one directive per line. Empty lines, leading spaces and lines starting with ’#’ are ignored. The following directive is recognized:
file path
Path to a file to read; special characters and spaces must be escaped with backslash or single quotes.
All subsequent file-related directives apply to that file.
ffconcat version 1.0
Identify the script type and version.
To make FFmpeg recognize the format automatically, this directive must appear exactly as is (no extra space or byte-order-mark) on the very first line of the script.
duration dur
Duration of the file. This information can be specified from the file; specifying it here may be more efficient or help if the information from the file is not available or accurate.
If the duration is set for all files, then it is possible to seek in the whole concatenated video.
inpoint timestamp
In point of the file. When the demuxer opens the file it instantly seeks to the specified timestamp. Seeking is done so that all streams can be presented successfully at In point.
This directive works best with intra frame codecs, because for non-intra frame ones you will usually get extra packets before the actual In point and the decoded content will most likely contain frames before In point too.
For each file, packets before the file In point will have timestamps less than the calculated start timestamp of the file (negative in case of the first file), and the duration of the files (if not specified by the duration directive) will be reduced based on their specified In point.
Because of potential packets before the specified In point, packet timestamps may overlap between two concatenated files.
outpoint timestamp
Out point of the file. When the demuxer reaches the specified decoding timestamp in any of the streams, it handles it as an end of file condition and skips the current and all the remaining packets from all streams.
Out point is exclusive, which means that the demuxer will not output packets with a decoding timestamp greater than or equal to Out point.
This directive works best with intra frame codecs and formats where all streams are tightly interleaved. For non-intra frame codecs you will usually get additional packets with presentation timestamps after Out point, therefore the decoded content will most likely contain frames after Out point too. If your streams are not tightly interleaved you may not get all the packets from all streams before Out point, and you may only be able to decode the earliest stream until Out point.
The duration of the files (if not specified by the duration directive) will be reduced based on their specified Out point.
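The directives above can be combined. A sketch of a script (file names and timestamps are placeholders) that concatenates a 10-second excerpt from each of two clips:

ffconcat version 1.0
file clip-1.mp4
inpoint 5.0
outpoint 15.0
file clip-2.mp4
inpoint 0.0
outpoint 10.0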
file_packet_metadata key=value
Metadata of the packets of the file. The specified metadata will be set for each file packet. You can specify this directive multiple times to add multiple metadata entries. This directive is deprecated, use file_packet_meta instead.
file_packet_meta key value
Metadata of the packets of the file. The specified metadata will be set for each file packet. You can specify this directive multiple times to add multiple metadata entries.
option key value
Option to access, open and probe the file. Can be present multiple times.
stream
Introduce a stream in the virtual file. All subsequent stream-related directives apply to the last introduced stream. Some stream properties must be set in order to allow identifying the matching streams in the subfiles. If no streams are defined in the script, the streams from the first file are copied.
exact_stream_id id
Set the id of the stream. If this directive is given, the stream with the corresponding id in the subfiles will be used. This is especially useful for MPEG-PS (VOB) files, where the order of the streams is not reliable.
stream_meta key value
Metadata for the stream. Can be present multiple times.
stream_codec value
Codec for the stream.
stream_extradata hex_string
Extradata for the stream, encoded in hexadecimal.
chapter id start end
Add a chapter. id is a unique identifier, possibly small and consecutive.
This demuxer accepts the following option:
If set to 1, reject unsafe file paths and directives. A file path is considered safe if it does not contain a protocol specification and is relative, and all components only contain characters from the portable character set (letters, digits, period, underscore and hyphen) and have no period at the beginning of a component.
If set to 0, any file name is accepted.
The default is 1.
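A hedged usage sketch (list and output names are placeholders): when the list file contains absolute paths, safe must be disabled explicitly:

ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4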
If set to 1, try to perform automatic conversions on packet data to make the streams concatenable. The default is 1.
Currently, the only conversion is adding the h264_mp4toannexb bitstream filter to H.264 streams in MP4 format. This is necessary in particular if there are resolution changes.
If set to 1, every packet will contain the lavf.concat.start_time and the lavf.concat.duration packet metadata values, which are the start_time and the duration of the respective file segments in the concatenated output, expressed in microseconds. The duration metadata is only set if it is known based on the concat file. The default is 0.
# my first filename
file /mnt/share/file-1.wav
# my second filename including whitespace
file '/mnt/share/file 2.wav'
# my third filename including whitespace plus single quote
file '/mnt/share/file 3'\''.wav'
ffconcat version 1.0
file file-1.wav
duration 20.0
file subdir/file-2.wav
Dynamic Adaptive Streaming over HTTP demuxer.
This demuxer presents all AVStreams found in the manifest. By setting the discard flags on AVStreams the caller can decide which streams to actually receive. Each stream mirrors the id and bandwidth properties from the <Representation> as metadata keys named "id" and "variant_bitrate" respectively.
This demuxer accepts the following option:
16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).
DVD-Video demuxer, powered by libdvdnav and libdvdread.
Can directly ingest DVD titles, specifically sequential PGCs, intoa conversion pipeline. Menu assets, such as background video or audio,can also be demuxed given the menu’s coordinates (at best effort).
Block devices (DVD drives), ISO files, and directory structures are accepted. Activate with -f dvdvideo in front of one of these inputs.
This demuxer does NOT have decryption code of any kind. You are on your ownworking with encrypted DVDs, and should not expect support on the matter.
Underlying playback is handled by libdvdnav, and structure parsing by libdvdread. FFmpeg must be built with GPL library support available as well as the configure switches --enable-libdvdnav and --enable-libdvdread.
You will need to provide either the desired "title number" or exact PGC/PG coordinates. Many open-source DVD players and tools can aid in providing this information. If not specified, the demuxer will default to title 1, which works for many discs. However, due to the flexibility of the format, it is recommended to check manually. There are many discs that are authored strangely or with invalid headers.
If the input is a real DVD drive, please note that there are some drives which may silently fail on reading bad sectors from the disc, returning random bits instead, which is effectively corrupt data. This is especially prominent on aging or rotting discs. A second pass and integrity checks would be needed to detect the corruption. This is not an FFmpeg issue.
DVD-Video is not a directly accessible, linear container format in the traditional sense. Instead, it allows for complex and programmatic playback of carefully muxed MPEG-PS streams that are stored in headerless VOB files. To the end-user, these streams are known simply as "titles", but the actual logical playback sequence is defined by one or more "PGCs", or Program Group Chains, within the title. The PGC is in turn comprised of multiple "PGs", or "Programs", which are the actual video segments (and for a typical video feature, sequentially ordered). The PGC structure, along with stream layout and metadata, are stored in IFO files that need to be parsed. PGCs can be thought of as playlists in easier terms.
An actual DVD player relies on user GUI interaction via menus and an internal VM to drive the direction of demuxing. Generally, the user would either navigate (via menus) or automatically be redirected to the PGC of their choice. During this process and the subsequent playback, the DVD player’s internal VM also maintains a state and executes instructions that can create jumps to different sectors during playback. This is why libdvdnav is involved, as a linear read of the MPEG-PS blobs on the disc (VOBs) is not enough to produce the right sequence in many cases.
There are many other DVD structures (a long subject) that will not be discussed here. NAV packets, in particular, are handled by this demuxer to build accurate timing, but not emitted as a stream. For a good high-level understanding, refer to: https://code.videolan.org/videolan/libdvdnav/-/blob/master/doc/dvd_structures
This demuxer accepts the following options:
The title number to play. Must be set if pgc and pg are not set. Not applicable to menus. Default is 0 (auto), which currently only selects the first available title (title 1) and notifies the user about the implications.
The chapter, or PTT (part-of-title), number to start at. Not applicable to menus.Default is 1.
The chapter, or PTT (part-of-title), number to end at. Not applicable to menus.Default is 0, which is a special value to signal end at the last possible chapter.
The video angle number, referring to what is essentially an additional video stream that is composed from alternate frames interleaved in the VOBs. Not applicable to menus. Default is 1.
The region code to use for playback. Some discs may use this to default playback at a particular angle in different regions. This option will not affect the region code of a real DVD drive, if used as an input. Not applicable to menus. Default is 0, "world".
Demux menu assets instead of navigating a title. Requires exact coordinates of the menu (menu_lu, menu_vts, pgc, pg). Default is false.
The menu language to demux. In DVD, menus are grouped by language. Default is 1, the first language unit.
The VTS where the menu lives, or 0 if it is a VMG menu (root-level).Default is 1, menu of the first VTS.
The entry PGC to start playback, in conjunction with pg. Alternative to setting title. Chapter markers are not supported at this time. Must be explicitly set for menus. Default is 0, automatically resolve from the value of title.
The entry PG to start playback, in conjunction with pgc. Alternative to setting title. Chapter markers are not supported at this time. Default is 1, the first PG of the PGC.
Enable this to have accurate chapter (PTT) markers and duration measurement, which requires a slow second pass read in order to index the chapter marker timestamps from NAV packets. This is non-ideal extra work for real optical drives. It is recommended and faster to use this option with a backup of the DVD structure stored on a hard drive. Not compatible with pgc and pg. Default is 0, false.
Skip padding cells (i.e. cells shorter than 1 second) from the beginning. There exist many discs with filler segments at the beginning of the PGC, often with junk data intended for controlling a real DVD player’s buffering speed and with no other material data value. Not applicable to menus. Default is 1, true.
ffmpeg -f dvdvideo -title 3 -i <path to DVD> ...
ffmpeg -f dvdvideo -chapter_start 3 -chapter_end 6 -title 1 -i <path to DVD> ...
ffmpeg -f dvdvideo -chapter_start 5 -chapter_end 5 -title 1 -i <path to DVD> ...
ffmpeg -f dvdvideo -menu 1 -menu_lu 1 -menu_vts 1 -pgc 1 -pg 1 -i <path to DVD> ...
Electronic Arts Multimedia format demuxer.
This format is used by various Electronic Arts games.
Normally the VP6 alpha channel (if it exists) is returned as a secondary video stream. By setting this option you can make the demuxer return a single video stream which contains the alpha channel in addition to the ordinary video.
Interoperable Master Format demuxer.
This demuxer presents audio and video streams found in an IMF Composition, as specified in SMPTE ST 2067-2.
ffmpeg [-assetmaps <path of ASSETMAP1>,<path of ASSETMAP2>,...] -i <path of CPL> ...
If -assetmaps is not specified, the demuxer looks for a file called ASSETMAP.xml in the same directory as the CPL.
Adobe Flash Video Format demuxer.
This demuxer is used to demux FLV files and RTMP network streams. In case of live network streams, if you force the format, you may use the live_flv option instead of flv to survive timestamp discontinuities. KUX is an FLV variant used on the Youku platform.
ffmpeg -f flv -i myfile.flv ...
ffmpeg -f live_flv -i rtmp://<any.server>/anything/key ...
Allocate the streams according to the onMetaData array content.
Ignore the size of previous tag value.
Output all context of the onMetadata.
Animated GIF demuxer.
It accepts the following options:
Set the minimum valid delay between frames in hundredths of seconds.Range is 0 to 6000. Default value is 2.
Set the maximum valid delay between frames in hundredths of seconds. Range is 0 to 65535. Default value is 65535 (nearly eleven minutes), the maximum value allowed by the specification.
Set the default delay between frames in hundredths of seconds.Range is 0 to 6000. Default value is 10.
GIF files can contain information to loop a certain number of times (or infinitely). If ignore_loop is set to 1, then the loop setting from the input will be ignored and looping will not occur. If set to 0, then looping will occur and will cycle the number of times according to the GIF. Default value is 1.
For example, with the overlay filter, place an infinitely looping GIFover another video:
ffmpeg -i input.mp4 -ignore_loop 0 -i input.gif -filter_complex overlay=shortest=1 out.mkv
Note that in the above example the shortest option for the overlay filter is used to end the output video at the length of the shortest input file, which in this case is input.mp4, as the GIF in this example loops infinitely.
HLS demuxer
Apple HTTP Live Streaming demuxer.
This demuxer presents all AVStreams from all variant streams. The id field is set to the bitrate variant index number. By setting the discard flags on AVStreams (by pressing ’a’ or ’v’ in ffplay), the caller can decide which variant streams to actually receive. The total bitrate of the variant that the stream belongs to is available in a metadata key named "variant_bitrate".
It accepts the following options:
Segment index to start live streams at (negative values are from the end).
Prefer to use #EXT-X-START if it is present in the playlist instead of live_start_index.
’,’ separated list of file extensions that hls is allowed to access.
This blocks disallowed extensions from probing. It also requires all available segments to have matching extensions to the format, except mpegts, which is always allowed. It is recommended to set the whitelists correctly instead of depending on extensions. Enabled by default.
Maximum number of times an insufficient list is attempted to be reloaded. Default value is 1000.
The maximum number of times to load m3u8 when it refreshes without new segments.Default value is 1000.
Use persistent HTTP connections. Applicable only for HTTP streams.Enabled by default.
Use multiple HTTP connections for downloading HTTP segments.Enabled by default for HTTP/1.1 servers.
Use HTTP partial requests for downloading HTTP segments.0 = disable, 1 = enable, -1 = auto, Default is auto.
Set options for the demuxer of media segments using a list of key=value pairs separated by :.
Maximum number of times to reload a segment on error, useful when segment skip on network error is not desired.Default value is 0.
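A hedged illustration (the URL is a placeholder): to join a live stream three segments from the end of the playlist:

ffmpeg -live_start_index -3 -i https://example.com/master.m3u8 -c copy output.ts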
Image file demuxer.
This demuxer reads from a list of image files specified by a pattern. The syntax and meaning of the pattern is specified by the option pattern_type.
The pattern may contain a suffix which is used to automatically determine the format of the images contained in the files.
The size, the pixel format, and the format of each image must be the same for all the files in the sequence.
This demuxer accepts the following options:
Set the frame rate for the video stream. It defaults to 25.
If set to 1, loop over the input. Default value is 0.
Select the pattern type used to interpret the provided filename.
pattern_type accepts one of the following values.
Disable pattern matching, therefore the video will only contain the specified image. You should use this option if you do not want to create sequences from multiple images and your filenames may contain special pattern characters.
Select a sequence pattern type, used to specify a sequence of files indexed by sequential numbers.
A sequence pattern may contain the string "%d" or "%0Nd", which specifies the position of the characters representing a sequential number in each filename matched by the pattern. If the form "%0Nd" is used, the string representing the number in each filename is 0-padded and N is the total number of 0-padded digits representing the number. The literal character ’%’ can be specified in the pattern with the string "%%".
If the sequence pattern contains "%d" or "%0Nd", the first filename of the file list specified by the pattern must contain a number inclusively contained between start_number and start_number+start_number_range-1, and all the following numbers must be sequential.
For example the pattern "img-%03d.bmp" will match a sequence of filenames of the form img-001.bmp, img-002.bmp, ..., img-010.bmp, etc.; the pattern "i%%m%%g-%d.jpg" will match a sequence of filenames of the form i%m%g-1.jpg, i%m%g-2.jpg, ..., i%m%g-10.jpg, etc.
Note that the pattern does not necessarily have to contain "%d" or "%0Nd"; for example, to convert a single image file img.jpeg you can employ the command:
ffmpeg -i img.jpeg img.png
Select a glob wildcard pattern type.
The pattern is interpreted like a glob() pattern. This is only selectable if libavformat was compiled with globbing support.
Select a mixed glob wildcard/sequence pattern.
If your version of libavformat was compiled with globbing support, and the provided pattern contains at least one glob meta character among %*?[]{} that is preceded by an unescaped "%", the pattern is interpreted like a glob() pattern, otherwise it is interpreted like a sequence pattern.
All glob special characters %*?[]{} must be prefixed with "%". To escape a literal "%" you shall use "%%".
For example the pattern foo-%*.jpeg will match all the filenames prefixed by "foo-" and terminating with ".jpeg", and foo-%?%?%?.jpeg will match all the filenames prefixed with "foo-", followed by a sequence of three characters, and terminating with ".jpeg".
This pattern type is deprecated in favor of glob and sequence.
Default value is glob_sequence.
Set the pixel format of the images to read. If not specified the pixel format is guessed from the first image file in the sequence.
Set the index of the file matched by the image file pattern to start reading from. Default value is 0.
Set the index interval range to check when looking for the first image file in the sequence, starting from start_number. Default value is 5.
If set to 1, will set frame timestamp to modification time of image file. Note that monotonicity of timestamps is not provided: images go in the same order as without this option. Default value is 0. If set to 2, will set frame timestamp to the modification time of the image file in nanosecond precision.
Set the video size of the images to read. If not specified the video size is guessed from the first image file in the sequence.
If set to 1, will add two extra fields to the metadata found in input, making them also available for other filters (see drawtext filter for examples). Default value is 0. The extra fields are described below:
Corresponds to the full path to the input file being read.
Corresponds to the name of the file being read.
Use ffmpeg for creating a video from the images in the file sequence img-001.jpeg, img-002.jpeg, ..., assuming an input frame rate of 10 frames per second:
ffmpeg -framerate 10 -i 'img-%03d.jpeg' out.mkv
ffmpeg -framerate 10 -start_number 100 -i 'img-%03d.jpeg' out.mkv
ffmpeg -framerate 10 -pattern_type glob -i "*.png" out.mkv
The Game Music Emu library is a collection of video game music file emulators.
See https://bitbucket.org/mpyne/game-music-emu/overview for more information.
It accepts the following options:
Set the index of which track to demux. The demuxer can only export one track. Track indexes start at 0. Default is to pick the first track. The number of tracks is exported as the tracks metadata entry.
Set the sampling rate of the exported track. Range is 1000 to 999999. Default is 44100.
The demuxer buffers the entire file into memory. Adjust this value to set the maximum buffer size, which, in turn, acts as a ceiling for the size of files that can be read. Default is 50 MiB.
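A hedged sketch (the file name is a placeholder, and the track choice is arbitrary): to extract the second track of a game-music file, output at 48 kHz:

ffmpeg -track_index 1 -sample_rate 48000 -i music.nsf output.flac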
ModPlug based module demuxer
See https://github.com/Konstanty/libmodplug
It will export one 2-channel 16-bit 44.1 kHz audio stream. Optionally, a pal8 16-color video stream can be exported with or without printed metadata.
It accepts the following options:
Apply a simple low-pass filter. Can be 1 (on) or 0 (off). Default is 0.
Set amount of reverb. Range 0-100. Default is 0.
Set delay in ms, clamped to 40-250 ms. Default is 0.
Apply bass expansion a.k.a. XBass or megabass. Range is 0 (quiet) to 100 (loud). Default is 0.
Set cutoff, i.e. the upper bound for bass frequencies. Range is 10-100 Hz. Default is 0.
Apply a Dolby Pro-Logic surround effect. Range is 0 (quiet) to 100 (heavy). Default is 0.
Set surround delay in ms, clamped to 5-40 ms. Default is 0.
The demuxer buffers the entire file into memory. Adjust this value to set the maximum buffer size, which, in turn, acts as a ceiling for the size of files that can be read. Range is 0 to 100 MiB. 0 removes the buffer size limit (not recommended). Default is 5 MiB.
String which is evaluated using the eval API to assign colors to the generated video stream. Variables which can be used are x, y, w, h, t, speed, tempo, order, pattern and row.
Generate video stream. Can be 1 (on) or 0 (off). Default is 0.
Set video frame width in ’chars’ where one char indicates 8 pixels. Range is 20-512. Default is 30.
Set video frame height in ’chars’ where one char indicates 8 pixels. Range is 20-512. Default is 30.
Print metadata on video stream. Includes speed, tempo, order, pattern, row and ts (time in ms). Can be 1 (on) or 0 (off). Default is 1.
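As a sketch, assuming a hypothetical module file song.mod and that the video-enabling option is named video_stream as in the libmodplug demuxer's option list, the optional video stream can be enabled and mapped alongside the audio:

```shell
# Enable the pal8 video stream and map both the audio and the generated
# video into the output. song.mod is a placeholder input file.
ffmpeg -video_stream 1 -i song.mod -map 0:a -map 0:v out.mkv
```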
libopenmpt-based module demuxer.
See https://lib.openmpt.org/libopenmpt/ for more information.
Some files have multiple subsongs (tracks); these can be selected with the subsong option.
It accepts the following options:
Set the subsong index. This can be either ’all’, ’auto’, or the index of the subsong. Subsong indexes start at 0. The default is ’auto’.
The default value is to let libopenmpt choose.
Set the channel layout. Valid values are 1, 2, and 4 channel layouts.The default value is STEREO.
Set the sample rate for libopenmpt to output. Range is from 1000 to INT_MAX. The default value is 48000.
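For example, the subsong option described above can be used to pick a specific track (module.it is a hypothetical input file):

```shell
# Decode the third subsong (index 2) of a hypothetical module file at
# the default 48 kHz output rate and encode it to FLAC.
ffmpeg -subsong 2 -i module.it -c:a flac out.flac
```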
Demuxer for QuickTime File Format & ISO/IEC Base Media File Format (ISO/IEC 14496-12 or MPEG-4 Part 12, ISO/IEC 15444-12 or JPEG 2000 Part 12).
Registered extensions: mov, mp4, m4a, 3gp, 3g2, mj2, psp, m4b, ism, ismv, isma, f4v
This demuxer accepts the following options:
Enable loading of external tracks, disabled by default.Enabling this can theoretically leak information in some use cases.
Allows loading of external tracks via absolute paths, disabled by default. Enabling this poses a security risk. It should only be enabled if the source is known to be non-malicious.
When seeking, identify the closest point in each stream individually and demux packets in that stream from the identified point. This can lead to a different sequence of packets compared to demuxing linearly from the beginning. Default is true.
Ignore any edit list atoms. The demuxer, by default, modifies the stream index to reflect the timeline described by the edit list. Default is false.
Modify the stream index to reflect the timeline described by the edit list. ignore_editlist must be set to false for this option to be effective. If both ignore_editlist and this option are set to false, then only the start of the stream index is modified to reflect initial dwell time or starting timestamp described by the edit list. Default is true.
Don’t parse chapters. This includes GoPro ’HiLight’ tags/moments. Note that chapters are only parsed when input is seekable. Default is false.
For seekable fragmented input, set fragment’s starting timestamp from media fragment random access box, if present.
The following options are available:
Auto-detect whether to set mfra timestamps as PTS or DTS (default)
Set mfra timestamps as DTS
Set mfra timestamps as PTS
Don’t use mfra box to set timestamps
For fragmented input, set fragment’s starting timestamp to baseMediaDecodeTime from the tfdt box. Default is enabled, which will prefer to use the tfdt box to set DTS. Disable to use the earliest_presentation_time from the sidx box. In either case, the timestamp from the mfra box will be used if it’s available and use_mfra_for is set to pts or dts.
Export unrecognized boxes within the udta box as metadata entries. The first four characters of the box type are set as the key. Default is false.
Export entire contents of XMP_ box and uuid box as a string with key xmp. Note that if export_all is set and this option isn’t, the contents of XMP_ box are still exported but with key XMP_. Default is false.
4-byte key required to decrypt Audible AAX and AAX+ files. See Audible AAX subsection below.
Fixed key used for handling Audible AAX/AAX+ files. It has been pre-set so it should not be necessary to specify.
16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).
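For instance, a CENC-encrypted file can be decrypted while remuxing by passing the key as an input option (the key below is a dummy placeholder, not a real key):

```shell
# Remux a CENC/AES-128 CTR encrypted MP4 without re-encoding.
# The 32-hex-digit key is a dummy value for illustration only.
ffmpeg -decryption_key 00112233445566778899aabbccddeeff -i encrypted.mp4 -c copy decrypted.mp4
```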
Very high sample deltas written in a trak’s stts box may occasionally be intended but usually they are written in error or used to store a negative value for dts correction when treated as signed 32-bit integers. This option lets the user set an upper limit, beyond which the delta is clamped to 1. Values greater than the limit if negative when cast to int32 are used to adjust onward dts.
Unit is the track time scale. Range is 0 to UINT_MAX. Default is UINT_MAX - 48000*10 which allows up to a 10 second dts correction for 48 kHz audio streams while accommodating 99.9% of uint32 range.
Interleave packets from multiple tracks at demuxer level. For badly interleaved files, this prevents playback issues caused by large gaps between packets in different tracks, as MOV/MP4 do not have packet placement requirements. However, this can cause excessive seeking on very badly interleaved files, due to seeking between tracks, so disabling it may prevent I/O issues, at the expense of playback.
Audible AAX files are encrypted M4B files, and they can be decrypted by specifying a 4-byte activation secret.
ffmpeg -activation_bytes 1CEB00DA -i test.aax -vn -c:a copy output.mp4
MPEG-2 transport stream demuxer.
This demuxer accepts the following options:
Set size limit for looking up a new synchronization. Default value is 65536.
Skip PMTs for programs not defined in the PAT. Default value is 0.
Override teletext packet PTS and DTS values with the timestamps calculated from the PCR of the first program which the teletext stream is part of and is not discarded. Default value is 1, set this option to 0 if you want your teletext packet PTS and DTS values untouched.
Output option carrying the raw packet size in bytes. Show the detected raw packet size, cannot be set by the user.
Scan and combine all PMTs. The value is an integer with value from -1 to 1 (-1 means automatic setting, 1 means enabled, 0 means disabled). Default value is -1.
Re-use existing streams when a PMT’s version is updated and elementary streams move to different PIDs. Default value is 0.
Set maximum size, in bytes, of packet emitted by the demuxer. Payloads above this size are split across multiple packets. Range is 1 to INT_MAX/2. Default is 204800 bytes.
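As a sketch, the merge_pmt_versions option described above can be applied while remuxing a transport stream (input.ts is a placeholder file name):

```shell
# Remux a transport stream while re-using existing streams across PMT
# version updates, so stream indexes stay stable.
ffmpeg -merge_pmt_versions 1 -i input.ts -c copy out.mkv
```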
MJPEG encapsulated in multi-part MIME demuxer.
This demuxer allows reading of MJPEG, where each frame is represented as a part of a multipart/x-mixed-replace stream.
The default implementation applies a relaxed standard to multi-part MIME boundary detection, to prevent regression with numerous existing endpoints not generating a proper MIME MJPEG stream. Turning this option on by setting it to 1 will result in a stricter check of the boundary value.
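For example, strict boundary checking can be enforced while recording a network MJPEG stream (the URL is a hypothetical camera endpoint):

```shell
# Enforce strict MIME boundary checking while recording a hypothetical
# multipart MJPEG network stream.
ffmpeg -strict_mime_boundary 1 -i http://example.com/stream.mjpg -c copy out.mkv
```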
Raw video demuxer.
This demuxer allows one to read raw video data. Since there is no header specifying the assumed video parameters, the user must specify them in order to be able to decode the data correctly.
This demuxer accepts the following options:
Set input video frame rate. Default value is 25.
Set the input video pixel format. Default value is yuv420p.
Set the input video size. This value must be specified explicitly.
For example to read a rawvideo file input.raw with ffplay, assuming a pixel format of rgb24, a video size of 320x240, and a frame rate of 10 images per second, use the command:
ffplay -f rawvideo -pixel_format rgb24 -video_size 320x240 -framerate 10 input.raw
RCWT (Raw Captions With Time) is a format native to ccextractor, a commonly used open source tool for processing 608/708 Closed Captions (CC) sources. For more information on the format, see (ffmpeg-formats)rcwtenc.
This demuxer implements the specification as of March 2024, which has been stable and unchanged since April 2014.
ffmpeg -i CC.rcwt.bin CC.ass
Note that if your output appears to be empty, you may have to manually set the decoder’s data_field option to pick the desired CC substream.
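Assuming the second data field carries the desired substream, the decoder option might be set like this (the value name second follows the closed captions decoder's documented choices; treat this as a sketch):

```shell
# Pick the second CC data field when decoding RCWT captions; data_field
# is a decoder option, so it is placed before -i as an input option.
ffmpeg -data_field second -i CC.rcwt.bin CC.ass
```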
ffmpeg -i CC.rcwt.bin -c:s copy CC.scc
Note that the SCC format does not support all of the possible CC extensions that can be stored in RCWT (such as EIA-708).
SBaGen script demuxer.
This demuxer reads the script language used by SBaGen (http://uazu.net/sbagen/) to generate binaural beats sessions. An SBG script looks like this:
-SE
a: 300-2.5/3 440+4.5/0
b: 300-2.5/0 440+4.5/3
off: -
NOW      == a
+0:07:00 == b
+0:14:00 == a
+0:21:00 == b
+0:30:00    off
An SBG script can mix absolute and relative timestamps. If the script uses either only absolute timestamps (including the script start time) or only relative ones, then its layout is fixed, and the conversion is straightforward. On the other hand, if the script mixes both kinds of timestamps, then the NOW reference for relative timestamps will be taken from the current time of day at the time the script is read, and the script layout will be frozen according to that reference. That means that if the script is directly played, the actual times will match the absolute timestamps up to the sound controller’s clock accuracy, but if the user somehow pauses the playback or seeks, all times will be shifted accordingly.
JSON captions used for TED Talks.
TED does not provide links to the captions, but they can be guessed from the page. The file tools/bookmarklets.html from the FFmpeg source tree contains a bookmarklet to expose them.
This demuxer accepts the following option:
Set the start time of the TED talk, in milliseconds. The default is 15000 (15s). It is used to sync the captions with the downloadable videos, because they include a 15s intro.
Example: convert the captions to a format most players understand:
ffmpeg -i http://www.ted.com/talks/subtitles/id/1/lang/en talk1-en.srt
VapourSynth wrapper.
Due to security concerns, VapourSynth scripts will not be autodetected so the input format has to be forced. For ff* CLI tools, add -f vapoursynth before the input -i yourscript.vpy.
This demuxer accepts the following option:
The demuxer buffers the entire script into memory. Adjust this value to set the maximum buffer size, which in turn acts as a ceiling for the size of scripts that can be read. Default is 1 MiB.
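Putting this together, a script input is read by forcing the demuxer explicitly (yourscript.vpy is a placeholder):

```shell
# Force the vapoursynth demuxer for a script input, since autodetection
# is disabled for security reasons.
ffmpeg -f vapoursynth -i yourscript.vpy out.mkv
```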
Sony Wave64 Audio demuxer.
This demuxer accepts the following options:
See the same option for the wav demuxer.
RIFF Wave Audio demuxer.
This demuxer accepts the following options:
Specify the maximum packet size in bytes for the demuxed packets. By default this is set to 0, which means that a sensible value is chosen based on the input format.
Muxers are configured elements in FFmpeg which allow writing multimedia streams to a particular type of file.
When you configure your FFmpeg build, all the supported muxers are enabled by default. You can list all available muxers using the configure option --list-muxers.
You can disable all the muxers with the configure option --disable-muxers and selectively enable / disable single muxers with the options --enable-muxer=MUXER / --disable-muxer=MUXER.
The option -muxers of the ff* tools will display the list of enabled muxers. Use -formats to view a combined list of enabled demuxers and muxers.
A description of some of the currently available muxers follows.
This section covers raw muxers. They accept a single stream matching the designated codec. They do not store timestamps or metadata. The recognized extension is the same as the muxer name unless indicated otherwise.
It comprises the following muxers. The media type and the eventual extensions used to automatically select the muxer from the output extensions are also shown.
Dolby Digital, also known as AC-3.
CRI Middleware ADX audio.
This muxer will write out the total sample count near the start of the first packet when the output is seekable and the count can be stored in 32 bits.
aptX (Audio Processing Technology for Bluetooth)
aptX HD (Audio Processing Technology for Bluetooth) audio
AVS2-P2 (Audio Video Standard - Second generation - Part 2) /IEEE 1857.4 video
AVS3-P2 (Audio Video Standard - Third generation - Part 2) /IEEE 1857.10 video
Chinese AVS (Audio Video Standard - First generation)
Codec 2 audio.
No extension is registered so format name has to be supplied e.g. with the ffmpeg CLI tool -f codec2raw.
Generic data muxer.
This muxer accepts a single stream with any codec of any type. The input stream has to be selected using the -map option with the ffmpeg CLI tool.
No extension is registered so format name has to be supplied e.g. with the ffmpeg CLI tool -f data.
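For example, assuming the input contains a data stream, it can be copied out verbatim with -map and the forced format name:

```shell
# Copy the first data stream of a hypothetical input into a raw data
# file, forcing the data muxer since no extension is registered.
ffmpeg -i INPUT -map 0:d:0 -c copy -f data out.bin
```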
Raw DFPWM1a (Dynamic Filter Pulse Width Modulation) audio muxer.
BBC Dirac video.
The Dirac Pro codec is a subset and is standardized as SMPTE VC-2.
Avid DNxHD video.
It is standardized as SMPTE VC-3. Accepts DNxHR streams.
DTS Coherent Acoustics (DCA) audio
Dolby Digital Plus, also known as Enhanced AC-3
MPEG-5 Essential Video Coding (EVC) / EVC / MPEG-5 Part 1 EVC video
ITU-T G.722 audio
ITU-T G.723.1 audio
ITU-T G.726 big-endian ("left-justified") audio.
No extension is registered so format name has to be supplied e.g. with the ffmpeg CLI tool -f g726.
ITU-T G.726 little-endian ("right-justified") audio.
No extension is registered so format name has to be supplied e.g. with the ffmpeg CLI tool -f g726le.
Global System for Mobile Communications audio
ITU-T H.261 video
ITU-T H.263 / H.263-1996, H.263+ / H.263-1998 / H.263 version 2 video
ITU-T H.264 / MPEG-4 Part 10 AVC video. Bitstream shall be converted to Annex B syntax if it’s in length-prefixed mode.
ITU-T H.265 / MPEG-H Part 2 HEVC video. Bitstream shall be converted to Annex B syntax if it’s in length-prefixed mode.
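When extracting such a raw stream from an MP4 with stream copy, the length-prefixed bitstream can be converted with the corresponding bitstream filter; recent FFmpeg versions insert it automatically via the auto_bsf format flag, so the explicit form below is a sketch:

```shell
# Convert length-prefixed H.264 to Annex B while extracting a raw
# elementary stream; use hevc_mp4toannexb for H.265 input.
ffmpeg -i in.mp4 -c:v copy -bsf:v h264_mp4toannexb out.h264
```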
MPEG-4 Part 2 video
Motion JPEG video
Meridian Lossless Packing, also known as Packed PCM
MPEG-1 Audio Layer II audio
MPEG-1 Part 2 video.
ITU-T H.262 / MPEG-2 Part 2 video
AV1 low overhead Open Bitstream Units muxer.
Temporal delimiter OBUs will be inserted in all temporal units of the stream.
Raw uncompressed video.
Bluetooth SIG low-complexity subband codec audio
Dolby TrueHD audio
SMPTE 421M / VC-1 video
For example, use ffmpeg to generate a rawvideo output file: ffmpeg -f lavfi -i testsrc -t 10 -s hd1080p testsrc.yuv
Since the rawvideo muxer does not store the information related to size and format, this information must be provided when demuxing the file:
ffplay -video_size 1920x1080 -pixel_format rgb24 -f rawvideo testsrc.rgb
This section covers raw PCM (Pulse-Code Modulation) audio muxers.
They accept a single stream matching the designated codec. They do not store timestamps or metadata. The recognized extension is the same as the muxer name.
It comprises the following muxers. The optional additional extension used to automatically select the muxer from the output extension is also shown in parentheses.
PCM A-law
PCM 32-bit floating-point big-endian
PCM 32-bit floating-point little-endian
PCM 64-bit floating-point big-endian
PCM 64-bit floating-point little-endian
PCM mu-law
PCM signed 16-bit big-endian
PCM signed 16-bit little-endian
PCM signed 24-bit big-endian
PCM signed 24-bit little-endian
PCM signed 32-bit big-endian
PCM signed 32-bit little-endian
PCM signed 8-bit
PCM unsigned 16-bit big-endian
PCM unsigned 16-bit little-endian
PCM unsigned 24-bit big-endian
PCM unsigned 24-bit little-endian
PCM unsigned 32-bit big-endian
PCM unsigned 32-bit little-endian
PCM unsigned 8-bit
PCM Archimedes VIDC
This section covers formats belonging to the MPEG-1 and MPEG-2 Systems family.
The MPEG-1 Systems format (also known as ISO/IEC 11172-1 or MPEG-1 program stream) has been adopted for the format of media tracks stored in VCD (Video Compact Disc).
The MPEG-2 Systems standard (also known as ISO/IEC 13818-1) covers two container formats, one known as transport stream and one known as program stream; only the latter is covered here.
The MPEG-2 program stream format (also known as VOB due to the corresponding file extension) is an extension of MPEG-1 program stream: in addition to supporting different codecs for the audio and video streams, it also stores subtitles and navigation metadata. MPEG-2 program stream has been adopted for storing media streams in SVCD and DVD storage devices.
This section comprises the following muxers.
MPEG-1 Systems / MPEG-1 program stream muxer.
MPEG-1 Systems / MPEG-1 program stream (VCD) muxer.
This muxer can be used to generate tracks in the format accepted by the VCD (Video Compact Disc) storage devices.
It is the same as the ‘mpeg’ muxer with a few differences.
MPEG-2 program stream (VOB) muxer.
MPEG-2 program stream (DVD VOB) muxer.
This muxer can be used to generate tracks in the format accepted by the DVD (Digital Versatile Disc) storage devices.
This is the same as the ‘vob’ muxer with a few differences.
MPEG-2 program stream (SVCD VOB) muxer.
This muxer can be used to generate tracks in the format accepted by the SVCD (Super Video Compact Disc) storage devices.
This is the same as the ‘vob’ muxer with a few differences.
Set user-defined mux rate expressed as a number of bits/s. If not specified the automatically computed mux rate is employed. Default value is 0.
Set initial demux-decode delay in microseconds. Default value is 500000.
This section covers formats belonging to the QuickTime / MOV family, including the MPEG-4 Part 14 format and the ISO base media file format (ISOBMFF). These formats share a common structure based on ISOBMFF.
The MOV format was originally developed for use with Apple QuickTime. It was later used as the basis for the MPEG-4 Part 1 (later Part 14) format, also known as ISO/IEC 14496-1. That format was then generalized into ISOBMFF, also named MPEG-4 Part 12 format, ISO/IEC 14496-12, or ISO/IEC 15444-12.
It comprises the following muxers.
Third Generation Partnership Project (3GPP) format for 3G UMTS multimedia services
Third Generation Partnership Project 2 (3GP2 or 3GPP2) format for 3G CDMA2000 multimedia services, similar to ‘3gp’ with extensions and limitations
Adobe Flash Video format
MPEG-4 audio file format, as MOV/MP4 but limited to contain only audio streams, typically played with Apple iPod devices
Microsoft IIS (Internet Information Services) Smooth Streaming Audio/Video (ISMV or ISMA) format. This is based on the MPEG-4 Part 14 format with a few incompatible variants, used to stream media files for the Microsoft IIS server.
QuickTime player format identified by the .mov extension
MP4 or MPEG-4 Part 14 format
PlayStation Portable MP4/MPEG-4 Part 14 format variant. This is based on the MPEG-4 Part 14 format with a few incompatible variants, used to play files on PlayStation devices.
The ‘mov’, ‘mp4’, and ‘ismv’ muxers support fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location.
This data is usually written at the end of the file, but it can be moved to the start for better playback by adding +faststart to the -movflags, or by using the qt-faststart tool.
A fragmented file consists of a number of fragments, where packets and metadata about these packets are stored together. Writing a fragmented file has the advantage that the file is decodable even if the writing is interrupted (while a normal MOV/MP4 is undecodable if it is not properly finished), and it requires less memory when writing very long files (since writing normal MOV/MP4 files stores info about every single packet in memory until the file is closed). The downside is that it is less compatible with other applications.
Fragmentation is enabled by setting one of the options that define how to cut the file into fragments:
If more than one condition is specified, fragments are cut when one of the specified conditions is fulfilled. The exception to this is the option min_frag_duration, which has to be fulfilled for any of the other conditions to apply.
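As a sketch, a fragmented file can be produced by remuxing with fragmentation flags (in.mp4 is a placeholder input):

```shell
# Write a fragmented MP4, cutting a new fragment at each video keyframe;
# empty_moov writes an initial moov without samples, a common pairing
# for streaming-oriented output.
ffmpeg -i in.mp4 -c copy -movflags frag_keyframe+empty_moov frag.mp4
```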
Override major brand.
Enable to skip writing the name inside a hdlr box. Default is false.
Set the media encryption key in hexadecimal format.
Set the media encryption key identifier in hexadecimal format.
Configure the encryption scheme; allowed values are ‘none’ and ‘cenc-aes-ctr’.
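The three encryption options above are used together; both hex values below are dummy placeholders for a real key and key ID:

```shell
# Apply Common Encryption (CENC/AES-CTR) while remuxing without
# re-encoding. Replace the dummy hex strings with real values.
ffmpeg -i in.mp4 -c copy -encryption_scheme cenc-aes-ctr \
  -encryption_key 00112233445566778899aabbccddeeff \
  -encryption_kid 112233445566778899aabbccddeeff00 out.mp4
```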
Create fragments that are duration microseconds long.
Interleave samples within fragments (max number of consecutive samples; lower is tighter interleaving, but with more overhead). It is set to 0 by default.
Create fragments that contain up to size bytes of payload data.
Specify iods number for the audio profile atom (from -1 to 255), default is -1.
Specify iods number for the video profile atom (from -1 to 255), default is -1.
Specify number of lookahead entries for ISM files (from 0 to 255), default is 0.
Do not create fragments that are shorter than duration microseconds long.
Reserves space for the moov atom at the beginning of the file instead of placing the moov atom at the end. If the space reserved is insufficient, muxing will fail.
Specify gamma value for gama atom (as a decimal number from 0 to 10), default is 0.0. Must be set together with movflags +write_gama.
Set various muxing switches. The following flags can be used:
write CMAF (Common Media Application Format) compatible fragmented MP4 output
write DASH (Dynamic Adaptive Streaming over HTTP) compatible fragmented MP4 output
Similarly to the ‘omit_tfhd_offset’ flag, this flag avoids writing the absolute base_data_offset field in tfhd atoms, but does so by using the new default-base-is-moof flag instead. This flag is new from 14496-12:2012. This may make the fragments easier to parse in certain circumstances (avoiding basing track fragment location calculations on the implicit end of the previous track fragment).
delay writing the initial moov until the first fragment is cut, or until the first fragment flush
Disable Nero chapter markers (chpl atom). Normally, both Nero chapters and a QuickTime chapter track are written to the file. With this option set, only the QuickTime chapter track will be written. Nero chapters can cause failures when the file is reprocessed with certain tagging programs, like mp3Tag 2.61a and iTunes 11.3; most likely other versions are affected as well.
Run a second pass moving the index (moov atom) to the beginning of the file. This operation can take a while, and will not work in various situations such as fragmented output, thus it is not enabled by default.
Allow the caller to manually choose when to cut fragments, by calling av_write_frame(ctx, NULL) to write a fragment with the packets written so far. (This is only useful with other applications integrating libavformat, not from ffmpeg.)
signal that the next fragment is discontinuous from earlier ones
fragment at every frame
start a new fragment at each video keyframe
write a global sidx index at the start of the file
create a live smooth streaming feed (for pushing to a publishing point)
Enables utilization of version 1 of the CTTS box, in which the CTS offsets can be negative. This enables the initial sample to have DTS/CTS of zero, and reduces the need for edit lists for some cases such as video tracks with B-frames. Additionally, it eases conformance with the DASH-IF interoperability guidelines.
This option is implicitly set when writing ‘ismv’ (Smooth Streaming) files.
Do not write any absolute base_data_offset in tfhd atoms. This avoids tying fragments to absolute byte positions in the file/streams.
If writing a colr atom, prioritise usage of the ICC profile if it exists in stream packet side data.
add RTP hinting tracks to the output file
Write a separate moof (movie fragment) atom for each track. Normally, packets for all tracks are written in a moof atom (which is slightly more efficient), but with this option set, the muxer writes one moof/mdat pair for each track, making it easier to separate tracks.
Skip writing of sidx atom. When bitrate overhead due to sidx atom is high, this option could be used for cases where sidx atom is not mandatory. When the ‘global_sidx’ flag is enabled, this option is ignored.
skip writing the mfra/tfra/mfro trailer for fragmented files
use mdta atom for metadata
write colr atom even if the color info is unspecified. This flag is experimental, may be renamed or changed, do not use from scripts.
write deprecated gama atom
For recoverability - write the output file as a fragmented file. This allows the intermediate file to be read while being written (in particular, if the writing process is aborted uncleanly). When writing is finished, the file is converted to a regular, non-fragmented file, which is more compatible and allows easier and quicker seeking.
If writing is aborted, the intermediate file can manually be remuxed to get a regular, non-fragmented file of what had been written into the unfinished file.
Set the timescale written in the movie header box (mvhd). Range is 1 to INT_MAX. Default is 1000.
Add RTP hinting tracks to the output file.
The following flags can be used:
use mode 0 for H.264 in RTP
use MP4A-LATM packetization instead of MPEG4-GENERIC for AAC
use RFC 2190 packetization instead of RFC 4629 for H.263
send RTCP BYE packets when finishing
do not send RTCP sender reports
skip writing iods atom (default value is true)
use edit list (default value is auto)
use stream ids as track ids (default value is false)
Set the timescale used for video tracks. Range is 0 to INT_MAX. If set to 0, the timescale is automatically set based on the native stream time base. Default is 0.
Force or disable writing bitrate box inside stsd box of a track. The box contains decoding buffer size (in bytes), maximum bitrate and average bitrate for the track. The box will be skipped if none of these values can be computed. Default is -1 or auto, which will write the box only in MP4 mode.
Write producer time reference box (PRFT) with a specified time source for the NTP field in the PRFT box. Set value as ‘wallclock’ to specify timesource as wallclock time and ‘pts’ to specify timesource as input packets’ PTS values.
Specify on to force writing a timecode track, off to disable it and auto to write a timecode track only for mov and mp4 output (default).
Setting value to ‘pts’ is applicable only for a live encoding use case, where PTS values are set as wallclock time at the source. For example, an encoding use case with decklink capture source where video_pts and audio_pts are set to ‘abs_wallclock’.
For example, use ffmpeg to create a live smooth streaming feed: ffmpeg -re <normal input/transcoding options> -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
A64 Commodore 64 video muxer.
This muxer accepts a single a64_multi or a64_multi5 codec video stream.
Raw AC-4 audio muxer.
This muxer accepts a single ac4 audio stream.
When enabled, write a CRC checksum for each packet to the output; default is false.
Audio Data Transport Stream muxer.
It accepts a single AAC stream.
Enable to write ID3v2.4 tags at the start of the stream. Default is disabled.
Enable to write APE tags at the end of the stream. Default is disabled.
Enable to set MPEG version bit in the ADTS frame header to 1 which indicates MPEG-2. Default is 0, which indicates MPEG-4.
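As a sketch, the ID3v2 option described above can be enabled while encoding to an ADTS stream (in.wav is a placeholder input; the option names follow the descriptions above):

```shell
# Encode to AAC in an ADTS stream and write ID3v2.4 tags at the start.
ffmpeg -i in.wav -c:a aac -write_id3v2 1 -f adts out.aac
```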
MD STUDIO audio muxer.
This muxer accepts a single ATRAC1 audio stream with either one or two channels and a sample rate of 44100 Hz.
As AEA supports storing the track title, this muxer will also write the title from the stream’s metadata to the container.
Audio Interchange File Format muxer.
Enable ID3v2 tags writing when set to 1. Default is 0 (disabled).
Select ID3v2 version to write. Currently only versions 3 and 4 (aka ID3v2.3 and ID3v2.4) are supported. The default is version 4.
High Voltage Software’s Lego Racers game audio muxer.
It accepts a single ADPCM_IMA_ALP stream with no more than 2 channels and a sample rate not greater than 44100 Hz.
Extensions: tun, pcm
Set file type.
type accepts the following values:
Set file type as music. Must have a sample rate of 22050 Hz.
Set file type as sfx.
Set file type as per output file extension: .pcm results in type pcm, else type tun is set. (default)
3GPP AMR (Adaptive Multi-Rate) audio muxer.
It accepts a single audio stream containing an AMR NB stream.
AMV (Actions Media Video) format muxer.
Ubisoft Rayman 2 APM audio muxer.
It accepts a single ADPCM IMA APM audio stream.
Animated Portable Network Graphics muxer.
It accepts a single APNG video stream.
Force a delay expressed in seconds after the last frame of each repetition. Default value is 0.0.
Specify how many times to play the content; 0 causes an infinite loop, with 1 there is no loop.
For example, use ffmpeg to generate an APNG output with 2 repetitions, and with a delay of half a second after the first repetition: ffmpeg -i INPUT -final_delay 0.5 -plays 2 out.apng
Argonaut Games ASF audio muxer.
It accepts a single ADPCM audio stream.
Override file major version, specified as an integer; default value is 2.
Override file minor version, specified as an integer; default value is 1.
Embed file name into file, if not specified use the output file name. The name is truncated to 8 characters.
Argonaut Games CVG audio muxer.
It accepts a single one-channel ADPCM 22050 Hz audio stream.
The loop and reverb options set the corresponding flags in the header which can be later retrieved to process the audio stream accordingly.
skip sample rate check (default is false)
set loop flag (default is false)
set reverb flag (default is true)
Advanced / Active Systems (or Streaming) Format audio muxer.
The ‘asf_stream’ variant should be selected for streaming.
Note that Windows Media Audio (wma) and Windows Media Video (wmv) use this muxer too.
Set the muxer packet size as a number of bytes. By tuning this setting you may reduce data fragmentation or muxer overhead depending on your source. Default value is 3200, minimum is 100, maximum is 64Ki.
ASS/SSA (SubStation Alpha) subtitles muxer.
It accepts a single ASS subtitles stream.
Write dialogue events immediately, even if they are out-of-order; default is false, otherwise they are cached until the expected time event is found.
AST (Audio Stream) muxer.
This format is used to play audio on some Nintendo Wii games.
It accepts a single audio stream.
The loopstart and loopend options can be used to define a section of the file to loop for players honoring such options.
Specify loop start position expressed in milliseconds, from -1 to INT_MAX; in case -1 is set then no loop is specified (default -1) and the loopend value is ignored.
Specify loop end position expressed in milliseconds, from 0 to INT_MAX, default is 0; in case 0 is set it assumes the total stream duration.
SUN AU audio muxer.
It accepts a single audio stream.
Audio Video Interleaved muxer.
AVI is a proprietary format developed by Microsoft, and later formally specified through the Open DML specification.
Because of differences in players implementations, it might be required to set some options to make sure that the generated output can be correctly played by the target player.
If set to true, store positive height for raw RGB bitmaps, which indicates the bitmap is stored bottom-up. Note that this option does not flip the bitmap which has to be done manually beforehand, e.g. by using the ‘vflip’ filter. Default is false and indicates the bitmap is stored top down.
Reserve the specified amount of bytes for the OpenDML master index of each stream within the file header. By default additional master indexes are embedded within the data packets if there is no space left in the first master index and are linked together as a chain of indexes. This index structure can cause problems for some use cases, e.g. third-party software strictly relying on the OpenDML index specification or when file seeking is slow. Reserving enough index space in the file header avoids these problems.
The required index space depends on the output file size and should be about 16 bytes per gigabyte. When this option is omitted or set to zero the necessary index space is guessed.
Default value is 0.
Write the channel layout mask into the audio stream header.
This option is enabled by default. Disabling the channel mask can be useful in specific scenarios, e.g. when merging multiple audio streams into one for compatibility with software that only supports a single audio stream in AVI (see (ffmpeg-filters)the "amerge" section in the ffmpeg-filters manual).
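As a sketch, the reserve_index_space option described above can be set while transcoding to AVI (in.mp4 is a placeholder; the codec choices are illustrative):

```shell
# Reserve OpenDML master index space in the header (roughly 16 bytes
# per gigabyte of expected output) instead of chaining indexes.
ffmpeg -i in.mp4 -c:v mpeg4 -c:a ac3 -reserve_index_space 1024 out.avi
```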
AV1 (Alliance for Open Media Video codec 1) image format muxer.
This muxer stores images encoded using the AV1 codec.
It accepts one or two video streams. In case two video streams are provided, the second one shall contain a single plane storing the alpha mask.
In case more than one image is provided, the generated output is considered an animated AVIF and the number of loops can be specified with the loop option.
This is based on the specification by the Alliance for Open Media at https://aomediacodec.github.io/av1-avif.
Number of times to loop an animated AVIF; 0 specifies an infinite loop. Default is 0.
Set the timescale written in the movie header box (mvhd). Range is 1 to INT_MAX. Default is 1000.
ShockWave Flash (SWF) / ActionScript Virtual Machine 2 (AVM2) format muxer.
It accepts one audio stream, one video stream, or both.
G.729 (.bit) file format muxer.
It accepts a single G.729 audio stream.
Apple CAF (Core Audio Format) muxer.
It accepts a single audio stream.
Codec2 audio muxer.
It accepts a single codec2 audio stream.
Chromaprint fingerprinter muxer.
To enable compilation of this filter you need to configure FFmpeg with --enable-chromaprint.
This muxer feeds audio data to the Chromaprint library, which generates a fingerprint for the provided audio data. See: https://acoustid.org/chromaprint
It takes a single signed native-endian 16-bit raw audio stream of at most 2 channels.
Select version of algorithm to fingerprint with. Range is 0 to 4. Version 3 enables silence detection. Default is 1.
Format to output the fingerprint as. Accepts the following options:
Base64 compressed fingerprint (default)
Binary compressed fingerprint
Binary raw fingerprint
Threshold for detecting silence. Range is from -1 to 32767, where -1 disables silence detection. Silence detection can only be used with version 3 of the algorithm. Silence detection must be disabled for use with the AcoustID service. Default is -1.
CRC (Cyclic Redundancy Check) muxer.
This muxer computes and prints the Adler-32 CRC of all the input audio and video frames. By default audio frames are converted to signed 16-bit raw audio and video frames to raw video before computing the CRC.
The output of the muxer consists of a single line of the form: CRC=0xCRC, where CRC is a hexadecimal number 0-padded to 8 digits containing the CRC for all the decoded input frames.
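The checksum is Adler-32, as provided e.g. by zlib. The sketch below only illustrates the CRC=0xCRC output form described above; the input bytes are a stand-in for real decoded frames, and whether ffmpeg seeds its checksum exactly as zlib does is not specified here, so treat the value as illustrative:

```python
import zlib

# Stand-in for the concatenated decoded frames the muxer checksums.
frames = b"example raw audio and video bytes"

# Adler-32 checksum, zero-padded to 8 hex digits, in the CRC=0xCRC form.
print("CRC=0x%08X" % zlib.adler32(frames))
```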
See also the framecrc muxer.
Use ffmpeg to compute the CRC of the input, and store it in the file out.crc:
ffmpeg -i INPUT -f crc out.crc
Use ffmpeg to print the CRC to stdout with the command:
ffmpeg -i INPUT -f crc -
With ffmpeg, you can select the output format to which the audio and video frames are encoded before computing the CRC by specifying the audio and video codec and format. For example, to compute the CRC of the input audio converted to PCM unsigned 8-bit and the input video converted to MPEG-2 video, use the command:
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
Dynamic Adaptive Streaming over HTTP (DASH) muxer.
This muxer creates segments and manifest files according to theMPEG-DASH standard ISO/IEC 23009-1:2014 and following standardupdates.
For more information see:
This muxer creates an MPD (Media Presentation Description) manifest file and segment files for each stream. Segment files are placed in the same directory as the MPD manifest file.
The segment filename might contain pre-defined identifiers used in the manifest SegmentTemplate section, as defined in section 5.3.9.4.4 of the standard.
Available identifiers are $RepresentationID$, $Number$, $Bandwidth$, and $Time$. In addition to the standard identifiers, an ffmpeg-specific $ext$ identifier is also supported. When specified, ffmpeg will replace $ext$ in the file name with the muxing format’s extension, such as mp4, webm, etc.
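As an illustration of how these identifiers are substituted, here is a hypothetical helper (expand_template is not part of ffmpeg) applied to the default media segment template:

```python
import re

def expand_template(template, rep_id, number, bandwidth, time, ext):
    """Hypothetical helper mimicking the identifier substitution described above."""
    values = {"RepresentationID": rep_id, "Number": number,
              "Bandwidth": bandwidth, "Time": time, "ext": ext}

    def repl(match):
        name, fmt = match.group(1), match.group(2)
        value = values[name]
        # Identifiers look like $Name$ or $Name%05d$ (printf-style width).
        return (fmt % value) if fmt else str(value)

    return re.sub(r"\$(\w+)(%\d*d)?\$", repl, template)

# Default media segment template used by the dash muxer:
print(expand_template("chunk-stream$RepresentationID$-$Number%05d$.$ext$",
                      rep_id=0, number=3, bandwidth=800000, time=0, ext="mp4"))
```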
Assign streams to adaptation sets, specified in the MPD manifest AdaptationSets section.
An adaptation set contains a set of one or more streams accessed as a single subset, e.g. corresponding to streams encoded at different sizes selectable by the user depending on the available bandwidth, or to different audio streams with a different language.
Each adaptation set is specified with the syntax:
id=index,streams=streams
where index must be a numerical index, and streams is a sequence of ','-separated stream indices. Multiple adaptation sets can be specified, separated by spaces.
To map all video (or audio) streams to an adaptation set, v (or a) can be used as a stream identifier instead of IDs.
When no assignment is defined, this defaults to an adaptation set foreach stream.
The following optional fields can also be specified:
Define the descriptor as defined by ISO/IEC 23009-1:2014/Amd.2:2015.
For example:
<SupplementalProperty schemeIdUri=\"urn:mpeg:dash:srd:2014\" value=\"0,0,0,1,1,2,2\"/>
The descriptor string should be a self-closing XML tag.
Override the global fragment duration specified with the frag_duration option.
Override the global fragment type specified with the frag_type option.
Override the global segment duration specified with the seg_duration option.
Mark an adaptation set as containing streams meant to be used for Trick Mode for the referenced adaptation set.
A few examples of possible values for the adaptation_sets option follow:
id=0,seg_duration=2,frag_duration=1,frag_type=duration,streams=v id=1,seg_duration=2,frag_type=none,streams=a
id=0,seg_duration=2,frag_type=none,streams=0 id=1,seg_duration=10,frag_type=none,trick_id=0,streams=1
Set DASH segment files type.
Possible values:
The DASH segment file format will be selected based on the stream codec. This is the default mode.
the dash segment files will be in ISOBMFF/MP4 format
the dash segment files will be in WebM format
Set the maximum number of segments kept outside of the manifest before removing from disk.
Set container format (mp4/webm) options using a ':'-separated list of key=value parameters. Values containing the ':' special character must be escaped.
Set the length in seconds of fragments within segments; a fractional value can also be set.
Set the type of interval for fragmentation.
Possible values:
set one fragment per segment
fragment at every frame
fragment at specific time intervals
fragment at keyframes and following P-Frame reordering (Video only, experimental)
Write a global SIDX atom. Applicable only for single file, mp4 output, non-streaming mode.
HLS master playlist name. Default is master.m3u8.
Generate HLS playlist files. The master playlist is generated with the filename specified by the hls_master_name option. One media playlist file is generated for each stream with filenames media_0.m3u8, media_1.m3u8, etc.
Specify a list of ':'-separated key=value options to pass to the underlying HTTP protocol. Applicable only for HTTP output.
Use persistent HTTP connections. Applicable only for HTTP output.
Override User-Agent field in HTTP header. Applicable only for HTTPoutput.
Ignore IO errors during open and write. Useful for long-duration runswith network output. This is disabled by default.
Enable or disable segment index correction logic. Applicable only when use_template is enabled and use_timeline is disabled. This is disabled by default.
When enabled, the logic monitors the flow of segment indexes. If a stream's segment index value is not at the expected real-time position, then the logic corrects that index value.
Typically this logic is needed in live streaming use cases. Network bandwidth fluctuations are common during long-run streaming. Each fluctuation can cause the segment indexes to fall behind the expected real-time position.
DASH-templated name to use for the initialization segment. Default is init-stream$RepresentationID$.$ext$. $ext$ is replaced with the file name extension specific for the segment format.
Enable Low-latency DASH by constraining the presence and values of some elements. This is disabled by default.
Enable Low-latency HLS (LHLS). Add the #EXT-X-PREFETCH tag with the current segment’s URI. The hls.js player folks are trying to standardize an open LHLS spec. The draft spec is available at https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md.
This option tries to comply with the above open spec. It enables the streaming and hls_playlist options automatically. This is an experimental feature.
Note: This is not Apple’s version of LHLS. See https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis
Publish the master playlist repeatedly, after every specified number of segment intervals.
Set the maximum playback rate indicated as appropriate for the purposes of automatically adjusting playback latency and buffer occupancy during normal playback by clients.
DASH-templated name to use for the media segments. Default is chunk-stream$RepresentationID$-$Number%05d$.$ext$. $ext$ is replaced with the file name extension specific for the segment format.
Use the given HTTP method to create output files. Generally set to PUT or POST.
Set the minimum playback rate indicated as appropriate for the purposes of automatically adjusting playback latency and buffer occupancy during normal playback by clients.
Set one or more MPD manifest profiles.
Possible values:
MPEG-DASH ISO Base media file format live profile
DVB-DASH profile
Default value is dash.
Enable or disable removal of all segments when finished. This isdisabled by default.
Set the segment length in seconds (a fractional value can be set). The value is treated as the average segment duration when the use_template option is enabled and the use_timeline option is disabled, and as the minimum segment duration for all other use cases. Default value is 5.
Enable or disable storing all segments in one file, accessed using byte ranges. This is disabled by default.
The name of the single file can be specified with the single_file_name option; if not specified, the basename of the manifest file with the output format extension is assumed.
DASH-templated name to use for the manifest baseURL element. Implies that the single_file option is set to true. In the template, $ext$ is replaced with the file name extension specific for the segment format.
Enable or disable chunk streaming mode of output. In chunk streaming mode, each frame will be a moof fragment which forms a chunk. This is disabled by default.
Set an intended target latency in seconds for serving (a fractional value can be set). Applicable only when the streaming and write_prft options are enabled. This is an informative field that clients can use to measure the latency of the service.
Set timeout for socket I/O operations expressed in seconds (a fractional value can be set). Applicable only for HTTP output.
Set the MPD update period, for dynamic content. The unit is seconds. If set to 0, the period is automatically computed. Default value is 0.
Enable or disable use of SegmentTemplate instead of SegmentList in the manifest. This is enabled by default.
Enable or disable use of SegmentTimeline within the SegmentTemplate manifest section. This is enabled by default.
URL of the page that will return the UTC timestamp in ISO format, for example https://time.akamai.com/?iso
Set the maximum number of segments kept in the manifest, discarding the oldest one. This is useful for live streaming.
If the value is 0, all segments are kept in the manifest. Default value is 0.
Write Producer Reference Time elements on supported streams. This also enables writing prft boxes in the underlying muxer. Applicable only when the utc_url option is enabled. It is set to auto by default, in which case the muxer will attempt to enable it only in modes that require it.
Generate a DASH output reading from an input source in realtime using ffmpeg.
Two multimedia streams are generated from the input file, both containing a video stream encoded through ‘libx264’, and an audio stream encoded with ‘libfdk_aac’. The first multimedia stream contains video with a bitrate of 800k and audio at the default rate, the second with video scaled to 320x170 pixels at 300k and audio resampled at 22050 Hz.
The window_size option keeps only the latest 5 segments with the default duration of 5 seconds.
ffmpeg -re -i <input> -map 0 -map 0 -c:a libfdk_aac -c:v libx264 \-b:v:0 800k -profile:v:0 main \-b:v:1 300k -s:v:1 320x170 -profile:v:1 baseline -ar:a:1 22050 \-bf 1 -keyint_min 120 -g 120 -sc_threshold 0 -b_strategy 0 \-use_timeline 1 -use_template 1 -window_size 5 \-adaptation_sets "id=0,streams=v id=1,streams=a" \-f dash /path/to/out.mpd
D-Cinema audio muxer.
It accepts a single 6-channel audio stream resampled at 96000 Hz, encoded with the ‘pcm_s24daud’ codec.
Use ffmpeg to mux input audio to a ‘5.1’ channel layout resampled at 96000 Hz:
ffmpeg -i INPUT -af aresample=96000,pan=5.1 slow.302
For ffmpeg versions before 7.0 you might have to use the ‘asetnsamples’ filter to limit the muxed packet size, because this format does not support muxing packets larger than 65535 bytes (3640 samples). For newer ffmpeg versions audio is automatically packetized to 36000 byte (2000 sample) packets.
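The figures above follow from simple arithmetic, assuming 6 channels of 24-bit (3-byte) samples:

```python
channels = 6
bytes_per_sample = 3                       # 24-bit audio
frame_bytes = channels * bytes_per_sample  # 18 bytes per sample frame

print(65535 // frame_bytes)  # 3640 samples fit under the 65535-byte limit
print(2000 * frame_bytes)    # a 2000-sample packet is 36000 bytes
```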
DV (Digital Video) muxer.
It accepts exactly one ‘dvvideo’ video stream and at most two ‘pcm_s16’ audio streams. More constraints are defined by the properties of the video, which must correspond to a supported DV video profile, and on the framerate.
Use ffmpeg to convert the input:
ffmpeg -i INPUT -s:v 720x480 -pix_fmt yuv411p -r 29.97 -ac 2 -ar 48000 -y out.dv
FFmpeg metadata muxer.
This muxer writes the streams metadata in the ‘ffmetadata’ format.
See (ffmpeg-formats) the Metadata chapter for information about the format.
Use ffmpeg to extract metadata from an input file to a metadata.ffmeta file in ‘ffmetadata’ format:
ffmpeg -i INPUT -f ffmetadata metadata.ffmeta
FIFO (First-In First-Out) muxer.
The ‘fifo’ pseudo-muxer allows the separation of encoding and muxing by using a first-in-first-out queue and running the actual muxer in a separate thread.
This is especially useful in combination with the tee muxer and can be used to send data to several destinations with different reliability/writing speed/latency.
The target muxer is either selected from the output name or specified through the fifo_format option.
The behavior of the ‘fifo’ muxer if the queue fills up or if the output fails (e.g. if a packet cannot be written to the output) is selectable:
API users should be aware that callback functions (interrupt_callback, io_open and io_close) used within its AVFormatContext must be thread-safe.
If failure occurs, attempt to recover the output. This is especially useful when used with network output, since it makes it possible to restart streaming transparently. By default this option is set to false.
If set to true, in case the fifo queue fills up, packets will be dropped rather than blocking the encoder. This makes it possible to continue streaming without delaying the input, at the cost of omitting part of the stream. By default this option is set to false, so in such cases the encoder will be blocked until the muxer processes some of the packets and none of them is lost.
Specify the format name. Useful if it cannot be guessed from the output name suffix.
Specify format options for the underlying muxer. Muxer options can be specified as a list of key=value pairs separated by ’:’.
Set maximum number of successive unsuccessful recovery attempts after which the output fails permanently. By default this option is set to 0 (unlimited).
Specify size of the queue as a number of packets. Default value is 60.
If set to true, recovery will be attempted regardless of the type of error causing the failure. By default this option is set to false and in case of certain (usually permanent) errors the recovery is not attempted even when the attempt_recovery option is set to true.
If set to false, the real time is used when waiting for the recovery attempt (i.e. the recovery will be attempted after the time specified by the recovery_wait_time option). If set to true, the time of the processed stream is taken into account instead (i.e. the recovery will be attempted after discarding the packets corresponding to the recovery_wait_time option). By default this option is set to false.
Specify waiting time in seconds before the next recovery attempt after the previous unsuccessful recovery attempt. Default value is 5.
Specify whether to wait for the keyframe after recovering from queue overflow or failure. This option is set to false by default.
Buffer the specified amount of packets and delay writing the output. Note that the value of the queue_size option must be big enough to store the packets for timeshift. At the end of the input the fifo buffer is flushed at realtime speed.
Use ffmpeg to stream to an RTMP server, continue processing the stream at real-time rate even in case of temporary failure (network outage) and attempt to recover streaming every second indefinitely:
ffmpeg -re -i ... -c:v libx264 -c:a aac -f fifo -fifo_format flv \ -drop_pkts_on_overflow 1 -attempt_recovery 1 -recovery_wait_time 1 \ -map 0:v -map 0:a rtmp://example.com/live/stream_name
Sega film (.cpk) muxer.
This format was used as an internal format for several Sega games.
For more information regarding the Sega film file format, visit http://wiki.multimedia.cx/index.php?title=Sega_FILM.
It accepts at maximum one ‘cinepak’ or raw video stream, and at maximum one audio stream.
Adobe Filmstrip muxer.
This format is used by several Adobe tools to store a generated film strip export. It accepts a single raw video stream.
Flexible Image Transport System (FITS) muxer.
This image format is used to store astronomical data.
For more information regarding the format, visit https://fits.gsfc.nasa.gov.
Raw FLAC audio muxer.
This muxer accepts exactly one FLAC audio stream. Additionally, it is possible to add images with disposition ‘attached_pic’.
Write the file header if set to true. Default is true.
Use ffmpeg to store the audio stream from an input file, together with several pictures used with ‘attached_pic’ disposition:
ffmpeg -i INPUT -i pic1.png -i pic2.jpg -map 0:a -map 1 -map 2 -disposition:v attached_pic OUTPUT
Adobe Flash Video Format muxer.
Possible values:
Place AAC sequence header based on audio stream data.
Disable sequence end tag.
Disable metadata tag.
Disable duration and filesize in metadata when they are equal to zero at the end of stream. (Used for non-seekable live streams.)
Used to facilitate seeking; particularly for HTTP pseudo streaming.
Per-packet CRC (Cyclic Redundancy Check) testing format.
This muxer computes and prints the Adler-32 CRC for each audio and video packet. By default audio frames are converted to signed 16-bit raw audio and video frames to raw video before computing the CRC.
The output of the muxer consists of a line for each audio and videopacket of the form:
stream_index,packet_dts,packet_pts,packet_duration,packet_size, 0xCRC
CRC is a hexadecimal number 0-padded to 8 digits containing theCRC of the packet.
For example to compute the CRC of the audio and video frames in INPUT, converted to raw audio and video packets, and store it in the file out.crc:
ffmpeg -i INPUT -f framecrc out.crc
To print the information to stdout, use the command:
ffmpeg -i INPUT -f framecrc -
With ffmpeg, you can select the output format to which the audio and video frames are encoded before computing the CRC for each packet by specifying the audio and video codec. For example, to compute the CRC of each decoded input audio frame converted to PCM unsigned 8-bit and of each decoded input video frame converted to MPEG-2 video, use the command:
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
See also the crc muxer.
Per-packet hash testing format.
This muxer computes and prints a cryptographic hash for each audio and video packet. This can be used for packet-by-packet equality checks without having to individually do a binary comparison on each.
By default audio frames are converted to signed 16-bit raw audio and video frames to raw video before computing the hash, but the output of explicit conversions to other codecs can also be used. It uses the SHA-256 cryptographic hash function by default, but supports several other algorithms.
The output of the muxer consists of a line for each audio and videopacket of the form:
stream_index,packet_dts,packet_pts,packet_duration,packet_size,hash
hash is a hexadecimal number representing the computed hash for the packet.
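The per-packet line can be sketched with Python's hashlib; the packet field values below are illustrative placeholders, not values from a real stream:

```python
import hashlib

# Illustrative packet fields; real values come from the demuxed packets.
stream_index, dts, pts, duration = 0, 0, 0, 1024
payload = b"example packet data"

line = "%d,%d,%d,%d,%d,%s" % (stream_index, dts, pts, duration, len(payload),
                              hashlib.sha256(payload).hexdigest())
print(line)
```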
Use the cryptographic hash function specified by the string algorithm. Supported values include MD5, murmur3, RIPEMD128, RIPEMD160, RIPEMD256, RIPEMD320, SHA160, SHA224, SHA256 (default), SHA512/224, SHA512/256, SHA384, SHA512, CRC32 and adler32.
To compute the SHA-256 hash of the audio and video frames in INPUT, converted to raw audio and video packets, and store it in the file out.sha256:
ffmpeg -i INPUT -f framehash out.sha256
To print the information to stdout, using the MD5 hash function, usethe command:
ffmpeg -i INPUT -f framehash -hash md5 -
See also the hash muxer.
Per-packet MD5 testing format.
This is a variant of the framehash muxer. Unlike that muxer, it defaults to using the MD5 hash function.
To compute the MD5 hash of the audio and video frames in INPUT, converted to raw audio and video packets, and store it in the file out.md5:
ffmpeg -i INPUT -f framemd5 out.md5
To print the information to stdout, use the command:
ffmpeg -i INPUT -f framemd5 -
See also the framehash and md5 muxers.
Animated GIF muxer.
Note that the GIF format has a very large time base: the delay between two frames can therefore not be smaller than one centisecond.
Set the number of times to loop the output. Use -1 for no loop, 0 for looping indefinitely (default).
Force the delay (expressed in centiseconds) after the last frame. Each frame ends with a delay until the next frame. The default is -1, which is a special value to tell the muxer to re-use the previous delay. In case of a loop, you might want to customize this value to mark a pause for instance.
Encode a gif looping 10 times, with a 5-second delay between the loops:
ffmpeg -i INPUT -loop 10 -final_delay 500 out.gif
Note 1: if you wish to extract the frames into separate GIF files, you need to force the image2 muxer:
ffmpeg -i INPUT -c:v gif -f image2 "out%d.gif"
General eXchange Format (GXF) muxer.
GXF was developed by Grass Valley Group, then standardized by SMPTE as SMPTE 360M and was extended in SMPTE RDD 14-2007 to include high-definition video resolutions.
It accepts at most one video stream with codec ‘mjpeg’, or ‘mpeg1video’, or ‘mpeg2video’, or ‘dvvideo’ with resolution ‘512x480’ or ‘608x576’, and several audio streams with rate 48000 Hz and codec ‘pcm16_le’.
Hash testing format.
This muxer computes and prints a cryptographic hash of all the input audio and video frames. This can be used for equality checks without having to do a complete binary comparison.
By default audio frames are converted to signed 16-bit raw audio and video frames to raw video before computing the hash, but the output of explicit conversions to other codecs can also be used. Timestamps are ignored. It uses the SHA-256 cryptographic hash function by default, but supports several other algorithms.
The output of the muxer consists of a single line of the form: algo=hash, where algo is a short string representing the hash function used, and hash is a hexadecimal number representing the computed hash.
Use the cryptographic hash function specified by the string algorithm. Supported values include MD5, murmur3, RIPEMD128, RIPEMD160, RIPEMD256, RIPEMD320, SHA160, SHA224, SHA256 (default), SHA512/224, SHA512/256, SHA384, SHA512, CRC32 and adler32.
To compute the SHA-256 hash of the input converted to raw audio and video, and store it in the file out.sha256:
ffmpeg -i INPUT -f hash out.sha256
To print an MD5 hash to stdout use the command:
ffmpeg -i INPUT -f hash -hash md5 -
See also the framehash muxer.
HTTP Dynamic Streaming (HDS) muxer.
HTTP dynamic streaming, or HDS, is an adaptive bitrate streaming method developed by Adobe. HDS delivers MP4 video content over HTTP connections. HDS can be used for on-demand streaming or live streaming.
This muxer creates an .f4m (Adobe Flash Media Manifest File) manifest, an .abst (Adobe Bootstrap File) for each stream, and segment files in a directory specified as the output.
These need to be accessed by an HDS player through HTTPS for it to be able to perform playback on the generated stream.
number of fragments kept outside of the manifest before removing from disk
minimum fragment duration (in microseconds), default value is 1 second (10000000)
remove all fragments when finished, when set to true
number of fragments kept in the manifest, if set to a value different from 0. By default all segments are kept in the output directory.
Use ffmpeg to generate HDS files in the output.hds directory at real-time rate:
ffmpeg -re -i INPUT -f hds -b:v 200k output.hds
Apple HTTP Live Streaming muxer that segments MPEG-TS according to the HTTP Live Streaming (HLS) specification.
It creates a playlist file, and one or more segment files. The output filename specifies the playlist filename.
By default, the muxer creates a file for each segment produced. These files have the same name as the playlist, followed by a sequential number and a .ts extension.
Make sure to require a closed GOP when encoding and to set the GOPsize to fit your segment time constraint.
For example, to convert an input file with ffmpeg:
ffmpeg -i in.mkv -c:v h264 -flags +cgop -g 30 -hls_time 1 out.m3u8
This example will produce the playlist, out.m3u8, and segment files: out0.ts, out1.ts, out2.ts, etc.
See also the segment muxer, which provides a more generic and flexible implementation of a segmenter, and can be used to perform HLS segmentation.
Set the initial target segment length. Default value is 0.
duration must be a time duration specification, see (ffmpeg-utils) the Time duration section in the ffmpeg-utils(1) manual.
Segment will be cut on the next key frame after this time has passed on the first m3u8 list. After the initial playlist is filled, ffmpeg will cut segments at duration equal to hls_time.
Set the target segment length. Default value is 2.
duration must be a time duration specification, see (ffmpeg-utils) the Time duration section in the ffmpeg-utils(1) manual. Segment will be cut on the next key frame after this time has passed.
Set the maximum number of playlist entries. If set to 0 the list filewill contain all the segments. Default value is 5.
Set the number of unreferenced segments to keep on disk before hls_flags delete_segments deletes them. Increase this to allow clients to continue downloading segments which were recently referenced in the playlist. Default value is 1, meaning segments older than hls_list_size+1 will be deleted.
Start the playlist sequence number (#EXT-X-MEDIA-SEQUENCE) according to the specified source. Unless hls_flags single_file is set, it also specifies the source of starting sequence numbers of segment and subtitle filenames. In any case, if hls_flags append_list is set and the read playlist sequence number is greater than the specified start sequence number, then that value will be used as the start value.
It accepts the following values:
Set the start numbers according to the start_number option value.
Set the start number as the seconds since epoch (1970-01-01 00:00:00).
Set the start number as the microseconds since epoch (1970-01-01 00:00:00).
Set the start number based on the current date/time as YYYYmmddHHMMSS. e.g. 20161231235759.
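The four sources can be sketched as follows, using the example instant from above; the datetime arithmetic (ignoring time zones) stands in for what the muxer derives internally:

```python
import datetime

now = datetime.datetime(2016, 12, 31, 23, 57, 59)  # example instant from above
epoch = datetime.datetime(1970, 1, 1)

generic = 0                                    # the start_number option value
epoch_s = int((now - epoch).total_seconds())   # seconds since epoch
epoch_us = epoch_s * 1_000_000                 # microseconds since epoch
as_datetime = now.strftime("%Y%m%d%H%M%S")     # YYYYmmddHHMMSS

print(as_datetime)
```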
Start the playlist sequence number (#EXT-X-MEDIA-SEQUENCE) from the specified number when the hls_start_number_source value is generic. (This is the default case.) Unless hls_flags single_file is set, it also specifies starting sequence numbers of segment and subtitle filenames. Default value is 0.
Explicitly set whether the client MAY (1) or MUST NOT (0) cache media segments.
Append baseurl to every entry in the playlist. Useful to generate playlists with absolute paths.
Note that the playlist sequence number must be unique for each segment and it is not to be confused with the segment filename sequence number which can be cyclic, for example if the wrap option is specified.
Set the segment filename. Unless the hls_flags option is set with ‘single_file’, filename is used as a string format with the segment number appended.
For example:
ffmpeg -i in.nut -hls_segment_filename 'file%03d.ts' out.m3u8
will produce the playlist, out.m3u8, and segment files: file000.ts, file001.ts, file002.ts, etc.
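The expansion is ordinary printf-style numbering, as this sketch shows:

```python
pattern = "file%03d.ts"  # the value passed to -hls_segment_filename above

# Successive segment numbers are substituted into the %03d field.
print([pattern % i for i in range(3)])
```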
filename may contain a full path or relative path specification, but only the file name part without any path will be contained in the m3u8 segment list. Should a relative path be specified, the path of the created segment files will be relative to the current working directory. When strftime_mkdir is set, the whole expanded value of filename will be written into the m3u8 segment list.
When var_stream_map is set with two or more variant streams, the filename pattern must contain the string "%v", and this string will be expanded to the position of the variant stream index in the generated segment filenames.
For example:
ffmpeg -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \ -map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0 v:1,a:1" \ -hls_segment_filename 'file_%v_%03d.ts' out_%v.m3u8
will produce the playlists and segment file sets: file_0_000.ts, file_0_001.ts, file_0_002.ts, etc. and file_1_000.ts, file_1_001.ts, file_1_002.ts, etc.
The string "%v" may be present in the filename or in the last directory namecontaining the file, but only in one of them. (Additionally, %v may appear multiple times in the lastsub-directory or filename.) If the string %v is present in the directory name, thensub-directories are created after expanding the directory name pattern. Thisenables creation of segments corresponding to different variant streams insubdirectories.
For example:
ffmpeg -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \ -map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0 v:1,a:1" \ -hls_segment_filename 'vs%v/file_%03d.ts' vs%v/out.m3u8
will produce the playlists and segment file sets: vs0/file_000.ts, vs0/file_001.ts, vs0/file_002.ts, etc. and vs1/file_000.ts, vs1/file_001.ts, vs1/file_002.ts, etc.
Use strftime() on filename to expand the segment filename with localtime. The segment number is also available in this mode, but to use it, you need to set ‘second_level_segment_index’ in the hls_flags and %%d will be the specifier.
For example:
ffmpeg -i in.nut -strftime 1 -hls_segment_filename 'file-%Y%m%d-%s.ts' out.m3u8
will produce the playlist, out.m3u8, and segment files: file-20160215-1455569023.ts, file-20160215-1455569024.ts, etc. Note: On some systems/environments, the %s specifier is not available. See the strftime() documentation.
For example:
ffmpeg -i in.nut -strftime 1 -hls_flags second_level_segment_index -hls_segment_filename 'file-%Y%m%d-%%04d.ts' out.m3u8
will produce the playlist, out.m3u8, and segment files: file-20160215-0001.ts, file-20160215-0002.ts, etc.
Used together with strftime, it will create all subdirectories which are present in the expanded values of option hls_segment_filename.
For example:
ffmpeg -i in.nut -strftime 1 -strftime_mkdir 1 -hls_segment_filename '%Y%m%d/file-%Y%m%d-%s.ts' out.m3u8
will create a directory 20160215 (if it does not exist), and then produce the playlist, out.m3u8, and segment files: 20160215/file-20160215-1455569023.ts, 20160215/file-20160215-1455569024.ts, etc.
For example:
ffmpeg -i in.nut -strftime 1 -strftime_mkdir 1 -hls_segment_filename '%Y/%m/%d/file-%Y%m%d-%s.ts' out.m3u8
will create a directory hierarchy 2016/02/15 (if any of them do not exist), and then produce the playlist, out.m3u8, and segment files: 2016/02/15/file-20160215-1455569023.ts, 2016/02/15/file-20160215-1455569024.ts, etc.
Set output format options using a ':'-separated list of key=value parameters. Values containing the ':' special character must be escaped.
Use the information in key_info_file for segment encryption. The first line of key_info_file specifies the key URI written to the playlist. The key URL is used to access the encryption key during playback. The second line specifies the path to the key file used to obtain the key during the encryption process. The key file is read as a single packed array of 16 octets in binary format. The optional third line specifies the initialization vector (IV) as a hexadecimal string to be used instead of the segment sequence number (default) for encryption. Changes to key_info_file will result in segment encryption with the new key/IV and an entry in the playlist for the new key URI/IV if hls_flags periodic_rekey is enabled.
Key info file format:
key URI
key file path
IV (optional)
Example key URIs:
http://server/file.key
/path/to/file.key
file.key
Example key file paths:
file.key
/path/to/file.key
Example IV:
0123456789ABCDEF0123456789ABCDEF
Key info file example:
http://server/file.key
/path/to/file.key
0123456789ABCDEF0123456789ABCDEF
Example shell script:
#!/bin/sh
BASE_URL=${1:-'.'}
openssl rand 16 > file.key
echo $BASE_URL/file.key > file.keyinfo
echo file.key >> file.keyinfo
echo $(openssl rand -hex 16) >> file.keyinfo
ffmpeg -f lavfi -re -i testsrc -c:v h264 -hls_flags delete_segments \
  -hls_key_info_file file.keyinfo out.m3u8
Enable (1) or disable (0) the AES128 encryption. When enabled, every segment generated is encrypted and the encryption key is saved as playlist name.key.
Specify a 16-octet key to encrypt the segments; by default it is randomly generated.
If set, keyurl is prepended instead of baseurl to the key filename in the playlist.
Specify the 16-octet initialization vector for every segment instead of the autogenerated ones.
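As a quick illustration of the built-in encryption options above, the following sketch enables AES-128 encryption with an explicit hex-coded key; the input filename and the key value are placeholders, not values from this document:

```shell
# Illustrative only: in.mp4 and the key value are placeholders.
# -hls_enc 1 encrypts every segment; -hls_enc_key supplies a
# 16-octet key as 32 hex digits instead of a randomly generated one.
ffmpeg -i in.mp4 -c:v h264 -hls_time 4 \
  -hls_enc 1 -hls_enc_key 0123456789abcdef0123456789abcdef \
  out.m3u8
```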
Possible values:
Output segment files in MPEG-2 Transport Stream format. This is compatible with all HLS versions.
Output segment files in fragmented MP4 format, similar to MPEG-DASH. fmp4 files may be used in HLS version 7 and above.
Set filename for the fragment files header file; the default filename is init.mp4.
When strftime is enabled, filename is expanded to the segment filename with localtime.
For example:
ffmpeg -i in.nut -hls_segment_type fmp4 -strftime 1 -hls_fmp4_init_filename "%s_init.mp4" out.m3u8
will produce an init file named 1602678741_init.mp4.
Resend the init file after every m3u8 file refresh; default is 0.
When var_stream_map is set with two or more variant streams, the filename pattern must contain the string "%v": this string specifies the position of the variant stream index in the generated init file names. The string "%v" may be present in the filename or in the last directory name containing the file. If the string is present in the directory name, then sub-directories are created after expanding the directory name pattern. This enables creation of init files corresponding to different variant streams in subdirectories.
Possible values:
If this flag is set, the muxer will store all segments in a single MPEG-TS file, and will use byte ranges in the playlist. HLS playlists generated this way will have version number 4.
For example:
ffmpeg -i in.nut -hls_flags single_file out.m3u8
will produce the playlist, out.m3u8, and a single segment file, out.ts.
Segment files removed from the playlist are deleted after a period of timeequal to the duration of the segment plus the duration of the playlist.
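The delete_segments flag described above pairs naturally with a bounded playlist for live output. A minimal sketch, with illustrative input name and durations:

```shell
# Illustrative only: keeps a sliding window of 6 segments of ~4s each;
# segments dropped from the playlist are deleted after the grace period
# described above (segment duration plus playlist duration).
ffmpeg -re -i in.ts -c copy -f hls \
  -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
  live.m3u8
```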
Append new segments to the end of the old segment list, and remove the #EXT-X-ENDLIST tag from the old segment list.
Round the segment duration info in the playlist file to integer values, instead of using floating point. If there are no other features requiring higher HLS versions, this allows ffmpeg to output an HLS version 2 m3u8.
Add the #EXT-X-DISCONTINUITY tag to the playlist, before the first segment’s information.
Do not append the EXT-X-ENDLIST tag at the end of the playlist.
The file specified by hls_key_info_file will be checked periodically to detect updates to the encryption info. Be sure to replace this file atomically, including the file containing the AES encryption key.
Add the #EXT-X-INDEPENDENT-SEGMENTS tag to playlists that have video segments, where all the segments of the playlist are guaranteed to start with a key frame.
Add the #EXT-X-I-FRAMES-ONLY tag to playlists that have video segments and can play only I-frames in the #EXT-X-BYTERANGE mode.
Allow segments to start on frames other than key frames. This improves behavior on some players when the time between key frames is inconsistent, but may make things worse on others, and can cause some oddities during seeking. This flag should be used with the hls_time option.
Generate EXT-X-PROGRAM-DATE-TIME tags.
Make it possible to use segment indexes as %%d in the hls_segment_filename option expression besides date/time values when the strftime option is on. To get fixed-width numbers padded with leading zeroes, the %%0xd format is available, where x is the required width.
Make it possible to use segment sizes (counted in bytes) as %%s in the hls_segment_filename option expression besides date/time values when strftime is on. To get fixed-width numbers padded with leading zeroes, the %%0xs format is available, where x is the required width.
Make it possible to use segment duration (calculated in microseconds) as %%t in the hls_segment_filename option expression besides date/time values when strftime is on. To get fixed-width numbers padded with leading zeroes, the %%0xt format is available, where x is the required width.
For example:
ffmpeg -i sample.mpeg \
  -f hls -hls_time 3 -hls_list_size 5 \
  -hls_flags second_level_segment_index+second_level_segment_size+second_level_segment_duration \
  -strftime 1 -strftime_mkdir 1 -hls_segment_filename "segment_%Y%m%d%H%M%S_%%04d_%%08s_%%013t.ts" stream.m3u8
will produce segments like this: segment_20170102194334_0003_00122200_0000003000000.ts, segment_20170102194334_0004_00120072_0000003000000.ts, etc.
Write segment data to filename.tmp and rename to filename only once the segment is complete.
A webserver serving up segments can be configured to reject requests to *.tmp toprevent access to in-progress segments before they have been added to the m3u8playlist.
This flag also affects how m3u8 playlist files are created. If this flag is set, all playlist files will be written into a temporary file and renamed after they are complete, similarly to how segments are handled. But playlists with the file protocol and with an hls_playlist_type other than ‘vod’ are always written into a temporary file regardless of this flag. Master playlist files specified with master_pl_name, if any, with the file protocol, are always written into a temporary file regardless of this flag if the master_pl_publish_rate value is other than zero.
If type is ‘event’, emit #EXT-X-PLAYLIST-TYPE:EVENT in the m3u8 header. This forces hls_list_size to 0; the playlist can only be appended to.
If type is ‘vod’, emit #EXT-X-PLAYLIST-TYPE:VOD in the m3u8 header. This forces hls_list_size to 0; the playlist must not change.
Use the given HTTP method to create the hls files.
For example:
ffmpeg -re -i in.ts -f hls -method PUT http://example.com/live/out.m3u8
will upload all the mpegts segment files to the HTTP server using the HTTP PUT method, and update the m3u8 files at every refresh using the same method. Note that the HTTP server must support the given method for uploading files.
Override User-Agent field in HTTP header. Applicable only for HTTP output.
Specify a map string defining how to group the audio, video and subtitle streams into different variant streams. The variant stream groups are separated by space.
The expected string format is "a:0,v:0 a:1,v:1 ....", where a:, v:, s: are the keys specifying audio, video and subtitle streams respectively. Allowed values are 0 to 9 (limited just based on practical usage).
When there are two or more variant streams, the output filename pattern must contain the string "%v": this string specifies the position of the variant stream index in the output media playlist filenames. The string "%v" may be present in the filename or in the last directory name containing the file. If the string is present in the directory name, then sub-directories are created after expanding the directory name pattern. This enables creation of variant streams in subdirectories.
A few examples follow.
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
  -map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0 v:1,a:1" \
  http://example.com/live/out_%v.m3u8
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
  -map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0,name:my_hd v:1,a:1,name:my_sd" \
  http://example.com/live/out_%v.m3u8
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k \
  -map 0:v -map 0:a -map 0:v -f hls -var_stream_map "v:0 a:0 v:1" \
  http://example.com/live/out_%v.m3u8
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
  -map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0 v:1,a:1" \
  http://example.com/live/vs_%v/out.m3u8
In addition to the #EXT-X-STREAM-INF tag for each variant stream in the master playlist, the #EXT-X-MEDIA tag is also added for the two audio only variant streams, and they are mapped to the two video only variant streams with audio group names ’aud_low’ and ’aud_high’. By default, a single hls variant containing all the encoded streams is created.

ffmpeg -re -i in.ts -b:a:0 32k -b:a:1 64k -b:v:0 1000k -b:v:1 3000k \
  -map 0:a -map 0:a -map 0:v -map 0:v -f hls \
  -var_stream_map "a:0,agroup:aud_low a:1,agroup:aud_high v:0,agroup:aud_low v:1,agroup:aud_high" \
  -master_pl_name master.m3u8 \
  http://example.com/live/out_%v.m3u8
In addition to the #EXT-X-STREAM-INF tag for each variant stream in the master playlist, the #EXT-X-MEDIA tag is also added for the two audio only variant streams, and they are mapped to the one video only variant stream with audio group name ’aud_low’, with the audio group DEFAULT attribute set to NO or YES. By default, a single hls variant containing all the encoded streams is created.

ffmpeg -re -i in.ts -b:a:0 32k -b:a:1 64k -b:v:0 1000k \
  -map 0:a -map 0:a -map 0:v -f hls \
  -var_stream_map "a:0,agroup:aud_low,default:yes a:1,agroup:aud_low v:0,agroup:aud_low" \
  -master_pl_name master.m3u8 \
  http://example.com/live/out_%v.m3u8
In addition to the #EXT-X-STREAM-INF tag for each variant stream in the master playlist, the #EXT-X-MEDIA tag is also added for the two audio only variant streams, and they are mapped to the one video only variant stream with audio group name ’aud_low’, with the audio group DEFAULT attribute set to NO or YES; one audio stream has its language set to ENG and the other to CHN. By default, a single hls variant containing all the encoded streams is created.

ffmpeg -re -i in.ts -b:a:0 32k -b:a:1 64k -b:v:0 1000k \
  -map 0:a -map 0:a -map 0:v -f hls \
  -var_stream_map "a:0,agroup:aud_low,default:yes,language:ENG a:1,agroup:aud_low,language:CHN v:0,agroup:aud_low" \
  -master_pl_name master.m3u8 \
  http://example.com/live/out_%v.m3u8
This adds an #EXT-X-MEDIA tag with TYPE=SUBTITLES in the master playlist with webvtt subtitle group name ’subtitle’ and an optional subtitle name, e.g. ’English’. Make sure the input file has at least one text subtitle stream.

ffmpeg -y -i input_with_subtitle.mkv \
  -b:v:0 5250k -c:v h264 -pix_fmt yuv420p -profile:v main -level 4.1 \
  -b:a:0 256k \
  -c:s webvtt -c:a mp2 -ar 48000 -ac 2 -map 0:v -map 0:a:0 -map 0:s:0 \
  -f hls -var_stream_map "v:0,a:0,s:0,sgroup:subtitle,sname:English" \
  -master_pl_name master.m3u8 -t 300 -hls_time 10 -hls_init_time 4 -hls_list_size \
  10 -master_pl_publish_rate 10 -hls_flags \
  delete_segments+discont_start+split_by_time ./tmp/video.m3u8
Map string which specifies different closed captions groups and their attributes. The closed captions stream groups are separated by space.
Expected string format is "ccgroup:<group name>,instreamid:<INSTREAM-ID>,language:<language code> ....". ’ccgroup’ and ’instreamid’ are mandatory attributes. ’language’ is an optional attribute.
The closed captions groups configured using this option are mapped to different variant streams by providing the same ’ccgroup’ name in the var_stream_map string.
For example:
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
  -a53cc:0 1 -a53cc:1 1 \
  -map 0:v -map 0:a -map 0:v -map 0:a -f hls \
  -cc_stream_map "ccgroup:cc,instreamid:CC1,language:en ccgroup:cc,instreamid:CC2,language:sp" \
  -var_stream_map "v:0,a:0,ccgroup:cc v:1,a:1,ccgroup:cc" \
  -master_pl_name master.m3u8 \
  http://example.com/live/out_%v.m3u8
will add two #EXT-X-MEDIA tags with TYPE=CLOSED-CAPTIONS in the master playlist for the INSTREAM-IDs ’CC1’ and ’CC2’. Also, it will add the CLOSED-CAPTIONS attribute with group name ’cc’ for the two output variant streams.
If var_stream_map is not set, then the first available ccgroup in cc_stream_map is mapped to the output variant stream.
For example:
ffmpeg -re -i in.ts -b:v 1000k -b:a 64k -a53cc 1 -f hls \
  -cc_stream_map "ccgroup:cc,instreamid:CC1,language:en" \
  -master_pl_name master.m3u8 \
  http://example.com/live/out.m3u8
this will add an #EXT-X-MEDIA tag with TYPE=CLOSED-CAPTIONS in the master playlist with group name ’cc’, language ’en’ (English) and INSTREAM-ID ’CC1’. Also, it will add the CLOSED-CAPTIONS attribute with group name ’cc’ for the output variant stream.
Create HLS master playlist with the given name.
For example:
ffmpeg -re -i in.ts -f hls -master_pl_name master.m3u8 http://example.com/live/out.m3u8
creates an HLS master playlist named master.m3u8, which is published at http://example.com/live/.
Publish the master playlist repeatedly, every specified number of segment intervals.
For example:
ffmpeg -re -i in.ts -f hls -master_pl_name master.m3u8 \
  -hls_time 2 -master_pl_publish_rate 30 http://example.com/live/out.m3u8
creates an HLS master playlist named master.m3u8 and keeps republishing it every 30 segments, i.e. every 60 seconds.
Use persistent HTTP connections. Applicable only for HTTP output.
Set timeout for socket I/O operations. Applicable only for HTTP output.
Ignore IO errors during open, write and delete. Useful for long-duration runs with network output.
Set custom HTTP headers, which can override the built-in default headers. Applicable only for HTTP output.
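The HTTP-related options above can be combined when pushing playlists and segments to a server. A minimal sketch; the server URL and the header value are placeholders, not values from this document:

```shell
# Illustrative only: example.com and the Authorization value are placeholders.
# -http_persistent 1 reuses one HTTP connection, -timeout bounds socket I/O,
# and -headers overrides/extends the default request headers.
ffmpeg -re -i in.ts -c copy -f hls -method PUT \
  -http_persistent 1 -timeout 5 \
  -headers $'Authorization: Bearer PLACEHOLDER\r\n' \
  http://example.com/live/out.m3u8
```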
Immersive Audio Model and Formats (IAMF) muxer.
IAMF is used to provide immersive audio content for presentation on a wide range of devices in both streaming and offline applications. These applications include internet audio streaming, multicasting/broadcasting services, file download, gaming, communication, virtual and augmented reality, and others. In these applications, audio may be played back on a wide range of devices, e.g., headphones, mobile phones, tablets, TVs, sound bars, home theater systems, and big screens.
This format was promoted and designed by the Alliance for Open Media.
For more information about this format, see https://aomedia.org/iamf/.
ICO file muxer.
Microsoft’s icon file format (ICO) has some strict limitations that should be noted:
BMP Bit Depth   FFmpeg Pixel Format
1bit            pal8
4bit            pal8
8bit            pal8
16bit           rgb555le
24bit           bgr24
32bit           bgra
Internet Low Bitrate Codec (iLBC) raw muxer.
It accepts a single ‘ilbc’ audio stream.
Image file muxer.
The ‘image2’ muxer writes video frames to image files.
The output filenames are specified by a pattern, which can be used to produce sequentially numbered series of files. The pattern may contain the string "%d" or "%0Nd"; this string specifies the position of the characters representing a numbering in the filenames. If the form "%0Nd" is used, the string representing the number in each filename is 0-padded to N digits. The literal character ’%’ can be specified in the pattern with the string "%%".
If the pattern contains "%d" or "%0Nd", the first filename of the file list specified will contain the number 1, and all the following numbers will be sequential.
The pattern may contain a suffix which is used to automatically determine the format of the image files to write.
For example the pattern "img-%03d.bmp" will specify a sequence of filenames of the form img-001.bmp, img-002.bmp, ..., img-010.bmp, etc. The pattern "img%%-%d.jpg" will specify a sequence of filenames of the form img%-1.jpg, img%-2.jpg, ..., img%-10.jpg, etc.
The image muxer supports the .Y.U.V image file format. This format is special in that each image frame consists of three files, one for each of the YUV420P components. To read or write this image file format, specify the name of the ’.Y’ file. The muxer will automatically open the ’.U’ and ’.V’ files as required.
The ‘image2pipe’ muxer accepts the same options as the ‘image2’ muxer, but ignores the pattern verification and expansion, as it is supposed to write to the command output rather than to an actual stored file.
If set to 1, expand the filename with the packet PTS (presentation time stamp). Default value is 0.
Start the sequence from the specified number. Default value is 1.
If set to 1, the filename will always be interpreted as just a filename, not a pattern, and the corresponding file will be continuously overwritten with new images. Default value is 0.
If set to 1, expand the filename with date and time information from strftime(). Default value is 0.
Write output to a temporary file, which is renamed to the target filename once writing is completed. Default is disabled.
Set protocol options as a :-separated list of key=value parameters. Values containing the : special character must be escaped.
The following example shows how to use ffmpeg for creating a sequence of files img-001.jpeg, img-002.jpeg, ..., taking one image every second from the input video:

ffmpeg -i in.avi -vsync cfr -r 1 -f image2 'img-%03d.jpeg'
Note that with ffmpeg, if the format is not specified with the -f option and the output filename specifies an image file format, the image2 muxer is automatically selected, so the previous command can be written as:
ffmpeg -i in.avi -vsync cfr -r 1 'img-%03d.jpeg'
Note also that the pattern does not necessarily have to contain "%d" or "%0Nd"; for example, to create a single image file img.jpeg from the start of the input video you can employ the command:
ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
The strftime option allows you to expand the filename with date and time information; check the documentation of the strftime() function for the syntax. To generate image files from the strftime() "%Y-%m-%d_%H-%M-%S" pattern, the following ffmpeg command can be used:
ffmpeg -f v4l2 -r 1 -i /dev/video0 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S.jpg"
ffmpeg -f v4l2 -r 1 -i /dev/video0 -copyts -f image2 -frame_pts true %d.jpg
ffmpeg -f x11grab -framerate 1 -i :0.0 -q:v 6 -update 1 -protocol_opts method=PUT http://example.com/desktop.jpg
Berkeley / IRCAM / CARL Sound Filesystem (BICSF) format muxer.
The Berkeley/IRCAM/CARL Sound Format, developed in the 1980s, is a result of the merging of several different earlier sound file formats and systems, including the csound system developed by Dr Gareth Loy at the Computer Audio Research Lab (CARL) at UC San Diego, the IRCAM sound file system developed by Rob Gross and Dan Timis at the Institut de Recherche et Coordination Acoustique / Musique in Paris, and the Berkeley Fast Filesystem.
It was developed initially as part of the Berkeley/IRCAM/CARL Sound Filesystem, a suite of programs designed to implement a filesystem for audio applications running under Berkeley UNIX. It was particularly popular in academic music research centres, and was used a number of times in the creation of early computer-generated compositions.
This muxer accepts a single audio stream containing PCM data.
On2 IVF muxer.
IVF was developed by On2 Technologies (formerly known as The Duck Corporation) to store internally developed codecs.
This muxer accepts a single ‘vp8’, ‘vp9’, or ‘av1’ video stream.
JACOsub subtitle format muxer.
This muxer accepts a single ‘jacosub’ subtitles stream.
For more information about the format, see http://unicorn.us.com/jacosub/jscripts.html.
Simon & Schuster Interactive VAG muxer.
This custom VAG container is used by some Simon & Schuster Interactive games such as "Real War" and "Real War: Rogue States".
This muxer accepts a single ‘adpcm_ima_ssi’ audio stream.
Bluetooth SIG Low Complexity Communication Codec audio (LC3), or ETSI TS 103 634 Low Complexity Communication Codec plus (LC3plus).
This muxer accepts a single ‘lc3’ audio stream.
LRC lyrics file format muxer.
LRC (short for LyRiCs) is a computer file format that synchronizes song lyrics with an audio file, such as MP3, Vorbis, or MIDI.
This muxer accepts a single ‘subrip’ or ‘text’ subtitles stream.
The following metadata tags are converted to the format’s corresponding metadata:
If ‘encoder_version’ is not explicitly set, it is automatically set to the libavformat version.
Matroska container muxer.
This muxer implements the matroska and webm container specs.
The recognized metadata settings in this muxer are:
Set title name provided to a single track. This gets mapped to the FileDescription element for a stream written as attachment.
Specify the language of the track in the Matroska languages form.
The language can be either the 3-letter bibliographic ISO 639-2 (ISO 639-2/B) form (like "fre" for French), or a language code mixed with a country code for specialities in languages (like "fre-ca" for Canadian French).
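The track language described above is set through per-stream metadata. A minimal sketch; the input and output filenames and the track title are placeholders:

```shell
# Illustrative only: in.mkv/out.mkv are placeholders.
# Sets the first audio track's language to the ISO 639-2/B code "fre"
# and gives it a human-readable track title.
ffmpeg -i in.mkv -c copy \
  -metadata:s:a:0 language=fre \
  -metadata:s:a:0 title="French audio" \
  out.mkv
```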
Set stereo 3D video layout of two views in a single video track.
The following values are recognized:
video is not stereo
Both views are arranged side by side, Left-eye view is on the left
Both views are arranged in top-bottom orientation, Left-eye view is at bottom
Both views are arranged in top-bottom orientation, Left-eye view is on top
Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
Each view is constituted by a row based interleaving, Right-eye view is first row
Each view is constituted by a row based interleaving, Left-eye view is first row
Both views are arranged in a column based interleaving manner, Right-eye view is first column
Both views are arranged in a column based interleaving manner, Left-eye view is first column
All frames are in anaglyph format viewable through red-cyan filters
Both views are arranged side by side, Right-eye view is on the left
All frames are in anaglyph format viewable through green-magenta filters
Both eyes laced in one Block, Left-eye view is first
Both eyes laced in one Block, Right-eye view is first
For example a 3D WebM clip can be created using the following command line:
ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
By default, this muxer writes the index for seeking (called cues in Matroska terms) at the end of the file, because it cannot know in advance how much space to leave for the index at the beginning of the file. However for some use cases – e.g. streaming where seeking is possible but slow – it is useful to put the index at the beginning of the file.
If this option is set to a non-zero value, the muxer will reserve size bytes of space in the file header and then try to write the cues there when the muxing finishes. If the reserved space does not suffice, no Cues will be written, the file will be finalized and writing the trailer will return an error. A safe size for most use cases should be about 50kB per hour of video.
Note that cues are only written if the output is seekable, and this option will have no effect if it is not.
If set, the muxer will write the index at the beginning of the file by shifting the main data if necessary. This can be combined with reserve_index_space, in which case the data is only shifted if the initially reserved space turns out to be insufficient.
This option is ignored if the output is unseekable.
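The two options above can be combined. A minimal sketch with illustrative values; the filenames and the reserved size are placeholders:

```shell
# Illustrative only: reserves ~100 kB for Cues at the file start
# (roughly two hours of video at the suggested 50 kB/hour), and
# falls back to shifting the data if the reservation is too small.
ffmpeg -i input.mp4 -c copy \
  -reserve_index_space 100k -cues_to_front 1 \
  output.mkv
```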
Store at most the provided amount of bytes in a cluster.
If not specified, the limit is set automatically to a sensible hardcoded fixed value.
Store at most the provided number of milliseconds in a cluster.
If not specified, the limit is set automatically to a sensible hardcoded fixed value.
Create a WebM file conforming to the WebM DASH specification. By default it is set to false.
Track number for the DASH stream. By default it is set to 1.
Write files assuming it is a live stream. By default it is set to false.
Allow raw VFW mode. By default it is set to false.
If set to true, store positive height for raw RGB bitmaps, which indicates the bitmap is stored bottom-up. Note that this option does not flip the bitmap, which has to be done manually beforehand, e.g. by using the ‘vflip’ filter. Default is false and indicates the bitmap is stored top down.
Write a CRC32 element inside every Level 1 element. By default it is set to true. This option is ignored for WebM.
Control how the FlagDefault of the output tracks will be set. It influences which tracks players should play by default. The default mode is ‘passthrough’.
Every track with disposition default will have the FlagDefault set. Additionally, for each type of track (audio, video or subtitle), if no track with disposition default of this type exists, then the first track of this type will be marked as default (if existing). This ensures that the default flag is set in a sensible way even if the input originated from containers that lack the concept of default tracks.
This mode is the same as infer except that if no subtitle track withdisposition default exists, no subtitle track will be marked as default.
In this mode the FlagDefault is set if and only if the AV_DISPOSITION_DEFAULT flag is set in the disposition of the corresponding stream.
MD5 testing format.
This is a variant of the hash muxer. Unlike that muxer, it defaults to using the MD5 hash function.
See also the hash and framemd5 muxers.
ffmpeg -i INPUT -f md5 out.md5
ffmpeg -i INPUT -f md5 -
MicroDVD subtitle format muxer.
This muxer accepts a single ‘microdvd’ subtitles stream.
Synthetic music Mobile Application Format (SMAF) format muxer.
SMAF is a music data format specified by Yamaha for portable electronic devices, such as mobile phones and personal digital assistants.
This muxer accepts a single ‘adpcm_yamaha’ audio stream.
The MP3 muxer writes a raw MP3 stream with the following optional features:
The id3v2_version private option controls which ID3v2 version is used (3 or 4). Setting id3v2_version to 0 disables the ID3v2 header completely. The muxer supports writing attached pictures (APIC frames) to the ID3v2 header. The pictures are supplied to the muxer in form of a video stream with a single packet. There can be any number of those streams, each will correspond to a single APIC frame. The stream metadata tags title and comment map to APIC description and picture type respectively. See http://id3.org/id3v2.4.0-frames for allowed picture types.
Note that the APIC frames must be written at the beginning, so the muxer will buffer the audio frames until it gets all the pictures. It is therefore advised to provide the pictures as soon as possible to avoid excessive buffering.
The write_xing private option can be used to disable the Xing/LAME frame written right after the ID3v2 header. The frame contains various information that may be useful to the decoder, like the audio duration or encoder delay. An ID3v1 metadata trailer may be enabled with the write_id3v1 private option, but as its capabilities are very limited, its usage is not recommended.

Examples:
Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
To attach a picture to an mp3 file select both the audio and the picture stream with map:
ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1 \
  -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3
Write a "clean" MP3 without any extra features:
ffmpeg -i input.wav -write_xing 0 -id3v2_version 0 out.mp3
MPEG transport stream muxer.
This muxer implements ISO 13818-1 and part of ETSI EN 300 468.
The recognized metadata settings in the mpegts muxer are service_provider and service_name. If they are not set, the default for service_provider is ‘FFmpeg’ and the default for service_name is ‘Service01’.
The muxer options are:
Set the ‘transport_stream_id’. This identifies a transponder in DVB. Default is 0x0001.
Set the ‘original_network_id’. This is the unique identifier of a network in DVB. Its main use is in the unique identification of a service through the path ‘Original_Network_ID, Transport_Stream_ID’. Default is 0x0001.
Set the ‘service_id’, also known as program in DVB. Default is 0x0001.
Set the program ‘service_type’. Default is digital_tv. Accepts the following options:
Any hexadecimal value between 0x01 and 0xff as defined in ETSI 300 468.
Digital TV service.
Digital Radio service.
Teletext service.
Advanced Codec Digital Radio service.
MPEG2 Digital HDTV service.
Advanced Codec Digital SDTV service.
Advanced Codec Digital HDTV service.
Set the first PID for PMTs. Default is 0x1000, minimum is 0x0020, maximum is 0x1ffa. This option has no effect in m2ts mode, where the PMT PID is fixed to 0x0100.
Set the first PID for elementary streams. Default is 0x0100, minimum is 0x0020, maximum is 0x1ffa. This option has no effect in m2ts mode, where the elementary stream PIDs are fixed.
Enable m2ts mode if set to 1. Default value is -1, which disables m2ts mode.
Set a constant muxrate. Default is VBR.
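Setting a constant muxrate produces a CBR transport stream padded to the given rate with null packets. A minimal sketch; the codec choices and bitrates are illustrative, not values from this document:

```shell
# Illustrative only: encodes to MPEG-2 video and MP2 audio, then pads
# the transport stream to a constant 6 Mb/s mux rate with null packets.
ffmpeg -i input.mp4 -c:v mpeg2video -b:v 4M -c:a mp2 -b:a 192k \
  -f mpegts -muxrate 6000000 output.ts
```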
Set minimum PES packet payload in bytes. Default is 2930.
Set mpegts flags. Accepts the following options:
Reemit PAT/PMT before writing the next packet.
Use LATM packetization for AAC.
Reemit PAT and PMT at each video frame.
Conform to System B (DVB) instead of System A (ATSC).
Mark the initial packet of each stream as discontinuity.
Emit NIT table.
Disable writing of random access indicator.
Preserve original timestamps, if value is set to 1. Default value is -1, which results in shifting timestamps so that they start from 0.
Omit the PES packet length for video packets. Default is 1 (true).
Override the default PCR retransmission time in milliseconds. Default is -1, which means that the PCR interval will be determined automatically: 20 ms is used for CBR streams; the highest multiple of the frame duration which is less than 100 ms is used for VBR streams.
Maximum time in seconds between PAT/PMT tables. Default is 0.1.
Maximum time in seconds between SDT tables. Default is 0.5.
Maximum time in seconds between NIT tables. Default is 0.5.
Set PAT, PMT, SDT and NIT version (default 0, valid values are from 0 to 31, inclusive). This option allows updating the stream structure so that a standard consumer may detect the change. To do so, reopen the output AVFormatContext (in case of API usage) or restart the ffmpeg instance, cyclically changing the tables_version value:
ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
...
ffmpeg -i source3.ts -codec copy -f mpegts -tables_version 31 udp://1.1.1.1:1111
ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
...
ffmpeg -i file.mpg -c copy \
  -mpegts_original_network_id 0x1122 \
  -mpegts_transport_stream_id 0x3344 \
  -mpegts_service_id 0x5566 \
  -mpegts_pmt_start_pid 0x1500 \
  -mpegts_start_pid 0x150 \
  -metadata service_provider="Some provider" \
  -metadata service_name="Some Channel" \
  out.ts
MXF muxer.
The muxer options are:
Set whether user comments should be stored if available, or never. IRT D-10 does not allow user comments. The default is thus to write them for mxf and mxf_opatom, but not for mxf_d10.
Null muxer.
This muxer does not generate any output file; it is mainly useful for testing or benchmarking purposes.
For example, to benchmark decoding with ffmpeg you can use the command:
ffmpeg -benchmark -i INPUT -f null out.null
Note that the above command does not read or write the out.null file, but specifying the output file is required by the ffmpeg syntax.
Alternatively you can write the command as:
ffmpeg -benchmark -i INPUT -f null -
Change the syncpoint usage in nut:
Use of this option is not recommended, as the resulting files are very damage sensitive and seeking is not possible. Also, in general the overhead from syncpoints is negligible. Note that -write_index 0 can be used to disable all growing data tables, allowing muxing of endless streams with limited memory and without these disadvantages.
The none and timestamped flags are experimental.
Write index at the end, the default is to write an index.
ffmpeg -i INPUT -f_strict experimental -syncpoints none - | processor
Ogg container muxer.
Preferred page duration, in microseconds. The muxer will attempt to create pages that are approximately duration microseconds long. This allows the user to compromise between seek granularity and container overhead. The default is 1 second. A value of 0 will fill all segments, making pages as large as possible. A value of 1 will effectively use 1 packet-per-page in most situations, giving a small seek granularity at the cost of additional container overhead.
Serial value from which to set the streams’ serial numbers. Setting it to different and sufficiently large values ensures that the produced ogg files can be safely chained.
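Both options can be set on the command line. A minimal sketch with illustrative values; the filenames are placeholders and the libvorbis encoder is assumed to be available:

```shell
# Illustrative only: smaller pages give finer seek granularity at the
# cost of extra container overhead.
ffmpeg -i in.wav -c:a libvorbis -page_duration 100000 part1.ogg

# A distinct serial_offset lets a second file be safely concatenated
# ("chained") after the first.
ffmpeg -i in2.wav -c:a libvorbis -serial_offset 1000 part2.ogg
cat part1.ogg part2.ogg > chained.ogg
```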
RCWT (Raw Captions With Time) is a format native to ccextractor, a commonly used open source tool for processing 608/708 Closed Captions (CC) sources. It can be used to archive the original extracted CC bitstream and to produce a source file for later processing or conversion. The format allows for interoperability between ccextractor and FFmpeg, is simple to parse, and can be used to create a backup of the CC presentation.
This muxer implements the specification as of March 2024, which has been stable and unchanged since April 2014.
This muxer will have some nuances from the way that ccextractor muxes RCWT. No compatibility issues when processing the output with ccextractor have been observed as a result of this so far, but mileage may vary and outputs will not be a bit-exact match.
A free specification of RCWT can be found here: https://github.com/CCExtractor/ccextractor/blob/master/docs/BINARY_FILE_FORMAT.TXT
ffmpeg -f lavfi -i "movie=INPUT.mkv[out+subcc]" -map 0:s:0 -c:s copy -f rcwt CC.rcwt.bin
Basic stream segmenter.
This muxer outputs streams to a number of separate files of nearly fixed duration. The output filename pattern can be set in a fashion similar to image2, or by using a strftime template if the strftime option is enabled.

stream_segment is a variant of the muxer used to write to streaming output formats, i.e. those which do not require global headers, and is recommended for outputting e.g. to MPEG transport stream segments. ssegment is a shorter alias for stream_segment.
Every segment starts with a keyframe of the selected reference stream, which is set through the reference_stream option.
Note that if you want accurate splitting for a video file, you need to make the input key frames correspond to the exact splitting times expected by the segmenter, or the segment muxer will start the new segment with the key frame found next after the specified start time.
The segment muxer works best with a single constant frame rate video.
Optionally it can generate a list of the created segments, by setting the option segment_list. The list type is specified by the segment_list_type option. The entry filenames in the segment list are set by default to the basename of the corresponding segment files.
See also the hls muxer, which provides a more specific implementation for HLS segmentation.
The segment muxer supports the following options:
If set to 1, increment timecode between each segment. If this is selected, the input needs to have a timecode in the first video stream. Default value is 0.
Set the reference stream, as specified by the string specifier. If specifier is set to auto, the reference is chosen automatically. Otherwise it must be a stream specifier (see the “Stream specifiers” chapter in the ffmpeg manual) which specifies the reference stream. The default value is auto.
Override the inner container format; by default it is guessed by the filename extension.
Set output format options using a :-separated list of key=value parameters. Values containing the : special character must be escaped.
Also generate a list file named name. If not specified, no list file is generated.
Set flags affecting the segment list generation.
It currently supports the following flags:
Allow caching (only affects M3U8 list files).
Allow live-friendly file generation.
Update the list file so that it contains at most size segments. If 0 the list file will contain all the segments. Default value is 0.
Prepend prefix to each entry. Useful to generate absolute paths. By default no prefix is applied.
Select the listing format.
The following values are recognized:
Generate a flat list for the created segments, one segment per line.
Generate a list for the created segments, one segment per line,each line matching the format (comma-separated values):
segment_filename,segment_start_time,segment_end_time
segment_filename is the name of the output file generated by the muxer according to the provided pattern. CSV escaping (according to RFC4180) is applied if required.
segment_start_time and segment_end_time specify the segment start and end time expressed in seconds.
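A list generated with this type might look like the following (segment names and times are purely illustrative):

```
out000.nut,0.000000,10.000000
out001.nut,10.000000,20.000000
out002.nut,20.000000,27.500000
```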
A list file with the suffix ".csv" or ".ext" will auto-select this format.

‘ext’ is deprecated in favor of ‘csv’.
Generate an ffconcat file for the created segments. The resulting file can be read using the FFmpeg concat demuxer.
A list file with the suffix ".ffcat" or ".ffconcat" will auto-select this format.
Generate an extended M3U8 file, version 3, compliant with http://tools.ietf.org/id/draft-pantos-http-live-streaming.
A list file with the suffix ".m3u8" will auto-select this format.
If not specified the type is guessed from the list file name suffix.
Set segment duration to time; the value must be a duration specification. Default value is "2". See also the segment_times option.
Note that splitting may not be accurate, unless you force the reference stream key-frames at the given time. See the introductory notice and the examples below.
Set minimum segment duration to time; the value must be a duration specification. This prevents the muxer ending segments at a duration below this value. Only effective with segment_time. Default value is "0".
If set to "1", split at regular clock time intervals starting from 00:00 o’clock. The time value specified in segment_time is used for setting the length of the splitting interval.
For example, with segment_time set to "900" this makes it possible to create files at 12:00 o’clock, 12:15, 12:30, etc.
Default value is "0".
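A sketch of wall-clock-aligned segmentation as described above (the output name pattern is illustrative and relies on the strftime option of this muxer):

```shell
# Cut 15-minute segments aligned to the clock (12:00, 12:15, ...)
ffmpeg -i INPUT -c copy -f segment -segment_atclocktime 1 \
  -segment_time 900 -strftime 1 out-%H%M.ts
```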
Delay the segment splitting times with the specified duration when using segment_atclocktime.
For example, with segment_time set to "900" and segment_clocktime_offset set to "300" this makes it possible to create files at 12:05, 12:20, 12:35, etc.
Default value is "0".
Force the segmenter to only start a new segment if a packet reaches the muxer within the specified duration after the segmenting clock time. This way you can make the segmenter more resilient to backward local time jumps, such as leap seconds or transition to standard time from daylight savings time.
Default is the maximum possible duration, which means starting a new segment regardless of the elapsed time since the last clock time.
Specify the accuracy time when selecting the start time for a segment, expressed as a duration specification. Default value is "0".
When delta is specified, a key-frame will start a new segment if its PTS satisfies the relation:
PTS >= start_time - time_delta
This option is useful when splitting video content, which is always split at GOP boundaries, in case a key frame is found just before the specified split time.
In particular it may be used in combination with the ffmpeg option force_key_frames. The key frame times specified by force_key_frames may not be set accurately because of rounding issues, with the consequence that a key frame time may be set just before the specified time. For constant frame rate videos a value of 1/(2*frame_rate) should address the worst case mismatch between the specified time and the time set by force_key_frames.
Specify a list of split points. times contains a list of comma separated duration specifications, in increasing order. See also the segment_time option.
Specify a list of split video frame numbers. frames contains a list of comma separated integer numbers, in increasing order.
This option specifies to start a new segment whenever a reference stream key frame is found and the sequential number (starting from 0) of the frame is greater or equal to the next value in the list.
Wrap around segment index once it reaches limit.
Set the sequence number of the first segment. Defaults to 0.
Use the strftime function to define the name of the new segments to write. If this is selected, the output segment name must contain a strftime function template. Default value is 0.
If enabled, allow segments to start on frames other than keyframes. This improves behavior on some players when the time between keyframes is inconsistent, but may make things worse on others, and can cause some oddities during seeking. Defaults to 0.
Reset timestamps at the beginning of each segment, so that each segment will start with near-zero timestamps. It is meant to ease the playback of the generated segments. May not work with some combinations of muxers/codecs. It is set to 0 by default.
Specify timestamp offset to apply to the output packet timestamps. The argument must be a time duration specification, and defaults to 0.
If enabled, write an empty segment if there are no packets during the period a segment would usually span. Otherwise, the segment will be filled with the next packet written. Defaults to 0.
Make sure to require a closed GOP when encoding and to set the GOP size to fit your segment time constraint.
ffmpeg -i in.mkv -codec hevc -flags +cgop -g 60 -map 0 -f segment -segment_list out.list out%03d.nut
ffmpeg -i in.mkv -f segment -segment_time 10 -segment_format_options movflags=+faststart out%03d.mp4
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
Use the ffmpeg force_key_frames option to force key frames in the input at the specified locations, together with the segment option segment_time_delta to account for possible rounding performed when setting key frame times.

ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -codec:v mpeg4 -codec:a pcm_s16le -map 0 \
  -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
In order to force key frames on the input file, transcoding isrequired.
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_frames 100,200,300,500,800 out%03d.nut
Use the libx264 and aac encoders:

ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a aac -f ssegment -segment_list out.list out%03d.ts
ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
  -segment_list_flags +live -segment_time 10 out%03d.mkv
The Smooth Streaming muxer generates a set of files (Manifest, chunks) suitable for serving with a conventional web server.
Specify the number of fragments kept in the manifest. Default 0 (keep all).
Specify the number of fragments kept outside of the manifest before removing from disk. Default 5.
Specify the number of lookahead fragments. Default 2.
Specify the minimum fragment duration (in microseconds). Default 5000000.
Specify whether to remove all fragments when finished. Default 0 (do not remove).
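As a sketch using the options listed above (the output location is a placeholder; the muxer writes the Manifest and chunk files there):

```shell
# Encode to Smooth Streaming, keeping 5 fragments in the manifest
# and 2 lookahead fragments
ffmpeg -re -i in.mkv -c:v libx264 -c:a aac -f smoothstreaming \
  -window_size 5 -lookahead_count 2 out_dir
```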
Per stream hash testing format.
This muxer computes and prints a cryptographic hash of all the input frames, on a per-stream basis. This can be used for equality checks without having to do a complete binary comparison.
By default audio frames are converted to signed 16-bit raw audio and video frames to raw video before computing the hash, but the output of explicit conversions to other codecs can also be used. Timestamps are ignored. It uses the SHA-256 cryptographic hash function by default, but supports several other algorithms.
The output of the muxer consists of one line per stream of the form: streamindex,streamtype,algo=hash, where streamindex is the index of the mapped stream, streamtype is a single character indicating the type of stream, algo is a short string representing the hash function used, and hash is a hexadecimal number representing the computed hash.
Use the cryptographic hash function specified by the string algorithm. Supported values include MD5, murmur3, RIPEMD128, RIPEMD160, RIPEMD256, RIPEMD320, SHA160, SHA224, SHA256 (default), SHA512/224, SHA512/256, SHA384, SHA512, CRC32 and adler32.
To compute the SHA-256 hash of the input converted to raw audio and video, and store it in the file out.sha256:
ffmpeg -i INPUT -f streamhash out.sha256
To print an MD5 hash to stdout use the command:
ffmpeg -i INPUT -f streamhash -hash md5 -
See also thehash andframehash muxers.
The tee muxer can be used to write the same data to several outputs, such as files or streams. It can be used, for example, to stream a video over a network and save it to disk at the same time.
It is different from specifying several outputs to the ffmpeg command-line tool. With the tee muxer, the audio and video data will be encoded only once. With conventional multiple outputs, multiple encoding operations in parallel are initiated, which can be a very expensive process. The tee muxer is not useful when using the libavformat API directly because it is then possible to feed the same packets to several muxers directly.
Since the tee muxer does not represent any particular output format, ffmpeg cannot auto-select output streams. So all streams intended for output must be specified using -map. See the examples below.
Some encoders may need different options depending on the output format; the auto-detection of this cannot work with the tee muxer, so they need to be explicitly specified. The main example is the global_header flag.
The slave outputs are specified in the file name given to the muxer, separated by ’|’. If any of the slave names contains the ’|’ separator, leading or trailing spaces, or any special character, those must be escaped (see the "Quoting and escaping" section in the ffmpeg-utils(1) manual).
If set to 1, slave outputs will be processed in separate threads using the fifo muxer. This allows compensating for different speed/latency/reliability of outputs and setting up transparent recovery. By default this feature is turned off.
Options to pass to fifo pseudo-muxer instances. See fifo.
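A hedged sketch of the threaded mode described above (destinations and the queue_size fifo option value are placeholders, not a recommendation):

```shell
# Write to a local file and a UDP stream, each slave in its own fifo thread
ffmpeg -i INPUT -map 0 -c:v libx264 -c:a aac -f tee -use_fifo 1 \
  -fifo_options "queue_size=120" \
  "out.mkv|[f=mpegts]udp://10.0.1.255:1234/"
```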
Muxer options can be specified for each slave by prepending them as a list of key=value pairs separated by ’:’, between square brackets. If the option values contain a special character or the ’:’ separator, they must be escaped; note that this is a second level escaping.
The following special options are also recognized:
Specify the format name. Required if it cannot be guessed from the output URL.
Specify a list of bitstream filters to apply to the specified output.
It is possible to specify to which streams a given bitstream filter applies, by appending a stream specifier to the option separated by /. spec must be a stream specifier (see Format stream specifiers).
If the stream specifier is not specified, the bitstream filters will be applied to all streams in the output. This will cause that output operation to fail if the output contains streams to which the bitstream filter cannot be applied, e.g. h264_mp4toannexb being applied to an output containing an audio stream.
Options for a bitstream filter must be specified in the form of opt=value.
Several bitstream filters can be specified, separated by ",".
This allows overriding the tee muxer use_fifo option for an individual slave muxer.
This allows overriding the tee muxer fifo_options for an individual slave muxer. See fifo.
Select the streams that should be mapped to the slave output, specified by a stream specifier. If not specified, this defaults to all the mapped streams. This will cause that output operation to fail if the output format does not accept all mapped streams.
You may use multiple stream specifiers separated by commas (,), e.g.: a:0,v
Specify behaviour on output failure. This can be set to either abort (which is the default) or ignore. abort will cause the whole process to fail in case of failure on this slave output. ignore will ignore failure on this output, so other outputs will continue without being affected.
ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a "archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/"
ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a "[onfail=ignore]archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/"
Use ffmpeg to encode the input, and send the output to three different destinations. The dump_extra bitstream filter is used to add extradata information to all the output video keyframe packets, as requested by the MPEG-TS format. The select option is applied to out.aac in order to make it contain only audio packets.

ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -f tee "[bsfs/v=dump_extra=freq=keyframe]out.ts|[movflags=+faststart]out.mp4|[select=a]out.aac"
As above, but select only stream a:1 for the audio output. Note that a second level escaping must be performed, as ":" is a special character used to separate options.

ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -f tee "[bsfs/v=dump_extra=freq=keyframe]out.ts|[movflags=+faststart]out.mp4|[select=\'a:1\']out.aac"
WebM Live Chunk Muxer.
This muxer writes out WebM headers and chunks as separate files which can be consumed by clients that support WebM Live streams via DASH.
This muxer supports the following options:
Index of the first chunk (defaults to 0).
Filename of the header where the initialization data will be written.
Duration of each audio chunk in milliseconds (defaults to 5000).
ffmpeg -f v4l2 -i /dev/video0 \
  -f alsa -i hw:0 \
  -map 0:0 \
  -c:v libvpx-vp9 \
  -s 640x360 -keyint_min 30 -g 30 \
  -f webm_chunk \
  -header webm_live_video_360.hdr \
  -chunk_start_index 1 \
  webm_live_video_360_%d.chk \
  -map 1:0 \
  -c:a libvorbis \
  -b:a 128k \
  -f webm_chunk \
  -header webm_live_audio_128.hdr \
  -chunk_start_index 1 \
  -audio_chunk_duration 1000 \
  webm_live_audio_128_%d.chk
WebM DASH Manifest muxer.
This muxer implements the WebM DASH Manifest specification to generate the DASH manifest XML. It also supports manifest generation for DASH live streams.
For more information see:
This muxer supports the following options:
This option has the following syntax: "id=x,streams=a,b,c id=y,streams=d,e" where x and y are the unique identifiers of the adaptation sets and a,b,c,d and e are the indices of the corresponding audio and video streams. Any number of adaptation sets can be added using this option.
Set this to 1 to create a live stream DASH Manifest. Default: 0.
Start index of the first chunk. This will go in the ‘startNumber’ attributeof the ‘SegmentTemplate’ element in the manifest. Default: 0.
Duration of each chunk in milliseconds. This will go in the ‘duration’attribute of the ‘SegmentTemplate’ element in the manifest. Default: 1000.
URL of the page that will return the UTC timestamp in ISO format. This will goin the ‘value’ attribute of the ‘UTCTiming’ element in the manifest.Default: None.
Smallest time (in seconds) shifting buffer for which any Representation isguaranteed to be available. This will go in the ‘timeShiftBufferDepth’attribute of the ‘MPD’ element. Default: 60.
Minimum update period (in seconds) of the manifest. This will go in the‘minimumUpdatePeriod’ attribute of the ‘MPD’ element. Default: 0.
ffmpeg -f webm_dash_manifest -i video1.webm \
  -f webm_dash_manifest -i video2.webm \
  -f webm_dash_manifest -i audio1.webm \
  -f webm_dash_manifest -i audio2.webm \
  -map 0 -map 1 -map 2 -map 3 \
  -c copy \
  -f webm_dash_manifest \
  -adaptation_sets "id=0,streams=0,1 id=1,streams=2,3" \
  manifest.xml
WebRTC (Real-Time Communication) muxer that supports sub-second latency streaming according to the WHIP (WebRTC-HTTP ingestion protocol) specification.
This is an experimental feature.
It uses HTTP as a signaling protocol to exchange SDP capabilities and ICE lite candidates. Then, it uses STUN binding requests and responses to establish a session over UDP. Subsequently, it initiates a DTLS handshake to exchange the SRTP encryption keys. Lastly, it splits video and audio frames into RTP packets and encrypts them using SRTP.
Ensure that you use H.264 without B frames and Opus for the audio codec. For example, to convert an input file with ffmpeg to WebRTC:
ffmpeg -re -i input.mp4 -acodec libopus -ar 48000 -ac 2 \
  -vcodec libx264 -profile:v baseline -tune zerolatency -threads 1 -bf 0 \
  -f whip "http://localhost:1985/rtc/v1/whip/?app=live&stream=livestream"
For this example, we have employed low latency options, resulting in an end-to-end latency of approximately 150ms.
This muxer supports the following options:
Set the timeout in milliseconds for ICE and DTLS handshake.Default value is 5000.
Set the maximum size, in bytes, of RTP packets to send out. Default value is 1500.
The optional Bearer token for WHIP Authorization.
The optional certificate file path for DTLS.
The optional private key file path for DTLS.
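Assuming the Bearer token option described above is exposed as authorization (the option name and the token value are assumptions here), authenticated publishing might look like:

```shell
# Publish with a Bearer token for WHIP authorization
ffmpeg -re -i input.mp4 -c:a libopus -c:v libx264 -bf 0 \
  -f whip -authorization "my-secret-token" \
  "http://localhost:1985/rtc/v1/whip/?app=live&stream=livestream"
```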
FFmpeg is able to dump metadata from media files into a simple UTF-8-encoded INI-like text file and then load it back using the metadata muxer/demuxer.
The file format is as follows:
Next, a chapter section must contain chapter start and end times in the form ‘START=num’, ‘END=num’, where num is a positive integer.
An ffmetadata file might look like this:
;FFMETADATA1
title=bike\\shed
;this is a comment
artist=FFmpeg troll team
[CHAPTER]
TIMEBASE=1/1000
START=0
#chapter ends at 0:01:00
END=60000
title=chapter \#1
[STREAM]
title=multi\
line
By using the ffmetadata muxer and demuxer it is possible to extract metadata from an input file to an ffmetadata file, and then transcode the file into an output file with the edited ffmetadata file.
Extracting an ffmetadata file with ffmpeg goes as follows:
ffmpeg -i INPUT -f ffmetadata FFMETADATAFILE
Reinserting edited metadata information from the FFMETADATAFILE file can be done as:
ffmpeg -i INPUT -i FFMETADATAFILE -map_metadata 1 -codec copy OUTPUT
ffmpeg, ffplay, ffprobe, libavformat
The FFmpeg developers.
For details about the authorship, see the Git history of the project (https://git.ffmpeg.org/ffmpeg), e.g. by typing the command git log in the FFmpeg source directory, or browsing the online repository at https://git.ffmpeg.org/ffmpeg.
Maintainers for the specific components are listed in the file MAINTAINERS in the source code tree.
This document was generated on July 12, 2025 using makeinfo.