FFmpeg Documentation


Table of Contents


FFmpeg Documentation

1. Introduction

FFmpeg is a very fast video and audio converter. It can also grab from a live audio/video source. The command line interface is designed to be intuitive, in the sense that FFmpeg tries to figure out all parameters that can possibly be derived automatically. You usually only have to specify the target bitrate you want. FFmpeg can also convert from any sample rate to any other, and resize video on the fly with a high quality polyphase filter.

2. Quick Start

2.1 Video and Audio grabbing

FFmpeg can grab video and audio from devices given that you specify the input format and device.

ffmpeg -f audio_device -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg

Note that you must activate the right video source and channel before launching FFmpeg, using any TV viewer such as xawtv (http://bytesex.org/xawtv/) by Gerd Knorr. You also have to set the audio recording levels correctly with a standard mixer.

2.2 X11 grabbing

FFmpeg can grab the X11 display.

ffmpeg -f x11grab -s cif -i :0.0 /tmp/out.mpg

0.0 is the display.screen number of your X11 server, the same as the DISPLAY environment variable.

ffmpeg -f x11grab -s cif -i :0.0+10,20 /tmp/out.mpg

0.0 is the display.screen number of your X11 server, the same as the DISPLAY environment variable. 10 is the x-offset and 20 the y-offset for the grabbing.

2.3 Video and Audio file format conversion

FFmpeg can use any supported file format and protocol as input. Examples:

* You can use YUV files as input:

ffmpeg -i /tmp/test%d.Y /tmp/out.mpg

It will use the files:

/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...

The Y files use twice the resolution of the U and V files. They are raw files, without a header. They can be generated by all decent video decoders. You must specify the size of the image with the @option{-s} option if FFmpeg cannot guess it.

* You can input from a raw YUV420P file:

ffmpeg -i /tmp/test.yuv /tmp/out.avi

test.yuv is a file containing raw YUV planar data. Each frame is composed of the Y plane followed by the U and V planes at half vertical and horizontal resolution.

* You can output to a raw YUV420P file:

ffmpeg -i mydivx.avi hugefile.yuv

* You can set several input files and output files:

ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg

Converts the audio file a.wav and the raw YUV video file a.yuv to the MPEG file a.mpg.

* You can also do audio and video conversions at the same time:

ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2

Converts a.wav to MPEG audio at a 22050 Hz sample rate.

* You can encode to several formats at the same time and define a mapping from input stream to output streams:

ffmpeg -i /tmp/a.wav -ab 64k /tmp/a.mp2 -ab 128k /tmp/b.mp2 -map 0:0 -map 0:0

Converts a.wav to a.mp2 at 64 kbits and to b.mp2 at 128 kbits. '-map file:index' specifies which input stream is used for each output stream, in the order of the definition of output streams.

* You can transcode decrypted VOBs:

ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800k -g 300 -bf 2 -acodec mp3 -ab 128k snatch.avi

This is a typical DVD ripping example; the input is a VOB file, the output an AVI file with MPEG-4 video and MP3 audio. Note that in this command we use B-frames so the MPEG-4 stream is DivX5 compatible, and the GOP size is 300, which means one intra frame every 10 seconds for 29.97 fps input video. Furthermore, the audio stream is MP3-encoded, so you need to enable LAME support by passing --enable-mp3lame to configure. The mapping is particularly useful for DVD transcoding to get the desired audio language.

NOTE: To see the supported input formats, use ffmpeg -formats.
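The relationship between GOP size and intra-frame interval quoted above can be checked with a quick calculation (a sketch; the frame count and frame rate are the ones from the example):

```shell
# One intra frame every gop_size frames; at 29.97 fps that is
# gop_size / fps seconds between intra frames.
gop_size=300
fps=29.97
interval=$(awk -v g="$gop_size" -v f="$fps" 'BEGIN { printf "%.2f", g / f }')
echo "$interval"   # roughly 10 seconds
```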

3. Invocation

3.1 Syntax

The generic syntax is:

ffmpeg [[infile options][@option{-i} infile]]... {[outfile options] outfile}...

As a general rule, options are applied to the next specified file. Therefore, order is important, and you can have the same option on the command line multiple times. Each occurrence is then applied to the next input or output file.

* To set the video bitrate of the output file to 64 kbit/s:

ffmpeg -i input.avi -b 64k output.avi

* To force the frame rate of the input and output file to 24 fps:

ffmpeg -r 24 -i input.avi output.avi

* To force the frame rate of the output file to 24 fps:

ffmpeg -i input.avi -r 24 output.avi

* To force the frame rate of input file to 1 fps and the output file to 24 fps:

ffmpeg -r 1 -i input.avi -r 24 output.avi

The format option may be needed for raw input files. By default, FFmpeg tries to convert as losslessly as possible: it uses the same audio and video parameters for the outputs as the ones specified for the inputs.

3.2 Main options

@option{-L}
Show license.
@option{-h}
Show help.
@option{-version}
Show version.
@option{-formats}
Show available formats, codecs, protocols, ...
@option{-f fmt}
Force format.
@option{-i filename}
Input filename.
@option{-y}
Overwrite output files.
@option{-t duration}
Set the recording time in seconds. hh:mm:ss[.xxx] syntax is also supported.
@option{-fs limit_size}
Set the file size limit.
@option{-ss position}
Seek to given time position in seconds. hh:mm:ss[.xxx] syntax is also supported.
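Both @option{-t} and @option{-ss} accept the hh:mm:ss[.xxx] form as well as plain seconds. The conversion is straightforward (a sketch with a hypothetical helper, not part of FFmpeg):

```shell
# Convert hh:mm:ss[.xxx] into seconds, mirroring the syntax above.
to_seconds() {
  echo "$1" | awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }'
}
to_seconds 00:01:30.5   # prints 90.5
```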
@option{-itsoffset offset}
Set the input time offset in seconds. [-]hh:mm:ss[.xxx] syntax is also supported. This option affects all the input files that follow it. The offset is added to the timestamps of the input files. Specifying a positive offset means that the corresponding streams are delayed by 'offset' seconds.
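As an illustration (a sketch; the file names are hypothetical), delaying the audio input by half a second relative to the video:

```shell
# -itsoffset applies to the input file that follows it,
# so only audio.wav's timestamps are shifted by 0.5 s.
ffmpeg -i video.avi -itsoffset 0.5 -i audio.wav out.avi
```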
@option{-title string}
Set the title.
@option{-timestamp time}
Set the timestamp.
@option{-author string}
Set the author.
@option{-copyright string}
Set the copyright.
@option{-comment string}
Set the comment.
@option{-album string}
Set the album.
@option{-track number}
Set the track.
@option{-year number}
Set the year.
@option{-v verbose}
Control amount of logging.
@option{-target type}
Specify target file type ("vcd", "svcd", "dvd", "dv", "dv50", "pal-vcd", "ntsc-svcd", ... ). All the format options (bitrate, codecs, buffer sizes) are then set automatically. You can just type:
ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
Nevertheless you can specify additional options as long as you know they do not conflict with the standard, as in:
ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
@option{-dframes number}
Set the number of data frames to record.
@option{-scodec codec}
Force subtitle codec ('copy' to copy stream).
@option{-newsubtitle}
Add a new subtitle stream to the current output stream.
@option{-slang code}
Set the ISO 639 language code (3 letters) of the current subtitle stream.

3.3 Video Options

@option{-b bitrate}
Set the video bitrate in bit/s (default = 200 kb/s).
@option{-vframes number}
Set the number of video frames to record.
@option{-r fps}
Set frame rate (Hz value, fraction or abbreviation), (default = 25).
@option{-s size}
Set frame size. The format is `wxh' (ffserver default = 160x128, ffmpeg default = same as source). The following abbreviations are recognized:
`sqcif'
128x96
`qcif'
176x144
`cif'
352x288
`4cif'
704x576
`qqvga'
160x120
`qvga'
320x240
`vga'
640x480
`svga'
800x600
`xga'
1024x768
`uxga'
1600x1200
`qxga'
2048x1536
`sxga'
1280x1024
`qsxga'
2560x2048
`hsxga'
5120x4096
`wvga'
852x480
`wxga'
1366x768
`wsxga'
1600x1024
`wuxga'
1920x1200
`woxga'
2560x1600
`wqsxga'
3200x2048
`wquxga'
3840x2400
`whsxga'
6400x4096
`whuxga'
7680x4800
`cga'
320x200
`ega'
640x350
`hd480'
852x480
`hd720'
1280x720
`hd1080'
1920x1080
@option{-aspect aspect}
Set aspect ratio (4:3, 16:9 or 1.3333, 1.7777).
@option{-croptop size}
Set top crop band size (in pixels).
@option{-cropbottom size}
Set bottom crop band size (in pixels).
@option{-cropleft size}
Set left crop band size (in pixels).
@option{-cropright size}
Set right crop band size (in pixels).
@option{-padtop size}
Set top pad band size (in pixels).
@option{-padbottom size}
Set bottom pad band size (in pixels).
@option{-padleft size}
Set left pad band size (in pixels).
@option{-padright size}
Set right pad band size (in pixels).
@option{-padcolor (hex color)}
Set color of padded bands. The value for padcolor is expressed as a six digit hexadecimal number where the first two digits represent red, the middle two digits green and last two digits blue (default = 000000 (black)).
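For example, the six-digit value can be assembled from decimal red, green and blue components like this (a sketch, not an FFmpeg feature):

```shell
# Pack R, G, B (each 0-255) into the RRGGBB hex form -padcolor expects.
r=255; g=128; b=0
padcolor=$(printf '%02x%02x%02x' "$r" "$g" "$b")
echo "$padcolor"   # ff8000
```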
@option{-vn}
Disable video recording.
@option{-bt tolerance}
Set video bitrate tolerance (in bit/s).
@option{-maxrate bitrate}
Set the maximum video bitrate (in bit/s).
@option{-minrate bitrate}
Set the minimum video bitrate (in bit/s).
@option{-bufsize size}
Set rate control buffer size (in bits).
@option{-vcodec codec}
Force video codec to codec. Use the copy special value to tell that the raw codec data must be copied as is.
@option{-sameq}
Use same video quality as source (implies VBR).
@option{-pass n}
Select the pass number (1 or 2). It is useful for two-pass encoding: the statistics of the video are recorded in the first pass, and in the second pass the video is generated at the exact requested bitrate.
@option{-passlogfile file}
Set two pass logfile name to file.
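A typical two-pass invocation might look like this (a sketch; file names are hypothetical, and on Unix the first pass can discard its output to /dev/null):

```shell
# Pass 1: only gather rate-control statistics.
ffmpeg -i input.avi -b 1000k -pass 1 -passlogfile mypass -f avi /dev/null
# Pass 2: encode for real, using the recorded statistics.
ffmpeg -i input.avi -b 1000k -pass 2 -passlogfile mypass output.avi
```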
@option{-newvideo}
Add a new video stream to the current output stream.

3.4 Advanced Video Options

@option{-pix_fmt format}
Set pixel format.
@option{-g gop_size}
Set the group of pictures size.
@option{-intra}
Use only intra frames.
@option{-vdt n}
Discard threshold.
@option{-qscale q}
Use fixed video quantizer scale (VBR).
@option{-qmin q}
minimum video quantizer scale (VBR)
@option{-qmax q}
maximum video quantizer scale (VBR)
@option{-qdiff q}
maximum difference between the quantizer scales (VBR)
@option{-qblur blur}
video quantizer scale blur (VBR)
@option{-qcomp compression}
video quantizer scale compression (VBR)
@option{-lmin lambda}
minimum video lagrange factor (VBR)
@option{-lmax lambda}
max video lagrange factor (VBR)
@option{-mblmin lambda}
minimum macroblock quantizer scale (VBR)
@option{-mblmax lambda}
maximum macroblock quantizer scale (VBR) These four options (lmin, lmax, mblmin, mblmax) use 'lambda' units, but you may use the QP2LAMBDA constant to easily convert from 'q' units:
ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
@option{-rc_init_cplx complexity}
initial complexity for single pass encoding
@option{-b_qfactor factor}
qp factor between P- and B-frames
@option{-i_qfactor factor}
qp factor between P- and I-frames
@option{-b_qoffset offset}
qp offset between P- and B-frames
@option{-i_qoffset offset}
qp offset between P- and I-frames
@option{-rc_eq equation}
Set rate control equation (see section 3.10 FFmpeg formula evaluator) (default = tex^qComp).
@option{-rc_override override}
rate control override for specific intervals
@option{-me method}
Set motion estimation method to method. Available methods are (from lowest to highest quality):
`zero'
Try just the (0, 0) vector.
`phods'
`log'
`x1'
`epzs'
(default method)
`full'
exhaustive search (slow and marginally better than epzs)
@option{-dct_algo algo}
Set DCT algorithm to algo. Available values are:
`0'
FF_DCT_AUTO (default)
`1'
FF_DCT_FASTINT
`2'
FF_DCT_INT
`3'
FF_DCT_MMX
`4'
FF_DCT_MLIB
`5'
FF_DCT_ALTIVEC
@option{-idct_algo algo}
Set IDCT algorithm to algo. Available values are:
`0'
FF_IDCT_AUTO (default)
`1'
FF_IDCT_INT
`2'
FF_IDCT_SIMPLE
`3'
FF_IDCT_SIMPLEMMX
`4'
FF_IDCT_LIBMPEG2MMX
`5'
FF_IDCT_PS2
`6'
FF_IDCT_MLIB
`7'
FF_IDCT_ARM
`8'
FF_IDCT_ALTIVEC
`9'
FF_IDCT_SH4
`10'
FF_IDCT_SIMPLEARM
@option{-er n}
Set error resilience to n.
`1'
FF_ER_CAREFUL (default)
`2'
FF_ER_COMPLIANT
`3'
FF_ER_AGGRESSIVE
`4'
FF_ER_VERY_AGGRESSIVE
@option{-ec bit_mask}
Set error concealment to bit_mask. bit_mask is a bit mask of the following values:
`1'
FF_EC_GUESS_MVS (default = enabled)
`2'
FF_EC_DEBLOCK (default = enabled)
@option{-bf frames}
Use 'frames' B-frames (supported for MPEG-1, MPEG-2 and MPEG-4).
@option{-mbd mode}
macroblock decision
`0'
FF_MB_DECISION_SIMPLE: Use mb_cmp (cannot change it yet in FFmpeg).
`1'
FF_MB_DECISION_BITS: Choose the one which needs the fewest bits.
`2'
FF_MB_DECISION_RD: rate distortion
@option{-4mv}
Use four motion vectors per macroblock (MPEG-4 only).
@option{-part}
Use data partitioning (MPEG-4 only).
@option{-bug param}
Work around encoder bugs that are not auto-detected.
@option{-strict strictness}
How strictly to follow the standards.
@option{-aic}
Enable Advanced Intra Coding (H.263+).
@option{-umv}
Enable Unlimited Motion Vectors (H.263+).
@option{-deinterlace}
Deinterlace pictures.
@option{-ilme}
Force interlacing support in encoder (MPEG-2 and MPEG-4 only). Use this option if your input file is interlaced and you want to keep the interlaced format for minimum losses. The alternative is to deinterlace the input stream with @option{-deinterlace}, but deinterlacing introduces losses.
@option{-psnr}
Calculate PSNR of compressed frames.
@option{-vstats}
Dump video coding statistics to `vstats_HHMMSS.log'.
@option{-vstats_file file}
Dump video coding statistics to file.
@option{-vhook module}
Insert video processing module. module contains the module name and its parameters separated by spaces.
@option{-top n}
Set which field is first: top=1, bottom=0, auto=-1.
@option{-dc precision}
Set the intra DC precision.
@option{-vtag fourcc/tag}
Force video tag/fourcc.
@option{-qphist}
Show QP histogram.
@option{-vbsf bitstream filter}
Bitstream filters available are "dump_extra", "remove_extra", "noise".

3.5 Audio Options

@option{-aframes number}
Set the number of audio frames to record.
@option{-ar freq}
Set the audio sampling frequency (default = 44100 Hz).
@option{-ab bitrate}
Set the audio bitrate in bit/s (default = 64k).
@option{-ac channels}
Set the number of audio channels (default = 1).
@option{-an}
Disable audio recording.
@option{-acodec codec}
Force audio codec to codec. Use the copy special value to specify that the raw codec data must be copied as is.
@option{-newaudio}
Add a new audio track to the output file. If you want to specify parameters, do so before -newaudio (-acodec, -ab, etc..). Mapping will be done automatically, if the number of output streams is equal to the number of input streams, else it will pick the first one that matches. You can override the mapping using -map as usual. Example:
ffmpeg -i file.mpg -vcodec copy -acodec ac3 -ab 384k test.mpg -acodec mp2 -ab 192k -newaudio
@option{-alang code}
Set the ISO 639 language code (3 letters) of the current audio stream.

3.6 Advanced Audio options:

@option{-atag fourcc/tag}
Force audio tag/fourcc.
@option{-absf bitstream filter}
Bitstream filters available are "dump_extra", "remove_extra", "noise", "mp3comp", "mp3decomp".

3.7 Subtitle options:

@option{-scodec codec}
Force subtitle codec ('copy' to copy stream).
@option{-newsubtitle}
Add a new subtitle stream to the current output stream.
@option{-slang code}
Set the ISO 639 language code (3 letters) of the current subtitle stream.

3.8 Audio/Video grab options

@option{-vc channel}
Set video grab channel (DV1394 only).
@option{-tvstd standard}
Set television standard (NTSC, PAL, SECAM).
@option{-isync}
Synchronize read on input.

3.9 Advanced options

@option{-map input stream id[:input stream id]}
Set stream mapping from input streams to output streams. Just enumerate the input streams in the order you want them in the output. [input stream id] sets the (input) stream to sync against.
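For instance (a sketch; the stream indices are hypothetical), copying the first stream of the input together with its third stream, which on a DVD VOB is often the second audio track:

```shell
# 0:0 is the first stream of the first input, 0:2 the third.
ffmpeg -i input.vob -map 0:0 -map 0:2 -vcodec copy -acodec copy out.avi
```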
@option{-map_meta_data outfile:infile}
Set meta data information of outfile from infile.
@option{-debug}
Print specific debug info.
@option{-benchmark}
Add timings for benchmarking.
@option{-dump}
Dump each input packet.
@option{-hex}
When dumping packets, also dump the payload.
@option{-bitexact}
Only use bit exact algorithms (for codec testing).
@option{-ps size}
Set packet size in bits.
@option{-re}
Read input at native frame rate. Mainly used to simulate a grab device.
@option{-loop_input}
Loop over the input stream. Currently it works only for image streams. This option is used for automatic FFserver testing.
@option{-loop_output number_of_times}
Repeatedly loop output for formats that support looping such as animated GIF (0 will loop the output infinitely).
@option{-threads count}
Thread count.
@option{-vsync parameter}
Video sync method. Video will be stretched/squeezed to match the timestamps; this is done by duplicating and dropping frames. With -map you can select from which stream the timestamps should be taken. You can leave either video or audio unchanged and sync the remaining stream(s) to the unchanged one.
@option{-async samples_per_second}
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps, the parameter is the maximum samples per second by which the audio is changed. -async 1 is a special case where only the start of the audio stream is corrected without any later correction.

3.10 FFmpeg formula evaluator

When evaluating a rate control string, FFmpeg uses an internal formula evaluator. The following binary operators are available: +, -, *, /, ^. The following unary operators are available: +, -, (...). The following functions are available:

sinh(x)
cosh(x)
tanh(x)
sin(x)
cos(x)
tan(x)
exp(x)
log(x)
squish(x)
gauss(x)
abs(x)
max(x, y)
min(x, y)
gt(x, y)
lt(x, y)
eq(x, y)
bits2qp(bits)
qp2bits(qp)

The following constants are available:

PI
E
iTex
pTex
tex
mv
fCode
iCount
mcVar
var
isI
isP
isB
avgQP
qComp
avgIITex
avgPITex
avgPPTex
avgBPTex
avgTex

3.11 Protocols

The filename can be `-' to read from standard input or to write to standard output. FFmpeg also handles many protocols specified with a URL syntax. Use 'ffmpeg -formats' to see a list of the supported protocols. The protocol http: is currently used only to communicate with FFserver (see the FFserver documentation). When FFmpeg becomes a video player, it will also be used for streaming :-)
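For example (a sketch; file names are hypothetical), the `-' filename lets FFmpeg sit in the middle of a shell pipeline; the output format usually has to be forced with @option{-f} since there is no output filename to guess it from:

```shell
# Read an AVI from standard input, write an MPEG stream to standard output.
cat input.avi | ffmpeg -i - -f mpeg - > output.mpg
```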

4. Tips

5. External libraries

FFmpeg can be hooked up with a number of external libraries to add support for more formats. None of them are used by default, their use has to be explicitly requested by passing the appropriate flags to `./configure'.

5.1 AMR

AMR comes in two different flavors, WB and NB. FFmpeg can make use of the AMR WB (floating-point mode) and the AMR NB (floating-point mode) reference decoders and encoders. Go to http://www.penguin.cz/~utx/amr and follow the instructions for installing the libraries. Then pass --enable-amr-nb and/or --enable-amr-wb to configure to enable the libraries.

6. Supported File Formats and Codecs

You can use the -formats option to have an exhaustive list.

6.1 File Formats

FFmpeg supports the following file formats through the libavformat library:
Supported File Format Encoding Decoding Comments
MPEG audio X X
MPEG-1 systems X X muxed audio and video
MPEG-2 PS X X also known as VOB file
MPEG-2 TS X also known as DVB Transport Stream
ASF X X
AVI X X
WAV X X
Macromedia Flash X X Only embedded audio is decoded.
FLV X X Macromedia Flash video files
Real Audio and Video X X
Raw AC3 X X
Raw MJPEG X X
Raw MPEG video X X
Raw PCM8/16 bits, mulaw/Alaw X X
Raw CRI ADX audio X X
Raw Shorten audio X
SUN AU format X X
NUT X X NUT Open Container Format
QuickTime X X
MPEG-4 X X MPEG-4 is a variant of QuickTime.
Raw MPEG4 video X X
DV X X
4xm X 4X Technologies format, used in some games.
Playstation STR X
Id RoQ X X Used in Quake III, Jedi Knight 2, other computer games.
Interplay MVE X Format used in various Interplay computer games.
WC3 Movie X Multimedia format used in Origin's Wing Commander III computer game.
Sega FILM/CPK X Used in many Sega Saturn console games.
Westwood Studios VQA/AUD X Multimedia formats used in Westwood Studios games.
Id Cinematic (.cin) X Used in Quake II.
FLIC format X .fli/.flc files
Sierra VMD X Used in Sierra CD-ROM games.
Sierra Online X .sol files used in Sierra Online games.
Matroska X
Electronic Arts Multimedia X Used in various EA games; files have extensions like WVE and UV2.
Nullsoft Video (NSV) format X
ADTS AAC audio X X
Creative VOC X X Created for the Sound Blaster Pro.
American Laser Games MM X Multimedia format used in games like Mad Dog McCree
AVS X Multimedia format used by the Creature Shock game.
Smacker X Multimedia format used by many games.
GXF X X General eXchange Format SMPTE 360M, used by Thomson Grass Valley playout servers.
CIN X Multimedia format used by Delphine Software games.
MXF X Material eXchange Format SMPTE 377M, used by D-Cinema, broadcast industry.
SEQ X Tiertex .seq files used in the DOS CDROM version of the game Flashback.
DXA X This format is used in the non-Windows versions of the Feeble Files game and in various game cutscenes repacked for use with ScummVM.
THP X Used on the Nintendo GameCube.
C93 X Used in the game Cyberia from Interplay.
Bethsoft VID X Used in some games from Bethesda Softworks.
CRYO APC X Audio format used in some games by CRYO Interactive Entertainment.

X means that encoding (resp. decoding) is supported.

6.2 Image Formats

FFmpeg can read and write images for each frame of a video sequence. The following image formats are supported:
Supported Image Format Encoding Decoding Comments
PGM, PPM X X
PAM X X PAM is a PNM extension with alpha support.
PGMYUV X X PGM with U and V components in YUV 4:2:0
JPEG X X Progressive JPEG is not supported.
.Y.U.V X X one raw file per component
animated GIF X X Only uncompressed GIFs are generated.
PNG X X 2 bit and 4 bit/pixel not supported yet.
Targa X Targa (.TGA) image format.
TIFF X X YUV, JPEG and some extensions are not supported yet.
SGI X X SGI RGB image format
PTX X V.Flash PTX format

X means that encoding (resp. decoding) is supported.

6.3 Video Codecs

Supported Codec Encoding Decoding Comments
MPEG-1 video X X
MPEG-2 video X X
MPEG-4 X X
MSMPEG4 V1 X X
MSMPEG4 V2 X X
MSMPEG4 V3 X X
WMV7 X X
WMV8 X X not completely working
WMV9 X not completely working
VC1 X
H.261 X X
H.263(+) X X also known as RealVideo 1.0
H.264 X
RealVideo 1.0 X X
RealVideo 2.0 X X
MJPEG X X
lossless MJPEG X X
JPEG-LS X X fourcc: MJLS, lossless and near-lossless is supported
Apple MJPEG-B X
Sunplus MJPEG X fourcc: SP5X
DV X X
HuffYUV X X
FFmpeg Video 1 X X experimental lossless codec (fourcc: FFV1)
FFmpeg Snow X X experimental wavelet codec (fourcc: SNOW)
Asus v1 X X fourcc: ASV1
Asus v2 X X fourcc: ASV2
Creative YUV X fourcc: CYUV
Sorenson Video 1 X X fourcc: SVQ1
Sorenson Video 3 X fourcc: SVQ3
On2 VP3 X still experimental
On2 VP5 X fourcc: VP50
On2 VP6 X fourcc: VP60,VP61,VP62
Theora X X still experimental
Intel Indeo 3 X
FLV X X Sorenson H.263 used in Flash
Flash Screen Video X X fourcc: FSV1
ATI VCR1 X fourcc: VCR1
ATI VCR2 X fourcc: VCR2
Cirrus Logic AccuPak X fourcc: CLJR
4X Video X Used in certain computer games.
Sony Playstation MDEC X
Id RoQ X Used in Quake III, Jedi Knight 2, other computer games.
Xan/WC3 X Used in Wing Commander III .MVE files.
Interplay Video X Used in Interplay .MVE files.
Apple Animation X fourcc: 'rle '
Apple Graphics X fourcc: 'smc '
Apple Video X fourcc: rpza
Apple QuickDraw X fourcc: qdrw
Cinepak X
Microsoft RLE X
Microsoft Video-1 X
Westwood VQA X
Id Cinematic Video X Used in Quake II.
Planar RGB X fourcc: 8BPS
FLIC video X
Duck TrueMotion v1 X fourcc: DUCK
Duck TrueMotion v2 X fourcc: TM20
VMD Video X Used in Sierra VMD files.
MSZH X Part of LCL
ZLIB X X Part of LCL, encoder experimental
TechSmith Camtasia X fourcc: TSCC
IBM Ultimotion X fourcc: ULTI
Miro VideoXL X fourcc: VIXL
QPEG X fourccs: QPEG, Q1.0, Q1.1
LOCO X
Winnov WNV1 X
Autodesk Animator Studio Codec X fourcc: AASC
Fraps FPS1 X
CamStudio X fourcc: CSCD
American Laser Games Video X Used in games like Mad Dog McCree
ZMBV X X Encoder works only on PAL8
AVS Video X Video encoding used by the Creature Shock game.
Smacker Video X Video encoding used in Smacker.
RTjpeg X Video encoding used in NuppelVideo files.
KMVC X Codec used in Worms games.
VMware Video X Codec used in videos captured by VMware.
Cin Video X Codec used in Delphine Software games.
Tiertex Seq Video X Codec used in DOS CDROM FlashBack game.
DXA Video X Codec originally used in Feeble Files game.
AVID DNxHD X aka SMPTE VC3
C93 Video X Codec used in Cyberia game.
THP X Used on the Nintendo GameCube.
Bethsoft VID X Used in some games from Bethesda Softworks.
Renderware TXD X Texture dictionaries used by the Renderware Engine.

X means that encoding (resp. decoding) is supported.

6.4 Audio Codecs

Supported Codec Encoding Decoding Comments
MPEG audio layer 2 IX IX
MPEG audio layer 1/3 IX IX MP3 encoding is supported through the external library LAME.
AC3 IX IX liba52 is used internally for decoding.
Vorbis X X
WMA V1/V2 X X
AAC X X Supported through the external library libfaac/libfaad.
Microsoft ADPCM X X
MS IMA ADPCM X X
QT IMA ADPCM X
4X IMA ADPCM X
G.726 ADPCM X X
Duck DK3 IMA ADPCM X Used in some Sega Saturn console games.
Duck DK4 IMA ADPCM X Used in some Sega Saturn console games.
Westwood Studios IMA ADPCM X Used in Westwood Studios games like Command and Conquer.
SMJPEG IMA ADPCM X Used in certain Loki game ports.
CD-ROM XA ADPCM X
CRI ADX ADPCM X X Used in Sega Dreamcast games.
Electronic Arts ADPCM X Used in various EA titles.
Creative ADPCM X 16 -> 4, 8 -> 4, 8 -> 3, 8 -> 2
THP ADPCM X Used on the Nintendo GameCube.
RA144 X Real 14400 bit/s codec
RA288 X Real 28800 bit/s codec
RADnet X IX Real low bitrate AC3 codec, liba52 is used for decoding.
AMR-NB X X Supported through an external library.
AMR-WB X X Supported through an external library.
DV audio X
Id RoQ DPCM X X Used in Quake III, Jedi Knight 2, other computer games.
Interplay MVE DPCM X Used in various Interplay computer games.
Xan DPCM X Used in Origin's Wing Commander IV AVI files.
Sierra Online DPCM X Used in Sierra Online game audio files.
Apple MACE 3 X
Apple MACE 6 X
FLAC lossless audio X X
Shorten lossless audio X
Apple lossless audio X QuickTime fourcc 'alac'
FFmpeg Sonic X X experimental lossy/lossless codec
Qdesign QDM2 X there are still some distortions
Real COOK X All versions except 5.1 are supported
DSP Group TrueSpeech X
True Audio (TTA) X
Smacker Audio X
WavPack Audio X
Cin Audio X Codec used in Delphine Software games.
Intel Music Coder X
Musepack X Only SV7 is supported
DTS Coherent Audio X
ATRAC 3 X

X means that encoding (resp. decoding) is supported. I means that an integer-only version is available, too (ensures high performance on systems without hardware floating point support).

7. Platform Specific information

7.1 BSD

BSD make will not build FFmpeg; you need to install and use GNU Make (`gmake').

7.2 Windows

To get help and instructions for using FFmpeg under Windows, check out the FFmpeg Windows Help Forum at http://arrozcru.no-ip.org/ffmpeg/.

7.2.1 Native Windows compilation

Notes:

7.2.2 Visual C++ compatibility

FFmpeg will not compile under Visual C++ -- and it has too many dependencies on the GCC compiler to make a port viable. However, if you want to use the FFmpeg libraries in your own applications, you can still compile those applications using Visual C++. An important restriction to this is that you have to use the dynamically linked versions of the FFmpeg libraries (i.e. the DLLs), and you have to make sure that Visual-C++-compatible import libraries are created during the FFmpeg build process.

This description of how to use the FFmpeg libraries with Visual C++ is based on Visual C++ 2005 Express Edition Beta 2. If you have a different version, you might have to modify the procedures slightly.

Here are the step-by-step instructions for building the FFmpeg libraries so they can be used with Visual C++:

  1. Install Visual C++ (if you have not done so already).
  2. Install MinGW and MSYS as described above.
  3. Add a call to `vcvars32.bat' (which sets up the environment variables for the Visual C++ tools) as the first line of `msys.bat'. The standard location for `vcvars32.bat' is `C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat', and the standard location for `msys.bat' is `C:\msys\1.0\msys.bat'. If this corresponds to your setup, add the following line as the first line of `msys.bat': call "C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat"
  4. Start the MSYS shell (file `msys.bat') and type link.exe. If you get a help message with the command line options of link.exe, this means your environment variables are set up correctly, the Microsoft linker is on the path and will be used by FFmpeg to create Visual-C++-compatible import libraries.
  5. Extract the current version of FFmpeg and change to the FFmpeg directory.
  6. Type the command ./configure --enable-shared --disable-static --enable-memalign-hack to configure and, if that did not produce any errors, type make to build FFmpeg.
  7. The subdirectories `libavformat', `libavcodec', and `libavutil' should now contain the files `avformat.dll', `avformat.lib', `avcodec.dll', `avcodec.lib', `avutil.dll', and `avutil.lib', respectively. Copy the three DLLs to your System32 directory (typically `C:\Windows\System32').

And here is how to use these libraries with Visual C++:

  1. Create a new console application ("File / New / Project") and then select "Win32 Console Application". On the appropriate page of the Application Wizard, uncheck the "Precompiled headers" option.
  2. Write the source code for your application, or, for testing, just copy the code from an existing sample application into the source file that Visual C++ has already created for you. (Note that your source file has to have a .cpp extension; otherwise, Visual C++ will not compile the FFmpeg headers correctly because in C mode, it does not recognize the inline keyword.) For example, you can copy `output_example.c' from the FFmpeg distribution (but you will have to make minor modifications so the code will compile under C++, see below).
  3. Open the "Project / Properties" dialog box. In the "Configuration" combo box, select "All Configurations" so that the changes you make will affect both debug and release builds. In the tree view on the left hand side, select "C/C++ / General", then edit the "Additional Include Directories" setting to contain the complete paths to the `libavformat', `libavcodec', and `libavutil' subdirectories of your FFmpeg directory. Note that the directories have to be separated using semicolons. Now select "Linker / General" from the tree view and edit the "Additional Library Directories" setting to contain the same three directories.
  4. Still in the "Project / Properties" dialog box, select "Linker / Input" from the tree view, then add the files `avformat.lib', `avcodec.lib', and `avutil.lib' to the end of the "Additional Dependencies". Note that the names of the libraries have to be separated using spaces.
  5. Now, select "C/C++ / Code Generation" from the tree view. Select "Debug" in the "Configuration" combo box. Make sure that "Runtime Library" is set to "Multi-threaded Debug DLL". Then, select "Release" in the "Configuration" combo box and make sure that "Runtime Library" is set to "Multi-threaded DLL".
  6. Click "OK" to close the "Project / Properties" dialog box and build the application. Hopefully, it should compile and run cleanly. If you used `output_example.c' as your sample application, you will get a few compiler errors, but they are easy to fix. The first type of error occurs because Visual C++ does not allow an int to be converted to an enum without a cast. To solve the problem, insert the required casts (this error occurs once for a CodecID and once for a CodecType). The second type of error occurs because C++ requires the return value of malloc to be cast to the exact type of the pointer it is being assigned to. Visual C++ will complain that, for example, (void *) is being assigned to (uint8_t *) without an explicit cast. So insert an explicit cast in these places to silence the compiler. The third type of error occurs because the snprintf library function is called _snprintf under Visual C++. So just add an underscore to fix the problem. With these changes, `output_example.c' should compile under Visual C++, and the resulting executable should produce valid video files.

7.2.3 Cross compilation for Windows with Linux

You must use the MinGW cross compilation tools available at http://www.mingw.org/. Then configure FFmpeg with the following options:

./configure --target-os=mingw32 --cross-prefix=i386-mingw32msvc-

(you can change the cross-prefix according to the prefix chosen for the MinGW tools). Then you can easily test FFmpeg with Wine (http://www.winehq.com/).

7.2.4 Compilation under Cygwin

Cygwin works very much like Unix. Just install your Cygwin with all the "Base" packages, plus the following "Devel" ones:

binutils, gcc-core, make, subversion

Do not install binutils-20060709-1 (they are buggy on shared builds); use binutils-20050610-1 instead. Then run

./configure --enable-static --disable-shared

to make a static build or

./configure --enable-shared --disable-static

to build shared libraries. If you want to build FFmpeg with additional libraries, download Cygwin "Devel" packages for Ogg and Vorbis from any Cygwin packages repository and/or SDL, xvid, faac, faad2 packages from Cygwin Ports (http://cygwinports.dotsrc.org/).

7.2.5 Crosscompilation for Windows under Cygwin

With Cygwin you can create Windows binaries that do not need the cygwin1.dll. Just install your Cygwin as explained before, plus these additional "Devel" packages:

gcc-mingw-core, mingw-runtime, mingw-zlib

and add some special flags to your configure invocation. For a static build run

./configure --target-os=mingw32 --enable-memalign-hack --enable-static --disable-shared --extra-cflags=-mno-cygwin --extra-libs=-mno-cygwin

and for a build with shared libraries

./configure --target-os=mingw32 --enable-memalign-hack --enable-shared --disable-static --extra-cflags=-mno-cygwin --extra-libs=-mno-cygwin

7.3 BeOS

The configure script should guess the configuration itself. Networking support is currently not finished. errno issues were fixed by Andrew Bachmann.

Old notes (François Revol - revol at free dot fr - April 2002): The configure script should guess the configuration itself; however, building on the net_server version of BeOS has not been tested. FFserver is broken (it needs a poll() implementation). There are still issues with errno codes, which are negative in BeOS and which FFmpeg negates when returning them. This turns errors into seemingly valid results and then crashes. (To be fixed.)

8. Developers Guide

8.1 API

8.2 Integrating libavcodec or libavformat in your program

You can integrate all the source code of the libraries to link them statically and thus avoid any version problems. All you need to do is provide a 'config.mak' and a 'config.h' in the parent directory. See the defines generated by ./configure to understand what is needed. You can use libavcodec or libavformat in your commercial program, but any patch you make must be published. The best way to proceed is to send your patches to the FFmpeg mailing list.
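As a hypothetical sketch, such a hand-written 'config.mak' might look like the fragment below. The variable names here are purely illustrative; the authoritative list is whatever your ./configure run actually writes out, so mirror that instead of copying this verbatim.

```make
# Illustrative only - reproduce the variables your ./configure generates.
CC=gcc
AR=ar
RANLIB=ranlib
CFLAGS=-O3 -g -Wall
EXTRALIBS=-lm -lz
```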

8.3 Coding Rules

FFmpeg is programmed in the ISO C90 language with a few additional features from ISO C99, namely:

  * the `inline' keyword;
  * `//' comments;
  * designated struct initializers (`struct s x = { .i = 17 };');
  * compound literals (`x = (struct s) { 17, 23 };').

These features are supported by all compilers we care about, so we will not accept patches to remove their use unless they absolutely do not impair clarity and performance. All code must compile with GCC 2.95 and GCC 3.3. Currently, FFmpeg also compiles with several other compilers, such as the Compaq ccc compiler or Sun Studio 9, and we would like to keep it that way unless it would be exceedingly involved. To ensure compatibility, please do not use any additional C99 features or GCC extensions. Especially watch out for:

  * mixing declarations and statements;
  * `long long' (use `int64_t' instead);
  * `__attribute__' not protected by `#ifdef __GNUC__' or similar;
  * GCC statement expressions (`x = ({ int y = 4; y; })').

Indent size is 4. The presentation is the one specified by 'indent -i4 -kr -nut'. The TAB character is forbidden outside of Makefiles, as is any form of trailing whitespace. Commits containing either will be rejected by the Subversion repository. The main priority in FFmpeg is simplicity and small code size (= fewer bugs).

Comments: use the JavaDoc/Doxygen format (see examples below) so that code documentation can be generated automatically. All nontrivial functions should have a comment above them explaining what the function does, even if it is only one sentence. All structures and their member variables should be documented, too.

/**
 * @file mpeg.c
 * MPEG codec.
 * @author ...
 */

/**
 * Summary sentence.
 * more text ...
 * ...
 */
typedef struct Foobar{
    int var1; /**< var1 description */
    int var2; ///< var2 description
    /** var3 description */
    int var3;
} Foobar;

/**
 * Summary sentence.
 * more text ...
 * ...
 * @param my_parameter description of my_parameter
 * @return return value description
 */
int myfunc(int my_parameter)
...

fprintf and printf are forbidden in libavformat and libavcodec; please use av_log() instead.

8.4 Development Policy

  1. You must not commit code which breaks FFmpeg! (Meaning unfinished but enabled code which breaks compilation or compiles but does not work or breaks the regression tests) You can commit unfinished stuff (for testing etc), but it must be disabled (#ifdef etc) by default so it does not interfere with other developers' work.
  2. You do not have to over-test things. If it works for you, and you think it should work for others, then commit. If your code has problems (portability, triggers compiler bugs, unusual environment etc) they will be reported and eventually fixed.
  3. Do not commit unrelated changes together, split them into self-contained pieces. Also do not forget that if part B depends on part A, but A does not depend on B, then A can and should be committed first and separate from B. Keeping changes well split into self-contained parts makes reviewing and understanding them on the commit log mailing list easier. This also helps in case of debugging later on. Also if you have doubts about splitting or not splitting, do not hesitate to ask/discuss it on the developer mailing list.
  4. Do not change behavior of the program (renaming options etc) without first discussing it on the ffmpeg-devel mailing list. Do not remove functionality from the code. Just improve! Note: Redundant code can be removed.
  5. Do not commit changes to the build system (Makefiles, configure script) which change behavior, defaults etc, without asking first. The same applies to compiler warning fixes, trivial looking fixes and to code maintained by other developers. We usually have a reason for doing things the way we do. Send your changes as patches to the ffmpeg-devel mailing list, and if the code maintainers say OK, you may commit. This does not apply to files you wrote and/or maintain.
  6. We refuse source indentation and other cosmetic changes if they are mixed with functional changes; such commits will be rejected and removed. Every developer has his own indentation style and you should not change it. Of course, if you (re)write something, you can use your own style, even though we would prefer if the indentation throughout FFmpeg was consistent (many projects force a given indentation style - we do not). If you really need to make indentation changes (try to avoid this), separate them strictly from real changes. NOTE: If you had to put an if(){ .. } block around a large (> 5 lines) chunk of code, then either do NOT change the indentation of the inner part (do not move it to the right), or do so in a separate commit.
  7. Always fill out the commit log message. Describe in a few lines what you changed and why. You can refer to mailing list postings if you fix a particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
  8. If you apply a patch by someone else, include the name and email address in the log message. Since the ffmpeg-cvslog mailing list is publicly archived you should add some SPAM protection to the email address. Send an answer to ffmpeg-devel (or wherever you got the patch from) saying that you applied the patch.
  9. When applying patches that have been discussed (at length) on the mailing list, reference the thread in the log message.
  10. Do NOT commit to code actively maintained by others without permission. Send a patch to ffmpeg-devel instead. If no one answers within a reasonable timeframe (12 hours for build failures and security fixes, 3 days for small changes, 1 week for big patches), then commit your patch if you think it is OK. Also note that the maintainer can simply ask for more time to review!
  11. Subscribe to the ffmpeg-cvslog mailing list. The diffs of all commits are sent there and reviewed by all the other developers. Bugs and possible improvements or general questions regarding commits are discussed there. We expect you to react if problems with your code are uncovered.
  12. Update the documentation if you change behavior or add features. If you are unsure how best to do this, send a patch to ffmpeg-devel, the documentation maintainer(s) will review and commit your stuff.
  13. Try to keep important discussions and requests (also) on the public developer mailing list, so that all developers can benefit from them.
  14. Never write to unallocated memory, never write over the end of arrays, always check values read from some untrusted source before using them as array index or other risky things.
  15. Remember to check if you need to bump versions for the specific libav parts (libavutil, libavcodec, libavformat) you are changing. You need to change the version integer and the version string. Incrementing the first component means no backward compatibility to previous versions (e.g. removal of a function from the public API). Incrementing the second component means backward compatible change (e.g. addition of a function to the public API). Incrementing the third component means a noteworthy binary compatible change (e.g. encoder bug fix that matters for the decoder).
  16. If you add a new codec, remember to update the changelog, add it to the supported codecs table in the documentation and bump the second component of the `libavcodec' version number appropriately. If it has a fourcc, add it to `libavformat/avienc.c', even if it is only a decoder.
  17. Do not change code to hide warnings without ensuring that the underlying logic is correct and thus the warning was inappropriate.
  18. If you add a new file, give it a proper license header. Do not copy and paste it from a random place, use an existing file as template.

We think our rules are not too hard. If you have comments, contact us. Note, these rules are mostly borrowed from the MPlayer project.

8.5 Submitting patches

First, read (see section 8.3 Coding Rules) above if you did not yet. When you submit your patch, try to send a unified diff (diff '-up' option). I cannot read other diffs :-) Also please do not submit patches which contain several unrelated changes; split them into individual self-contained patches, as this makes reviewing them much easier. Run the regression tests before submitting a patch so that you can verify that there are no big problems. Patches should be posted as base64-encoded attachments (or any other encoding which ensures that the patch will not be trashed during transmission) to the ffmpeg-devel mailing list, see http://lists.mplayerhq.hu/mailman/listinfo/ffmpeg-devel It also helps quite a bit if you tell us what the patch does (for example 'replaces lrint by lrintf') and why (for example '*BSD isn't C99 compliant and has no lrint()'). Also, if you send several patches, send each patch as a separate mail; do not attach several unrelated patches to the same mail.

8.6 Patch submission checklist

  1. Do the regression tests pass with the patch applied?
  2. Is the patch a unified diff?
  3. Is the patch against latest FFmpeg SVN?
  4. Are you subscribed to the ffmpeg-devel mailing list? (the list is subscribers-only due to spam)
  5. Have you checked that the changes are minimal, so that the same cannot be achieved with a smaller patch and/or simpler final code?
  6. If the change is to speed critical code, did you benchmark it?
  7. If you did any benchmarks, did you provide them in the mail?
  8. Have you checked that the patch does not introduce buffer overflows or other security issues?
  9. Is the patch created from the root of the source tree, so it can be applied with patch -p0?
  10. Does the patch not mix functional and cosmetic changes?
  11. Did you add tabs or trailing whitespace to the code? Both are forbidden.
  12. Is the patch attached to the email you send?
  13. Is the mime type of the patch correct? It should be text/x-diff or text/x-patch or at least text/plain and not application/octet-stream.
  14. If the patch fixes a bug, did you provide a verbose analysis of the bug?
  15. If the patch fixes a bug, did you provide enough information, including a sample, so the bug can be reproduced and the fix can be verified?
  16. Did you provide a verbose summary about what the patch does change?
  17. Did you provide a verbose explanation why it changes things like it does?
  18. Did you provide a verbose summary of the user visible advantages and disadvantages if the patch is applied?
  19. Did you provide an example so we can verify the new feature added by the patch easily?
  20. If you added a new file, did you insert a license header? It should be taken from FFmpeg, not randomly copied and pasted from somewhere else.
  21. You should maintain alphabetical order in alphabetically ordered lists as long as doing so does not break API/ABI compatibility.
  22. Did you provide a suggestion for a clear commit log message?

8.7 Patch review process

All patches posted to ffmpeg-devel will be reviewed, unless they contain a clear note that the patch is not for SVN. Reviews and comments will be posted as replies to the patch on the mailing list. The patch submitter then has to address every comment, either by resubmitting a changed patch or through discussion. Resubmitted patches will themselves be reviewed like any other patch. If at some point a patch passes review with no comments, it is approved; for simple and small patches this can happen immediately, while large patches will generally have to be changed and reviewed many times before they are approved. After a patch is approved it will be committed to the repository. We will review all submitted patches, but sometimes we are quite busy, so especially for large patches this can take several weeks. When resubmitting patches, please do not make any significant changes not related to the comments received during review. Such patches will be rejected. Instead, submit significant changes or new features as separate patches.

8.8 Regression tests

Before submitting a patch (or committing to the repository), you should at least test that you did not break anything. The regression tests build a synthetic video stream and a synthetic audio stream. These are then encoded and decoded with all codecs or formats. The CRC (or MD5) of each generated file is recorded in a result file. A 'diff' is launched to compare the reference results and the result file. The regression tests then go on to test the FFserver code with a limited set of streams. It is important that this step runs correctly as well. Run 'make test' to test all the codecs and formats. Run 'make fulltest' to test all the codecs, formats and FFserver. [Of course, some patches may change the results of the regression tests. In this case, the reference results of the regression tests shall be modified accordingly].


This document was generated on 13 May 2007 using texi2html 1.56k.