path: root/lib/ffmpeg/doc
author     theuni <theuni-nospam-@xbmc.org>  2011-01-24 16:05:21 -0500
committer  theuni <theuni-nospam-@xbmc.org>  2011-01-24 16:05:21 -0500
commit     c51b1189e3d5353e842991f5859ddcea0f73e426 (patch)
tree       ef2cb8a6184699aa614f3655dca4ce661cdc108e /lib/ffmpeg/doc
parent     be61ebdc9e897fe40c6f371111724de79ddee8d5 (diff)
Merged cptspiff's code-reshuffle branch.
Squashed commit due to build breakage during code-reshuffle history.

Conflicts:
    xbmc/Util.cpp
    xbmc/cdrip/CDDARipper.cpp
    xbmc/filesystem/Directory.cpp
    xbmc/filesystem/File.cpp
Diffstat (limited to 'lib/ffmpeg/doc')
-rw-r--r--  lib/ffmpeg/doc/APIchanges                 297
-rw-r--r--  lib/ffmpeg/doc/TODO                        82
-rw-r--r--  lib/ffmpeg/doc/avutil.txt                  37
-rw-r--r--  lib/ffmpeg/doc/developer.texi             436
-rw-r--r--  lib/ffmpeg/doc/faq.texi                   501
-rw-r--r--  lib/ffmpeg/doc/ffmpeg-doc.texi            984
-rw-r--r--  lib/ffmpeg/doc/ffplay-doc.texi            173
-rw-r--r--  lib/ffmpeg/doc/ffprobe-doc.texi           123
-rw-r--r--  lib/ffmpeg/doc/ffserver-doc.texi          276
-rw-r--r--  lib/ffmpeg/doc/ffserver.conf              377
-rw-r--r--  lib/ffmpeg/doc/fftools-common-opts.texi    89
-rw-r--r--  lib/ffmpeg/doc/filters.texi               258
-rw-r--r--  lib/ffmpeg/doc/general.texi              1067
-rw-r--r--  lib/ffmpeg/doc/issue_tracker.txt          228
-rw-r--r--  lib/ffmpeg/doc/libavfilter.texi           104
-rw-r--r--  lib/ffmpeg/doc/optimization.txt           235
-rw-r--r--  lib/ffmpeg/doc/rate_distortion.txt         61
-rw-r--r--  lib/ffmpeg/doc/snow.txt                   630
-rw-r--r--  lib/ffmpeg/doc/soc.txt                     24
-rw-r--r--  lib/ffmpeg/doc/swscale.txt                 99
-rw-r--r--  lib/ffmpeg/doc/tablegen.txt                70
-rwxr-xr-x  lib/ffmpeg/doc/texi2pod.pl                423
-rw-r--r--  lib/ffmpeg/doc/viterbi.txt                110
23 files changed, 6684 insertions, 0 deletions
diff --git a/lib/ffmpeg/doc/APIchanges b/lib/ffmpeg/doc/APIchanges
new file mode 100644
index 0000000000..2150225343
--- /dev/null
+++ b/lib/ffmpeg/doc/APIchanges
@@ -0,0 +1,297 @@
+Never assume the API of libav* to be stable unless at least 1 week has passed since
+the last major version increase.
+The last version increases were:
+libavcodec: ?
+libavdevice: ?
+libavfilter: 2009-10-18
+libavformat: ?
+libpostproc: ?
+libswscale: ?
+libavutil: 2009-03-08
+
+
+API changes, most recent first:
+
+2010-07-11 - r24199 - lavc 52.83.0
+ Add AVCodecContext.lpc_type and AVCodecContext.lpc_passes fields.
+ Add AVLPCType enum.
+ Deprecate AVCodecContext.use_lpc.
+
+2010-07-11 - r24185 - lavc 52.82.0 - avsubtitle_free()
+ Add a function for freeing the contents of an AVSubtitle generated by
+ avcodec_decode_subtitle.
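+ A minimal usage sketch (illustrative only; dec_ctx and pkt are assumed to
+ come from an already opened decoder and demuxer, and the newer
+ avcodec_decode_subtitle2() entry point is used):
+
+   AVSubtitle sub;
+   int got_sub = 0;
+
+   if (avcodec_decode_subtitle2(dec_ctx, &sub, &got_sub, &pkt) >= 0 && got_sub) {
+       /* ... use sub.rects ... */
+       avsubtitle_free(&sub);  /* frees the contents, not the struct itself */
+   }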
+
+2010-07-11 - r24174 - lavu 50.22.0 - bswap.h and intreadwrite.h
+ Make the bswap.h and intreadwrite.h API public.
+
+2010-07-08 - r24101 - lavu 50.21.0 - pixdesc.h
+ Rename read/write_line() to av_read/write_image_line().
+
+2010-07-07 - r24091 - lavfi 1.21.0 - avfilter_copy_picref_props()
+ Add avfilter_copy_picref_props().
+
+2010-07-03 - r24021 - lavc 52.79.0
+ Add FF_COMPLIANCE_UNOFFICIAL and change all instances of
+ FF_COMPLIANCE_INOFFICIAL to use FF_COMPLIANCE_UNOFFICIAL.
+
+2010-07-02 - r23985 - lavu 50.20.0 - lfg.h
+ Export av_lfg_init(), av_lfg_get(), av_mlfg_get(), and av_bmg_get() through
+ lfg.h.
+
+2010-06-28 - r23835 - lavfi 1.20.1 - av_parse_color()
+ Extend av_parse_color() syntax, make it accept an alpha value specifier and
+ set the alpha value to 255 by default.
+
+2010-06-22 - r23706 - lavf 52.71.0 - URLProtocol.priv_data_size, priv_data_class
+ Add priv_data_size and priv_data_class to URLProtocol.
+
+2010-06-22 - r23704 - lavf 52.70.0 - url_alloc(), url_connect()
+ Add url_alloc() and url_connect().
+
+2010-06-22 - r23702 - lavf 52.69.0 - av_register_protocol2()
+ Add av_register_protocol2(), deprecating av_register_protocol().
+
+2010-06-09 - r23551 - lavu 50.19.0 - av_compare_mod()
+ Add av_compare_mod() to libavutil/mathematics.h.
+
+2010-06-05 - r23485 - lavu 50.18.0 - eval API
+ Make the eval API public.
+
+2010-06-04 - r23461 - lavu 50.17.0 - AV_BASE64_SIZE
+ Add AV_BASE64_SIZE() macro.
+
+2010-06-02 - r23421 - lavc 52.73.0 - av_get_codec_tag_string()
+ Add av_get_codec_tag_string().
+
+2010-06-01 - r31301 - lsws 0.11.0 - convertPalette API
+ Add sws_convertPalette8ToPacked32() and sws_convertPalette8ToPacked24().
+
+2010-05-26 - r23334 - lavc 52.72.0 - CODEC_CAP_EXPERIMENTAL
+ Add CODEC_CAP_EXPERIMENTAL flag.
+
+2010-05-23 - r23255 - lavu 50.16.0 - av_get_random_seed()
+ Add av_get_random_seed().
+
+2010-05-18 - r23161 - lavf 52.63.0 - AVFMT_FLAG_RTP_HINT
+ Add AVFMT_FLAG_RTP_HINT as possible value for AVFormatContext.flags.
+
+2010-05-09 - r23066 - lavfi 1.20.0 - AVFilterPicRef
+ Add interlaced and top_field_first fields to AVFilterPicRef.
+
+2010-05-01 - r23002 - lavf 52.62.0 - probe function
+ Add av_probe_input_format2() to the API; it allows ignoring probe
+ results below a given score and returns the actual probe score.
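+ A minimal sketch of the intended use (illustrative only; buf/buf_size are a
+ probe buffer filled by the caller):
+
+   AVProbeData pd = { 0 };
+   int score = AVPROBE_SCORE_MAX / 4;   /* ignore formats scoring below this */
+   AVInputFormat *fmt;
+
+   pd.filename = "clip.bin";
+   pd.buf      = buf;
+   pd.buf_size = buf_size;
+   fmt = av_probe_input_format2(&pd, 1, &score);
+   if (fmt)
+       printf("detected %s (score %d)\n", fmt->name, score);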
+
+2010-04-01 - r22806 - lavf 52.61.0 - metadata API
+ Add a flag for av_metadata_set2() to disable overwriting of
+ existing tags.
+
+2010-04-01 - r22753 - lavc 52.66.0
+ Add avcodec_get_edge_width().
+
+2010-03-31 - r22750 - lavc 52.65.0
+ Add avcodec_copy_context().
+
+2010-03-31 - r22748 - lavf 52.60.0 - av_match_ext()
+ Make av_match_ext() public.
+
+2010-03-31 - r22736 - lavu 50.14.0 - AVMediaType
+ Move AVMediaType enum from libavcodec to libavutil.
+
+2010-03-31 - r22735 - lavc 52.64.0 - AVMediaType
+ Define AVMediaType enum, and use it instead of enum CodecType, which
+ is deprecated and will be dropped at the next major bump.
+
+2010-03-25 - r22684 - lavu 50.13.0 - av_strerror()
+ Implement av_strerror().
+
+2010-03-23 - r22649 - lavc 52.60.0 - av_dct_init()
+ Support DCT-I and DST-I.
+
+2010-03-15 - r22540 - lavf 52.56.0 - AVFormatContext.start_time_realtime
+ Add AVFormatContext.start_time_realtime field.
+
+2010-03-13 - r22506 - lavfi 1.18.0 - AVFilterPicRef.pos
+ Add AVFilterPicRef.pos field.
+
+2010-03-13 - r22501 - lavu 50.12.0 - error.h
+ Move error code definitions from libavcodec/avcodec.h to
+ the new public header libavutil/error.h.
+
+2010-03-07 - r22291 - lavc 52.56.0 - avfft.h
+ Add public FFT interface.
+
+2010-03-06 - r22251 - lavu 50.11.0 - av_stristr()
+ Add av_stristr().
+
+2010-03-03 - r22174 - lavu 50.10.0 - av_tree_enumerate()
+ Add av_tree_enumerate().
+
+2010-02-07 - r21673 - lavu 50.9.0 - av_compare_ts()
+ Add av_compare_ts().
+
+2010-02-05 - r30513 - lsws 0.10.0 - sws_getCoefficients()
+ Add sws_getCoefficients().
+
+2010-02-01 - r21587 - lavf 52.50.0 - metadata API
+ Add a list of generic tag names, change 'author' -> 'artist',
+ 'year' -> 'date'.
+
+2010-01-30 - r21545 - lavu 50.8.0 - av_get_pix_fmt()
+ Add av_get_pix_fmt().
+
+2010-01-21 - r30381 - lsws 0.9.0 - sws_scale()
+ Change constness attributes of sws_scale() parameters.
+
+2010-01-10 - r21121 - lavfi 1.15.0 - avfilter_graph_config_links()
+ Add a log_ctx parameter to avfilter_graph_config_links().
+
+2010-01-07 - r30236 - lsws 0.8.0 - sws_isSupported{In,Out}put()
+ Add sws_isSupportedInput() and sws_isSupportedOutput() functions.
+
+2010-01-06 - r21035 - lavfi 1.14.0 - avfilter_add_colorspace()
+ Change the avfilter_add_colorspace() signature, make it accept an
+ (AVFilterFormats **) rather than an (AVFilterFormats *) as before.
+
+2010-01-03 - r21007 - lavfi 1.13.0 - avfilter_add_colorspace()
+ Add avfilter_add_colorspace().
+
+2010-01-02 - r20998 - lavf 52.46.0 - av_match_ext()
+ Add av_match_ext(), it should be used in place of match_ext().
+
+2010-01-01 - r20991 - lavf 52.45.0 - av_guess_format()
+ Add av_guess_format(), it should be used in place of guess_format().
+
+2009-12-13 - r20834 - lavf 52.43.0 - metadata API
+ Add av_metadata_set2(), AV_METADATA_DONT_STRDUP_KEY and
+ AV_METADATA_DONT_STRDUP_VAL.
+
+2009-12-13 - r20829 - lavu 50.7.0 - avstring.h API
+ Add av_d2str().
+
+2009-12-13 - r20826 - lavc 52.42.0 - AVStream
+ Add avg_frame_rate.
+
+2009-12-12 - r20808 - lavu 50.6.0 - av_bmg_next()
+ Introduce the av_bmg_next() function.
+
+2009-12-05 - r20734 - lavfi 1.12.0 - avfilter_draw_slice()
+ Add a slice_dir parameter to avfilter_draw_slice().
+
+2009-11-26 - r20611 - lavfi 1.11.0 - AVFilter
+ Remove the next field from AVFilter, as it is no longer required.
+
+2009-11-25 - r20607 - lavfi 1.10.0 - avfilter_next()
+ Introduce the avfilter_next() function.
+
+2009-11-25 - r20605 - lavfi 1.9.0 - avfilter_register()
+ Change the signature of avfilter_register() to make it return an
+ int. This is required since now the registration operation may fail.
+
+2009-11-25 - r20603 - lavu 50.5.0 - pixdesc.h API
+ Make the pixdesc.h API public.
+
+2009-10-27 - r20385 - lavfi 1.5.0 - AVFilter.next
+ Add a next field to AVFilter, this is used for simplifying the
+ registration and management of the registered filters.
+
+2009-10-23 - r20356 - lavfi 1.4.1 - AVFilter.description
+ Add a description field to AVFilter.
+
+2009-10-19 - r20302 - lavfi 1.3.0 - avfilter_make_format_list()
+ Change the interface of avfilter_make_format_list() from
+ avfilter_make_format_list(int n, ...) to
+ avfilter_make_format_list(enum PixelFormat *pix_fmts).
+
+2009-10-18 - r20272 - lavfi 1.0.0 - avfilter_get_video_buffer()
+ Make avfilter_get_video_buffer() recursive and add the w and h
+ parameters to it.
+
+2009-10-07 - r20189 - lavfi 0.5.1 - AVFilterPic
+ Add w and h fields to AVFilterPic.
+
+2009-06-22 - r19250 - lavf 52.34.1 - AVFormatContext.packet_size
+ This is now an unsigned int instead of a signed int.
+
+2009-06-19 - r19222 - lavc 52.32.0 - AVSubtitle.pts
+ Add a pts field to AVSubtitle which gives the subtitle packet pts
+ in AV_TIME_BASE. Some subtitle de-/encoders (e.g. XSUB) will
+ not work right without this.
+
+2009-06-03 - r19078 - lavc 52.30.2 - AV_PKT_FLAG_KEY
+ PKT_FLAG_KEY has been deprecated and will be dropped at the next
+ major version. Use AV_PKT_FLAG_KEY instead.
+
+2009-06-01 - r19025 - lavc 52.30.0 - av_lockmgr_register()
+ av_lockmgr_register() can be used to register a callback function
+ that lavc (and in the future, libraries that depend on lavc) can use
+ to implement mutexes. The application should provide a callback function
+ that implements the AV_LOCK_* operations described in avcodec.h.
+ When the lock manager is registered, FFmpeg is guaranteed to behave
+ correctly in a multi-threaded application.
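+ A minimal sketch of such a callback (illustrative only; this pthread-based
+ version is one possible implementation, not part of the API):
+
+   #include <stdlib.h>
+   #include <pthread.h>
+   #include <libavcodec/avcodec.h>
+
+   static int lockmgr_cb(void **mtx, enum AVLockOp op)
+   {
+       switch (op) {
+       case AV_LOCK_CREATE:
+           *mtx = malloc(sizeof(pthread_mutex_t));
+           return *mtx ? !!pthread_mutex_init(*mtx, NULL) : 1;
+       case AV_LOCK_OBTAIN:
+           return !!pthread_mutex_lock(*mtx);
+       case AV_LOCK_RELEASE:
+           return !!pthread_mutex_unlock(*mtx);
+       case AV_LOCK_DESTROY:
+           pthread_mutex_destroy(*mtx);
+           free(*mtx);
+           return 0;
+       }
+       return 1;
+   }
+
+   /* at program start-up, before other lavc calls: */
+   av_lockmgr_register(lockmgr_cb);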
+
+2009-04-30 - r18719 - lavc 52.28.0 - av_free_packet()
+ av_free_packet() is no longer an inline function. It is now exported.
+
+2009-04-11 - r18431 - lavc 52.25.0 - deprecate av_destruct_packet_nofree()
+ Please use NULL instead. This has been supported since r16506
+ (lavf > 52.23.1, lavc > 52.10.0).
+
+2009-04-07 - r18351 - lavc 52.23.0 - avcodec_decode_video/audio/subtitle
+ The old decoding functions are deprecated, all new code should use the
+ new functions avcodec_decode_video2(), avcodec_decode_audio3() and
+ avcodec_decode_subtitle2(). These new functions take an AVPacket *pkt
+ argument instead of a const uint8_t *buf / int buf_size pair.
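+ A minimal decoding-loop sketch (illustrative only; fmt_ctx, dec_ctx and
+ video_stream_index are assumed to have been set up with the usual
+ av_open_input_file()/avcodec_open() calls):
+
+   AVPacket pkt;
+   AVFrame *frame = avcodec_alloc_frame();
+   int got_picture;
+
+   while (av_read_frame(fmt_ctx, &pkt) >= 0) {
+       if (pkt.stream_index == video_stream_index &&
+           avcodec_decode_video2(dec_ctx, frame, &got_picture, &pkt) >= 0 &&
+           got_picture) {
+           /* ... use frame ... */
+       }
+       av_free_packet(&pkt);
+   }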
+
+2009-04-03 - r18321 - lavu 50.3.0 - av_fifo_space()
+ Introduce the av_fifo_space() function.
+
+2009-04-02 - r18317 - lavc 52.23.0 - AVPacket
+ Move AVPacket declaration from libavformat/avformat.h to
+ libavcodec/avcodec.h.
+
+2009-03-22 - r18163 - lavu 50.2.0 - RGB32 pixel formats
+ Convert the pixel formats PIX_FMT_ARGB, PIX_FMT_RGBA, PIX_FMT_ABGR,
+ PIX_FMT_BGRA, which were defined as macros, into enum PixelFormat values.
+ Conversely PIX_FMT_RGB32, PIX_FMT_RGB32_1, PIX_FMT_BGR32 and
+ PIX_FMT_BGR32_1 are now macros.
+ avcodec_get_pix_fmt() now recognizes the "rgb32" and "bgr32" aliases.
+ Re-sort the enum PixelFormat list accordingly.
+ This change breaks API/ABI backward compatibility.
+
+2009-03-22 - r18133 - lavu 50.1.0 - PIX_FMT_RGB5X5 endian variants
+ Add the enum PixelFormat values:
+ PIX_FMT_RGB565BE, PIX_FMT_RGB565LE, PIX_FMT_RGB555BE, PIX_FMT_RGB555LE,
+ PIX_FMT_BGR565BE, PIX_FMT_BGR565LE, PIX_FMT_BGR555BE, PIX_FMT_BGR555LE.
+
+2009-03-21 - r18116 - lavu 50.0.0 - av_random*
+ The Mersenne Twister PRNG implemented through the av_random* functions
+ was removed. Use the lagged Fibonacci PRNG through the av_lfg* functions
+ instead.
+
+2009-03-08 - r17869 - lavu 50.0.0 - AVFifoBuffer
+ av_fifo_init, av_fifo_read, av_fifo_write and av_fifo_realloc were dropped
+ and replaced by av_fifo_alloc, av_fifo_generic_read, av_fifo_generic_write
+ and av_fifo_realloc2.
+ In addition, the order of the function arguments of av_fifo_generic_read
+ was changed to match av_fifo_generic_write.
+ The AVFifoBuffer/struct AVFifoBuffer may only be used in an opaque way by
+ applications; they may not use sizeof() or directly access members.
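+ A minimal sketch of the new calls (illustrative only; passing NULL as the
+ callback gives plain memcpy behaviour):
+
+   #include <libavutil/fifo.h>
+
+   AVFifoBuffer *fifo = av_fifo_alloc(4096);
+   uint8_t in[256], out[256];
+
+   av_fifo_generic_write(fifo, in, sizeof(in), NULL);
+   if (av_fifo_size(fifo) >= (int)sizeof(out))
+       av_fifo_generic_read(fifo, out, sizeof(out), NULL);
+   av_fifo_free(fifo);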
+
+2009-03-01 - r17682 - lavf 52.31.0 - Generic metadata API
+ Introduce a new metadata API (see av_metadata_get() and friends).
+ The old API is now deprecated and should not be used anymore. This especially
+ includes the following structure fields:
+ - AVFormatContext.title
+ - AVFormatContext.author
+ - AVFormatContext.copyright
+ - AVFormatContext.comment
+ - AVFormatContext.album
+ - AVFormatContext.year
+ - AVFormatContext.track
+ - AVFormatContext.genre
+ - AVStream.language
+ - AVStream.filename
+ - AVProgram.provider_name
+ - AVProgram.name
+ - AVChapter.title
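+ A minimal sketch of the new API (illustrative only; fmt_ctx is an already
+ opened AVFormatContext, and AV_METADATA_DONT_OVERWRITE is the flag added
+ later in lavf 52.61.0):
+
+   AVMetadataTag *tag = NULL;
+
+   /* dump every tag attached to the file */
+   while ((tag = av_metadata_get(fmt_ctx->metadata, "", tag,
+                                 AV_METADATA_IGNORE_SUFFIX)))
+       printf("%s=%s\n", tag->key, tag->value);
+
+   /* set a tag without clobbering an existing value */
+   av_metadata_set2(&fmt_ctx->metadata, "title", "my title",
+                    AV_METADATA_DONT_OVERWRITE);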
diff --git a/lib/ffmpeg/doc/TODO b/lib/ffmpeg/doc/TODO
new file mode 100644
index 0000000000..747eee4ab1
--- /dev/null
+++ b/lib/ffmpeg/doc/TODO
@@ -0,0 +1,82 @@
+ffmpeg TODO list:
+----------------
+
+Fabrice's TODO list: (unordered)
+-------------------
+Short term:
+
+- use AVFMTCTX_DISCARD_PKT in ffplay so that DV has a chance to work
+- add RTSP regression test (both client and server)
+- make ffserver allocate AVFormatContext
+- clean up (incompatible change, for 0.5.0):
+ * AVStream -> AVComponent
+ * AVFormatContext -> AVInputStream/AVOutputStream
+ * suppress rate_emu from AVCodecContext
+- add new float/integer audio filtering and conversion: suppress
+ CODEC_ID_PCM_xxx and use CODEC_ID_RAWAUDIO.
+- fix telecine and frame rate conversion
+
+Long term (ask me if you want to help):
+
+- commit new imgconvert API and new PIX_FMT_xxx alpha formats
+- commit new LGPL'ed float and integer-only AC3 decoder
+- add WMA integer-only decoder
+- add new MPEG4-AAC audio decoder (both integer-only and float version)
+
+Michael's TODO list: (unordered) (if anyone wanna help with sth, just ask)
+-------------------
+- optimize H264 CABAC
+- more optimizations
+- simpler rate control
+
+Philip's TODO list: (alphabetically ordered) (please help)
+------------------
+- Add a multi-ffm filetype so that feeds can be recorded into multiple files rather
+ than one big file.
+- Authenticated users support -- where the authentication is in the URL
+- Change ASF files so that the embedded timestamp in the frames is right rather
+ than being an offset from the start of the stream
+- Make ffm files more resilient to changes in the codec structures so that you
+ can play old ffm files.
+
+Baptiste's TODO list:
+-----------------
+- mov edit list support (AVEditList)
+- YUV 10 bit per component support "2vuy"
+- mxf muxer
+- mpeg2 non linear quantizer
+
+unassigned TODO: (unordered)
+---------------
+- use AVFrame for audio codecs too
+- rework aviobuf.c buffering strategy and fix url_fskip
+- generate optimal huffman tables for mjpeg encoding
+- fix ffserver regression tests
+- support xvid's motion estimation
+- support x264's motion estimation
+- support x264's rate control
+- SNOW: non translational motion compensation
+- SNOW: more optimal quantization
+- SNOW: 4x4 block support
+- SNOW: 1/8 pel motion compensation support
+- SNOW: iterative motion estimation based on subsampled images
+- SNOW: try B frames and MCTF and see how their PSNR/bitrate/complexity behaves
+- SNOW: try to use the wavelet transformed MC-ed reference frame as context for the entropy coder
+- SNOW: think about/analyze how to make snow use multiple CPUs/threads
+- SNOW: finish spec
+- FLAC: lossy encoding (viterbi and naive scalar quantization)
+- libavfilter
+- JPEG2000 decoder & encoder
+- MPEG4 GMC encoding support
+- macroblock based pixel format (better cache locality, somewhat complex, one paper claimed it faster for high res)
+- regression tests for codecs which do not have an encoder (I+P-frame bitstream in svn)
+- add support for using mplayers video filters to ffmpeg
+- H264 encoder
+- per MB ratecontrol (so VCD and such do work better)
+- write a script which iteratively changes all functions between always_inline and noinline and benchmarks the result to find the best set of inlined functions
+- convert all the non SIMD asm into small asm vs. C testcases and submit them to the gcc devels so they can improve gcc
+- generic audio mixing API
+- extract PES packetizer from PS muxer and use it for new TS muxer
+- implement automatic AVBitstreamFilter activation
+- make cabac encoder use bytestream (see http://trac.videolan.org/x264/changeset/?format=diff&new=651)
+- merge imdct and windowing; the current code does a considerable amount of redundant work
diff --git a/lib/ffmpeg/doc/avutil.txt b/lib/ffmpeg/doc/avutil.txt
new file mode 100644
index 0000000000..210bd07264
--- /dev/null
+++ b/lib/ffmpeg/doc/avutil.txt
@@ -0,0 +1,37 @@
+AVUtil
+======
+libavutil is a small lightweight library of generally useful functions.
+It is not a library for code needed by both libavcodec and libavformat.
+
+
+Overview:
+=========
+adler32.c adler32 checksum
+aes.c AES encryption and decryption
+fifo.c resizeable first in first out buffer
+intfloat_readwrite.c portable reading and writing of floating point values
+log.c "printf" with context and level
+md5.c MD5 Message-Digest Algorithm
+rational.c code to perform exact calculations with rational numbers
+tree.c generic AVL tree
+crc.c generic CRC checksumming code
+integer.c 128bit integer math
+lls.c
+mathematics.c greatest common divisor, integer sqrt, integer log2, ...
+mem.c memory allocation routines with guaranteed alignment
+softfloat.c
+
+Headers:
+bswap.h big/little/native-endian conversion code
+x86_cpu.h a few useful macros for unifying x86-64 and x86-32 code
+avutil.h
+common.h
+intreadwrite.h reading and writing of unaligned big/little/native-endian integers
+
+
+Goals:
+======
+* Modular (few interdependencies and the possibility of disabling individual parts during ./configure)
+* Small (source and object)
+* Efficient (low CPU and memory usage)
+* Useful (avoid useless features almost no one needs)
diff --git a/lib/ffmpeg/doc/developer.texi b/lib/ffmpeg/doc/developer.texi
new file mode 100644
index 0000000000..edce7ea63a
--- /dev/null
+++ b/lib/ffmpeg/doc/developer.texi
@@ -0,0 +1,436 @@
+\input texinfo @c -*- texinfo -*-
+
+@settitle Developer Documentation
+@titlepage
+@sp 7
+@center @titlefont{Developer Documentation}
+@sp 3
+@end titlepage
+
+
+@chapter Developers Guide
+
+@section API
+@itemize @bullet
+@item libavcodec is the library containing the codecs (both encoding and
+decoding). Look at @file{libavcodec/apiexample.c} to see how to use it.
+
+@item libavformat is the library containing the file format handling (mux and
+demux code for several formats). Look at @file{ffplay.c} to use it in a
+player. See @file{libavformat/output-example.c} to use it to generate
+audio or video streams.
+
+@end itemize
+
+@section Integrating libavcodec or libavformat in your program
+
+You can integrate all the source code of the libraries to link them
+statically to avoid any version problem. All you need is to provide a
+'config.mak' and a 'config.h' in the parent directory. See the defines
+generated by ./configure to understand what is needed.
+
+You can use libavcodec or libavformat in your commercial program, but
+@emph{any patch you make must be published}. The best way to proceed is
+to send your patches to the FFmpeg mailing list.
+
+@anchor{Coding Rules}
+@section Coding Rules
+
+FFmpeg is programmed in the ISO C90 language with a few additional
+features from ISO C99, namely:
+@itemize @bullet
+@item
+the @samp{inline} keyword;
+@item
+@samp{//} comments;
+@item
+designated struct initializers (@samp{struct s x = @{ .i = 17 @};})
+@item
+compound literals (@samp{x = (struct s) @{ 17, 23 @};})
+@end itemize
+
+These features are supported by all compilers we care about, so we will not
+accept patches to remove their use unless they absolutely do not impair
+clarity and performance.
+
+All code must compile with GCC 2.95 and GCC 3.3. Currently, FFmpeg also
+compiles with several other compilers, such as the Compaq ccc compiler
+or Sun Studio 9, and we would like to keep it that way unless it would
+be exceedingly involved. To ensure compatibility, please do not use any
+additional C99 features or GCC extensions. Especially watch out for:
+@itemize @bullet
+@item
+mixing statements and declarations;
+@item
+@samp{long long} (use @samp{int64_t} instead);
+@item
+@samp{__attribute__} not protected by @samp{#ifdef __GNUC__} or similar;
+@item
+GCC statement expressions (@samp{(x = (@{ int y = 4; y; @}))}).
+@end itemize
+
+Indent size is 4.
+The presentation is one inspired by 'indent -i4 -kr -nut'.
+The TAB character is forbidden outside of Makefiles as is any
+form of trailing whitespace. Commits containing either will be
+rejected by the Subversion repository.
+
+The main priority in FFmpeg is simplicity and small code size in order to
+minimize the bug count.
+
+Comments: Use the JavaDoc/Doxygen
+format (see examples below) so that code documentation
+can be generated automatically. All nontrivial functions should have a comment
+above them explaining what the function does, even if it is just one sentence.
+All structures and their member variables should be documented, too.
+@example
+/**
+ * @@file mpeg.c
+ * MPEG codec.
+ * @@author ...
+ */
+
+/**
+ * Summary sentence.
+ * more text ...
+ * ...
+ */
+typedef struct Foobar@{
+ int var1; /**< var1 description */
+ int var2; ///< var2 description
+ /** var3 description */
+ int var3;
+@} Foobar;
+
+/**
+ * Summary sentence.
+ * more text ...
+ * ...
+ * @@param my_parameter description of my_parameter
+ * @@return return value description
+ */
+int myfunc(int my_parameter)
+...
+@end example
+
+fprintf and printf are forbidden in libavformat and libavcodec;
+please use av_log() instead.
+
+Casts should be used only when necessary. Unneeded parentheses
+should also be avoided if they don't make the code easier to understand.
+
+@section Development Policy
+
+@enumerate
+@item
+ Contributions should be licensed under the LGPL 2.1, including an
+ "or any later version" clause, or the MIT license. GPL 2 including
+ an "or any later version" clause is also acceptable, but LGPL is
+ preferred.
+@item
+ You must not commit code which breaks FFmpeg! (Meaning unfinished but
+ enabled code which breaks compilation or compiles but does not work or
+ breaks the regression tests)
+ You can commit unfinished stuff (for testing etc), but it must be disabled
+ (#ifdef etc) by default so it does not interfere with other developers'
+ work.
+@item
+ You do not have to over-test things. If it works for you, and you think it
+ should work for others, then commit. If your code has problems
+ (portability, triggers compiler bugs, unusual environment etc) they will be
+ reported and eventually fixed.
+@item
+ Do not commit unrelated changes together, split them into self-contained
+ pieces. Also do not forget that if part B depends on part A, but A does not
+ depend on B, then A can and should be committed first and separate from B.
+ Keeping changes well split into self-contained parts makes reviewing and
+ understanding them on the commit log mailing list easier. This also helps
+ in case of debugging later on.
+ Also if you have doubts about splitting or not splitting, do not hesitate to
+ ask/discuss it on the developer mailing list.
+@item
+ Do not change behavior of the program (renaming options etc) without
+ first discussing it on the ffmpeg-devel mailing list. Do not remove
+ functionality from the code. Just improve!
+
+ Note: Redundant code can be removed.
+@item
+ Do not commit changes to the build system (Makefiles, configure script)
+ which change behavior, defaults etc, without asking first. The same
+ applies to compiler warning fixes, trivial looking fixes and to code
+ maintained by other developers. We usually have a reason for doing things
+ the way we do. Send your changes as patches to the ffmpeg-devel mailing
+ list, and if the code maintainers say OK, you may commit. This does not
+ apply to files you wrote and/or maintain.
+@item
+ We refuse source indentation and other cosmetic changes if they are mixed
+ with functional changes, such commits will be rejected and removed. Every
+ developer has his own indentation style, you should not change it. Of course
+ if you (re)write something, you can use your own style, even though we would
+ prefer if the indentation throughout FFmpeg was consistent (Many projects
+ force a given indentation style - we do not.). If you really need to make
+ indentation changes (try to avoid this), separate them strictly from real
+ changes.
+
+ NOTE: If you had to put if()@{ .. @} over a large (> 5 lines) chunk of code,
+ then either do NOT change the indentation of the inner part (do not
+ move it to the right), or do so in a separate commit.
+@item
+ Always fill out the commit log message. Describe in a few lines what you
+ changed and why. You can refer to mailing list postings if you fix a
+ particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
+@item
+ If you apply a patch by someone else, include the name and email address in
+ the log message. Since the ffmpeg-cvslog mailing list is publicly
+ archived you should add some SPAM protection to the email address. Send an
+ answer to ffmpeg-devel (or wherever you got the patch from) saying that
+ you applied the patch.
+@item
+ When applying patches that have been discussed (at length) on the mailing
+ list, reference the thread in the log message.
+@item
+ Do NOT commit to code actively maintained by others without permission.
+ Send a patch to ffmpeg-devel instead. If no one answers within a reasonable
+ timeframe (12h for build failures and security fixes, 3 days for small changes,
+ 1 week for big patches) then commit your patch if you think it is OK.
+ Also note, the maintainer can simply ask for more time to review!
+@item
+ Subscribe to the ffmpeg-cvslog mailing list. The diffs of all commits
+ are sent there and reviewed by all the other developers. Bugs and possible
+ improvements or general questions regarding commits are discussed there. We
+ expect you to react if problems with your code are uncovered.
+@item
+ Update the documentation if you change behavior or add features. If you are
+ unsure how best to do this, send a patch to ffmpeg-devel, the documentation
+ maintainer(s) will review and commit your stuff.
+@item
+ Try to keep important discussions and requests (also) on the public
+ developer mailing list, so that all developers can benefit from them.
+@item
+ Never write to unallocated memory, never write over the end of arrays,
+ always check values read from some untrusted source before using them
+ as array index or other risky things.
+@item
+ Remember to check if you need to bump versions for the specific libav
+ parts (libavutil, libavcodec, libavformat) you are changing. You need
+ to change the version integer.
+ Incrementing the first component means no backward compatibility to
+ previous versions (e.g. removal of a function from the public API).
+ Incrementing the second component means backward compatible change
+ (e.g. addition of a function to the public API or extension of an
+ existing data structure).
+ Incrementing the third component means a noteworthy binary compatible
+ change (e.g. encoder bug fix that matters for the decoder).
+@item
+ Compiler warnings indicate potential bugs or code with bad style. If a type of
+ warning always points to correct and clean code, that warning should
+ be disabled, not the code changed.
+ Thus the remaining warnings can either be bugs or correct code.
+ If it is a bug, the bug has to be fixed. If it is not, the code should
+ be changed to not generate a warning unless that causes a slowdown
+ or obfuscates the code.
+@item
+ If you add a new file, give it a proper license header. Do not copy and
+ paste it from a random place, use an existing file as template.
+@end enumerate
+
+We think our rules are not too hard. If you have comments, contact us.
+
+Note, these rules are mostly borrowed from the MPlayer project.
+
+@section Submitting patches
+
+First, read the Coding Rules (@pxref{Coding Rules}) above if you have not
+done so yet.
+
+When you submit your patch, try to send a unified diff (diff '-up'
+option). We cannot read other diffs :-)
+
+Also please do not submit a patch which contains several unrelated changes.
+Split it into separate, self-contained pieces. This does not mean splitting
+file by file. Instead, make the patch as small as possible while still
+keeping it as a logical unit that contains an individual change, even
+if it spans multiple files. This makes reviewing your patches much easier
+for us and greatly increases your chances of getting your patch applied.
+
+Use the patcheck tool of FFmpeg to check your patch.
+The tool is located in the tools directory.
+
+Run the regression tests before submitting a patch so that you can
+verify that there are no big problems.
+
+Patches should be posted as base64 encoded attachments (or any other
+encoding which ensures that the patch will not be trashed during
+transmission) to the ffmpeg-devel mailing list, see
+@url{http://lists.mplayerhq.hu/mailman/listinfo/ffmpeg-devel}
+
+It also helps quite a bit if you tell us what the patch does (for example
+'replaces lrint by lrintf'), and why (for example '*BSD isn't C99 compliant
+and has no lrint()')
+
+Also, if you send several patches, please send each patch as a separate mail;
+do not attach several unrelated patches to the same mail.
+
+Your patch will be reviewed on the mailing list. You will likely be asked
+to make some changes and are expected to send in an improved version that
+incorporates the requests from the review. This process may go through
+several iterations. Once your patch is deemed good enough, some developer
+will pick it up and commit it to the official FFmpeg tree.
+
+Give us a few days to react. But if some time passes without reaction,
+send a reminder by email. Your patch should eventually be dealt with.
+
+
+@section New codecs or formats checklist
+
+@enumerate
+@item
+ Did you use av_cold for codec initialization and close functions?
+@item
+ Did you add a long_name under NULL_IF_CONFIG_SMALL to the AVCodec or
+ AVInputFormat/AVOutputFormat struct?
+@item
+ Did you bump the minor version number in @file{avcodec.h} or
+ @file{avformat.h}?
+@item
+ Did you register it in @file{allcodecs.c} or @file{allformats.c}?
+@item
+ Did you add the CodecID to @file{avcodec.h}?
+@item
+ If it has a fourcc, did you add it to @file{libavformat/riff.c},
+ even if it is only a decoder?
+@item
+ Did you add a rule to compile the appropriate files in the Makefile?
+ Remember to do this even if you're just adding a format to a file that is
+ already being compiled by some other rule, like a raw demuxer.
+@item
+ Did you add an entry to the table of supported formats or codecs in
+ @file{doc/general.texi}?
+@item
+ Did you add an entry in the Changelog?
+@item
+ If it depends on a parser or a library, did you add that dependency in
+ configure?
+@item
+ Did you "svn add" the appropriate files before commiting?
+@end enumerate
+
+@section patch submission checklist
+
+@enumerate
+@item
+ Do the regression tests pass with the patch applied?
+@item
+ Does @code{make checkheaders} pass with the patch applied?
+@item
+ Is the patch a unified diff?
+@item
+ Is the patch against latest FFmpeg SVN?
+@item
+ Are you subscribed to the ffmpeg-devel mailing list?
+ (the list is subscribers only due to spam)
+@item
+ Have you checked that the changes are minimal, so that the same cannot be
+ achieved with a smaller patch and/or simpler final code?
+@item
+ If the change is to speed critical code, did you benchmark it?
+@item
+ If you did any benchmarks, did you provide them in the mail?
+@item
+ Have you checked that the patch does not introduce buffer overflows or
+ other security issues?
+@item
+ Did you test your decoder or demuxer against damaged data? If not, see
+ tools/trasher and the noise bitstream filter. Your decoder or demuxer
+ should not crash or end in a (near) infinite loop when fed damaged data.
+@item
+ Is the patch created from the root of the source tree, so it can be
+ applied with @code{patch -p0}?
+@item
+ Does the patch not mix functional and cosmetic changes?
+@item
+ Did you add tabs or trailing whitespace to the code? Both are forbidden.
+@item
+ Is the patch attached to the email you send?
+@item
+ Is the mime type of the patch correct? It should be text/x-diff or
+ text/x-patch or at least text/plain and not application/octet-stream.
+@item
+ If the patch fixes a bug, did you provide a verbose analysis of the bug?
+@item
+ If the patch fixes a bug, did you provide enough information, including
+ a sample, so the bug can be reproduced and the fix can be verified?
+ Note: please do not attach samples >100k to mails; rather, provide a
+ URL. You can upload to ftp://upload.ffmpeg.org
+@item
+ Did you provide a verbose summary of what the patch changes?
+@item
+ Did you provide a verbose explanation of why it changes things the way it does?
+@item
+ Did you provide a verbose summary of the user visible advantages and
+ disadvantages if the patch is applied?
+@item
+ Did you provide an example so we can verify the new feature added by the
+ patch easily?
+@item
+ If you added a new file, did you insert a license header? It should be
+ taken from FFmpeg, not randomly copied and pasted from somewhere else.
+@item
+ You should maintain alphabetical order in alphabetically ordered lists as
+ long as doing so does not break API/ABI compatibility.
+@item
+ Lines with similar content should be aligned vertically when doing so
+ improves readability.
+@item
+ Did you provide a suggestion for a clear commit log message?
+@end enumerate
+
+@section Patch review process
+
+All patches posted to ffmpeg-devel will be reviewed, unless they contain a
+clear note that the patch is not for SVN.
+Reviews and comments will be posted as replies to the patch on the
+mailing list. The patch submitter then has to take care of every comment;
+this can be done by resubmitting a changed patch or by discussion. Resubmitted
+patches will themselves be reviewed like any other patch. If at some point
+a patch passes review with no comments, it is approved; for simple and small
+patches this can happen immediately, while large patches will generally
+have to be changed and reviewed many times before they are approved.
+After a patch is approved it will be committed to the repository.
+
+We will review all submitted patches, but sometimes we are quite busy so
+especially for large patches this can take several weeks.
+
+When resubmitting patches, please do not make any significant changes
+not related to the comments received during review. Such patches will
+be rejected. Instead, submit significant changes or new features as
+separate patches.
+
+@section Regression tests
+
+Before submitting a patch (or committing to the repository), you should at least
+test that you did not break anything.
+
+The regression tests build a synthetic video stream and a synthetic
+audio stream. These are then encoded and decoded with all codecs or
+formats. The CRC (or MD5) of each generated file is recorded in a
+result file. A 'diff' is launched to compare the reference results and
+the result file. The output is checked immediately after each test
+has run.
+
+The regression tests then go on to test the FFserver code with a
+limited set of streams. It is important that this step runs correctly
+as well.
+
+Run 'make test' to test all the codecs and formats. Commands like
+'make regtest-mpeg2' can be used to run a single test. By default,
+make will abort if any test fails. To run all tests regardless,
+use make -k. To get a more verbose output, use 'make V=1 test' or
+'make V=2 test'.
+
+Run 'make fulltest' to test all the codecs, formats and FFserver.
+
+[Of course, some patches may change the results of the regression tests. In
+this case, the reference results of the regression tests shall be modified
+accordingly].
+
+@bye
diff --git a/lib/ffmpeg/doc/faq.texi b/lib/ffmpeg/doc/faq.texi
new file mode 100644
index 0000000000..3f17738940
--- /dev/null
+++ b/lib/ffmpeg/doc/faq.texi
@@ -0,0 +1,501 @@
+\input texinfo @c -*- texinfo -*-
+
+@settitle FFmpeg FAQ
+@titlepage
+@sp 7
+@center @titlefont{FFmpeg FAQ}
+@sp 3
+@end titlepage
+
+
+@chapter General Questions
+
+@section When will the next FFmpeg version be released? / Why are FFmpeg releases so few and far between?
+
+Like most open source projects FFmpeg suffers from a certain lack of
+manpower. For this reason the developers have to prioritize the work
+they do and putting out releases is not at the top of the list, fixing
+bugs and reviewing patches takes precedence. Please don't complain or
+request more timely and/or frequent releases unless you are willing to
+help out creating them.
+
+@section I have a problem with an old version of FFmpeg; where should I report it?
+Nowhere. Upgrade to the latest release or if there is no recent release upgrade
+to Subversion HEAD. You could also try to report it. Maybe you will get lucky and
+become the first person in history to get an answer different from "upgrade
+to Subversion HEAD".
+
+@section Why doesn't FFmpeg support feature [xyz]?
+
+Because no one has taken on that task yet. FFmpeg development is
+driven by the tasks that are important to the individual developers.
+If there is a feature that is important to you, the best way to get
+it implemented is to undertake the task yourself or sponsor a developer.
+
+@section FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?
+
+No. Windows DLLs are not portable, bloated and often slow.
+Moreover FFmpeg strives to support all codecs natively.
+A DLL loader is not conducive to that goal.
+
+@section My bug report/mail to ffmpeg-devel/user has not received any replies.
+
+Likely reasons
+@itemize
+@item We are busy and haven't had time yet to read your report or
+investigate the issue.
+@item You didn't follow bugreports.html.
+@item You didn't use Subversion HEAD.
+@item You reported a segmentation fault without gdb output.
+@item You describe a problem but not how to reproduce it.
+@item It's unclear if you use ffmpeg as command line tool or use
+libav* from another application.
+@item You speak about a video having problems on playback but
+not what you use to play it.
+@item We have no faint clue what you are talking about besides
+that it is related to FFmpeg.
+@end itemize
+
+@section Is there a forum for FFmpeg? I do not like mailing lists.
+
+You may view our mailing lists with a more forum-alike look here:
+@url{http://dir.gmane.org/gmane.comp.video.ffmpeg.user},
+but, if you post, please remember that our mailing list rules still apply there.
+
+@section I cannot read this file although this format seems to be supported by ffmpeg.
+
+Even if ffmpeg can read the container format, it may not support all its
+codecs. Please consult the supported codec list in the ffmpeg
+documentation.
+
+@section Which codecs are supported by Windows?
+
+Windows does not support standard formats like MPEG very well, unless you
+install some additional codecs.
+
+The following list of video codecs should work on most Windows systems:
+@table @option
+@item msmpeg4v2
+.avi/.asf
+@item msmpeg4
+.asf only
+@item wmv1
+.asf only
+@item wmv2
+.asf only
+@item mpeg4
+Only if you have some MPEG-4 codec like ffdshow or Xvid installed.
+@item mpeg1
+.mpg only
+@end table
+Note, ASF files often have .wmv or .wma extensions in Windows. It should also
+be mentioned that Microsoft claims a patent on the ASF format, and may sue
+or threaten users who create ASF files with non-Microsoft software. It is
+strongly advised to avoid ASF where possible.
+
+The following list of audio codecs should work on most Windows systems:
+@table @option
+@item adpcm_ima_wav
+@item adpcm_ms
+@item pcm
+always
+@item mp3
+If some MP3 codec like LAME is installed.
+@end table
+
+
+@chapter Compilation
+
+@section @code{error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'}
+
+This is a bug in gcc. Do not report it to us. Instead, please report it to
+the gcc developers. Note that we will not add workarounds for gcc bugs.
+
+Also note that (some of) the gcc developers believe this is not a bug or
+not a bug they should fix:
+@url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}.
+Then again, some of them do not know the difference between an undecidable
+problem and an NP-hard problem...
+
+@chapter Usage
+
+@section ffmpeg does not work; what is wrong?
+
+Try a @code{make distclean} in the ffmpeg source directory before the build. If this does not help, see
+(@url{http://ffmpeg.org/bugreports.html}).
+
+@section How do I encode single pictures into movies?
+
+First, rename your pictures to follow a numerical sequence.
+For example, img1.jpg, img2.jpg, img3.jpg,...
+Then you may run:
+
+@example
+ ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
+@end example
+
+Notice that @samp{%d} is replaced by the image number.
+
+@file{img%03d.jpg} means the sequence @file{img001.jpg}, @file{img002.jpg}, etc...
+
+If you have a large number of pictures to rename, you can use the
+following command to ease the burden. The command, using the Bourne
+shell syntax, symbolically links all files in the current directory
+that match @code{*jpg} to the @file{/tmp} directory in the sequence of
+@file{img001.jpg}, @file{img002.jpg} and so on.
+
+@example
+ x=1; for i in *jpg; do counter=$(printf %03d $x); ln "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
+@end example
+
+If you want to sequence them by oldest modified first, substitute
+@code{$(ls -r -t *jpg)} in place of @code{*jpg}.
+
+Then run:
+
+@example
+ ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
+@end example
+
+The same logic is used for any image format that ffmpeg reads.
+
+@section How do I encode a movie to single pictures?
+
+Use:
+
+@example
+ ffmpeg -i movie.mpg movie%d.jpg
+@end example
+
+The @file{movie.mpg} used as input will be converted to
+@file{movie1.jpg}, @file{movie2.jpg}, etc...
+
+Instead of relying on file format self-recognition, you may also use
+@table @option
+@item -vcodec ppm
+@item -vcodec png
+@item -vcodec mjpeg
+@end table
+to force the encoding.
+
+Applying that to the previous example:
+@example
+ ffmpeg -i movie.mpg -f image2 -vcodec mjpeg menu%d.jpg
+@end example
+
+Beware that there is no "jpeg" codec. Use "mjpeg" instead.
+
+@section Why do I see a slight quality degradation with multithreaded MPEG* encoding?
+
+For multithreaded MPEG* encoding, the encoded slices must be independent,
+otherwise thread n would practically have to wait for n-1 to finish, so it's
+quite logical that there is a small reduction of quality. This is not a bug.
+
+@section How can I read from the standard input or write to the standard output?
+
+Use @file{-} as file name.
+
+@section Why does the chrominance data seem to be sampled at a different time from the luminance data on bt8x8 captures on Linux?
+
+This is a well-known bug in the bt8x8 driver. For 2.4.26 there is a patch at
+(@url{http://svn.ffmpeg.org/michael/trunk/patches/bttv-420-2.4.26.patch?view=co}). This may also
+apply cleanly to other 2.4-series kernels.
+
+@section How do I avoid the ugly aliasing artifacts in bt8x8 captures on Linux?
+
+Pass 'combfilter=1 lumafilter=1' to the bttv driver. Note though that 'combfilter=1'
+will cause somewhat too strong filtering. A fix is to apply (@url{http://svn.ffmpeg.org/michael/trunk/patches/bttv-comb-2.4.26.patch?view=co})
+or (@url{http://svn.ffmpeg.org/michael/trunk/patches/bttv-comb-2.6.6.patch?view=co})
+and pass 'combfilter=2'.
+
+@section -f jpeg doesn't work.
+
+Try '-f image2 test%d.jpg'.
+
+@section Why can I not change the framerate?
+
+Some codecs, like MPEG-1/2, only allow a small number of fixed framerates.
+Choose a different codec with the -vcodec command line option.
+
+@section How do I encode Xvid or DivX video with ffmpeg?
+
+Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4
+standard (note that there are many other coding formats that use this
+same standard). Thus, use '-vcodec mpeg4' to encode in these formats. The
+default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want
+a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will
+force the fourcc 'xvid' to be stored as the video fourcc rather than the
+default.
+
+@section How do I encode videos which play on the iPod?
+
+@table @option
+@item needed stuff
+-acodec libfaac -vcodec mpeg4 width<=320 height<=240
+@item working stuff
+4mv, title
+@item non-working stuff
+B-frames
+@item example command line
+ffmpeg -i input -acodec libfaac -ab 128kb -vcodec mpeg4 -b 1200kb -mbd 2 -flags +4mv -trellis 2 -aic 2 -cmp 2 -subcmp 2 -s 320x180 -metadata title=X output.mp4
+@end table
+
+@section How do I encode videos which play on the PSP?
+
+@table @option
+@item needed stuff
+-acodec libfaac -vcodec mpeg4 width*height<=76800 width%16=0 height%16=0 -ar 24000 -r 30000/1001 or 15000/1001 -f psp
+@item working stuff
+4mv, title
+@item non-working stuff
+B-frames
+@item example command line
+ffmpeg -i input -acodec libfaac -ab 128kb -vcodec mpeg4 -b 1200kb -ar 24000 -mbd 2 -flags +4mv -trellis 2 -aic 2 -cmp 2 -subcmp 2 -s 368x192 -r 30000/1001 -metadata title=X -f psp output.mp4
+@item needed stuff for H.264
+-acodec libfaac -vcodec libx264 width*height<=76800 width%16=0? height%16=0? -ar 48000 -coder 1 -r 30000/1001 or 15000/1001 -f psp
+@item working stuff for H.264
+title, loop filter
+@item non-working stuff for H.264
+CAVLC
+@item example command line
+ffmpeg -i input -acodec libfaac -ab 128kb -vcodec libx264 -b 1200kb -ar 48000 -mbd 2 -coder 1 -cmp 2 -subcmp 2 -s 368x192 -r 30000/1001 -metadata title=X -f psp -flags loop -trellis 2 -partitions parti4x4+parti8x8+partp4x4+partp8x8+partb8x8 output.mp4
+@item higher resolution for newer PSP firmwares, width<=480, height<=272
+-vcodec libx264 -level 21 -coder 1 -f psp
+@item example command line
+ffmpeg -i input -acodec libfaac -ab 128kb -ac 2 -ar 48000 -vcodec libx264 -level 21 -b 640kb -coder 1 -f psp -flags +loop -trellis 2 -partitions +parti4x4+parti8x8+partp4x4+partp8x8+partb8x8 -g 250 -s 480x272 output.mp4
+@end table
+
+@section Which are good parameters for encoding high quality MPEG-4?
+
+'-mbd rd -flags +4mv+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2',
+things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
+
+@section Which are good parameters for encoding high quality MPEG-1/MPEG-2?
+
+'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2'
+but beware the '-g 100' might cause problems with some decoders.
+Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd.
+
+@section Interlaced video looks very bad when encoded with ffmpeg, what is wrong?
+
+You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced
+material, and try '-top 0/1' if the result looks really messed-up.
+
+@section How can I read DirectShow files?
+
+If you have built FFmpeg with @code{./configure --enable-avisynth}
+(only possible on MinGW/Cygwin platforms),
+then you may use any file that DirectShow can read as input.
+
+Just create an "input.avs" text file with this single line ...
+@example
+ DirectShowSource("C:\path to your file\yourfile.asf")
+@end example
+... and then feed that text file to FFmpeg:
+@example
+ ffmpeg -i input.avs
+@end example
+
+For ANY other help on Avisynth, please visit @url{http://www.avisynth.org/}.
+
+@section How can I join video files?
+
+A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow joining video files by
+merely concatenating them.
+
+Hence you may concatenate your multimedia files by first transcoding them to
+these privileged formats, then using the humble @code{cat} command (or the
+equally humble @code{copy} under Windows), and finally transcoding back to your
+format of choice.
+
+@example
+ffmpeg -i input1.avi -sameq intermediate1.mpg
+ffmpeg -i input2.avi -sameq intermediate2.mpg
+cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
+ffmpeg -i intermediate_all.mpg -sameq output.avi
+@end example
+
+Notice that you should either use @code{-sameq} or set a reasonably high
+bitrate for your intermediate and output files, if you want to preserve
+video quality.
+
+Also notice that you may avoid the huge intermediate files by taking advantage
+of named pipes, should your platform support it:
+
+@example
+mkfifo intermediate1.mpg
+mkfifo intermediate2.mpg
+ffmpeg -i input1.avi -sameq -y intermediate1.mpg < /dev/null &
+ffmpeg -i input2.avi -sameq -y intermediate2.mpg < /dev/null &
+cat intermediate1.mpg intermediate2.mpg |\
+ffmpeg -f mpeg -i - -sameq -vcodec mpeg4 -acodec libmp3lame output.avi
+@end example
+
+Similarly, the yuv4mpegpipe format, and the raw video, raw audio codecs also
+allow concatenation, and the transcoding step is almost lossless.
+When using multiple yuv4mpegpipe(s), the first line needs to be discarded
+from all but the first stream. This can be accomplished by piping through
+@code{tail} as seen below. Note that when piping through @code{tail} you
+must use command grouping, @code{@{ ;@}}, to background properly.
+
+For example, let's say we want to join two FLV files into an output.flv file:
+
+@example
+mkfifo temp1.a
+mkfifo temp1.v
+mkfifo temp2.a
+mkfifo temp2.v
+mkfifo all.a
+mkfifo all.v
+ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
+ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
+ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
+@{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; @} &
+cat temp1.a temp2.a > all.a &
+cat temp1.v temp2.v > all.v &
+ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
+ -f yuv4mpegpipe -i all.v \
+ -sameq -y output.flv
+rm temp[12].[av] all.[av]
+@end example
+
+@section FFmpeg does not adhere to the -maxrate setting, some frames are bigger than maxrate/fps.
+
+Read the MPEG spec about video buffer verifier.
+
+@section I want CBR, but no matter what I do frame sizes differ.
+
+You do not understand what CBR is, please read the MPEG spec.
+Read about video buffer verifier and constant bitrate.
+The one sentence summary is that there is a buffer and the input rate is
+constant, the output can vary as needed.
+
+@section How do I check if a stream is CBR?
+
+To quote the MPEG-2 spec:
+"There is no way to tell that a bitstream is constant bitrate without
+examining all of the vbv_delay values and making complicated computations."
+
+
+@chapter Development
+
+@section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?
+
+Yes. Read the Developers Guide of the FFmpeg documentation. Alternatively,
+examine the source code for one of the many open source projects that
+already incorporate FFmpeg at (@url{projects.html}).
+
+@section Can you support my C compiler XXX?
+
+It depends. If your compiler is C99-compliant, then patches to support
+it are likely to be welcome if they do not pollute the source code
+with @code{#ifdef}s related to the compiler.
+
+@section Is Microsoft Visual C++ supported?
+
+No. Microsoft Visual C++ is not compliant with the C99 standard and does
+not - among other things - support the inline assembly used in FFmpeg.
+If you wish to use MSVC++ for your
+project then you can link the MSVC++ code with libav* as long as
+you compile the latter with a working C compiler. For more information, see
+the @emph{Microsoft Visual C++ compatibility} section in the FFmpeg
+documentation.
+
+There have been efforts to make FFmpeg compatible with MSVC++ in the
+past. However, they have all been rejected as too intrusive, especially
+since MinGW does the job adequately. None of the core developers
+work with MSVC++ and thus this item is low priority. Should you find
+the silver bullet that solves this problem, feel free to shoot it at us.
+
+We strongly recommend that you move over from MSVC++ to MinGW tools.
+
+@section Can I use FFmpeg or libavcodec under Windows?
+
+Yes, but the Cygwin or MinGW tools @emph{must} be used to compile FFmpeg.
+Read the @emph{Windows} section in the FFmpeg documentation to find more
+information.
+
+To get help and instructions for building FFmpeg under Windows, check out
+the FFmpeg Windows Help Forum at
+@url{http://ffmpeg.arrozcru.org/}.
+
+@section Can you add automake, libtool or autoconf support?
+
+No. These tools are too bloated and they complicate the build.
+
+@section Why not rewrite ffmpeg in object-oriented C++?
+
+FFmpeg is already organized in a highly modular manner and does not need to
+be rewritten in a formal object language. Further, many of the developers
+favor straight C; it works for them. For more arguments on this matter,
+read "Programming Religion" at (@url{http://www.tux.org/lkml/#s15}).
+
+@section Why are the ffmpeg programs devoid of debugging symbols?
+
+The build process creates ffmpeg_g, ffplay_g, etc. which contain full debug
+information. Those binaries are stripped to create ffmpeg, ffplay, etc. If
+you need the debug information, use the *_g versions.
+
+@section I do not like the LGPL, can I contribute code under the GPL instead?
+
+Yes, as long as the code is optional and can easily and cleanly be placed
+under #if CONFIG_GPL without breaking anything. So for example a new codec
+or filter would be OK under GPL while a bug fix to LGPL code would not.
+
+@section I want to compile xyz.c alone but my compiler produced many errors.
+
+Common code is in its own files in libav* and is used by the individual
+codecs. They will not work without the common parts; you have to compile
+the whole libav*. If you wish, disable some parts with configure switches.
+You can also try to hack it and remove more, but if you had problems fixing
+the compilation failure then you are probably not qualified for this.
+
+@section I'm using libavcodec from within my C++ application but the linker complains about missing symbols which seem to be available.
+
+FFmpeg is a pure C project, so to use the libraries within your C++ application
+you need to explicitly state that you are using a C library. You can do this by
+encompassing your FFmpeg includes using @code{extern "C"}.
+
+See @url{http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3}
+
+@section I have a file in memory / an API different from *open/*read of libc; how do I use it with libavformat?
+
+You have to implement a URLProtocol, see libavformat/file.c in FFmpeg
+and libmpdemux/demux_lavf.c in MPlayer sources.
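+
+A bare-bones sketch of the idea (illustrative only: mem_open()/mem_read() and
+the helpers they call are made-up names, error handling is omitted, and only
+a few of the URLProtocol callbacks are filled in):
+
+@example
+static int mem_open(URLContext *h, const char *url, int flags)
+@{
+    h->priv_data = get_my_buffer(url);   /* your own lookup helper */
+    return h->priv_data ? 0 : AVERROR(EIO);
+@}
+
+static int mem_read(URLContext *h, unsigned char *buf, int size)
+@{
+    return read_from_my_buffer(h->priv_data, buf, size);
+@}
+
+static URLProtocol mem_protocol = @{
+    .name     = "mem",
+    .url_open = mem_open,
+    .url_read = mem_read,
+@};
+
+/* register it once, then open "mem:..." URLs through libavformat */
+av_register_protocol2(&mem_protocol, sizeof(mem_protocol));
+@end example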
+
+@section I get "No compatible shell script interpreter found." in MSys.
+
+The standard MSys bash (2.04) is broken. You need to install 2.05 or later.
+
+@section I get "./configure: line <xxx>: pr: command not found" in MSys.
+
+The standard MSys install doesn't come with pr. You need to get it from the coreutils package.
+
+@section I tried to pass RTP packets into a decoder, but it doesn't work.
+
+RTP is a container format like any other; you must first depacketize the
+codec frames/samples stored in RTP and then feed them to the decoder.
+
+@section Where can I find libav* headers for Pascal/Delphi?
+
+see @url{http://www.iversenit.dk/dev/ffmpeg-headers/}
+
+@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?
+
+see @url{http://svn.ffmpeg.org/michael/trunk/docs/}
+
+@section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?
+
+Even if peculiar since it is network oriented, RTP is a container like any
+other. You have to @emph{demux} RTP before feeding the payload to libavcodec.
+In this specific case please look at RFC 4629 to see how it should be done.
+
+@section AVStream.r_frame_rate is wrong, it is much larger than the framerate.
+
+r_frame_rate is NOT the average framerate; it is the smallest framerate
+that can accurately represent all timestamps. So no, it is not
+wrong if it is larger than the average!
+For example, if you have mixed 25 and 30 fps content, then r_frame_rate
+will be 150.
+
+@bye
diff --git a/lib/ffmpeg/doc/ffmpeg-doc.texi b/lib/ffmpeg/doc/ffmpeg-doc.texi
new file mode 100644
index 0000000000..7e3abadbb8
--- /dev/null
+++ b/lib/ffmpeg/doc/ffmpeg-doc.texi
@@ -0,0 +1,984 @@
+\input texinfo @c -*- texinfo -*-
+
+@settitle FFmpeg Documentation
+@titlepage
+@sp 7
+@center @titlefont{FFmpeg Documentation}
+@sp 3
+@end titlepage
+
+@chapter Synopsis
+
+The generic syntax is:
+
+@example
+@c man begin SYNOPSIS
+ffmpeg [[infile options][@option{-i} @var{infile}]]... @{[outfile options] @var{outfile}@}...
+@c man end
+@end example
+
+@chapter Description
+@c man begin DESCRIPTION
+
+FFmpeg is a very fast video and audio converter. It can also grab from
+a live audio/video source.
+
+The command line interface is designed to be intuitive, in the sense
+that FFmpeg tries to figure out all parameters that can possibly be
+derived automatically. You usually only have to specify the target
+bitrate you want.
+
+FFmpeg can also convert from any sample rate to any other, and resize
+video on the fly with a high quality polyphase filter.
+
+As a general rule, options are applied to the next specified
+file. Therefore, order is important, and you can have the same
+option on the command line multiple times. Each occurrence is
+then applied to the next input or output file.
+
+* To set the video bitrate of the output file to 64kbit/s:
+@example
+ffmpeg -i input.avi -b 64k output.avi
+@end example
+
+* To force the frame rate of the output file to 24 fps:
+@example
+ffmpeg -i input.avi -r 24 output.avi
+@end example
+
+* To force the frame rate of the input file (valid for raw formats only)
+to 1 fps and the frame rate of the output file to 24 fps:
+@example
+ffmpeg -r 1 -i input.m2v -r 24 output.avi
+@end example
+
+The format option may be needed for raw input files.
+
+By default, FFmpeg tries to convert as losslessly as possible: it
+uses the same audio and video parameters for the outputs as the ones
+specified for the inputs.
+
+@c man end DESCRIPTION
+
+@chapter Options
+@c man begin OPTIONS
+
+@include fftools-common-opts.texi
+
+@section Main options
+
+@table @option
+
+@item -f @var{fmt}
+Force format.
+
+@item -i @var{filename}
+Input file name.
+
+@item -y
+Overwrite output files.
+
+@item -t @var{duration}
+Restrict the transcoded/captured video sequence
+to the duration specified in seconds.
+@code{hh:mm:ss[.xxx]} syntax is also supported.
+
+@item -fs @var{limit_size}
+Set the file size limit.
+
+@item -ss @var{position}
+Seek to given time position in seconds.
+@code{hh:mm:ss[.xxx]} syntax is also supported.
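+
+For example, to transcode 30 seconds of material starting one minute into
+the input (the file names are only illustrative):
+@example
+ffmpeg -i input.avi -ss 00:01:00 -t 30 output.avi
+@end example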
+
+@item -itsoffset @var{offset}
+Set the input time offset in seconds.
+@code{[-]hh:mm:ss[.xxx]} syntax is also supported.
+This option affects all the input files that follow it.
+The offset is added to the timestamps of the input files.
+Specifying a positive offset means that the corresponding
+streams are delayed by 'offset' seconds.
+
+@item -timestamp @var{time}
+Set the recording timestamp in the container.
+The syntax for @var{time} is:
+@example
+now|([(YYYY-MM-DD|YYYYMMDD)[T|t| ]]((HH[:MM[:SS[.m...]]])|(HH[MM[SS[.m...]]]))[Z|z])
+@end example
+If the value is "now" it takes the current time.
+Time is local time unless 'Z' or 'z' is appended, in which case it is
+interpreted as UTC.
+If the year-month-day part is not specified it takes the current
+year-month-day.
+
+@item -metadata @var{key}=@var{value}
+Set a metadata key/value pair.
+
+For example, for setting the title in the output file:
+@example
+ffmpeg -i in.avi -metadata title="my title" out.flv
+@end example
+
+@item -v @var{number}
+Set the logging verbosity level.
+
+@item -target @var{type}
+Specify target file type ("vcd", "svcd", "dvd", "dv", "dv50", "pal-vcd",
+"ntsc-svcd", ... ). All the format options (bitrate, codecs,
+buffer sizes) are then set automatically. You can just type:
+
+@example
+ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
+@end example
+
+Nevertheless you can specify additional options as long as you know
+they do not conflict with the standard, as in:
+
+@example
+ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
+@end example
+
+@item -dframes @var{number}
+Set the number of data frames to record.
+
+@item -scodec @var{codec}
+Force subtitle codec ('copy' to copy stream).
+
+@item -newsubtitle
+Add a new subtitle stream to the current output stream.
+
+@item -slang @var{code}
+Set the ISO 639 language code (3 letters) of the current subtitle stream.
+
+@end table
+
+@section Video Options
+
+@table @option
+@item -b @var{bitrate}
+Set the video bitrate in bit/s (default = 200 kb/s).
+@item -vframes @var{number}
+Set the number of video frames to record.
+@item -r @var{fps}
+Set frame rate (Hz value, fraction or abbreviation), (default = 25).
+@item -s @var{size}
+Set frame size. The format is @samp{wxh} (ffserver default = 160x128, ffmpeg default = same as source).
+The following abbreviations are recognized:
+@table @samp
+@item sqcif
+128x96
+@item qcif
+176x144
+@item cif
+352x288
+@item 4cif
+704x576
+@item 16cif
+1408x1152
+@item qqvga
+160x120
+@item qvga
+320x240
+@item vga
+640x480
+@item svga
+800x600
+@item xga
+1024x768
+@item uxga
+1600x1200
+@item qxga
+2048x1536
+@item sxga
+1280x1024
+@item qsxga
+2560x2048
+@item hsxga
+5120x4096
+@item wvga
+852x480
+@item wxga
+1366x768
+@item wsxga
+1600x1024
+@item wuxga
+1920x1200
+@item woxga
+2560x1600
+@item wqsxga
+3200x2048
+@item wquxga
+3840x2400
+@item whsxga
+6400x4096
+@item whuxga
+7680x4800
+@item cga
+320x200
+@item ega
+640x350
+@item hd480
+852x480
+@item hd720
+1280x720
+@item hd1080
+1920x1080
+@end table
+
+@item -aspect @var{aspect}
+Set aspect ratio (4:3, 16:9 or 1.3333, 1.7777).
+@item -croptop @var{size} (deprecated - use -vf crop=x:y:width:height instead)
+Set top crop band size (in pixels).
+@item -cropbottom @var{size} (deprecated - use -vf crop=x:y:width:height instead)
+Set bottom crop band size (in pixels).
+@item -cropleft @var{size} (deprecated - use -vf crop=x:y:width:height instead)
+Set left crop band size (in pixels).
+@item -cropright @var{size} (deprecated - use -vf crop=x:y:width:height instead)
+Set right crop band size (in pixels).
+@item -padtop @var{size}
+@item -padbottom @var{size}
+@item -padleft @var{size}
+@item -padright @var{size}
+@item -padcolor @var{hex_color}
+All the pad options have been removed. Use -vf
+pad=width:height:x:y:color instead.
+@item -vn
+Disable video recording.
+@item -bt @var{tolerance}
+Set video bitrate tolerance (in bits, default 4000k).
+Has a minimum value of: (target_bitrate/target_framerate).
+In 1-pass mode, bitrate tolerance specifies how far ratecontrol is
+willing to deviate from the target average bitrate value. This is
+not related to min/max bitrate. Lowering tolerance too much has
+an adverse effect on quality.
+@item -maxrate @var{bitrate}
+Set max video bitrate (in bit/s).
+Requires -bufsize to be set.
+@item -minrate @var{bitrate}
+Set min video bitrate (in bit/s).
+Most useful in setting up a CBR encode:
+@example
+ffmpeg -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
+@end example
+It is of little use otherwise.
+@item -bufsize @var{size}
+Set video buffer verifier buffer size (in bits).
+@item -vcodec @var{codec}
+Force video codec to @var{codec}. Use the @code{copy} special value to
+specify that the raw codec data must be copied as is.
+@item -sameq
+Use same video quality as source (implies VBR).
+
+@item -pass @var{n}
+Select the pass number (1 or 2). It is used to do two-pass
+video encoding. The statistics of the video are recorded in the first
+pass into a log file (see also the option -passlogfile),
+and in the second pass that log file is used to generate the video
+at the exact requested bitrate.
+On pass 1, you may just deactivate audio and set output to null,
+examples for Windows and Unix:
+@example
+ffmpeg -i foo.mov -vcodec libxvid -pass 1 -an -f rawvideo -y NUL
+ffmpeg -i foo.mov -vcodec libxvid -pass 1 -an -f rawvideo -y /dev/null
+@end example
+
+@item -passlogfile @var{prefix}
+Set two-pass log file name prefix to @var{prefix}, the default file name
+prefix is ``ffmpeg2pass''. The complete file name will be
+@file{PREFIX-N.log}, where N is a number specific to the output
+stream.
+
+@item -newvideo
+Add a new video stream to the current output stream.
+
+@item -vlang @var{code}
+Set the ISO 639 language code (3 letters) of the current video stream.
+
+@item -vf @var{filter_graph}
+@var{filter_graph} is a description of the filter graph to apply to
+the input video.
+Use the option "-filters" to show all the available filters (including
+also sources and sinks).
+
+@end table
+
+@section Advanced Video Options
+
+@table @option
+@item -pix_fmt @var{format}
+Set pixel format. Use 'list' as parameter to show all the supported
+pixel formats.
+@item -sws_flags @var{flags}
+Set SwScaler flags.
+@item -g @var{gop_size}
+Set the group of pictures size.
+@item -intra
+Use only intra frames.
+@item -vdt @var{n}
+Discard threshold.
+@item -qscale @var{q}
+Use fixed video quantizer scale (VBR).
+@item -qmin @var{q}
+minimum video quantizer scale (VBR)
+@item -qmax @var{q}
+maximum video quantizer scale (VBR)
+@item -qdiff @var{q}
+maximum difference between the quantizer scales (VBR)
+@item -qblur @var{blur}
+video quantizer scale blur (VBR) (range 0.0 - 1.0)
+@item -qcomp @var{compression}
+video quantizer scale compression (VBR) (default 0.5).
+Constant of ratecontrol equation. Recommended range for default rc_eq: 0.0-1.0
+
+@item -lmin @var{lambda}
+minimum video lagrange factor (VBR)
+@item -lmax @var{lambda}
+max video lagrange factor (VBR)
+@item -mblmin @var{lambda}
+minimum macroblock quantizer scale (VBR)
+@item -mblmax @var{lambda}
+maximum macroblock quantizer scale (VBR)
+
+These four options (lmin, lmax, mblmin, mblmax) use 'lambda' units,
+but you may use the QP2LAMBDA constant to easily convert from 'q' units:
+@example
+ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
+@end example
+
+@item -rc_init_cplx @var{complexity}
+initial complexity for single pass encoding
+@item -b_qfactor @var{factor}
+qp factor between P- and B-frames
+@item -i_qfactor @var{factor}
+qp factor between P- and I-frames
+@item -b_qoffset @var{offset}
+qp offset between P- and B-frames
+@item -i_qoffset @var{offset}
+qp offset between P- and I-frames
+@item -rc_eq @var{equation}
+Set rate control equation (@pxref{FFmpeg formula
+evaluator}) (default = @code{tex^qComp}).
+@item -rc_override @var{override}
+rate control override for specific intervals
+@item -me_method @var{method}
+Set motion estimation method to @var{method}.
+Available methods are (from lowest to best quality):
+@table @samp
+@item zero
+Try just the (0, 0) vector.
+@item phods
+@item log
+@item x1
+@item hex
+@item umh
+@item epzs
+(default method)
+@item full
+exhaustive search (slow and marginally better than epzs)
+@end table
+
+@item -dct_algo @var{algo}
+Set DCT algorithm to @var{algo}. Available values are:
+@table @samp
+@item 0
+FF_DCT_AUTO (default)
+@item 1
+FF_DCT_FASTINT
+@item 2
+FF_DCT_INT
+@item 3
+FF_DCT_MMX
+@item 4
+FF_DCT_MLIB
+@item 5
+FF_DCT_ALTIVEC
+@end table
+
+@item -idct_algo @var{algo}
+Set IDCT algorithm to @var{algo}. Available values are:
+@table @samp
+@item 0
+FF_IDCT_AUTO (default)
+@item 1
+FF_IDCT_INT
+@item 2
+FF_IDCT_SIMPLE
+@item 3
+FF_IDCT_SIMPLEMMX
+@item 4
+FF_IDCT_LIBMPEG2MMX
+@item 5
+FF_IDCT_PS2
+@item 6
+FF_IDCT_MLIB
+@item 7
+FF_IDCT_ARM
+@item 8
+FF_IDCT_ALTIVEC
+@item 9
+FF_IDCT_SH4
+@item 10
+FF_IDCT_SIMPLEARM
+@end table
+
+@item -er @var{n}
+Set error resilience to @var{n}.
+@table @samp
+@item 1
+FF_ER_CAREFUL (default)
+@item 2
+FF_ER_COMPLIANT
+@item 3
+FF_ER_AGGRESSIVE
+@item 4
+FF_ER_VERY_AGGRESSIVE
+@end table
+
+@item -ec @var{bit_mask}
+Set error concealment to @var{bit_mask}. @var{bit_mask} is a bit mask of
+the following values:
+@table @samp
+@item 1
+FF_EC_GUESS_MVS (default = enabled)
+@item 2
+FF_EC_DEBLOCK (default = enabled)
+@end table
+
+@item -bf @var{frames}
+Use 'frames' B-frames (supported for MPEG-1, MPEG-2 and MPEG-4).
+@item -mbd @var{mode}
+macroblock decision
+@table @samp
+@item 0
+FF_MB_DECISION_SIMPLE: Use mb_cmp (cannot change it yet in FFmpeg).
+@item 1
+FF_MB_DECISION_BITS: Choose the one which needs the fewest bits.
+@item 2
+FF_MB_DECISION_RD: rate distortion
+@end table
+
+@item -4mv
+Use four motion vectors per macroblock (MPEG-4 only).
+@item -part
+Use data partitioning (MPEG-4 only).
+@item -bug @var{param}
+Work around encoder bugs that are not auto-detected.
+@item -strict @var{strictness}
+How strictly to follow the standards.
+@item -aic
+Enable Advanced intra coding (h263+).
+@item -umv
+Enable Unlimited Motion Vector (h263+).
+
+@item -deinterlace
+Deinterlace pictures.
+@item -ilme
+Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
+Use this option if your input file is interlaced and you want
+to keep the interlaced format for minimum losses.
+The alternative is to deinterlace the input stream with
+@option{-deinterlace}, but deinterlacing introduces losses.
+@item -psnr
+Calculate PSNR of compressed frames.
+@item -vstats
+Dump video coding statistics to @file{vstats_HHMMSS.log}.
+@item -vstats_file @var{file}
+Dump video coding statistics to @var{file}.
+@item -top @var{n}
+top=1/bottom=0/auto=-1 field first
+@item -dc @var{precision}
+Intra_dc_precision.
+@item -vtag @var{fourcc/tag}
+Force video tag/fourcc.
+@item -qphist
+Show QP histogram.
+@item -vbsf @var{bitstream_filter}
+Bitstream filters available are "dump_extra", "remove_extra", "noise", "h264_mp4toannexb", "imxdump", "mjpegadump".
+@example
+ffmpeg -i h264.mp4 -vcodec copy -vbsf h264_mp4toannexb -an out.h264
+@end example
+@end table
+
+@section Audio Options
+
+@table @option
+@item -aframes @var{number}
+Set the number of audio frames to record.
+@item -ar @var{freq}
+Set the audio sampling frequency (default = 44100 Hz).
+@item -ab @var{bitrate}
+Set the audio bitrate in bit/s (default = 64k).
+@item -aq @var{q}
+Set the audio quality (codec-specific, VBR).
+@item -ac @var{channels}
+Set the number of audio channels (default = 1).
+@item -an
+Disable audio recording.
+@item -acodec @var{codec}
+Force audio codec to @var{codec}. Use the @code{copy} special value to
+specify that the raw codec data must be copied as is.
+@item -newaudio
+Add a new audio track to the output file. If you want to specify parameters,
+do so before @code{-newaudio} (@code{-acodec}, @code{-ab}, etc..).
+
+Mapping will be done automatically, if the number of output streams is equal to
+the number of input streams, else it will pick the first one that matches. You
+can override the mapping using @code{-map} as usual.
+
+Example:
+@example
+ffmpeg -i file.mpg -vcodec copy -acodec ac3 -ab 384k test.mpg -acodec mp2 -ab 192k -newaudio
+@end example
+@item -alang @var{code}
+Set the ISO 639 language code (3 letters) of the current audio stream.
+@end table
+
+@section Advanced Audio options:
+
+@table @option
+@item -atag @var{fourcc/tag}
+Force audio tag/fourcc.
+@item -absf @var{bitstream_filter}
+Bitstream filters available are "dump_extra", "remove_extra", "noise", "mp3comp", "mp3decomp".
+@end table
+
+@section Subtitle options:
+
+@table @option
+@item -scodec @var{codec}
+Force subtitle codec ('copy' to copy stream).
+@item -newsubtitle
+Add a new subtitle stream to the current output stream.
+@item -slang @var{code}
+Set the ISO 639 language code (3 letters) of the current subtitle stream.
+@item -sn
+Disable subtitle recording.
+@item -sbsf @var{bitstream_filter}
+Bitstream filters available are "mov2textsub", "text2movsub".
+@example
+ffmpeg -i file.mov -an -vn -sbsf mov2textsub -scodec copy -f rawvideo sub.txt
+@end example
+@end table
+
+@section Audio/Video grab options
+
+@table @option
+@item -vc @var{channel}
+Set video grab channel (DV1394 only).
+@item -tvstd @var{standard}
+Set television standard (NTSC, PAL (SECAM)).
+@item -isync
+Synchronize read on input.
+@end table
+
+@section Advanced options
+
+@table @option
+@item -map @var{input_stream_id}[:@var{sync_stream_id}]
+Set stream mapping from input streams to output streams.
+Just enumerate the input streams in the order you want them in the output.
+@var{sync_stream_id} if specified sets the input stream to sync
+against.
+@item -map_meta_data @var{outfile}:@var{infile}
+Set meta data information of @var{outfile} from @var{infile}.
+@item -debug
+Print specific debug info.
+@item -benchmark
+Show benchmarking information at the end of an encode.
+Shows CPU time used and maximum memory consumption.
+Maximum memory consumption is not supported on all systems,
+it will usually display as 0 if not supported.
+@item -dump
+Dump each input packet.
+@item -hex
+When dumping packets, also dump the payload.
+@item -bitexact
+Only use bit exact algorithms (for codec testing).
+@item -ps @var{size}
+Set RTP payload size in bytes.
+@item -re
+Read input at native frame rate. Mainly used to simulate a grab device.
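+For instance, when feeding a live stream to ffserver it is common to read
+the input at its native rate; the feed URL below is just the usual
+ffserver example and may differ on your setup:
+@example
+ffmpeg -re -i input.avi http://localhost:8090/feed1.ffm
+@end example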
+@item -loop_input
+Loop over the input stream. Currently it works only for image
+streams. This option is used for automatic FFserver testing.
+@item -loop_output @var{number_of_times}
+Repeatedly loop output for formats that support looping such as animated GIF
+(0 will loop the output infinitely).
+@item -threads @var{count}
+Thread count.
+@item -vsync @var{parameter}
+Video sync method.
+@table @samp
+@item 0
+Each frame is passed with its timestamp from the demuxer to the muxer.
+@item 1
+Frames will be duplicated and dropped to achieve exactly the requested
+constant framerate.
+@item 2
+Frames are passed through with their timestamp or dropped so as to
+prevent 2 frames from having the same timestamp.
+@item -1
+Chooses between 1 and 2 depending on muxer capabilities. This is the
+default method.
+@end table
+
+With -map you can select from
+which stream the timestamps should be taken. You can leave either video or
+audio unchanged and sync the remaining stream(s) to the unchanged one.
+@item -async @var{samples_per_second}
+Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps;
+the parameter is the maximum number of samples per second by which the audio is changed.
+-async 1 is a special case where only the start of the audio stream is corrected
+without any later correction.
+@item -copyts
+Copy timestamps from input to output.
+@item -shortest
+Finish encoding when the shortest input stream ends.
+@item -dts_delta_threshold
+Timestamp discontinuity delta threshold.
+@item -muxdelay @var{seconds}
+Set the maximum demux-decode delay.
+@item -muxpreload @var{seconds}
+Set the initial demux-decode delay.
+@item -streamid @var{output-stream-index}:@var{new-value}
+Assign a new value to a stream's stream-id field in the next output file.
+All stream-id fields are reset to default for each output file.
+
+For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
+an output mpegts file:
+@example
+ffmpeg -i infile -streamid 0:33 -streamid 1:36 out.ts
+@end example
+@end table
+
+@section Preset files
+
+A preset file contains a sequence of @var{option}=@var{value} pairs,
+one for each line, specifying a sequence of options which would be
+awkward to specify on the command line. Lines starting with the hash
+('#') character are ignored and are used to provide comments. Check
+the @file{ffpresets} directory in the FFmpeg source tree for examples.
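+
+For example, a preset file could look like the following sketch; the
+option names and values here are merely illustrative:
+@example
+# comment lines start with a hash
+coder=1
+flags=+loop
+subq=6
+me_range=16
+@end example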
+
+Preset files are specified with the @code{vpre}, @code{apre},
+@code{spre}, and @code{fpre} options. The @code{fpre} option takes the
+filename of the preset instead of a preset name as input and can be
+used for any kind of codec. For the @code{vpre}, @code{apre}, and
+@code{spre} options, the options specified in a preset file are
+applied to the currently selected codec of the same type as the preset
+option.
+
+The argument passed to the @code{vpre}, @code{apre}, and @code{spre}
+preset options identifies the preset file to use according to the
+following rules:
+
+First ffmpeg searches for a file named @var{arg}.ffpreset in the
+directories @file{$FFMPEG_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
+the datadir defined at configuration time (usually @file{PREFIX/share/ffmpeg})
+in that order. For example, if the argument is @code{libx264-max}, it will
+search for the file @file{libx264-max.ffpreset}.
+
+If no such file is found, then ffmpeg will search for a file named
+@var{codec_name}-@var{arg}.ffpreset in the above-mentioned
+directories, where @var{codec_name} is the name of the codec to which
+the preset file options will be applied. For example, if you select
+the video codec with @code{-vcodec libx264} and use @code{-vpre max},
+then it will search for the file @file{libx264-max.ffpreset}.
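+
+Following the example above, such a preset could then be applied like
+this (the rest of the command line is arbitrary):
+@example
+ffmpeg -i input.avi -an -vcodec libx264 -vpre max -b 500k output.mp4
+@end example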
+
+@anchor{FFmpeg formula evaluator}
+@section FFmpeg formula evaluator
+
+When evaluating a rate control string, FFmpeg uses an internal formula
+evaluator.
+
+The following binary operators are available: @code{+}, @code{-},
+@code{*}, @code{/}, @code{^}.
+
+The following unary operators are available: @code{+}, @code{-},
+@code{(...)}.
+
+The following statements are available: @code{ld}, @code{st},
+@code{while}.
+
+The following functions are available:
+@table @var
+@item sinh(x)
+@item cosh(x)
+@item tanh(x)
+@item sin(x)
+@item cos(x)
+@item tan(x)
+@item atan(x)
+@item asin(x)
+@item acos(x)
+@item exp(x)
+@item log(x)
+@item abs(x)
+@item squish(x)
+@item gauss(x)
+@item mod(x, y)
+@item max(x, y)
+@item min(x, y)
+@item eq(x, y)
+@item gte(x, y)
+@item gt(x, y)
+@item lte(x, y)
+@item lt(x, y)
+@item bits2qp(bits)
+@item qp2bits(qp)
+@end table
+
+The following constants are available:
+@table @var
+@item PI
+@item E
+@item iTex
+@item pTex
+@item tex
+@item mv
+@item fCode
+@item iCount
+@item mcVar
+@item var
+@item isI
+@item isP
+@item isB
+@item avgQP
+@item qComp
+@item avgIITex
+@item avgPITex
+@item avgPPTex
+@item avgBPTex
+@item avgTex
+@end table
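+
+As an illustrative sketch, the default rate control equation could be
+passed explicitly on the command line (the bitrate value is arbitrary):
+@example
+ffmpeg -i input.avi -rc_eq 'tex^qComp' -b 500k output.avi
+@end example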
+
+@c man end
+
+@section Protocols
+
+The file name can be @file{-} to read from standard input or to write
+to standard output.
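+
+For example, the following sketches read from and write to a pipe; they
+assume the input format can be read without seeking, and
+@code{your_program} stands for whatever consumes the stream:
+@example
+cat input.avi | ffmpeg -i - output.avi
+ffmpeg -i input.avi -f mpegts - | your_program
+@end example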
+
+FFmpeg also handles many protocols specified with a URL syntax.
+
+Use 'ffmpeg -protocols' to see a list of the supported protocols.
+
+The protocol @code{http:} is currently used only to communicate with
+FFserver (see the FFserver documentation). When FFmpeg becomes a
+video player, it will also be used for streaming :-)
+
+@chapter Tips
+@c man begin TIPS
+
+@itemize
+@item
+For very low bitrate streaming applications, use a low frame rate
+and a small GOP size. This is especially true for RealVideo where
+the Linux player does not seem to be very fast, so it can miss
+frames. An example is:
+
+@example
+ffmpeg -g 3 -r 3 -t 10 -b 50k -s qcif -f rv10 /tmp/b.rm
+@end example
+
+@item
+The parameter 'q' which is displayed while encoding is the current
+quantizer. The value 1 indicates that a very good quality could
+be achieved. The value 31 indicates the worst quality. If q=31 appears
+too often, it means that the encoder cannot compress enough to meet
+your bitrate. You must either increase the bitrate, decrease the
+frame rate or decrease the frame size.
+
+@item
+If your computer is not fast enough, you can speed up the
+compression at the expense of the compression ratio. You can use
+'-me zero' to speed up motion estimation, and '-intra' to disable
+motion estimation completely (you have only I-frames, which means it
+is about as good as JPEG compression).
+
+@item
+To have very low audio bitrates, reduce the sampling frequency
+(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).
+
+@item
+To have a constant quality (but a variable bitrate), use the option
+'-qscale n' when 'n' is between 1 (excellent quality) and 31 (worst
+quality).
+
+@item
+When converting video files, you can use the '-sameq' option which
+uses the same quality factor in the encoder as in the decoder.
+It allows almost lossless encoding.
+
+@end itemize
+@c man end TIPS
+
+@chapter Examples
+@c man begin EXAMPLES
+
+@section Video and Audio grabbing
+
+FFmpeg can grab video and audio from devices given that you specify the input
+format and device.
+
+@example
+ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
+@end example
+
+Note that you must activate the right video source and channel before
+launching FFmpeg with any TV viewer such as xawtv
+(@url{http://linux.bytesex.org/xawtv/}) by Gerd Knorr. You also
+have to set the audio recording levels correctly with a
+standard mixer.
+
+@section X11 grabbing
+
+FFmpeg can grab the X11 display.
+
+@example
+ffmpeg -f x11grab -s cif -i :0.0 /tmp/out.mpg
+@end example
+
+0.0 is display.screen number of your X11 server, same as
+the DISPLAY environment variable.
+
+@example
+ffmpeg -f x11grab -s cif -i :0.0+10,20 /tmp/out.mpg
+@end example
+
+0.0 is display.screen number of your X11 server, same as the DISPLAY environment
+variable. 10 is the x-offset and 20 the y-offset for the grabbing.
+
+@section Video and Audio file format conversion
+
+* FFmpeg can use any supported file format and protocol as input:
+
+Examples:
+
+* You can use YUV files as input:
+
+@example
+ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
+@end example
+
+It will use the files:
+@example
+/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
+/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
+@end example
+
+The Y files use twice the resolution of the U and V files. They are
+raw files, without header. They can be generated by all decent video
+decoders. You must specify the size of the image with the @option{-s} option
+if FFmpeg cannot guess it.
+
+* You can input from a raw YUV420P file:
+
+@example
+ffmpeg -i /tmp/test.yuv /tmp/out.avi
+@end example
+
+test.yuv is a file containing raw YUV planar data. Each frame is composed
+of the Y plane followed by the U and V planes at half vertical and
+horizontal resolution.
+
+* You can output to a raw YUV420P file:
+
+@example
+ffmpeg -i mydivx.avi hugefile.yuv
+@end example
+
+* You can set several input files and output files:
+
+@example
+ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
+@end example
+
+Converts the audio file a.wav and the raw YUV video file a.yuv
+to MPEG file a.mpg.
+
+* You can also do audio and video conversions at the same time:
+
+@example
+ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
+@end example
+
+Converts a.wav to MPEG audio at 22050 Hz sample rate.
+
+* You can encode to several formats at the same time and define a
+mapping from input stream to output streams:
+
+@example
+ffmpeg -i /tmp/a.wav -ab 64k /tmp/a.mp2 -ab 128k /tmp/b.mp2 -map 0:0 -map 0:0
+@end example
+
+Converts a.wav to a.mp2 at 64 kbits and to b.mp2 at 128 kbits. '-map
+file:index' specifies which input stream is used for each output
+stream, in the order of the definition of output streams.
+
+* You can transcode decrypted VOBs:
+
+@example
+ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800k -g 300 -bf 2 -acodec libmp3lame -ab 128k snatch.avi
+@end example
+
+This is a typical DVD ripping example; the input is a VOB file, the
+output an AVI file with MPEG-4 video and MP3 audio. Note that in this
+command we use B-frames so the MPEG-4 stream is DivX5 compatible, and
+GOP size is 300 which means one intra frame every 10 seconds for 29.97fps
+input video. Furthermore, the audio stream is MP3-encoded so you need
+to enable LAME support by passing @code{--enable-libmp3lame} to configure.
+The mapping is particularly useful for DVD transcoding
+to get the desired audio language.
+
+NOTE: To see the supported input formats, use @code{ffmpeg -formats}.
+
+* You can extract images from a video, or create a video from many images:
+
+For extracting images from a video:
+@example
+ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
+@end example
+
+This will extract one video frame per second from the video and will
+output them in files named @file{foo-001.jpeg}, @file{foo-002.jpeg},
+etc. Images will be rescaled to fit the new WxH values.
+
+If you want to extract just a limited number of frames, you can use the
+above command in combination with the -vframes or -t option, or in
+combination with -ss to start extracting from a certain point in time.
+
+For creating a video from many images:
+@example
+ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
+@end example
+
+The syntax @code{foo-%03d.jpeg} specifies to use a decimal number
+composed of three digits padded with zeroes to express the sequence
+number. It is the same syntax supported by the C printf function, but
+only formats accepting a normal integer are suitable.
+
+* You can put many streams of the same type in the output:
+
+@example
+ffmpeg -i test1.avi -i test2.avi -vcodec copy -acodec copy -vcodec copy -acodec copy test12.avi -newvideo -newaudio
+@end example
+
+In addition to the first video and audio streams, the resulting
+output file @file{test12.avi} will contain the second video
+and the second audio stream found in the input streams list.
+
+The @code{-newvideo}, @code{-newaudio} and @code{-newsubtitle}
+options have to be specified immediately after the name of the output
+file to which you want to add them.
+@c man end EXAMPLES
+
+@include filters.texi
+
+@ignore
+
+@setfilename ffmpeg
+@settitle FFmpeg video converter
+
+@c man begin SEEALSO
+ffplay(1), ffprobe(1), ffserver(1) and the FFmpeg HTML documentation
+@c man end
+
+@c man begin AUTHORS
+The FFmpeg developers
+@c man end
+
+@end ignore
+
+@bye
diff --git a/lib/ffmpeg/doc/ffplay-doc.texi b/lib/ffmpeg/doc/ffplay-doc.texi
new file mode 100644
index 0000000000..5e8032fb59
--- /dev/null
+++ b/lib/ffmpeg/doc/ffplay-doc.texi
@@ -0,0 +1,173 @@
+\input texinfo @c -*- texinfo -*-
+
+@settitle FFplay Documentation
+@titlepage
+@sp 7
+@center @titlefont{FFplay Documentation}
+@sp 3
+@end titlepage
+
+@chapter Synopsis
+
+@example
+@c man begin SYNOPSIS
+ffplay [options] @file{input_file}
+@c man end
+@end example
+
+@chapter Description
+@c man begin DESCRIPTION
+
+FFplay is a very simple and portable media player using the FFmpeg
+libraries and the SDL library. It is mostly used as a testbed for the
+various FFmpeg APIs.
+@c man end
+
+@chapter Options
+@c man begin OPTIONS
+
+@include fftools-common-opts.texi
+
+@section Main options
+
+@table @option
+@item -x @var{width}
+Force displayed width.
+@item -y @var{height}
+Force displayed height.
+@item -s @var{size}
+Set frame size (WxH or abbreviation), needed for videos which don't
+contain a header with the frame size like raw YUV.
+@item -an
+Disable audio.
+@item -vn
+Disable video.
+@item -ss @var{pos}
+Seek to a given position in seconds.
+@item -t @var{duration}
+Play @var{duration} seconds of audio/video.
+@item -bytes
+Seek by bytes.
+@item -nodisp
+Disable graphical display.
+@item -f @var{fmt}
+Force format.
+@item -window_title @var{title}
+Set window title (default is the input filename).
+@item -loop @var{number}
+Loops movie playback @var{number} times. 0 means forever.
+@item -vf @var{filter_graph}
+@var{filter_graph} is a description of the filter graph to apply to
+the input video.
+Use the option "-filters" to show all the available filters (including
+also sources and sinks).
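+
+For example, to play a cropped view of a file (the crop values and file
+name are arbitrary):
+@example
+ffplay -vf crop=0:0:320:240 input.avi
+@end example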
+
+@end table
+
+@section Advanced options
+@table @option
+@item -pix_fmt @var{format}
+Set pixel format.
+@item -stats
+Show the stream duration, the codec parameters, the current position in
+the stream and the audio/video synchronisation drift.
+@item -debug
+Print specific debug info.
+@item -bug
+Work around bugs.
+@item -vismv
+Visualize motion vectors.
+@item -fast
+Non-spec-compliant optimizations.
+@item -genpts
+Generate pts.
+@item -rtp_tcp
+Force RTP/TCP protocol usage instead of RTP/UDP. It is only meaningful
+if you are streaming with the RTSP protocol.
+@item -sync @var{type}
+Set the master clock to audio (@code{type=audio}), video
+(@code{type=video}) or external (@code{type=ext}). Default is audio. The
+master clock is used to control audio-video synchronization. Most media
+players use audio as master clock, but in some cases (streaming or high
+quality broadcast) it is necessary to change that. This option is mainly
+used for debugging purposes.
+@item -threads @var{count}
+Set the thread count.
+@item -ast @var{audio_stream_number}
+Select the desired audio stream number, counting from 0. The number
+refers to the list of all the input audio streams. If it is greater
+than the number of audio streams minus one, then the last one is
+selected; if it is negative, audio playback is disabled.
+@item -vst @var{video_stream_number}
+Select the desired video stream number, counting from 0. The number
+refers to the list of all the input video streams. If it is greater
+than the number of video streams minus one, then the last one is
+selected; if it is negative, video playback is disabled.
+@item -sst @var{subtitle_stream_number}
+Select the desired subtitle stream number, counting from 0. The number
+refers to the list of all the input subtitle streams. If it is greater
+than the number of subtitle streams minus one, then the last one is
+selected; if it is negative, subtitle rendering is disabled.
+@item -autoexit
+Exit when video is done playing.
+@item -exitonkeydown
+Exit if any key is pressed.
+@item -exitonmousedown
+Exit if any mouse button is pressed.
+@end table
+
+@section While playing
+
+@table @key
+@item q, ESC
+Quit.
+
+@item f
+Toggle full screen.
+
+@item p, SPC
+Pause.
+
+@item a
+Cycle audio channel.
+
+@item v
+Cycle video channel.
+
+@item t
+Cycle subtitle channel.
+
+@item w
+Show audio waves.
+
+@item left/right
+Seek backward/forward 10 seconds.
+
+@item down/up
+Seek backward/forward 1 minute.
+
+@item mouse click
+Seek to percentage in file corresponding to fraction of width.
+
+@end table
+
+@c man end
+
+@include filters.texi
+
+@ignore
+
+@setfilename ffplay
+@settitle FFplay media player
+
+@c man begin SEEALSO
+ffmpeg(1), ffprobe(1), ffserver(1) and the FFmpeg HTML documentation
+@c man end
+
+@c man begin AUTHORS
+The FFmpeg developers
+@c man end
+
+@end ignore
+
+@bye
diff --git a/lib/ffmpeg/doc/ffprobe-doc.texi b/lib/ffmpeg/doc/ffprobe-doc.texi
new file mode 100644
index 0000000000..a1a11c16fd
--- /dev/null
+++ b/lib/ffmpeg/doc/ffprobe-doc.texi
@@ -0,0 +1,123 @@
+\input texinfo @c -*- texinfo -*-
+
+@settitle FFprobe Documentation
+@titlepage
+@sp 7
+@center @titlefont{FFprobe Documentation}
+@sp 3
+@end titlepage
+
+@chapter Synopsis
+
+The generic syntax is:
+
+@example
+@c man begin SYNOPSIS
+ffprobe [options] [@file{input_file}]
+@c man end
+@end example
+
+@chapter Description
+@c man begin DESCRIPTION
+
+FFprobe gathers information from multimedia streams and prints it in
+human- and machine-readable fashion.
+
+For example it can be used to check the format of the container used
+by a multimedia stream and the format and type of each media stream
+contained in it.
+
+If a filename is specified in input, ffprobe will try to open and
+probe the file content. If the file cannot be opened or recognized as
+a multimedia file, a positive exit code is returned.
+
+FFprobe may be employed either as a standalone application or in
+combination with a textual filter, which may perform more
+sophisticated processing, e.g. statistical processing or plotting.
+
+Options are used to list some of the formats supported by ffprobe,
+to specify which information to display, and to set how ffprobe
+will show it.
+
+FFprobe output is designed to be easily parsable by a textual filter,
+and consists of one or more sections of the form:
+@example
+[SECTION]
+key1=val1
+...
+keyN=valN
+[/SECTION]
+@end example
+
+Metadata tags stored in the container or in the streams are recognized
+and printed in the corresponding ``FORMAT'' or ``STREAM'' section, and
+are prefixed by the string ``TAG:''.
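+
+For example, a typical invocation requesting both kinds of sections
+might be (the file name is arbitrary):
+@example
+ffprobe -show_format -show_streams input.avi
+@end example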
+
+@c man end
+
+@chapter Options
+@c man begin OPTIONS
+
+@include fftools-common-opts.texi
+
+@section Main options
+
+@table @option
+
+@item -convert_tags
+Convert the tag names in the format container to the generic FFmpeg tag names.
+
+@item -f @var{format}
+Force format to use.
+
+@item -unit
+Show the unit of the displayed values.
+
+@item -prefix
+Use SI prefixes for the displayed values.
+Unless the ``-byte_binary_prefix'' option is used, all the prefixes
+are decimal.
+
+@item -byte_binary_prefix
+Force the use of binary prefixes for byte values.
+
+@item -sexagesimal
+Use sexagesimal format HH:MM:SS.MICROSECONDS for time values.
+
+@item -pretty
+Prettify the format of the displayed values; it corresponds to the
+options ``-unit -prefix -byte_binary_prefix -sexagesimal''.
+
+@item -show_format
+Show information about the container format of the input multimedia
+stream.
+
+All the container format information is printed within a section with
+name ``FORMAT''.
+
+@item -show_streams
+Show information about each media stream contained in the input
+multimedia stream.
+
+Information about each media stream is printed within a dedicated
+section named ``STREAM''.
+
+@end table
+@c man end
+
+@ignore
+
+@setfilename ffprobe
+@settitle FFprobe media prober
+
+@c man begin SEEALSO
+ffmpeg(1), ffplay(1), ffserver(1) and the FFmpeg HTML documentation
+@c man end
+
+@c man begin AUTHORS
+The FFmpeg developers
+@c man end
+
+@end ignore
+
+@bye
diff --git a/lib/ffmpeg/doc/ffserver-doc.texi b/lib/ffmpeg/doc/ffserver-doc.texi
new file mode 100644
index 0000000000..77deb85317
--- /dev/null
+++ b/lib/ffmpeg/doc/ffserver-doc.texi
@@ -0,0 +1,276 @@
+\input texinfo @c -*- texinfo -*-
+
+@settitle FFserver Documentation
+@titlepage
+@sp 7
+@center @titlefont{FFserver Documentation}
+@sp 3
+@end titlepage
+
+@chapter Synopsis
+
+The generic syntax is:
+
+@example
+@c man begin SYNOPSIS
+ffserver [options]
+@c man end
+@end example
+
+@chapter Description
+@c man begin DESCRIPTION
+
+FFserver is a streaming server for both audio and video. It supports
+several live feeds, streaming from files and time shifting on live feeds
+(you can seek to positions in the past on each live feed, provided you
+specify a big enough feed storage in ffserver.conf).
+
+FFserver runs in daemon mode by default; that is, it puts itself in
+the background and detaches from its TTY, unless it is launched in
+debug mode or a NoDaemon option is specified in the configuration
+file.
+
+This documentation covers only the streaming aspects of ffserver /
+ffmpeg. All questions about parameters for ffmpeg, codec questions,
+etc. are not covered here. Read @file{ffmpeg-doc.html} for more
+information.
+
+@section How does it work?
+
+FFserver receives prerecorded files or FFM streams from some ffmpeg
+instance as input, then streams them over RTP/RTSP/HTTP.
+
+An ffserver instance will listen on some port as specified in the
+configuration file. You can launch one or more instances of ffmpeg and
+send one or more FFM streams to the port where ffserver is expecting
+to receive them. Alternately, you can make ffserver launch such ffmpeg
+instances at startup.
+
+Input streams are called feeds, and each one is specified by a <Feed>
+section in the configuration file.
+
+For each feed you can have different output streams in various
+formats, each one specified by a <Stream> section in the configuration
+file.
+
+@section Status stream
+
+FFserver supports an HTTP interface which exposes the current status
+of the server.
+
+Simply point your browser to the address of the special status stream
+specified in the configuration file.
+
+For example if you have:
+@example
+<Stream status.html>
+Format status
+
+# Only allow local people to get the status
+ACL allow localhost
+ACL allow 192.168.0.0 192.168.255.255
+</Stream>
+@end example
+
+then the server will post a page with the status information when
+the special stream @file{status.html} is requested.
+
+@section What can this do?
+
+When properly configured and running, you can capture video and audio in real
+time from a suitable capture card, and stream it out over the Internet to
+either Windows Media Player or RealAudio player (with some restrictions).
+
+It can also stream from files, though that is currently broken. Very often, a
+web server can be used to serve up the files just as well.
+
+It can stream prerecorded video from .ffm files, though it is somewhat tricky
+to make it work correctly.
+
+@section What do I need?
+
+I use Linux on a 900 MHz Duron with a cheapo Bt848 based TV capture card. I'm
+using stock Linux 2.4.17 with the stock drivers. [Actually that isn't true,
+I needed some special drivers for my motherboard-based sound card.]
+
+I understand that FreeBSD systems work just fine as well.
+
+@section How do I make it work?
+
+First, build the kit. It *really* helps to have LAME installed first. Then, when
+you run ./configure, make sure that you have the
+@code{--enable-libmp3lame} flag turned on.
+
+LAME is important as it allows for streaming audio to Windows Media Player.
+Don't ask why the other audio types do not work.
+
+As a simple test, just run the following two command lines where INPUTFILE
+is some file which you can decode with ffmpeg:
+
+@example
+./ffserver -f doc/ffserver.conf &
+./ffmpeg -i INPUTFILE http://localhost:8090/feed1.ffm
+@end example
+
+At this point you should be able to go to your Windows machine and fire up
+Windows Media Player (WMP). Go to Open URL and enter
+
+@example
+ http://<linuxbox>:8090/test.asf
+@end example
+
+You should (after a short delay) see video and hear audio.
+
+WARNING: trying to stream test1.mpg doesn't work with WMP as it tries to
+transfer the entire file before starting to play.
+The same is true of AVI files.
+
+@section What happens next?
+
+You should edit the ffserver.conf file to suit your needs (in terms of
+frame rates etc). Then install ffserver and ffmpeg, write a script to start
+them up, and off you go.
+
+@section Troubleshooting
+
+@subsection I don't hear any audio, but video is fine.
+
+Maybe you didn't install LAME, or got your ./configure statement wrong. Check
+the ffmpeg output to see if a line referring to MP3 is present. If not, then
+your configuration was incorrect. If it is, then maybe your wiring is not
+set up correctly. Maybe the sound card is not getting data from the right
+input source. Maybe you have a really awful audio interface (like I do)
+that only captures in stereo and also requires that one channel be flipped.
+If you are one of these people, then export 'AUDIO_FLIP_LEFT=1' before
+starting ffmpeg.
+
+@subsection The audio and video lose sync after a while.
+
+Yes, they do.
+
+@subsection After a long while, the video update rate goes way down in WMP.
+
+Yes, it does. Who knows why?
+
+@subsection WMP 6.4 behaves differently to WMP 7.
+
+Yes, it does. Any thoughts on this would be gratefully received. These
+differences extend to embedding WMP into a web page. [There are two
+object IDs that you can use: The old one, which does not play well, and
+the new one, which does (both tested on the same system). However,
+I suspect that the new one is not available unless you have installed WMP 7].
+
+@section What else can it do?
+
+You can replay video from .ffm files that was recorded earlier.
+However, there are a number of caveats, including the fact that the
+ffserver parameters must match the original parameters used to record the
+file. If they do not, then ffserver deletes the file before recording into it.
+(Now that I write this, it seems broken).
+
+You can fiddle with many of the codec choices and encoding parameters, and
+there are a bunch more parameters that you cannot control. Post a message
+to the mailing list if there are some 'must have' parameters. Look in
+ffserver.conf for a list of the currently available controls.
+
+It will automatically generate the ASX or RAM files that are often used
+in browsers. These files are actually redirections to the underlying ASF
+or RM file. The reason for this is that the browser often fetches the
+entire file before starting up the external viewer. The redirection files
+are very small and can be transferred quickly. [The stream itself is
+often 'infinite' and thus the browser tries to download it and never
+finishes.]
+
+@section Tips
+
+* When you connect to a live stream, most players (WMP, RA, etc) want to
+buffer a certain number of seconds of material so that they can display the
+signal continuously. However, ffserver (by default) starts sending data
+in realtime. This means that there is a pause of a few seconds while the
+buffering is being done by the player. The good news is that this can be
+cured by adding a '?buffer=5' to the end of the URL. This means that the
+stream should start 5 seconds in the past -- and so the first 5 seconds
+of the stream are sent as fast as the network will allow. It will then
+slow down to real time. This noticeably improves the startup experience.
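+
+For example, reusing the URL from the earlier WMP example:
+@example
+ http://<linuxbox>:8090/test.asf?buffer=5
+@end example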
+
+You can also add a 'Preroll 15' statement into the ffserver.conf that will
+add the 15 second prebuffering on all requests that do not otherwise
+specify a time. In addition, ffserver will skip frames until a key_frame
+is found. This further reduces the startup delay by not transferring data
+that will be discarded.
+
+* You may want to adjust the MaxBandwidth in the ffserver.conf to limit
+the amount of bandwidth consumed by live streams.
+
+@section Why does the ?buffer / Preroll stop working after a time?
+
+It turns out that (on my machine at least) the number of frames successfully
+grabbed is marginally less than the number that ought to be grabbed. This
+means that the timestamp in the encoded data stream gets behind realtime.
+This means that if you say 'Preroll 10', then when the stream gets 10
+or more seconds behind, there is no Preroll left.
+
+Fixing this requires a change in the internals of how timestamps are
+handled.
+
+@section Does the @code{?date=} stuff work?
+
+Yes (subject to the limitation outlined above). Also note that whenever you
+start ffserver, it deletes the ffm file (if any parameters have changed),
+thus wiping out what you had recorded before.
+
+The format of the @code{?date=xxxxxx} is fairly flexible. You should use one
+of the following formats (the 'T' is literal):
+
+@example
+* YYYY-MM-DDTHH:MM:SS (localtime)
+* YYYY-MM-DDTHH:MM:SSZ (UTC)
+@end example
+
+You can omit the YYYY-MM-DD, and then it refers to the current day. However
+note that @samp{?date=16:00:00} refers to 16:00 on the current day -- this
+may be in the future and so is unlikely to be useful.
+
+You use this by adding the ?date= to the end of the URL for the stream.
+For example: @samp{http://localhost:8080/test.asf?date=2002-07-26T23:05:00}.
+@c man end
+
+@chapter Options
+@c man begin OPTIONS
+
+@include fftools-common-opts.texi
+
+@section Main options
+
+@table @option
+@item -f @var{configfile}
+Use @file{configfile} instead of @file{/etc/ffserver.conf}.
+@item -n
+Enable no-launch mode. This option disables all the Launch directives
+within the various <Stream> sections. FFserver will not launch any
+ffmpeg instance, so you will have to launch them manually.
+@item -d
+Enable debug mode. This option increases log verbosity, directs log
+messages to stdout and causes ffserver to run in the foreground
+rather than as a daemon.
+@end table
+@c man end
+
+@ignore
+
+@setfilename ffserver
+@settitle FFserver video server
+
+@c man begin SEEALSO
+
+ffmpeg(1), ffplay(1), ffprobe(1), the @file{ffmpeg/doc/ffserver.conf}
+example and the FFmpeg HTML documentation
+@c man end
+
+@c man begin AUTHORS
+The FFmpeg developers
+@c man end
+
+@end ignore
+
+@bye
diff --git a/lib/ffmpeg/doc/ffserver.conf b/lib/ffmpeg/doc/ffserver.conf
new file mode 100644
index 0000000000..217117005c
--- /dev/null
+++ b/lib/ffmpeg/doc/ffserver.conf
@@ -0,0 +1,377 @@
+# Port on which the server is listening. You must select a different
+# port from your standard HTTP web server if it is running on the same
+# computer.
+Port 8090
+
+# Address on which the server is bound. Only useful if you have
+# several network interfaces.
+BindAddress 0.0.0.0
+
+# Number of simultaneous HTTP connections that can be handled. It has
+# to be defined *before* the MaxClients parameter, since it defines the
+# MaxClients maximum limit.
+MaxHTTPConnections 2000
+
+# Number of simultaneous requests that can be handled. Since FFServer
+# is very fast, it is more likely that you will want to leave this high
+# and use MaxBandwidth, below.
+MaxClients 1000
+
+# This is the maximum amount of kbit/sec that you are prepared to
+# consume when streaming to clients.
+MaxBandwidth 1000
+
+# Access log file (uses standard Apache log file format)
+# '-' is the standard output.
+CustomLog -
+
+# Suppress that if you want to launch ffserver as a daemon.
+NoDaemon
+
+
+##################################################################
+# Definition of the live feeds. Each live feed contains one video
+# and/or audio sequence coming from an ffmpeg encoder or another
+# ffserver. This sequence may be encoded simultaneously with several
+# codecs at several resolutions.
+
+<Feed feed1.ffm>
+
+# You must use 'ffmpeg' to send a live feed to ffserver. In this
+# example, you can type:
+#
+# ffmpeg -i INPUTFILE http://localhost:8090/feed1.ffm
+
+# ffserver can also do time shifting. It means that it can stream any
+# previously recorded live stream. The request should contain:
+# "http://xxxx?date=[YYYY-MM-DDT][[HH:]MM:]SS[.m...]".You must specify
+# a path where the feed is stored on disk. You also specify the
+# maximum size of the feed, where zero means unlimited. Default:
+# File=/tmp/feed_name.ffm FileMaxSize=5M
+File /tmp/feed1.ffm
+FileMaxSize 200K
+
+# You could specify
+# ReadOnlyFile /saved/specialvideo.ffm
+# This marks the file as readonly and it will not be deleted or updated.
+
+# Specify launch in order to start ffmpeg automatically.
+# First ffmpeg must be defined with an appropriate path if needed,
+# after that options can follow, but avoid adding the http:// field
+#Launch ffmpeg
+
+# Only allow connections from localhost to the feed.
+ACL allow 127.0.0.1
+
+</Feed>
+
+
+##################################################################
+# Now you can define each stream which will be generated from the
+# original audio and video stream. Each format has a filename (here
+# 'test1.mpg'). FFServer will send this stream when answering a
+# request containing this filename.
+
+<Stream test1.mpg>
+
+# coming from live feed 'feed1'
+Feed feed1.ffm
+
+# Format of the stream : you can choose among:
+# mpeg : MPEG-1 multiplexed video and audio
+# mpegvideo : only MPEG-1 video
+# mp2 : MPEG-2 audio (use AudioCodec to select layer 2 and 3 codec)
+# ogg : Ogg format (Vorbis audio codec)
+# rm : RealNetworks-compatible stream. Multiplexed audio and video.
+# ra : RealNetworks-compatible stream. Audio only.
+# mpjpeg : Multipart JPEG (works with Netscape without any plugin)
+# jpeg : Generate a single JPEG image.
+# asf : ASF compatible streaming (Windows Media Player format).
+# swf : Macromedia Flash compatible stream
+# avi : AVI format (MPEG-4 video, MPEG audio sound)
+Format mpeg
+
+# Bitrate for the audio stream. Codecs usually support only a few
+# different bitrates.
+AudioBitRate 32
+
+# Number of audio channels: 1 = mono, 2 = stereo
+AudioChannels 1
+
+# Sampling frequency for audio. When using low bitrates, you should
+# lower this frequency to 22050 or 11025. The supported frequencies
+# depend on the selected audio codec.
+AudioSampleRate 44100
+
+# Bitrate for the video stream
+VideoBitRate 64
+
+# Ratecontrol buffer size
+VideoBufferSize 40
+
+# Number of frames per second
+VideoFrameRate 3
+
+# Size of the video frame: WxH (default: 160x128)
+# The following abbreviations are defined: sqcif, qcif, cif, 4cif, qqvga,
+# qvga, vga, svga, xga, uxga, qxga, sxga, qsxga, hsxga, wvga, wxga, wsxga,
+# wuxga, woxga, wqsxga, wquxga, whsxga, whuxga, cga, ega, hd480, hd720,
+# hd1080
+VideoSize 160x128
+
+# Transmit only intra frames (useful for low bitrates, but kills frame rate).
+#VideoIntraOnly
+
+# If non-intra only, an intra frame is transmitted every VideoGopSize
+# frames. Video synchronization can only begin at an intra frame.
+VideoGopSize 12
+
+# More MPEG-4 parameters
+# VideoHighQuality
+# Video4MotionVector
+
+# Choose your codecs:
+#AudioCodec mp2
+#VideoCodec mpeg1video
+
+# Suppress audio
+#NoAudio
+
+# Suppress video
+#NoVideo
+
+#VideoQMin 3
+#VideoQMax 31
+
+# Set this to the number of seconds backwards in time to start. Note that
+# most players will buffer 5-10 seconds of video, and also you need to allow
+# for a keyframe to appear in the data stream.
+#Preroll 15
+
+# ACL:
+
+# You can allow ranges of addresses (or single addresses)
+#ACL ALLOW <first address> <last address>
+
+# You can deny ranges of addresses (or single addresses)
+#ACL DENY <first address> <last address>
+
+# You can repeat the ACL allow/deny as often as you like. It is on a per
+# stream basis. The first match defines the action. If there are no matches,
+# then the default is the inverse of the last ACL statement.
+#
+# Thus 'ACL allow localhost' only allows access from localhost.
+# 'ACL deny 1.0.0.0 1.255.255.255' would deny the whole of network 1 and
+# allow everybody else.
+
+</Stream>
+
+
+##################################################################
+# Example streams
+
+
+# Multipart JPEG
+
+#<Stream test.mjpg>
+#Feed feed1.ffm
+#Format mpjpeg
+#VideoFrameRate 2
+#VideoIntraOnly
+#NoAudio
+#Strict -1
+#</Stream>
+
+
+# Single JPEG
+
+#<Stream test.jpg>
+#Feed feed1.ffm
+#Format jpeg
+#VideoFrameRate 2
+#VideoIntraOnly
+##VideoSize 352x240
+#NoAudio
+#Strict -1
+#</Stream>
+
+
+# Flash
+
+#<Stream test.swf>
+#Feed feed1.ffm
+#Format swf
+#VideoFrameRate 2
+#VideoIntraOnly
+#NoAudio
+#</Stream>
+
+
+# ASF compatible
+
+<Stream test.asf>
+Feed feed1.ffm
+Format asf
+VideoFrameRate 15
+VideoSize 352x240
+VideoBitRate 256
+VideoBufferSize 40
+VideoGopSize 30
+AudioBitRate 64
+StartSendOnKey
+</Stream>
+
+
+# MP3 audio
+
+#<Stream test.mp3>
+#Feed feed1.ffm
+#Format mp2
+#AudioCodec mp3
+#AudioBitRate 64
+#AudioChannels 1
+#AudioSampleRate 44100
+#NoVideo
+#</Stream>
+
+
+# Ogg Vorbis audio
+
+#<Stream test.ogg>
+#Feed feed1.ffm
+#Title "Stream title"
+#AudioBitRate 64
+#AudioChannels 2
+#AudioSampleRate 44100
+#NoVideo
+#</Stream>
+
+
+# Real with audio only at 32 kbits
+
+#<Stream test.ra>
+#Feed feed1.ffm
+#Format rm
+#AudioBitRate 32
+#NoVideo
+#NoAudio
+#</Stream>
+
+
+# Real with audio and video at 64 kbits
+
+#<Stream test.rm>
+#Feed feed1.ffm
+#Format rm
+#AudioBitRate 32
+#VideoBitRate 128
+#VideoFrameRate 25
+#VideoGopSize 25
+#NoAudio
+#</Stream>
+
+
+##################################################################
+# A stream coming from a file: you only need to set the input
+# filename and optionally a new format. Supported conversions:
+# AVI -> ASF
+
+#<Stream file.rm>
+#File "/usr/local/httpd/htdocs/tlive.rm"
+#NoAudio
+#</Stream>
+
+#<Stream file.asf>
+#File "/usr/local/httpd/htdocs/test.asf"
+#NoAudio
+#Author "Me"
+#Copyright "Super MegaCorp"
+#Title "Test stream from disk"
+#Comment "Test comment"
+#</Stream>
+
+
+##################################################################
+# RTSP examples
+#
+# You can access this stream with the RTSP URL:
+# rtsp://localhost:5454/test1-rtsp.mpg
+#
+# A non-standard RTSP redirector is also created. Its URL is:
+# http://localhost:8090/test1-rtsp.rtsp
+
+#<Stream test1-rtsp.mpg>
+#Format rtp
+#File "/usr/local/httpd/htdocs/test1.mpg"
+#</Stream>
+
+
+# Transcode an incoming live feed to another live feed,
+# using libx264 and video presets
+
+#<Stream live.h264>
+#Format rtp
+#Feed feed1.ffm
+#VideoCodec libx264
+#VideoFrameRate 24
+#VideoBitRate 100
+#VideoSize 480x272
+#AVPresetVideo default
+#AVPresetVideo baseline
+#AVOptionVideo flags +global_header
+#
+#AudioCodec libfaac
+#AudioBitRate 32
+#AudioChannels 2
+#AudioSampleRate 22050
+#AVOptionAudio flags +global_header
+#</Stream>
+
+##################################################################
+# SDP/multicast examples
+#
+# If you want to send your stream in multicast, you must set the
+# multicast address with MulticastAddress. The port and the TTL can
+# also be set.
+#
+# An SDP file is automatically generated by ffserver by adding the
+# 'sdp' extension to the stream name (here
+# http://localhost:8090/test1-sdp.sdp). You should usually give this
+# file to your player to play the stream.
+#
+# The 'NoLoop' option can be used to avoid looping when the stream is
+# terminated.
+
+#<Stream test1-sdp.mpg>
+#Format rtp
+#File "/usr/local/httpd/htdocs/test1.mpg"
+#MulticastAddress 224.124.0.1
+#MulticastPort 5000
+#MulticastTTL 16
+#NoLoop
+#</Stream>
+
+
+##################################################################
+# Special streams
+
+# Server status
+
+<Stream stat.html>
+Format status
+
+# Only allow local people to get the status
+ACL allow localhost
+ACL allow 192.168.0.0 192.168.255.255
+
+#FaviconURL http://pond1.gladstonefamily.net:8080/favicon.ico
+</Stream>
+
+
+# Redirect index.html to the appropriate site
+
+<Redirect index.html>
+URL http://www.ffmpeg.org/
+</Redirect>
+
+
diff --git a/lib/ffmpeg/doc/fftools-common-opts.texi b/lib/ffmpeg/doc/fftools-common-opts.texi
new file mode 100644
index 0000000000..618441e045
--- /dev/null
+++ b/lib/ffmpeg/doc/fftools-common-opts.texi
@@ -0,0 +1,89 @@
+All the numerical options, unless specified otherwise, accept as input
+a string representing a number, which may contain one of the
+International System number postfixes, for example 'K', 'M', or 'G'.
+If 'i' is appended after the postfix, powers of 2 are used instead of
+powers of 10. The 'B' postfix multiplies the value by 8, and can be
+appended after another postfix or used alone. This allows using, for
+example, 'KB', 'MiB', 'G' and 'B' as postfixes.
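+
+For example, under these rules the following postfixed strings stand for
+the indicated plain numbers (a purely illustrative sketch, independent of
+any particular option):
+
+@example
+1K   = 1000            1Ki  = 1024
+1M   = 1000000         1Mi  = 1048576
+1KB  = 8000            1KiB = 8192
+@end example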
+
+Options which do not take arguments are boolean options, and set the
+corresponding value to true. They can be set to false by prefixing
+the option name with "no"; for example, using "-nofoo" on the
+command line will set the boolean option named "foo" to false.
+
+@section Generic options
+
+These options are shared amongst the ff* tools.
+
+@table @option
+
+@item -L
+Show license.
+
+@item -h, -?, -help, --help
+Show help.
+
+@item -version
+Show version.
+
+@item -formats
+Show available formats.
+
+The fields preceding the format names have the following meanings:
+@table @samp
+@item D
+Decoding available
+@item E
+Encoding available
+@end table
+
+@item -codecs
+Show available codecs.
+
+The fields preceding the codec names have the following meanings:
+@table @samp
+@item D
+Decoding available
+@item E
+Encoding available
+@item V/A/S
+Video/audio/subtitle codec
+@item S
+Codec supports slices
+@item D
+Codec supports direct rendering
+@item T
+Codec can handle input truncated at random locations instead of only at frame boundaries
+@end table
+
+@item -bsfs
+Show available bitstream filters.
+
+@item -protocols
+Show available protocols.
+
+@item -filters
+Show available libavfilter filters.
+
+@item -pix_fmts
+Show available pixel formats.
+
+@item -loglevel @var{loglevel}
+Set the logging level used by the library.
+@var{loglevel} is a number or a string containing one of the following values:
+@table @samp
+@item quiet
+@item panic
+@item fatal
+@item error
+@item warning
+@item info
+@item verbose
+@item debug
+@end table
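+
+For example, to get more detailed log output from one of the ff* tools (a
+minimal sketch; any of the values listed above can be used):
+
+@example
+ffmpeg -loglevel verbose -i input.avi output.avi
+@end example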
+
+By default the program logs to stderr. If coloring is supported by the
+terminal, colors are used to mark errors and warnings. Log coloring
+can be disabled by setting the environment variable @env{NO_COLOR}.
+
+@end table
diff --git a/lib/ffmpeg/doc/filters.texi b/lib/ffmpeg/doc/filters.texi
new file mode 100644
index 0000000000..cd4364a786
--- /dev/null
+++ b/lib/ffmpeg/doc/filters.texi
@@ -0,0 +1,258 @@
+@chapter Video Filters
+@c man begin VIDEO FILTERS
+
+When you configure your FFmpeg build, you can disable any of the
+existing filters using --disable-filters.
+The configure output will show the video filters included in your
+build.
+
+Below is a description of the currently available video filters.
+
+@section crop
+
+Crop the input video to @var{x}:@var{y}:@var{width}:@var{height}.
+
+@example
+./ffmpeg -i in.avi -vf "crop=0:0:0:240" out.avi
+@end example
+
+@var{x} and @var{y} specify the position of the top-left corner of the
+output (non-cropped) area.
+
+The default value of @var{x} and @var{y} is 0.
+
+The @var{width} and @var{height} parameters specify the width and height
+of the output (non-cropped) area.
+
+A value of 0 is interpreted as the maximum possible size contained in
+the area delimited by the top-left corner at position x:y.
+
+For example the parameters:
+
+@example
+"crop=100:100:0:0"
+@end example
+
+will delimit the rectangle with the top-left corner placed at position
+100:100 and the bottom-right corner corresponding to the bottom-right
+corner of the input image.
+
+The default value of @var{width} and @var{height} is 0.
+
+@section format
+
+Convert the input video to one of the specified pixel formats.
+Libavfilter will try to pick one that is supported for the input to
+the next filter.
+
+The filter accepts a list of pixel format names, separated by ``:'',
+for example ``yuv420p:monow:rgb24''.
+
+The following command:
+
+@example
+./ffmpeg -i in.avi -vf "format=yuv420p" out.avi
+@end example
+
+will convert the input video to the format ``yuv420p''.
+
+@section noformat
+
+Force libavfilter not to use any of the specified pixel formats for the
+input to the next filter.
+
+The filter accepts a list of pixel format names, separated by ``:'',
+for example ``yuv420p:monow:rgb24''.
+
+The following command:
+
+@example
+./ffmpeg -i in.avi -vf "noformat=yuv420p, vflip" out.avi
+@end example
+
+will make libavfilter use a format different from ``yuv420p'' for the
+input to the vflip filter.
+
+@section null
+
+Pass the source unchanged to the output.
+
+@section pad
+
+Add padding to the input image, and place the original input at the
+given coordinates @var{x}, @var{y}.
+
+It accepts the following parameters:
+@var{width}:@var{height}:@var{x}:@var{y}:@var{color}.
+
+A description of the accepted parameters follows.
+
+@table @option
+@item width, height
+
+Specify the size of the output image with the padding added. If the
+value for @var{width} or @var{height} is 0, the corresponding input size
+is used for the output.
+
+The default value of @var{width} and @var{height} is 0.
+
+@item x, y
+
+Specify the offsets at which to place the input image in the padded area,
+with respect to the top-left border of the output image.
+
+The default value of @var{x} and @var{y} is 0.
+
+@item color
+
+Specify the color of the padded area. It can be the name of a color
+(case-insensitive match) or a 0xRRGGBB[AA] sequence.
+
+The default value of @var{color} is ``black''.
+
+@end table
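+
+For example (a sketch using the parameter order described above), the
+following pads the input to 640x480 and places it at offset 40:60 over a
+black background:
+
+@example
+./ffmpeg -i in.avi -vf "pad=640:480:40:60:black" out.avi
+@end example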
+
+@section pixdesctest
+
+Pixel format descriptor test filter, mainly useful for internal
+testing. The output video should be equal to the input video.
+
+For example:
+@example
+format=monow, pixdesctest
+@end example
+
+can be used to test the monowhite pixel format descriptor definition.
+
+@section scale
+
+Scale the input video to @var{width}:@var{height} and/or convert the image format.
+
+For example the command:
+
+@example
+./ffmpeg -i in.avi -vf "scale=200:100" out.avi
+@end example
+
+will scale the input video to a size of 200x100.
+
+If the input image format is different from the format requested by
+the next filter, the scale filter will convert the input to the
+requested format.
+
+If the value for @var{width} or @var{height} is 0, the respective input
+size is used for the output.
+
+If the value for @var{width} or @var{height} is -1, the scale filter will
+use, for the respective output size, a value that maintains the aspect
+ratio of the input image.
+
+The default value of @var{width} and @var{height} is 0.
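+
+For example (a sketch based on the behavior described above), to scale the
+input to a width of 320 pixels while preserving its aspect ratio:
+
+@example
+./ffmpeg -i in.avi -vf "scale=320:-1" out.avi
+@end example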
+
+@section slicify
+
+Pass the images of the input video on to the next video filter as
+multiple slices.
+
+@example
+./ffmpeg -i in.avi -vf "slicify=32" out.avi
+@end example
+
+The filter accepts the slice height as a parameter. If the parameter is
+not specified, it will use the default value of 16.
+
+Adding this at the beginning of filter chains should make filtering
+faster due to better use of the memory cache.
+
+@section unsharp
+
+Sharpen or blur the input video.
+
+It accepts the following parameters:
+@var{luma_msize_x}:@var{luma_msize_y}:@var{luma_amount}:@var{chroma_msize_x}:@var{chroma_msize_y}:@var{chroma_amount}
+
+Negative values for the amount will blur the input video, while positive
+values will sharpen. All parameters are optional and default to the
+equivalent of the string '5:5:1.0:0:0:0.0'.
+
+@table @option
+
+@item luma_msize_x
+Set the luma matrix horizontal size. It can be an integer between 3
+and 13; the default value is 5.
+
+@item luma_msize_y
+Set the luma matrix vertical size. It can be an integer between 3
+and 13; the default value is 5.
+
+@item luma_amount
+Set the luma effect strength. It can be a floating-point number between -2.0
+and 5.0; the default value is 1.0.
+
+@item chroma_msize_x
+Set the chroma matrix horizontal size. It can be an integer between 3
+and 13; the default value is 0.
+
+@item chroma_msize_y
+Set the chroma matrix vertical size. It can be an integer between 3
+and 13; the default value is 0.
+
+@item chroma_amount
+Set the chroma effect strength. It can be a floating-point number between -2.0
+and 5.0; the default value is 0.0.
+
+@end table
+
+@example
+# Strong luma sharpen effect parameters
+unsharp=7:7:2.5
+
+# Strong blur of both luma and chroma parameters
+unsharp=7:7:-2:7:7:-2
+
+# Use the default values with @command{ffmpeg}
+./ffmpeg -i in.avi -vf "unsharp" out.mp4
+@end example
+
+@section vflip
+
+Flip the input video vertically.
+
+@example
+./ffmpeg -i in.avi -vf "vflip" out.avi
+@end example
+
+@c man end VIDEO FILTERS
+
+@chapter Video Sources
+@c man begin VIDEO SOURCES
+
+Below is a description of the currently available video sources.
+
+@section nullsrc
+
+Null video source: it never returns images. It is mainly useful as a
+template, and to be employed in analysis / debugging tools.
+
+It accepts as an optional parameter a string of the form
+@var{width}:@var{height}, where @var{width} and @var{height} specify the size of
+the configured source.
+
+The default values of @var{width} and @var{height} are respectively 352
+and 288 (corresponding to the CIF size format).
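+
+For example (a sketch), a 640x480 null source can be requested in a filter
+graph description with:
+
+@example
+nullsrc=640:480
+@end example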
+
+@c man end VIDEO SOURCES
+
+@chapter Video Sinks
+@c man begin VIDEO SINKS
+
+Below is a description of the currently available video sinks.
+
+@section nullsink
+
+Null video sink: it does absolutely nothing with the input video. It is
+mainly useful as a template, and to be employed in analysis / debugging
+tools.
+
+@c man end VIDEO SINKS
+
diff --git a/lib/ffmpeg/doc/general.texi b/lib/ffmpeg/doc/general.texi
new file mode 100644
index 0000000000..0154b79654
--- /dev/null
+++ b/lib/ffmpeg/doc/general.texi
@@ -0,0 +1,1067 @@
+\input texinfo @c -*- texinfo -*-
+
+@settitle General Documentation
+@titlepage
+@sp 7
+@center @titlefont{General Documentation}
+@sp 3
+@end titlepage
+
+
+@chapter External libraries
+
+FFmpeg can be hooked up with a number of external libraries to add support
+for more formats. None of them are used by default; their use has to be
+explicitly requested by passing the appropriate flags to @file{./configure}.
+
+@section OpenCORE AMR
+
+FFmpeg can make use of the OpenCORE libraries for AMR-NB
+decoding/encoding and AMR-WB decoding.
+
+Go to @url{http://sourceforge.net/projects/opencore-amr/} and follow the instructions for
+installing the libraries. Then pass @code{--enable-libopencore-amrnb} and/or
+@code{--enable-libopencore-amrwb} to configure to enable the libraries.
+
+Note that OpenCORE is under the Apache License 2.0 (see
+@url{http://www.apache.org/licenses/LICENSE-2.0} for details), which is
+incompatible with the LGPL version 2.1 and GPL version 2. You have to
+upgrade FFmpeg's license to LGPL version 3 (or if you have enabled
+GPL components, GPL version 3) to use it.
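+
+A possible configure invocation enabling both libraries might therefore look
+like the following sketch (it assumes a switch for the license upgrade,
+here @code{--enable-version3}, is available in your configure):
+
+@example
+./configure --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb
+@end example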
+
+
+@chapter Supported File Formats and Codecs
+
+You can use the @code{-formats} and @code{-codecs} options to get an exhaustive list.
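+
+For example:
+
+@example
+ffmpeg -formats
+ffmpeg -codecs
+@end example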
+
+@section File Formats
+
+FFmpeg supports the following file formats through the @code{libavformat}
+library:
+
+@multitable @columnfractions .4 .1 .1 .4
+@item Name @tab Encoding @tab Decoding @tab Comments
+@item 4xm @tab @tab X
+ @tab 4X Technologies format, used in some games.
+@item 8088flex TMV @tab @tab X
+@item Adobe Filmstrip @tab X @tab X
+@item Audio IFF (AIFF) @tab X @tab X
+@item American Laser Games MM @tab @tab X
+ @tab Multimedia format used in games like Mad Dog McCree.
+@item 3GPP AMR @tab X @tab X
+@item ASF @tab X @tab X
+@item AVI @tab X @tab X
+@item AVISynth @tab @tab X
+@item AVS @tab @tab X
+ @tab Multimedia format used by the Creature Shock game.
+@item Beam Software SIFF @tab @tab X
+ @tab Audio and video format used in some games by Beam Software.
+@item Bethesda Softworks VID @tab @tab X
+ @tab Used in some games from Bethesda Softworks.
+@item Bink @tab @tab X
+ @tab Multimedia format used by many games.
+@item Brute Force & Ignorance @tab @tab X
+ @tab Used in the game Flash Traffic: City of Angels.
+@item Interplay C93 @tab @tab X
+ @tab Used in the game Cyberia from Interplay.
+@item Delphine Software International CIN @tab @tab X
+ @tab Multimedia format used by Delphine Software games.
+@item CD+G @tab @tab X
+ @tab Video format used by CD+G karaoke disks
+@item Core Audio Format @tab @tab X
+ @tab Apple Core Audio Format
+@item CRC testing format @tab X @tab
+@item Creative Voice @tab X @tab X
+ @tab Created for the Sound Blaster Pro.
+@item CRYO APC @tab @tab X
+ @tab Audio format used in some games by CRYO Interactive Entertainment.
+@item D-Cinema audio @tab X @tab X
+@item Deluxe Paint Animation @tab @tab X
+@item DV video @tab X @tab X
+@item DXA @tab @tab X
+ @tab This format is used in the non-Windows version of the Feeble Files
+ game and different game cutscenes repacked for use with ScummVM.
+@item Electronic Arts cdata @tab @tab X
+@item Electronic Arts Multimedia @tab @tab X
+ @tab Used in various EA games; files have extensions like WVE and UV2.
+@item FFM (FFserver live feed) @tab X @tab X
+@item Flash (SWF) @tab X @tab X
+@item Flash 9 (AVM2) @tab X @tab X
+ @tab Only embedded audio is decoded.
+@item FLI/FLC/FLX animation @tab @tab X
+ @tab .fli/.flc files
+@item Flash Video (FLV) @tab @tab X
+ @tab Macromedia Flash video files
+@item framecrc testing format @tab X @tab
+@item FunCom ISS @tab @tab X
+ @tab Audio format used in various games from FunCom like The Longest Journey.
+@item GIF Animation @tab X @tab
+@item GXF @tab X @tab X
+ @tab General eXchange Format SMPTE 360M, used by Thomson Grass Valley
+ playout servers.
+@item id Quake II CIN video @tab @tab X
+@item id RoQ @tab X @tab X
+ @tab Used in Quake III, Jedi Knight 2, other computer games.
+@item IEC61937 encapsulation @tab X @tab X
+@item IFF @tab @tab X
+ @tab Interchange File Format
+@item Interplay MVE @tab @tab X
+ @tab Format used in various Interplay computer games.
+@item IV8 @tab @tab X
+ @tab A format generated by IndigoVision 8000 video server.
+@item IVF (On2) @tab @tab X
+ @tab A format used by libvpx
+@item LMLM4 @tab @tab X
+ @tab Used by Linux Media Labs MPEG-4 PCI boards
+@item Matroska @tab X @tab X
+@item Matroska audio @tab X @tab
+@item MAXIS XA @tab @tab X
+ @tab Used in Sim City 3000; file extension .xa.
+@item MD Studio @tab @tab X
+@item Monkey's Audio @tab @tab X
+@item Motion Pixels MVI @tab @tab X
+@item MOV/QuickTime/MP4 @tab X @tab X
+ @tab 3GP, 3GP2, PSP, iPod variants supported
+@item MP2 @tab X @tab X
+@item MP3 @tab X @tab X
+@item MPEG-1 System @tab X @tab X
+ @tab muxed audio and video, VCD format supported
+@item MPEG-PS (program stream) @tab X @tab X
+ @tab also known as @code{VOB} file, SVCD and DVD format supported
+@item MPEG-TS (transport stream) @tab X @tab X
+ @tab also known as DVB Transport Stream
+@item MPEG-4 @tab X @tab X
+ @tab MPEG-4 is a variant of QuickTime.
+@item MIME multipart JPEG @tab X @tab
+@item MSN TCP webcam @tab @tab X
+ @tab Used by MSN Messenger webcam streams.
+@item MTV @tab @tab X
+@item Musepack @tab @tab X
+@item Musepack SV8 @tab @tab X
+@item Material eXchange Format (MXF) @tab X @tab X
+ @tab SMPTE 377M, used by D-Cinema, broadcast industry.
+@item Material eXchange Format (MXF), D-10 Mapping @tab X @tab X
+ @tab SMPTE 386M, D-10/IMX Mapping.
+@item NC camera feed @tab @tab X
+ @tab NC (AVIP NC4600) camera streams
+@item NTT TwinVQ (VQF) @tab @tab X
+ @tab Nippon Telegraph and Telephone Corporation TwinVQ.
+@item Nullsoft Streaming Video @tab @tab X
+@item NuppelVideo @tab @tab X
+@item NUT @tab X @tab X
+ @tab NUT Open Container Format
+@item Ogg @tab X @tab X
+@item TechnoTrend PVA @tab @tab X
+ @tab Used by TechnoTrend DVB PCI boards.
+@item QCP @tab @tab X
+@item raw ADTS (AAC) @tab X @tab X
+@item raw AC-3 @tab X @tab X
+@item raw Chinese AVS video @tab @tab X
+@item raw CRI ADX @tab X @tab X
+@item raw Dirac @tab X @tab X
+@item raw DNxHD @tab X @tab X
+@item raw DTS @tab X @tab X
+@item raw E-AC-3 @tab X @tab X
+@item raw FLAC @tab X @tab X
+@item raw GSM @tab @tab X
+@item raw H.261 @tab X @tab X
+@item raw H.263 @tab X @tab X
+@item raw H.264 @tab X @tab X
+@item raw Ingenient MJPEG @tab @tab X
+@item raw MJPEG @tab X @tab X
+@item raw MLP @tab @tab X
+@item raw MPEG @tab @tab X
+@item raw MPEG-1 @tab @tab X
+@item raw MPEG-2 @tab @tab X
+@item raw MPEG-4 @tab X @tab X
+@item raw NULL @tab X @tab
+@item raw video @tab X @tab X
+@item raw id RoQ @tab X @tab
+@item raw Shorten @tab @tab X
+@item raw TrueHD @tab X @tab X
+@item raw VC-1 @tab @tab X
+@item raw PCM A-law @tab X @tab X
+@item raw PCM mu-law @tab X @tab X
+@item raw PCM signed 8 bit @tab X @tab X
+@item raw PCM signed 16 bit big-endian @tab X @tab X
+@item raw PCM signed 16 bit little-endian @tab X @tab X
+@item raw PCM signed 24 bit big-endian @tab X @tab X
+@item raw PCM signed 24 bit little-endian @tab X @tab X
+@item raw PCM signed 32 bit big-endian @tab X @tab X
+@item raw PCM signed 32 bit little-endian @tab X @tab X
+@item raw PCM unsigned 8 bit @tab X @tab X
+@item raw PCM unsigned 16 bit big-endian @tab X @tab X
+@item raw PCM unsigned 16 bit little-endian @tab X @tab X
+@item raw PCM unsigned 24 bit big-endian @tab X @tab X
+@item raw PCM unsigned 24 bit little-endian @tab X @tab X
+@item raw PCM unsigned 32 bit big-endian @tab X @tab X
+@item raw PCM unsigned 32 bit little-endian @tab X @tab X
+@item raw PCM floating-point 32 bit big-endian @tab X @tab X
+@item raw PCM floating-point 32 bit little-endian @tab X @tab X
+@item raw PCM floating-point 64 bit big-endian @tab X @tab X
+@item raw PCM floating-point 64 bit little-endian @tab X @tab X
+@item RDT @tab @tab X
+@item REDCODE R3D @tab @tab X
+ @tab File format used by RED Digital cameras, contains JPEG 2000 frames and PCM audio.
+@item RealMedia @tab X @tab X
+@item Redirector @tab @tab X
+@item Renderware TeXture Dictionary @tab @tab X
+@item RL2 @tab @tab X
+ @tab Audio and video format used in some games by Entertainment Software Partners.
+@item RPL/ARMovie @tab @tab X
+@item RTMP @tab X @tab X
+ @tab Output is performed by publishing stream to RTMP server
+@item RTP @tab @tab X
+@item RTSP @tab X @tab X
+@item SDP @tab @tab X
+@item Sega FILM/CPK @tab @tab X
+ @tab Used in many Sega Saturn console games.
+@item Sierra SOL @tab @tab X
+ @tab .sol files used in Sierra Online games.
+@item Sierra VMD @tab @tab X
+ @tab Used in Sierra CD-ROM games.
+@item Smacker @tab @tab X
+ @tab Multimedia format used by many games.
+@item Sony OpenMG (OMA) @tab @tab X
+ @tab Audio format used in Sony Sonic Stage and Sony Vegas.
+@item Sony PlayStation STR @tab @tab X
+@item Sony Wave64 (W64) @tab @tab X
+@item SoX native format @tab X @tab X
+@item SUN AU format @tab X @tab X
+@item THP @tab @tab X
+ @tab Used on the Nintendo GameCube.
+@item Tiertex Limited SEQ @tab @tab X
+ @tab Tiertex .seq files used in the DOS CD-ROM version of the game Flashback.
+@item True Audio @tab @tab X
+@item VC-1 test bitstream @tab X @tab X
+@item WAV @tab X @tab X
+@item WavPack @tab @tab X
+@item WebM @tab X @tab X
+@item Wing Commander III movie @tab @tab X
+ @tab Multimedia format used in Origin's Wing Commander III computer game.
+@item Westwood Studios audio @tab @tab X
+ @tab Multimedia format used in Westwood Studios games.
+@item Westwood Studios VQA @tab @tab X
+ @tab Multimedia format used in Westwood Studios games.
+@item YUV4MPEG pipe @tab X @tab X
+@item Psygnosis YOP @tab @tab X
+@end multitable
+
+@code{X} means that encoding (resp. decoding) is supported.
+
+@section Image Formats
+
+FFmpeg can read and write images for each frame of a video sequence. The
+following image formats are supported:
+
+@multitable @columnfractions .4 .1 .1 .4
+@item Name @tab Encoding @tab Decoding @tab Comments
+@item .Y.U.V @tab X @tab X
+ @tab one raw file per component
+@item animated GIF @tab X @tab X
+ @tab Only uncompressed GIFs are generated.
+@item BMP @tab X @tab X
+ @tab Microsoft BMP image
+@item DPX @tab @tab X
+ @tab Digital Picture Exchange
+@item JPEG @tab X @tab X
+ @tab Progressive JPEG is not supported.
+@item JPEG 2000 @tab @tab E
+ @tab decoding supported through external library libopenjpeg
+@item JPEG-LS @tab X @tab X
+@item LJPEG @tab X @tab
+ @tab Lossless JPEG
+@item PAM @tab X @tab X
+ @tab PAM is a PNM extension with alpha support.
+@item PBM @tab X @tab X
+ @tab Portable BitMap image
+@item PCX @tab X @tab X
+ @tab PC Paintbrush
+@item PGM @tab X @tab X
+ @tab Portable GrayMap image
+@item PGMYUV @tab X @tab X
+ @tab PGM with U and V components in YUV 4:2:0
+@item PIC @tab @tab X
+ @tab Pictor/PC Paint
+@item PNG @tab X @tab X
+ @tab 2/4 bpp not supported yet
+@item PPM @tab X @tab X
+ @tab Portable PixelMap image
+@item PTX @tab @tab X
+ @tab V.Flash PTX format
+@item SGI @tab X @tab X
+ @tab SGI RGB image format
+@item Sun Rasterfile @tab @tab X
+ @tab Sun RAS image format
+@item TIFF @tab X @tab X
+  @tab YUV, JPEG and some extensions are not supported yet.
+@item Truevision Targa @tab X @tab X
+ @tab Targa (.TGA) image format
+@end multitable
+
+@code{X} means that encoding (resp. decoding) is supported.
+
+@code{E} means that support is provided through an external library.
+
+@section Video Codecs
+
+@multitable @columnfractions .4 .1 .1 .4
+@item Name @tab Encoding @tab Decoding @tab Comments
+@item 4X Movie @tab @tab X
+ @tab Used in certain computer games.
+@item 8088flex TMV @tab @tab X
+@item 8SVX exponential @tab @tab X
+@item 8SVX fibonacci @tab @tab X
+@item American Laser Games MM @tab @tab X
+ @tab Used in games like Mad Dog McCree.
+@item AMV Video @tab @tab X
+ @tab Used in Chinese MP3 players.
+@item Apple MJPEG-B @tab @tab X
+@item Apple QuickDraw @tab @tab X
+ @tab fourcc: qdrw
+@item Asus v1 @tab X @tab X
+ @tab fourcc: ASV1
+@item Asus v2 @tab X @tab X
+ @tab fourcc: ASV2
+@item ATI VCR1 @tab @tab X
+ @tab fourcc: VCR1
+@item ATI VCR2 @tab @tab X
+ @tab fourcc: VCR2
+@item Auravision Aura @tab @tab X
+@item Auravision Aura 2 @tab @tab X
+@item Autodesk Animator Flic video @tab @tab X
+@item Autodesk RLE @tab @tab X
+ @tab fourcc: AASC
+@item AVS (Audio Video Standard) video @tab @tab X
+ @tab Video encoding used by the Creature Shock game.
+@item Beam Software VB @tab @tab X
+@item Bethesda VID video @tab @tab X
+ @tab Used in some games from Bethesda Softworks.
+@item Bink Video @tab @tab X
+ @tab Support for version 'b' is missing.
+@item Brute Force & Ignorance @tab @tab X
+ @tab Used in the game Flash Traffic: City of Angels.
+@item C93 video @tab @tab X
+ @tab Codec used in Cyberia game.
+@item CamStudio @tab @tab X
+ @tab fourcc: CSCD
+@item CD+G @tab @tab X
+ @tab Video codec for CD+G karaoke disks
+@item Chinese AVS video @tab @tab X
+ @tab AVS1-P2, JiZhun profile
+@item Delphine Software International CIN video @tab @tab X
+ @tab Codec used in Delphine Software International games.
+@item Cinepak @tab @tab X
+@item Cirrus Logic AccuPak @tab @tab X
+ @tab fourcc: CLJR
+@item Creative YUV (CYUV) @tab @tab X
+@item Dirac @tab E @tab E
+ @tab supported through external libdirac/libschroedinger libraries
+@item Deluxe Paint Animation @tab @tab X
+@item DNxHD @tab X @tab X
+ @tab aka SMPTE VC3
+@item Duck TrueMotion 1.0 @tab @tab X
+ @tab fourcc: DUCK
+@item Duck TrueMotion 2.0 @tab @tab X
+ @tab fourcc: TM20
+@item DV (Digital Video) @tab X @tab X
+@item Feeble Files/ScummVM DXA @tab @tab X
+ @tab Codec originally used in Feeble Files game.
+@item Electronic Arts CMV video @tab @tab X
+ @tab Used in NHL 95 game.
+@item Electronic Arts Madcow video @tab @tab X
+@item Electronic Arts TGV video @tab @tab X
+@item Electronic Arts TGQ video @tab @tab X
+@item Electronic Arts TQI video @tab @tab X
+@item Escape 124 @tab @tab X
+@item FFmpeg video codec #1 @tab X @tab X
+ @tab experimental lossless codec (fourcc: FFV1)
+@item Flash Screen Video v1 @tab X @tab X
+ @tab fourcc: FSV1
+@item Flash Video (FLV) @tab X @tab X
+ @tab Sorenson H.263 used in Flash
+@item Fraps @tab @tab X
+@item H.261 @tab X @tab X
+@item H.263 / H.263-1996 @tab X @tab X
+@item H.263+ / H.263-1998 / H.263 version 2 @tab X @tab X
+@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 @tab E @tab X
+ @tab encoding supported through external library libx264
+@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (VDPAU acceleration) @tab E @tab X
+@item HuffYUV @tab X @tab X
+@item HuffYUV FFmpeg variant @tab X @tab X
+@item IBM Ultimotion @tab @tab X
+ @tab fourcc: ULTI
+@item id Cinematic video @tab @tab X
+ @tab Used in Quake II.
+@item id RoQ video @tab X @tab X
+ @tab Used in Quake III, Jedi Knight 2, other computer games.
+@item IFF ILBM @tab @tab X
+  @tab IFF interleaved bitmap
+@item IFF ByteRun1 @tab @tab X
+ @tab IFF run length encoded bitmap
+@item Intel H.263 @tab @tab X
+@item Intel Indeo 2 @tab @tab X
+@item Intel Indeo 3 @tab @tab X
+@item Intel Indeo 5 @tab @tab X
+@item Interplay C93 @tab @tab X
+ @tab Used in the game Cyberia from Interplay.
+@item Interplay MVE video @tab @tab X
+ @tab Used in Interplay .MVE files.
+@item Karl Morton's video codec @tab @tab X
+ @tab Codec used in Worms games.
+@item Kega Game Video (KGV1) @tab @tab X
+ @tab Kega emulator screen capture codec.
+@item LCL (LossLess Codec Library) MSZH @tab @tab X
+@item LCL (LossLess Codec Library) ZLIB @tab E @tab E
+@item LOCO @tab @tab X
+@item lossless MJPEG @tab X @tab X
+@item Microsoft RLE @tab @tab X
+@item Microsoft Video 1 @tab @tab X
+@item Mimic @tab @tab X
+ @tab Used in MSN Messenger Webcam streams.
+@item Miro VideoXL @tab @tab X
+ @tab fourcc: VIXL
+@item MJPEG (Motion JPEG) @tab X @tab X
+@item Motion Pixels video @tab @tab X
+@item MPEG-1 video @tab X @tab X
+@item MPEG-1/2 video XvMC (X-Video Motion Compensation) @tab @tab X
+@item MPEG-1/2 video (VDPAU acceleration) @tab @tab X
+@item MPEG-2 video @tab X @tab X
+@item MPEG-4 part 2 @tab X @tab X
+  @tab libxvidcore can be used alternatively for encoding.
+@item MPEG-4 part 2 Microsoft variant version 1 @tab X @tab X
+@item MPEG-4 part 2 Microsoft variant version 2 @tab X @tab X
+@item MPEG-4 part 2 Microsoft variant version 3 @tab X @tab X
+@item Nintendo Gamecube THP video @tab @tab X
+@item NuppelVideo/RTjpeg @tab @tab X
+ @tab Video encoding used in NuppelVideo files.
+@item On2 VP3 @tab @tab X
+ @tab still experimental
+@item On2 VP5 @tab @tab X
+ @tab fourcc: VP50
+@item On2 VP6 @tab @tab X
+ @tab fourcc: VP60,VP61,VP62
+@item VP8 @tab E @tab X
+ @tab fourcc: VP80, encoding supported through external library libvpx
+@item planar RGB @tab @tab X
+ @tab fourcc: 8BPS
+@item Q-team QPEG @tab @tab X
+ @tab fourccs: QPEG, Q1.0, Q1.1
+@item QuickTime 8BPS video @tab @tab X
+@item QuickTime Animation (RLE) video @tab X @tab X
+ @tab fourcc: 'rle '
+@item QuickTime Graphics (SMC) @tab @tab X
+ @tab fourcc: 'smc '
+@item QuickTime video (RPZA) @tab @tab X
+ @tab fourcc: rpza
+@item R210 Quicktime Uncompressed RGB 10-bit @tab @tab X
+@item Raw Video @tab X @tab X
+@item RealVideo 1.0 @tab X @tab X
+@item RealVideo 2.0 @tab X @tab X
+@item RealVideo 3.0 @tab @tab X
+ @tab still far from ideal
+@item RealVideo 4.0 @tab @tab X
+@item Renderware TXD (TeXture Dictionary) @tab @tab X
+ @tab Texture dictionaries used by the Renderware Engine.
+@item RL2 video @tab @tab X
+ @tab used in some games by Entertainment Software Partners
+@item Sierra VMD video @tab @tab X
+ @tab Used in Sierra VMD files.
+@item Smacker video @tab @tab X
+ @tab Video encoding used in Smacker.
+@item SMPTE VC-1 @tab @tab X
+@item Snow @tab X @tab X
+ @tab experimental wavelet codec (fourcc: SNOW)
+@item Sony PlayStation MDEC (Motion DECoder) @tab @tab X
+@item Sorenson Vector Quantizer 1 @tab X @tab X
+ @tab fourcc: SVQ1
+@item Sorenson Vector Quantizer 3 @tab @tab X
+ @tab fourcc: SVQ3
+@item Sunplus JPEG (SP5X) @tab @tab X
+ @tab fourcc: SP5X
+@item TechSmith Screen Capture Codec @tab @tab X
+ @tab fourcc: TSCC
+@item Theora @tab E @tab X
+ @tab encoding supported through external library libtheora
+@item Tiertex Limited SEQ video @tab @tab X
+ @tab Codec used in DOS CD-ROM FlashBack game.
+@item V210 Quicktime Uncompressed 4:2:2 10-bit @tab X @tab X
+@item VMware Screen Codec / VMware Video @tab @tab X
+ @tab Codec used in videos captured by VMware.
+@item Westwood Studios VQA (Vector Quantized Animation) video @tab @tab X
+@item Windows Media Video 7 @tab X @tab X
+@item Windows Media Video 8 @tab X @tab X
+@item Windows Media Video 9 @tab @tab X
+ @tab not completely working
+@item Wing Commander III / Xan @tab @tab X
+ @tab Used in Wing Commander III .MVE files.
+@item Winnov WNV1 @tab @tab X
+@item WMV7 @tab X @tab X
+@item YAMAHA SMAF @tab X @tab X
+@item Psygnosis YOP Video @tab @tab X
+@item ZLIB @tab X @tab X
+ @tab part of LCL, encoder experimental
+@item Zip Motion Blocks Video @tab X @tab X
+ @tab Encoder works only in PAL8.
+@end multitable
+
+@code{X} means that encoding (resp. decoding) is supported.
+
+@code{E} means that support is provided through an external library.
+
+@section Audio Codecs
+
+@multitable @columnfractions .4 .1 .1 .4
+@item Name @tab Encoding @tab Decoding @tab Comments
+@item 8SVX audio @tab @tab X
+@item AAC @tab E @tab X
+ @tab encoding supported through external library libfaac
+@item AC-3 @tab IX @tab X
+@item ADPCM 4X Movie @tab @tab X
+@item ADPCM CDROM XA @tab @tab X
+@item ADPCM Creative Technology @tab @tab X
+ @tab 16 -> 4, 8 -> 4, 8 -> 3, 8 -> 2
+@item ADPCM Electronic Arts @tab @tab X
+ @tab Used in various EA titles.
+@item ADPCM Electronic Arts Maxis CDROM XS @tab @tab X
+ @tab Used in Sim City 3000.
+@item ADPCM Electronic Arts R1 @tab @tab X
+@item ADPCM Electronic Arts R2 @tab @tab X
+@item ADPCM Electronic Arts R3 @tab @tab X
+@item ADPCM Electronic Arts XAS @tab @tab X
+@item ADPCM G.726 @tab X @tab X
+@item ADPCM IMA AMV @tab @tab X
+ @tab Used in AMV files
+@item ADPCM IMA Electronic Arts EACS @tab @tab X
+@item ADPCM IMA Electronic Arts SEAD @tab @tab X
+@item ADPCM IMA Funcom @tab @tab X
+@item ADPCM IMA QuickTime @tab X @tab X
+@item ADPCM IMA Loki SDL MJPEG @tab @tab X
+@item ADPCM IMA WAV @tab X @tab X
+@item ADPCM IMA Westwood @tab @tab X
+@item ADPCM ISS IMA @tab @tab X
+ @tab Used in FunCom games.
+@item ADPCM IMA Duck DK3 @tab @tab X
+ @tab Used in some Sega Saturn console games.
+@item ADPCM IMA Duck DK4 @tab @tab X
+ @tab Used in some Sega Saturn console games.
+@item ADPCM Microsoft @tab X @tab X
+@item ADPCM MS IMA @tab X @tab X
+@item ADPCM Nintendo Gamecube THP @tab @tab X
+@item ADPCM QT IMA @tab X @tab X
+@item ADPCM SEGA CRI ADX @tab X @tab X
+ @tab Used in Sega Dreamcast games.
+@item ADPCM Shockwave Flash @tab X @tab X
+@item ADPCM SMJPEG IMA @tab @tab X
+ @tab Used in certain Loki game ports.
+@item ADPCM Sound Blaster Pro 2-bit @tab @tab X
+@item ADPCM Sound Blaster Pro 2.6-bit @tab @tab X
+@item ADPCM Sound Blaster Pro 4-bit @tab @tab X
+@item ADPCM Westwood Studios IMA @tab @tab X
+ @tab Used in Westwood Studios games like Command and Conquer.
+@item ADPCM Yamaha @tab X @tab X
+@item AMR-NB @tab E @tab X
+ @tab encoding supported through external library libopencore-amrnb
+@item AMR-WB @tab @tab E
+ @tab decoding supported through external library libopencore-amrwb
+@item Apple lossless audio @tab X @tab X
+ @tab QuickTime fourcc 'alac'
+@item Atrac 1 @tab @tab X
+@item Atrac 3 @tab @tab X
+@item Bink Audio @tab @tab X
+ @tab Used in Bink and Smacker files in many games.
+@item Delphine Software International CIN audio @tab @tab X
+ @tab Codec used in Delphine Software International games.
+@item COOK @tab @tab X
+ @tab All versions except 5.1 are supported.
+@item DCA (DTS Coherent Acoustics) @tab @tab X
+@item DPCM id RoQ @tab X @tab X
+ @tab Used in Quake III, Jedi Knight 2, other computer games.
+@item DPCM Interplay @tab @tab X
+ @tab Used in various Interplay computer games.
+@item DPCM Sierra Online @tab @tab X
+ @tab Used in Sierra Online game audio files.
+@item DPCM Sol @tab @tab X
+@item DPCM Xan @tab @tab X
+ @tab Used in Origin's Wing Commander IV AVI files.
+@item DSP Group TrueSpeech @tab @tab X
+@item DV audio @tab @tab X
+@item Enhanced AC-3 @tab @tab X
+@item FLAC (Free Lossless Audio Codec) @tab X @tab IX
+@item G.729 @tab @tab X
+@item GSM @tab E @tab X
+ @tab encoding supported through external library libgsm
+@item GSM Microsoft variant @tab E @tab X
+ @tab encoding supported through external library libgsm
+@item IMC (Intel Music Coder) @tab @tab X
+@item MACE (Macintosh Audio Compression/Expansion) 3:1 @tab @tab X
+@item MACE (Macintosh Audio Compression/Expansion) 6:1 @tab @tab X
+@item MLP (Meridian Lossless Packing) @tab @tab X
+ @tab Used in DVD-Audio discs.
+@item Monkey's Audio @tab @tab X
+ @tab Only versions 3.97-3.99 are supported.
+@item MP1 (MPEG audio layer 1) @tab @tab IX
+@item MP2 (MPEG audio layer 2) @tab IX @tab IX
+@item MP3 (MPEG audio layer 3) @tab E @tab IX
+ @tab encoding supported through external library LAME, ADU MP3 and MP3onMP4 also supported
+@item MPEG-4 Audio Lossless Coding (ALS) @tab @tab X
+@item Musepack SV7 @tab @tab X
+@item Musepack SV8 @tab @tab X
+@item Nellymoser Asao @tab X @tab X
+@item PCM A-law @tab X @tab X
+@item PCM mu-law @tab X @tab X
+@item PCM 16-bit little-endian planar @tab @tab X
+@item PCM 32-bit floating point big-endian @tab X @tab X
+@item PCM 32-bit floating point little-endian @tab X @tab X
+@item PCM 64-bit floating point big-endian @tab X @tab X
+@item PCM 64-bit floating point little-endian @tab X @tab X
+@item PCM D-Cinema audio signed 24-bit @tab X @tab X
+@item PCM signed 8-bit @tab X @tab X
+@item PCM signed 16-bit big-endian @tab X @tab X
+@item PCM signed 16-bit little-endian @tab X @tab X
+@item PCM signed 24-bit big-endian @tab X @tab X
+@item PCM signed 24-bit little-endian @tab X @tab X
+@item PCM signed 32-bit big-endian @tab X @tab X
+@item PCM signed 32-bit little-endian @tab X @tab X
+@item PCM signed 16/20/24-bit big-endian in MPEG-TS @tab @tab X
+@item PCM unsigned 8-bit @tab X @tab X
+@item PCM unsigned 16-bit big-endian @tab X @tab X
+@item PCM unsigned 16-bit little-endian @tab X @tab X
+@item PCM unsigned 24-bit big-endian @tab X @tab X
+@item PCM unsigned 24-bit little-endian @tab X @tab X
+@item PCM unsigned 32-bit big-endian @tab X @tab X
+@item PCM unsigned 32-bit little-endian @tab X @tab X
+@item PCM Zork @tab X @tab X
+@item QCELP / PureVoice @tab @tab X
+@item QDesign Music Codec 2 @tab @tab X
+ @tab There are still some distortions.
+@item RealAudio 1.0 (14.4K) @tab X @tab X
+ @tab Real 14400 bit/s codec
+@item RealAudio 2.0 (28.8K) @tab @tab X
+ @tab Real 28800 bit/s codec
+@item RealAudio 3.0 (dnet) @tab IX @tab X
+ @tab Real low bitrate AC-3 codec
+@item RealAudio SIPR / ACELP.NET @tab @tab X
+@item Shorten @tab @tab X
+@item Sierra VMD audio @tab @tab X
+ @tab Used in Sierra VMD files.
+@item Smacker audio @tab @tab X
+@item Sonic @tab X @tab X
+ @tab experimental codec
+@item Sonic lossless @tab X @tab X
+ @tab experimental codec
+@item Speex @tab @tab E
+ @tab supported through external library libspeex
+@item True Audio (TTA) @tab @tab X
+@item TrueHD @tab @tab X
+ @tab Used in HD-DVD and Blu-Ray discs.
+@item TwinVQ (VQF flavor) @tab @tab X
+@item Vorbis @tab E @tab X
+ @tab A native but very primitive encoder exists.
+@item WavPack @tab @tab X
+@item Westwood Audio (SND1) @tab @tab X
+@item Windows Media Audio 1 @tab X @tab X
+@item Windows Media Audio 2 @tab X @tab X
+@item Windows Media Audio Pro @tab @tab X
+@item Windows Media Audio Voice @tab @tab X
+@end multitable
+
+@code{X} means that encoding (resp. decoding) is supported.
+
+@code{E} means that support is provided through an external library.
+
+@code{I} means that an integer-only version is available, too (ensures high
+performance on systems without hardware floating point support).
+
+@section Subtitle Formats
+
+@multitable @columnfractions .4 .1 .1 .1 .1
+@item Name @tab Muxing @tab Demuxing @tab Encoding @tab Decoding
+@item SSA/ASS @tab X @tab X
+@item DVB @tab X @tab X @tab X @tab X
+@item DVD @tab X @tab X @tab X @tab X
+@item PGS @tab @tab @tab @tab X
+@item XSUB @tab @tab @tab X @tab X
+@end multitable
+
+@code{X} means that the feature is supported.
+
+@section Network Protocols
+
+@multitable @columnfractions .4 .1
+@item Name @tab Support
+@item file @tab X
+@item Gopher @tab X
+@item HTTP @tab X
+@item MMS @tab X
+@item pipe @tab X
+@item RTP @tab X
+@item TCP @tab X
+@item UDP @tab X
+@end multitable
+
+@code{X} means that the protocol is supported.
+
+
+@section Input/Output Devices
+
+@multitable @columnfractions .4 .1 .1
+@item Name @tab Input @tab Output
+@item ALSA @tab X @tab X
+@item BKTR @tab X @tab
+@item DV1394 @tab X @tab
+@item JACK @tab X @tab
+@item LIBDC1394 @tab X @tab
+@item OSS @tab X @tab X
+@item Video4Linux @tab X @tab
+@item Video4Linux2 @tab X @tab
+@item VfW capture @tab X @tab
+@item X11 grabbing @tab X @tab
+@end multitable
+
+@code{X} means that input/output is supported.
+
+
+@chapter Platform Specific information
+
+@section DOS
+
+Using a cross-compiler is preferred for various reasons.
+
+@subsection DJGPP
+
+FFmpeg cannot be compiled because of broken system headers; add
+@code{--extra-cflags=-U__STRICT_ANSI__} to the configure options as a
+workaround.
+
+@section OS/2
+
+For information about compiling FFmpeg on OS/2 see
+@url{http://www.edm2.com/index.php/FFmpeg}.
+
+@section Unix-like
+
+Some parts of FFmpeg cannot be built with version 2.15 of the GNU
+assembler which is still provided by a few AMD64 distributions. To
+make sure your compiler really uses the required version of gas
+after a binutils upgrade, run:
+
+@example
+$(gcc -print-prog-name=as) --version
+@end example
+
+If not, then you should install a different compiler that has no
+hard-coded path to gas. In the worst case pass @code{--disable-asm}
+to configure.
+
+@subsection BSD
+
+BSD make will not build FFmpeg; you need to install and use GNU Make
+(@file{gmake}).
+
+@subsubsection FreeBSD
+
+FFmpeg will not compile out-of-the-box on FreeBSD due to broken system
+headers. Passing @code{--extra-cflags=-D__BSD_VISIBLE} to configure will
+work around the problem. This may have unexpected side effects, so use it
+at your own risk. If you care about FreeBSD, please make an attempt at
+getting the system headers fixed.
+
+@subsection (Open)Solaris
+
+GNU Make is required to build FFmpeg, so you have to invoke it as
+@file{gmake}; standard Solaris Make will not work. When building with a
+non-C99 front end (gcc, generic suncc), add either
+@code{--extra-libs=/usr/lib/values-xpg6.o} or
+@code{--extra-libs=/usr/lib/64/values-xpg6.o} to the configure options,
+since the libc is not C99-compliant by default. The probes performed by
+configure may raise an exception leading to the death of configure itself,
+due to a bug in the system shell. Simply invoke a different shell, such as
+bash, directly to work around this:
+
+@example
+bash ./configure
+@end example
+
+@subsection Darwin (MacOS X, iPhone)
+
+MacOS X on PowerPC or ARM (iPhone) requires a preprocessor from
+@url{http://github.com/yuvi/gas-preprocessor} to build the optimized
+assembler functions.
+
+@section Windows
+
+To get help and instructions for building FFmpeg under Windows, check out
+the FFmpeg Windows Help Forum at
+@url{http://ffmpeg.arrozcru.org/}.
+
+@subsection Native Windows compilation
+
+FFmpeg can be built to run natively on Windows using the MinGW tools. Install
+the latest versions of MSYS and MinGW from @url{http://www.mingw.org/}.
+You can find detailed installation
+instructions in the download section and the FAQ.
+
+FFmpeg does not build out-of-the-box with the packages the automated MinGW
+installer provides. It also requires coreutils to be installed and many other
+packages updated to the latest version. The minimum versions for some packages
+are listed below:
+
+@itemize
+@item bash 3.1
+@item msys-make 3.81-2 (note: not mingw32-make)
+@item w32api 3.13
+@item mingw-runtime 3.15
+@end itemize
+
+FFmpeg automatically passes @code{-fno-common} to the compiler to work around
+a GCC bug (see @url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37216}).
+
+Within the MSYS shell, configure and make with:
+
+@example
+./configure --enable-memalign-hack
+make
+make install
+@end example
+
+This will install @file{ffmpeg.exe} along with many other development files
+to @file{/usr/local}. You may specify another install path using the
+@code{--prefix} option in @file{configure}.
+
+Notes:
+
+@itemize
+
+@item Building natively using MSYS can be sped up by disabling implicit rules
+in the Makefile by calling @code{make -r} instead of plain @code{make}. This
+speedup is close to non-existent for normal one-off builds and is only
+noticeable when running make for a second time (for example in
+@code{make install}).
+
+@item In order to compile FFplay, you must have the MinGW development library
+of SDL. Get it from @url{http://www.libsdl.org}.
+Edit the @file{bin/sdl-config} script so that it points to the correct prefix
+where SDL was installed. Verify that @file{sdl-config} can be launched from
+the MSYS command line.
+
+@item By using @code{./configure --enable-shared} when configuring FFmpeg,
+you can build libavutil, libavcodec and libavformat as DLLs.
+
+@end itemize
+
+@subsection Microsoft Visual C++ compatibility
+
+As stated in the FAQ, FFmpeg will not compile under MSVC++. However, if you
+want to use the libav* libraries in your own applications, you can still
+compile those applications using MSVC++, but the libav* libraries you link
+to @emph{must} be built with MinGW. Note that you will not be able to debug
+inside the libav* libraries, since MSVC++ does not recognize the debug
+symbols generated by GCC.
+We strongly recommend that you move over from MSVC++ to the MinGW tools.
+
+This description of how to use the FFmpeg libraries with MSVC++ is based on
+Microsoft Visual C++ 2005 Express Edition. If you have a different version,
+you might have to modify the procedures slightly.
+
+@subsubsection Using static libraries
+
+The following assumes you have just built and installed FFmpeg in @file{/usr/local}.
+
+@enumerate
+
+@item Create a new console application ("File / New / Project") and then
+select "Win32 Console Application". On the appropriate page of the
+Application Wizard, uncheck the "Precompiled headers" option.
+
+@item Write the source code for your application, or, for testing, just
+copy the code from an existing sample application into the source file
+that MSVC++ has already created for you. For example, you can copy
+@file{libavformat/output-example.c} from the FFmpeg distribution.
+
+@item Open the "Project / Properties" dialog box. In the "Configuration"
+combo box, select "All Configurations" so that the changes you make will
+affect both debug and release builds. In the tree view on the left hand
+side, select "C/C++ / General", then edit the "Additional Include
+Directories" setting to contain the path where the FFmpeg includes were
+installed (i.e. @file{c:\msys\1.0\local\include}).
+Do not add MinGW's include directory here, or the include files will
+conflict with MSVC's.
+
+@item Still in the "Project / Properties" dialog box, select
+"Linker / General" from the tree view and edit the
+"Additional Library Directories" setting to contain the @file{lib}
+directory where FFmpeg was installed (i.e. @file{c:\msys\1.0\local\lib}),
+the directory where MinGW libs are installed (i.e. @file{c:\mingw\lib}),
+and the directory where MinGW's GCC libs are installed
+(i.e. @file{C:\mingw\lib\gcc\mingw32\4.2.1-sjlj}). Then select
+"Linker / Input" from the tree view, and add the files @file{libavformat.a},
+@file{libavcodec.a}, @file{libavutil.a}, @file{libmingwex.a},
+@file{libgcc.a}, and any other libraries you used (i.e. @file{libz.a})
+to the end of "Additional Dependencies".
+
+@item Now, select "C/C++ / Code Generation" from the tree view. Select
+"Debug" in the "Configuration" combo box. Make sure that "Runtime
+Library" is set to "Multi-threaded Debug DLL". Then, select "Release" in
+the "Configuration" combo box and make sure that "Runtime Library" is
+set to "Multi-threaded DLL".
+
+@item Click "OK" to close the "Project / Properties" dialog box.
+
+@item MSVC++ lacks some C99 header files that are fundamental for FFmpeg.
+Get msinttypes from @url{http://code.google.com/p/msinttypes/downloads/list}
+and install it in MSVC++'s include directory
+(i.e. @file{C:\Program Files\Microsoft Visual Studio 8\VC\include}).
+
+@item MSVC++ also does not understand the @code{inline} keyword used by
+FFmpeg, so you must add this line before @code{#include}ing libav*:
+@example
+#define inline _inline
+@end example
+
+@item Build your application; everything should work.
+
+@end enumerate
+
+@subsubsection Using shared libraries
+
+This is how to create DLL and LIB files that are compatible with MSVC++:
+
+@enumerate
+
+@item Add a call to @file{vcvars32.bat} (which sets up the environment
+variables for the Visual C++ tools) as the first line of @file{msys.bat}.
+The standard location for @file{vcvars32.bat} is
+@file{C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat},
+and the standard location for @file{msys.bat} is @file{C:\msys\1.0\msys.bat}.
+If this corresponds to your setup, add the following line as the first line
+of @file{msys.bat}:
+
+@example
+call "C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat"
+@end example
+
+Alternatively, you may start the @file{Visual Studio 2005 Command Prompt},
+and run @file{c:\msys\1.0\msys.bat} from there.
+
+@item Within the MSYS shell, run @code{lib.exe}. If you get a help message
+from @file{Microsoft (R) Library Manager}, your environment variables are
+set up correctly: the @file{Microsoft (R) Library Manager} is on the path
+and will be used by FFmpeg to create MSVC++-compatible import libraries.
+
+@item Build FFmpeg with
+
+@example
+./configure --enable-shared --enable-memalign-hack
+make
+make install
+@end example
+
+Your install path (@file{/usr/local/} by default) should now have the
+necessary DLL and LIB files under the @file{bin} directory.
+
+@end enumerate
+
+To use those files with MSVC++, do the same as you would do with
+the static libraries, as described above. But in Step 4,
+you should only need to add the directory where the LIB files are installed
+(i.e. @file{c:\msys\usr\local\bin}). This is not a typo; the LIB files are
+installed in the @file{bin} directory. And instead of adding @file{libxx.a}
+files, you should add @file{avcodec.lib}, @file{avformat.lib}, and
+@file{avutil.lib}. There should be no need for @file{libmingwex.a},
+@file{libgcc.a}, and @file{wsock32.lib}, nor any other external library
+statically linked into the DLLs. The @file{bin} directory contains a bunch
+of DLL files, but the ones that are actually used to run your application
+are the ones with a major version number in their filenames
+(i.e. @file{avcodec-51.dll}).
+
+@subsection Cross compilation for Windows with Linux
+
+You must use the MinGW cross compilation tools available at
+@url{http://www.mingw.org/}.
+
+Then configure FFmpeg with the following options:
+@example
+./configure --target-os=mingw32 --cross-prefix=i386-mingw32msvc-
+@end example
+(you can change the cross-prefix according to the prefix chosen for the
+MinGW tools).
+
+Then you can easily test FFmpeg with Wine
+(@url{http://www.winehq.com/}).
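+
+For example (a sketch, assuming the cross build produced @file{ffmpeg.exe} in
+the current directory):
+
+@example
+wine ./ffmpeg.exe -version
+@end example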
+
+@subsection Compilation under Cygwin
+
+Please use Cygwin 1.7.x, as the obsolete 1.5.x Cygwin versions lack
+llrint() in their C library.
+
+Install your Cygwin with all the "Base" packages, plus the
+following "Devel" ones:
+@example
+binutils, gcc4-core, make, subversion, mingw-runtime, texi2html
+@end example
+
+And the following "Utils" one:
+@example
+diffutils
+@end example
+
+Then run
+
+@example
+./configure --enable-static --disable-shared
+@end example
+
+to make a static build.
+
+The current @code{gcc4-core} package is buggy and needs this flag to build
+shared libraries:
+
+@example
+./configure --enable-shared --disable-static --extra-cflags=-fno-reorder-functions
+@end example
+
+If you want to build FFmpeg with additional libraries, download Cygwin
+"Devel" packages for Ogg and Vorbis from any Cygwin packages repository:
+@example
+libogg-devel, libvorbis-devel
+@end example
+
+These library packages are only available from Cygwin Ports
+(@url{http://sourceware.org/cygwinports/}):
+
+@example
+yasm, libSDL-devel, libdirac-devel, libfaac-devel, libfaad-devel, libgsm-devel,
+libmp3lame-devel, libschroedinger1.0-devel, speex-devel, libtheora-devel,
+libxvidcore-devel
+@end example
+
+The recommendation for libnut and x264 is to build them from source
+yourself, as they evolve too quickly for Cygwin Ports to be up to date.
+
+Cygwin 1.7.x has IPv6 support. You can add IPv6 to Cygwin 1.5.x by means
+of the @code{libgetaddrinfo-devel} package, available at Cygwin Ports.
+
+@subsection Crosscompilation for Windows under Cygwin
+
+With Cygwin you can create Windows binaries that do not need the cygwin1.dll.
+
+Just install your Cygwin as explained before, plus these additional
+"Devel" packages:
+@example
+gcc-mingw-core, mingw-runtime, mingw-zlib
+@end example
+
+and add some special flags to your configure invocation.
+
+For a static build run
+@example
+./configure --target-os=mingw32 --enable-memalign-hack --enable-static --disable-shared --extra-cflags=-mno-cygwin --extra-libs=-mno-cygwin
+@end example
+
+and for a build with shared libraries
+@example
+./configure --target-os=mingw32 --enable-memalign-hack --enable-shared --disable-static --extra-cflags=-mno-cygwin --extra-libs=-mno-cygwin
+@end example
+
+@bye
diff --git a/lib/ffmpeg/doc/issue_tracker.txt b/lib/ffmpeg/doc/issue_tracker.txt
new file mode 100644
index 0000000000..e5a74db001
--- /dev/null
+++ b/lib/ffmpeg/doc/issue_tracker.txt
@@ -0,0 +1,228 @@
+FFmpeg's bug/patch/feature request tracker manual
+=================================================
+
+NOTE: This is a draft.
+
+Overview:
+---------
+FFmpeg uses Roundup for tracking issues; new issues and changes to
+existing issues can be made through a web interface and through email.
+It is possible to subscribe to individual issues by adding yourself to the
+nosy list, or to subscribe to the ffmpeg-issues mailing list, which receives
+a mail for every change to every issue. Replies to such mails will also
+be properly added to the respective issue.
+(All of the above already works, after light testing.)
+The subscription URL for the ffmpeg-issues list is:
+http://live.polito/mailman/listinfo/ffmpeg-issues
+The URL of the webinterface of the tracker is:
+http(s)://roundup.ffmpeg/roundup/ffmpeg/
+Note that the URLs in this document are obfuscated: you must append the top
+level domain for non-profit organizations to the tracker URL, and that of
+Italy to the mailing list URL.
+
+Email Interface:
+----------------
+There is a mailing list to which all new issues and changes to existing issues
+are sent. You can subscribe through
+http://live.polito/mailman/listinfo/ffmpeg-issues
+Replies to messages there will have their text added to the specific issues.
+Attachments will be added as if they had been uploaded via the web interface.
+You can change the status, substatus, topic, ... by changing the subject in
+your reply like:
+Re: [issue94] register_avcodec and allcodecs.h [type=patch;status=open;substatus=approved]
+Roundup will then change things as you requested and remove the [...] from
+the subject before forwarding the mail to the mailing list.
+
+
+NOTE: issue = (bug report || patch || feature request)
+
+Type:
+-----
+bug
+ An error, flaw, mistake, failure, or fault in FFmpeg or libav* that
+ prevents it from behaving as intended.
+
+feature request
+ Request for support for encoding or decoding of a new codec, container
+ or variant.
+ Request for more, less or plainly different output or behavior
+ where the current implementation cannot be considered wrong.
+
+patch
+ A patch as generated by diff which conforms to the patch submission and
+ development policy.
+
+
+Priority:
+---------
+critical
+ Bugs and patches which deal with data loss and security issues.
+ No feature request can be critical.
+
+important
+ Bugs which make FFmpeg unusable for a significant number of users, and
+ patches fixing them.
+ Examples here might be completely broken MPEG-4 decoding or a build issue
+ on Linux.
+ While broken 4xm decoding or a broken OS/2 build would not be important,
+ the separation from normal is somewhat fuzzy.
+ For feature requests this priority would be used for things many people
+ want.
+
+normal
+
+
+minor
+ Bugs and patches about things like spelling errors, "mp2" instead of
+ "mp3" being shown and such.
+ Feature requests about things few people want or which do not make a big
+ difference.
+
+wish
+ Something that is desirable to have but for which there is no urgency at
+ all to implement, e.g. something completely cosmetic like a website
+ restyle or a personalized doxy template or the FFmpeg logo.
+ This priority is not valid for bugs.
+
+
+Status:
+-------
+new
+ initial state
+
+open
+ intermediate states
+
+closed
+ final state
+
+
+Type/Status/Substatus:
+----------------------
+*/new/new
+ Initial state of new bugs, patches and feature requests submitted by
+ users.
+
+*/open/open
+ Issues which have been briefly looked at and which did not look outright
+ invalid.
+ This implies that no more detailed state applies yet. Conversely,
+ the more detailed states below imply that the issue has been briefly
+ looked at.
+
+*/closed/duplicate
+ Bugs, patches or feature requests which are duplicates.
+ Note that patches dealing with the same thing in a different way are not
+ duplicates.
+ Note, if you mark something as duplicate, do not forget to set the
+ superseder so bug reports are properly linked.
+
+*/closed/invalid
+ Bugs caused by user errors, or random, unintelligible or otherwise nonsensical reports.
+
+*/closed/needs_more_info
+ Issues for which some information has been requested by the developers,
+ but which has not been provided by anyone within reasonable time.
+
+bug/open/reproduced
+ Bugs which have been reproduced.
+
+bug/open/analyzed
+ Bugs which have been analyzed and where it is understood what causes them
+ and which exact chain of events triggers them. This analysis should be
+ available as a message in the bug report.
+ Note, do not change the status to analyzed without also providing a clear
+ and understandable analysis.
+ This state implies that the bug either has been reproduced or that
+ reproduction is not needed as the bug is already understood.
+
+bug/open/needs_more_info
+ Bug reports which are incomplete and/or where more information is needed
+ from the submitter or another person who can provide it.
+ This state implies that the bug has not been analyzed or reproduced.
+ Note, the idea behind needs_more_info is to offload work from the
+ developers to the users whenever possible.
+
+bug/closed/fixed
+ Bugs which have to the best of our knowledge been fixed.
+
+bug/closed/wont_fix
+ Bugs which we will not fix. Possible reasons include legality, high
+ complexity for the sake of supporting obscure corner cases, speed loss
+ for similarly esoteric purposes, et cetera.
+ This also means that we would reject a patch.
+ If we are just too lazy to fix a bug then the correct state is open
+ and unassigned. Closed means that the case is closed, which is not
+ the case if we are just waiting for a patch.
+
+bug/closed/works_for_me
+ Bugs for which sufficient information was provided to reproduce but
+ reproduction failed - that is, the code seems to work correctly to the
+ best of our knowledge.
+
+patch/open/approved
+ Patches which have been reviewed and approved by a developer.
+ Such patches can be applied anytime by any other developer after some
+ reasonable testing (compile + regression tests + does the patch do
+ what the author claimed).
+
+patch/open/needs_changes
+ Patches which have been reviewed and need changes to be accepted.
+
+patch/closed/applied
+ Patches which have been applied.
+
+patch/closed/rejected
+ Patches which have been rejected.
+
+feature_request/open/needs_more_info
+ Feature requests where it is not clear what exactly is wanted
+ (these also could be closed as invalid ...).
+
+feature_request/closed/implemented
+ Feature requests which have been implemented.
+
+feature_request/closed/wont_implement
+ Feature requests which will not be implemented. The reasons here could
+ be legal, philosophical or others.
+
+Note, please do not use type-status-substatus combinations other than the
+above without asking on ffmpeg-dev first!
+
+Note2, if you provide the requested info, do not forget to remove the
+needs_more_info substatus.
+
+Topic:
+------
+A topic is a tag you should add to your issue in order to make grouping
+issues easier.
+
+avcodec
+ issues in libavcodec/*
+
+avformat
+ issues in libavformat/*
+
+avutil
+ issues in libavutil/*
+
+regression test
+ issues in tests/*
+
+ffmpeg
+ issues in or related to ffmpeg.c
+
+ffplay
+ issues in or related to ffplay.c
+
+ffserver
+ issues in or related to ffserver.c
+
+build system
+ issues in or related to configure/Makefile
+
+regression
+ bugs which were working in a past revision
+
+roundup
+ issues related to our issue tracker
diff --git a/lib/ffmpeg/doc/libavfilter.texi b/lib/ffmpeg/doc/libavfilter.texi
new file mode 100644
index 0000000000..8745928d40
--- /dev/null
+++ b/lib/ffmpeg/doc/libavfilter.texi
@@ -0,0 +1,104 @@
+\input texinfo @c -*- texinfo -*-
+
+@settitle Libavfilter Documentation
+@titlepage
+@sp 7
+@center @titlefont{Libavfilter Documentation}
+@sp 3
+@end titlepage
+
+
+@chapter Introduction
+
+Libavfilter is the filtering API of FFmpeg. It is the replacement for the
+now deprecated 'vhooks' and started as a Google Summer of Code project.
+
+Integrating libavfilter into the main FFmpeg repository is a work in
+progress. If you wish to try the unfinished development code of
+libavfilter then check it out from the libavfilter repository into
+some directory of your choice by:
+
+@example
+ svn checkout svn://svn.ffmpeg.org/soc/libavfilter
+@end example
+
+And then read the README file in the top directory to learn how to
+integrate it into ffmpeg and ffplay.
+
+But note that there may still be serious bugs in the code and its API
+and ABI should not be considered stable yet!
+
+@chapter Tutorial
+
+In libavfilter, it is possible for filters to have multiple inputs and
+multiple outputs.
+To illustrate the sorts of things that are possible, we can
+use a complex filter graph. For example, the following one:
+
+@example
+input --> split --> fifo -----------------------> overlay --> output
+ | ^
+ | |
+ +------> fifo --> crop --> vflip --------+
+@end example
+
+splits the stream into two streams, sends one stream through the crop filter
+and the vflip filter, and then merges it back with the other stream by
+overlaying it on top. You can use the following command to achieve this:
+
+@example
+./ffmpeg -i in.avi -s 240x320 -vf "[in] split [T1], fifo, [T2] overlay=0:240 [out]; [T1] fifo, crop=0:0:-1:240, vflip [T2]"
+@end example
+
+where in.avi has a vertical resolution of 480 pixels. The result will be
+that in the output the top half of the video is mirrored onto the bottom
+half.
+
+Video filters are loaded using the @var{-vf} option passed to
+ffmpeg or to ffplay. Filters in the same linear chain are separated by
+commas. In our example, @var{split, fifo, overlay} are in one linear
+chain, and @var{fifo, crop, vflip} are in another. The points where
+the linear chains join are labeled by names enclosed in square
+brackets. In our example, that is @var{[T1]} and @var{[T2]}. The magic
+labels @var{[in]} and @var{[out]} are the points where video is input
+and output.
+
+Some filters take a list of parameters as input: they are specified after
+the filter name and an equal sign, and are separated from each other by a
+colon, as in the crop=0:0:-1:240 filter used above.
+
+There exist so-called @var{source filters} that do not have a video
+input, and we expect in the future some @var{sink filters} that will
+not have video output.
+
+@chapter graph2dot
+
+The @file{graph2dot} program included in the FFmpeg @file{tools}
+directory can be used to parse a filter graph description and issue a
+corresponding textual representation in the dot language.
+
+Invoke the command:
+@example
+graph2dot -h
+@end example
+
+to see how to use @file{graph2dot}.
+
+You can then pass the dot description to the @file{dot} program (from
+the graphviz suite of programs) and obtain a graphical representation
+of the filter graph.
+
+For example the sequence of commands:
+@example
+echo @var{GRAPH_DESCRIPTION} | \
+tools/graph2dot -o graph.tmp && \
+dot -Tpng graph.tmp -o graph.png && \
+display graph.png
+@end example
+
+can be used to create and display an image representing the graph
+described by the @var{GRAPH_DESCRIPTION} string.
+
+@include filters.texi
+
+@bye
diff --git a/lib/ffmpeg/doc/optimization.txt b/lib/ffmpeg/doc/optimization.txt
new file mode 100644
index 0000000000..5469adc836
--- /dev/null
+++ b/lib/ffmpeg/doc/optimization.txt
@@ -0,0 +1,235 @@
+Optimization Tips (for libavcodec):
+===================================
+
+What to optimize:
+-----------------
+If you plan to do non-x86 architecture specific optimizations (SIMD normally),
+then take a look in the x86/ directory, as most important functions are
+already optimized for MMX.
+
+If you want to do x86 optimizations then you can either try to finetune the
+stuff in the x86 directory or find some other functions in the C source to
+optimize, but there aren't many left.
+
+
+Understanding these overoptimized functions:
+--------------------------------------------
+As many functions tend to be a bit difficult to understand because
+of optimizations, it can be hard to optimize them further, or write
+architecture-specific versions. It is recommended to look at older
+revisions of the interesting files (for a web frontend try ViewVC at
+http://svn.ffmpeg.org/ffmpeg/trunk/).
+Alternatively, look into the other architecture-specific versions in
+the x86/, ppc/, alpha/ subdirectories. Even if you don't exactly
+comprehend the instructions, it could help understanding the functions
+and how they can be optimized.
+
+NOTE: If you still don't understand some function, ask at our mailing list!!!
+(http://lists.mplayerhq.hu/mailman/listinfo/ffmpeg-devel)
+
+
+When is an optimization justified?
+----------------------------------
+Normally, clean and simple optimizations for widely used codecs are
+justified even if they only achieve an overall speedup of 0.1%. These
+speedups accumulate and can make a big difference after a while. Also, if
+none of the following factors get worse due to an optimization -- speed,
+binary code size, source size, source readability -- and at least one
+factor improves, then an optimization is always a good idea even if the
+overall gain is less than 0.1%. For obscure codecs that are not often
+used, the goal is more toward keeping the code clean, small, and
+readable instead of making it 1% faster.
+
+
+WTF is that function good for ....:
+-----------------------------------
+The primary purpose of this list is to avoid wasting time optimizing functions
+which are rarely used.
+
+put(_no_rnd)_pixels{,_x2,_y2,_xy2}
+ Used in motion compensation (en/decoding).
+
+avg_pixels{,_x2,_y2,_xy2}
+ Used in motion compensation of B-frames.
+ These are less important than the put*pixels functions.
+
+avg_no_rnd_pixels*
+ unused
+
+pix_abs16x16{,_x2,_y2,_xy2}
+ Used in motion estimation (encoding) with SAD.
+
+pix_abs8x8{,_x2,_y2,_xy2}
+ Used in motion estimation (encoding) with SAD of MPEG-4 4MV only.
+ These are less important than the pix_abs16x16* functions.
+
+put_mspel8_mc* / wmv2_mspel8*
+ Used only in WMV2.
+  It is not recommended that you waste your time with these, as WMV2
+ is an ugly and relatively useless codec.
+
+mpeg4_qpel* / *qpel_mc*
+ Used in MPEG-4 qpel motion compensation (encoding & decoding).
+ The qpel8 functions are used only for 4mv,
+ the avg_* functions are used only for B-frames.
+ Optimizing them should have a significant impact on qpel
+ encoding & decoding.
+
+qpel{8,16}_mc??_old_c / *pixels{8,16}_l4
+ Just used to work around a bug in an old libavcodec encoder version.
+ Don't optimize them.
+
+tpel_mc_func {put,avg}_tpel_pixels_tab
+ Used only for SVQ3, so only optimize them if you need fast SVQ3 decoding.
+
+add_bytes/diff_bytes
+ For huffyuv only, optimize if you want a faster ffhuffyuv codec.
+
+get_pixels / diff_pixels
+ Used for encoding, easy.
+
+clear_blocks
+ easiest to optimize
+
+gmc
+ Used for MPEG-4 gmc.
+ Optimizing this should have a significant effect on the gmc decoding
+ speed.
+
+gmc1
+ Used for chroma blocks in MPEG-4 gmc with 1 warp point
+ (there are 4 luma & 2 chroma blocks per macroblock, so
+ only 1/3 of the gmc blocks use this, the other 2/3
+ use the normal put_pixel* code, but only if there is
+ just 1 warp point).
+ Note: DivX5 gmc always uses just 1 warp point.
+
+pix_sum
+ Used for encoding.
+
+hadamard8_diff / sse / sad == pix_norm1 / dct_sad / quant_psnr / rd / bit
+  Specific compare functions used in encoding; which of these are used
+  depends upon the command line switches.
+ Don't waste your time with dct_sad & quant_psnr, they aren't
+ really useful.
+
+put_pixels_clamped / add_pixels_clamped
+ Used for en/decoding in the IDCT, easy.
+ Note, some optimized IDCTs have the add/put clamped code included and
+ then put_pixels_clamped / add_pixels_clamped will be unused.
+
+idct/fdct
+ idct (encoding & decoding)
+ fdct (encoding)
+ difficult to optimize
+
+dct_quantize_trellis
+ Used for encoding with trellis quantization.
+ difficult to optimize
+
+dct_quantize
+ Used for encoding.
+
+dct_unquantize_mpeg1
+ Used in MPEG-1 en/decoding.
+
+dct_unquantize_mpeg2
+ Used in MPEG-2 en/decoding.
+
+dct_unquantize_h263
+ Used in MPEG-4/H.263 en/decoding.
+
+FIXME remaining functions?
+BTW, most of these functions are in dsputil.c/.h, some are in mpegvideo.c/.h.
+
+
+
+Alignment:
+Some instructions on some architectures have strict alignment restrictions,
+for example most SSE/SSE2 instructions on x86.
+The minimum guaranteed alignment is written in the .h files, for example:
+ void (*put_pixels_clamped)(const DCTELEM *block/*align 16*/, UINT8 *pixels/*align 8*/, int line_size);
+
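+A small added sketch (not from the original notes) of one way to get such
+alignment in C, using the GCC/Clang aligned attribute; libavutil also
+provides a DECLARE_ALIGNED macro for the same purpose:
+
+#include <stdint.h>
+
+/* a coefficient block whose start address is 16-byte aligned, so it is
+   safe to access with aligned SSE/SSE2 loads and stores */
+static int16_t block[64] __attribute__((aligned(16)));
+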
+
+General Tips:
+-------------
+Use asm loops like:
+__asm__(
+ "1: ....
+ ...
+    "jump_instruction ....
+Do not use C loops:
+do{
+ __asm__(
+ ...
+}while()
+
+Use __asm__() instead of intrinsics. The latter requires a good optimizing compiler
+which gcc is not.
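+
+As a small added illustration (a sketch, not part of the original notes), a
+trivial x86 __asm__() loop written in the recommended style, with the loop
+branch kept inside the asm block instead of wrapping __asm__() in a C
+do/while loop:
+
+#include <stddef.h>
+#include <stdint.h>
+
+/* zero 'len' bytes at 'buf'; assumes len > 0; x86/x86-64, GCC syntax */
+static void zero_bytes(uint8_t *buf, size_t len)
+{
+    __asm__ volatile(
+        "1:                  \n\t"
+        "movb $0, (%0)       \n\t"   /* store one zero byte      */
+        "add  $1, %0         \n\t"   /* advance the pointer      */
+        "sub  $1, %1         \n\t"   /* decrement the counter    */
+        "jnz  1b             \n\t"   /* loop back while len != 0 */
+        : "+r"(buf), "+r"(len)
+        :
+        : "memory");
+}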
+
+
+Links:
+======
+http://www.aggregate.org/MAGIC/
+
+x86-specific:
+-------------
+http://developer.intel.com/design/pentium4/manuals/248966.htm
+
+The IA-32 Intel Architecture Software Developer's Manual, Volume 2:
+Instruction Set Reference
+http://developer.intel.com/design/pentium4/manuals/245471.htm
+
+http://www.agner.org/assem/
+
+AMD Athlon Processor x86 Code Optimization Guide:
+http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/22007.pdf
+
+
+ARM-specific:
+-------------
+ARM Architecture Reference Manual (up to ARMv5TE):
+http://www.arm.com/community/university/eulaarmarm.html
+
+Procedure Call Standard for the ARM Architecture:
+http://www.arm.com/pdfs/aapcs.pdf
+
+Optimization guide for ARM9E (used in Nokia 770 Internet Tablet):
+http://infocenter.arm.com/help/topic/com.arm.doc.ddi0240b/DDI0240A.pdf
+Optimization guide for ARM11 (used in Nokia N800 Internet Tablet):
+http://infocenter.arm.com/help/topic/com.arm.doc.ddi0211j/DDI0211J_arm1136_r1p5_trm.pdf
+Optimization guide for Intel XScale (used in Sharp Zaurus PDA):
+http://download.intel.com/design/intelxscale/27347302.pdf
+Intel Wireless MMX2 Coprocessor: Programmers Reference Manual
+http://download.intel.com/design/intelxscale/31451001.pdf
+
+PowerPC-specific:
+-----------------
+PowerPC32/AltiVec PIM:
+www.freescale.com/files/32bit/doc/ref_manual/ALTIVECPEM.pdf
+
+PowerPC32/AltiVec PEM:
+www.freescale.com/files/32bit/doc/ref_manual/ALTIVECPIM.pdf
+
+CELL/SPU:
+http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/30B3520C93F437AB87257060006FFE5E/$file/Language_Extensions_for_CBEA_2.4.pdf
+http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/9F820A5FFA3ECE8C8725716A0062585F/$file/CBE_Handbook_v1.1_24APR2007_pub.pdf
+
+SPARC-specific:
+---------------
+SPARC Joint Programming Specification (JPS1): Commonality
+http://www.fujitsu.com/downloads/PRMPWR/JPS1-R1.0.4-Common-pub.pdf
+
+UltraSPARC III Processor User's Manual (contains instruction timings)
+http://www.sun.com/processors/manuals/USIIIv2.pdf
+
+VIS Whitepaper (contains optimization guidelines)
+http://www.sun.com/processors/vis/download/vis/vis_whitepaper.pdf
+
+GCC asm links:
+--------------
+official doc but quite ugly
+http://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
+
+a bit old (note "+" is valid for input-output, even though the next disagrees)
+http://www.cs.virginia.edu/~clc5q/gcc-inline-asm.pdf
diff --git a/lib/ffmpeg/doc/rate_distortion.txt b/lib/ffmpeg/doc/rate_distortion.txt
new file mode 100644
index 0000000000..a7d2c878b2
--- /dev/null
+++ b/lib/ffmpeg/doc/rate_distortion.txt
@@ -0,0 +1,61 @@
+A Quick Description Of Rate Distortion Theory.
+
+We want to encode a video, picture or piece of music optimally. What does
+"optimally" really mean? It means that we want to get the best quality at a
+given filesize OR we want to get the smallest filesize at a given quality
+(in practice, these 2 goals are usually the same).
+
+Solving this directly is not practical; trying all byte sequences 1
+megabyte in length and selecting the "best looking" sequence will yield
+256^1000000 cases to try.
+
+But first, a word about quality, which is also called distortion.
+Distortion can be quantified by almost any quality measurement one chooses.
+Commonly, the sum of squared differences is used but more complex methods
+that consider psychovisual effects can be used as well. It makes no
+difference in this discussion.
+
+
+First step: that rate distortion factor called lambda...
+Let's consider the problem of minimizing:
+
+ distortion + lambda*rate
+
+rate is the filesize
+distortion is the quality
+lambda is a fixed value chosen as a tradeoff between quality and filesize
+Is this equivalent to finding the best quality for a given max
+filesize? The answer is yes. For each filesize limit there is some lambda
+factor for which minimizing the above will get you the best quality (using
+your chosen quality measurement) at the desired (or lower) filesize.
+
+
+Second step: splitting the problem.
+Directly splitting the problem of finding the best quality at a given
+filesize is hard because we do not know how many bits from the total
+filesize should be allocated to each of the subproblems. But the formula
+from above:
+
+ distortion + lambda*rate
+
+can be trivially split. Consider:
+
+ (distortion0 + distortion1) + lambda*(rate0 + rate1)
+
+This creates a problem made of 2 independent subproblems. The subproblems
+might be 2 16x16 macroblocks in a frame of 32x16 size. To minimize:
+
+ (distortion0 + distortion1) + lambda*(rate0 + rate1)
+
+we just have to minimize:
+
+ distortion0 + lambda*rate0
+
+and
+
+ distortion1 + lambda*rate1
+
+I.e., the two problems can be solved independently.
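+
+As an added illustration (a sketch, not part of the original text), picking
+the best of several candidate encodings of one block by minimizing
+distortion + lambda*rate; dist[] and bits[] are hypothetical per-candidate
+measurements supplied by the caller:
+
+/* returns the index of the candidate with the lowest rate-distortion cost */
+static int rd_pick(const double *dist, const double *bits, int n, double lambda)
+{
+    int i, best = 0;
+    double best_cost = dist[0] + lambda * bits[0];
+
+    for (i = 1; i < n; i++) {
+        double cost = dist[i] + lambda * bits[i];
+        if (cost < best_cost) {
+            best_cost = cost;
+            best      = i;
+        }
+    }
+    return best;
+}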
+
+Author: Michael Niedermayer
+Copyright: LGPL
diff --git a/lib/ffmpeg/doc/snow.txt b/lib/ffmpeg/doc/snow.txt
new file mode 100644
index 0000000000..f99133971c
--- /dev/null
+++ b/lib/ffmpeg/doc/snow.txt
@@ -0,0 +1,630 @@
+=============================================
+Snow Video Codec Specification Draft 20080110
+=============================================
+
+Introduction:
+=============
+This specification describes the Snow bitstream syntax and semantics as
+well as the formal Snow decoding process.
+
+The decoding process is described precisely and any compliant decoder
+MUST produce the exact same output for a spec-conformant Snow stream.
+For encoding, though, any process which generates a stream compliant to
+the syntactical and semantic requirements and which is decodable by
+the process described in this spec shall be considered a conformant
+Snow encoder.
+
+Definitions:
+============
+
+MUST the specific part must be done to conform to this standard
+SHOULD it is recommended to be done that way, but not strictly required
+
+ilog2(x) is the rounded-down base-2 logarithm of x
+ilog2(0) = 0
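+
+A minimal added sketch (for illustration only, not normative) of ilog2() in C:
+
+static int ilog2(unsigned int x)
+{
+    int n = 0;
+    while (x > 1) {
+        x >>= 1;
+        n++;
+    }
+    return n;   /* ilog2(0) == ilog2(1) == 0 */
+}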
+
+Type definitions:
+=================
+
+b 1-bit range coded
+u unsigned scalar value range coded
+s signed scalar value range coded
+
+
+Bitstream syntax:
+=================
+
+frame:
+ header
+ prediction
+ residual
+
+header:
+ keyframe b MID_STATE
+ if(keyframe || always_reset)
+ reset_contexts
+ if(keyframe){
+ version u header_state
+ always_reset b header_state
+ temporal_decomposition_type u header_state
+ temporal_decomposition_count u header_state
+ spatial_decomposition_count u header_state
+ colorspace_type u header_state
+ chroma_h_shift u header_state
+ chroma_v_shift u header_state
+ spatial_scalability b header_state
+ max_ref_frames-1 u header_state
+ qlogs
+ }
+ if(!keyframe){
+ update_mc b header_state
+ if(update_mc){
+ for(plane=0; plane<2; plane++){
+ diag_mc b header_state
+ htaps/2-1 u header_state
+ for(i= p->htaps/2; i; i--)
+ |hcoeff[i]| u header_state
+ }
+ }
+ update_qlogs b header_state
+ if(update_qlogs){
+ spatial_decomposition_count u header_state
+ qlogs
+ }
+ }
+
+ spatial_decomposition_type s header_state
+ qlog s header_state
+ mv_scale s header_state
+ qbias s header_state
+ block_max_depth s header_state
+
+qlogs:
+ for(plane=0; plane<2; plane++){
+ quant_table[plane][0][0] s header_state
+ for(level=0; level < spatial_decomposition_count; level++){
+ quant_table[plane][level][1]s header_state
+ quant_table[plane][level][3]s header_state
+ }
+ }
+
+reset_contexts
+ *_state[*]= MID_STATE
+
+prediction:
+ for(y=0; y<block_count_vertical; y++)
+ for(x=0; x<block_count_horizontal; x++)
+ block(0)
+
+block(level):
+ mvx_diff=mvy_diff=y_diff=cb_diff=cr_diff=0
+ if(keyframe){
+ intra=1
+ }else{
+ if(level!=max_block_depth){
+ s_context= 2*left->level + 2*top->level + topleft->level + topright->level
+ leaf b block_state[4 + s_context]
+ }
+ if(level==max_block_depth || leaf){
+ intra b block_state[1 + left->intra + top->intra]
+ if(intra){
+ y_diff s block_state[32]
+ cb_diff s block_state[64]
+ cr_diff s block_state[96]
+ }else{
+ ref_context= ilog2(2*left->ref) + ilog2(2*top->ref)
+ if(ref_frames > 1)
+ ref u block_state[128 + 1024 + 32*ref_context]
+ mx_context= ilog2(2*abs(left->mx - top->mx))
+ my_context= ilog2(2*abs(left->my - top->my))
+ mvx_diff s block_state[128 + 32*(mx_context + 16*!!ref)]
+ mvy_diff s block_state[128 + 32*(my_context + 16*!!ref)]
+ }
+ }else{
+ block(level+1)
+ block(level+1)
+ block(level+1)
+ block(level+1)
+ }
+ }
+
+
+residual:
+ residual2(luma)
+ residual2(chroma_cr)
+ residual2(chroma_cb)
+
+residual2:
+ for(level=0; level<spatial_decomposition_count; level++){
+ if(level==0)
+ subband(LL, 0)
+ subband(HL, level)
+ subband(LH, level)
+ subband(HH, level)
+ }
+
+subband:
+ FIXME
+
+
+
+Tag description:
+----------------
+
+version
+ 0
+ this MUST NOT change within a bitstream
+
+always_reset
+ if 1 then the range coder contexts will be reset after each frame
+
+temporal_decomposition_type
+ 0
+
+temporal_decomposition_count
+ 0
+
+spatial_decomposition_count
+ FIXME
+
+colorspace_type
+ 0
+ this MUST NOT change within a bitstream
+
+chroma_h_shift
+ log2(luma.width / chroma.width)
+ this MUST NOT change within a bitstream
+
+chroma_v_shift
+ log2(luma.height / chroma.height)
+ this MUST NOT change within a bitstream
+
+spatial_scalability
+ 0
+
+max_ref_frames
+ maximum number of reference frames
+ this MUST NOT change within a bitstream
+
+update_mc
+ indicates that motion compensation filter parameters are stored in the
+ header
+
+diag_mc
+ flag to enable faster diagonal interpolation
+ this SHOULD be 1 unless it turns out to be covered by a valid patent
+
+htaps
+ number of half pel interpolation filter taps, MUST be even, >0 and <10
+
+hcoeff
+ half pel interpolation filter coefficients, hcoeff[0] are the 2 middle
+ coefficients [1] are the next outer ones and so on, resulting in a filter
+ like: ...eff[2], hcoeff[1], hcoeff[0], hcoeff[0], hcoeff[1], hcoeff[2] ...
+ the sign of the coefficients is not explicitly stored but alternates
+ after each coeff and coeff[0] is positive, so ...,+,-,+,-,+,+,-,+,-,+,...
+ hcoeff[0] is not explicitly stored but found by subtracting the sum
+ of all stored coefficients with signs from 32
+ hcoeff[0]= 32 - hcoeff[1] - hcoeff[2] - ...
+ a good choice for hcoeff and htaps is
+ htaps= 6
+ hcoeff={40,-10,2}
+ an alternative which requires more computations at both encoder and
+ decoder side and may or may not be better is
+ htaps= 8
+ hcoeff={42,-14,6,-2}
+
+
+ref_frames
+ minimum of the number of available reference frames and max_ref_frames
+ for example the first frame after a key frame always has ref_frames=1
+
+spatial_decomposition_type
+ wavelet type
+ 0 is a 9/7 symmetric compact integer wavelet
+ 1 is a 5/3 symmetric compact integer wavelet
+ others are reserved
+ stored as delta from last, last is reset to 0 if always_reset || keyframe
+
+qlog
+  quality (logarithmic quantizer scale)
+ stored as delta from last, last is reset to 0 if always_reset || keyframe
+
+mv_scale
+ stored as delta from last, last is reset to 0 if always_reset || keyframe
+ FIXME check that everything works fine if this changes between frames
+
+qbias
+ dequantization bias
+ stored as delta from last, last is reset to 0 if always_reset || keyframe
+
+block_max_depth
+ maximum depth of the block tree
+ stored as delta from last, last is reset to 0 if always_reset || keyframe
+
+quant_table
+  quantization table
+
+
+High-level bitstream structure:
+===============================
+ --------------------------------------------
+| Header |
+ --------------------------------------------
+| ------------------------------------ |
+| | Block0 | |
+| | split? | |
+| | yes no | |
+| | ......... intra? | |
+| | : Block01 : yes no | |
+| | : Block02 : ....... .......... | |
+| | : Block03 : : y DC : : ref index: | |
+| | : Block04 : : cb DC : : motion x : | |
+| | ......... : cr DC : : motion y : | |
+| | ....... .......... | |
+| ------------------------------------ |
+| ------------------------------------ |
+| | Block1 | |
+| ... |
+ --------------------------------------------
+| ------------ ------------ ------------ |
+|| Y subbands | | Cb subbands| | Cr subbands||
+|| --- --- | | --- --- | | --- --- ||
+|| |LL0||HL0| | | |LL0||HL0| | | |LL0||HL0| ||
+|| --- --- | | --- --- | | --- --- ||
+|| --- --- | | --- --- | | --- --- ||
+|| |LH0||HH0| | | |LH0||HH0| | | |LH0||HH0| ||
+|| --- --- | | --- --- | | --- --- ||
+|| --- --- | | --- --- | | --- --- ||
+|| |HL1||LH1| | | |HL1||LH1| | | |HL1||LH1| ||
+|| --- --- | | --- --- | | --- --- ||
+|| --- --- | | --- --- | | --- --- ||
+|| |HH1||HL2| | | |HH1||HL2| | | |HH1||HL2| ||
+|| ... | | ... | | ... ||
+| ------------ ------------ ------------ |
+ --------------------------------------------
+
+Decoding process:
+=================
+
+ ------------
+ | |
+ | Subbands |
+ ------------ | |
+ | | ------------
+ | Intra DC | |
+ | | LL0 subband prediction
+ ------------ |
+ \ Dequantizaton
+ ------------------- \ |
+| Reference frames | \ IDWT
+| ------- ------- | Motion \ |
+||Frame 0| |Frame 1|| Compensation . OBMC v -------
+| ------- ------- | --------------. \------> + --->|Frame n|-->output
+| ------- ------- | -------
+||Frame 2| |Frame 3||<----------------------------------/
+| ... |
+ -------------------
+
+
+Range Coder:
+============
+
+Binary Range Coder:
+-------------------
+The implemented range coder is an adapted version based upon "Range encoding:
+an algorithm for removing redundancy from a digitised message." by G. N. N.
+Martin.
+The symbols encoded by the Snow range coder are bits (0|1). The
+associated probabilities are not fixed but change depending on the symbol mix
+seen so far.
+
+
+bit seen | new state
+---------+-----------------------------------------------
+ 0 | 256 - state_transition_table[256 - old_state];
+ 1 | state_transition_table[ old_state];
+
+state_transition_table = {
+ 0, 0, 0, 0, 0, 0, 0, 0, 20, 21, 22, 23, 24, 25, 26, 27,
+ 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 37, 38, 39, 40, 41, 42,
+ 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 56, 57,
+ 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
+ 74, 75, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88,
+ 89, 90, 91, 92, 93, 94, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103,
+104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 114, 115, 116, 117, 118,
+119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 133,
+134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
+150, 151, 152, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164,
+165, 166, 167, 168, 169, 170, 171, 171, 172, 173, 174, 175, 176, 177, 178, 179,
+180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 190, 191, 192, 194, 194,
+195, 196, 197, 198, 199, 200, 201, 202, 202, 204, 205, 206, 207, 208, 209, 209,
+210, 211, 212, 213, 215, 215, 216, 217, 218, 219, 220, 220, 222, 223, 224, 225,
+226, 227, 227, 229, 229, 230, 231, 232, 234, 234, 235, 236, 237, 238, 239, 240,
+241, 242, 243, 244, 245, 246, 247, 248, 248, 0, 0, 0, 0, 0, 0, 0};
+
+FIXME
+
+
+Range Coding of integers:
+-------------------------
+FIXME
+
+
+Neighboring Blocks:
+===================
+left and top are set to the respective blocks unless they are outside of
+the image in which case they are set to the Null block
+
+top-left is set to the top left block unless it is outside of the image in
+which case it is set to the left block
+
+if this block has no larger parent block or it is at the left side of its
+parent block and the top right block is not outside of the image then the
+top right block is used for top-right else the top-left block is used
+
+Null block
+y,cb,cr are 128
+level, ref, mx and my are 0
+
+
+Motion Vector Prediction:
+=========================
+1. the motion vectors of all the neighboring blocks are scaled to
+compensate for the difference of reference frames
+
+scaled_mv= (mv * (256 * (current_reference+1) / (mv.reference+1)) + 128)>>8
+
+2. the median of the scaled left, top and top-right vectors is used as
+motion vector prediction
+
+3. the used motion vector is the sum of the predictor and
+ (mvx_diff, mvy_diff)*mv_scale
+
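+A small non-normative sketch (added for illustration) of steps 1-3 for one
+motion vector component:
+
+/* median of three values */
+static int mid3(int a, int b, int c)
+{
+    int mx = a > b ? a : b;
+    int mn = a < b ? a : b;
+    if (c > mx) return mx;
+    if (c < mn) return mn;
+    return c;
+}
+
+/* step 1: scale a neighbouring MV component to the current reference */
+static int scale_mv(int mv, int mv_ref, int cur_ref)
+{
+    return (mv * (256 * (cur_ref + 1) / (mv_ref + 1)) + 128) >> 8;
+}
+
+/* steps 2 and 3, shown for the x component:
+   pred_mx = mid3(scale_mv(left->mx,     left->ref,     ref),
+                  scale_mv(top->mx,      top->ref,      ref),
+                  scale_mv(topright->mx, topright->ref, ref));
+   mx      = pred_mx + mvx_diff * mv_scale;                     */
+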
+
+Intra DC Prediction:
+====================
+the luma and chroma values of the left block are used as predictors
+
+the used luma and chroma is the sum of the predictor and y_diff, cb_diff, cr_diff
+to reverse this in the decoder apply the following:
+block[y][x].dc[0] = block[y][x-1].dc[0] + y_diff;
+block[y][x].dc[1] = block[y][x-1].dc[1] + cb_diff;
+block[y][x].dc[2] = block[y][x-1].dc[2] + cr_diff;
+block[*][-1].dc[*]= 128;
+
+
+Motion Compensation:
+====================
+
+Halfpel interpolation:
+----------------------
+halfpel interpolation is done by convolution with the halfpel filter stored
+in the header:
+
+horizontal halfpel samples are found by
+H1[y][x] = hcoeff[0]*(F[y][x ] + F[y][x+1])
+ + hcoeff[1]*(F[y][x-1] + F[y][x+2])
+ + hcoeff[2]*(F[y][x-2] + F[y][x+3])
+ + ...
+h1[y][x] = (H1[y][x] + 32)>>6;
+
+vertical halfpel samples are found by
+H2[y][x] = hcoeff[0]*(F[y ][x] + F[y+1][x])
+ + hcoeff[1]*(F[y-1][x] + F[y+2][x])
+ + ...
+h2[y][x] = (H2[y][x] + 32)>>6;
+
+vertical+horizontal halfpel samples are found by
+H3[y][x] = hcoeff[0]*(H2[y][x ] + H2[y][x+1])
+ + hcoeff[1]*(H2[y][x-1] + H2[y][x+2])
+ + ...
+H3[y][x] = hcoeff[0]*(H1[y  ][x] + H1[y+1][x])
+         + hcoeff[1]*(H1[y-1][x] + H1[y+2][x])
+ + ...
+h3[y][x] = (H3[y][x] + 2048)>>12;
+
+
+ F H1 F
+ | | |
+ | | |
+ | | |
+ F H1 F
+ | | |
+ | | |
+ | | |
+ F-------F-------F-> H1<-F-------F-------F
+ v v v
+ H2 H3 H2
+ ^ ^ ^
+ F-------F-------F-> H1<-F-------F-------F
+ | | |
+ | | |
+ | | |
+ F H1 F
+ | | |
+ | | |
+ | | |
+ F H1 F
+
+
+unavailable fullpel samples (outside the picture for example) shall be equal
+to the closest available fullpel sample
+
+
+Smaller pel interpolation:
+--------------------------
+if diag_mc is set then points which lie on a line between 2 vertically,
+horizontally or diagonally adjacent halfpel points shall be interpolated
+linearly with rounding to nearest and halfway values rounded up.
+points which lie on 2 diagonals at the same time should only use the one
+diagonal not containing the fullpel point
+
+
+
+ F-->O---q---O<--h1->O---q---O<--F
+ v \ / v \ / v
+ O O O O O O O
+ | / | \ |
+ q q q q q
+ | / | \ |
+ O O O O O O O
+ ^ / \ ^ / \ ^
+ h2-->O---q---O<--h3->O---q---O<--h2
+ v \ / v \ / v
+ O O O O O O O
+ | \ | / |
+ q q q q q
+ | \ | / |
+ O O O O O O O
+ ^ / \ ^ / \ ^
+ F-->O---q---O<--h1->O---q---O<--F
+
+
+
+the remaining points shall be bilinearly interpolated from the
+up to 4 surrounding halfpel and fullpel points, again rounding should be to
+nearest and halfway values rounded up
+
+compliant Snow decoders MUST support 1-1/8 pel luma and 1/2-1/16 pel chroma
+interpolation at least
+
+
+Overlapped block motion compensation:
+-------------------------------------
+FIXME
+
+LL band prediction:
+===================
+Each sample in the LL0 subband is predicted by the median of the left, top and
+left+top-topleft samples, samples outside the subband shall be considered to
+be 0. To reverse this prediction in the decoder apply the following.
+for(y=0; y<height; y++){
+ for(x=0; x<width; x++){
+ sample[y][x] += median(sample[y-1][x],
+ sample[y][x-1],
+ sample[y-1][x]+sample[y][x-1]-sample[y-1][x-1]);
+ }
+}
+sample[-1][*]=sample[*][-1]= 0;
+width,height here are the width and height of the LL0 subband not of the final
+video
+
+
+Dequantization:
+===============
+FIXME
+
+Wavelet Transform:
+==================
+
+Snow supports 2 wavelet transforms, the symmetric biorthogonal 5/3 integer
+transform and an integer approximation of the symmetric biorthogonal 9/7
+Daubechies wavelet.
+
+2D IDWT (inverse discrete wavelet transform)
+--------------------------------------------
+The 2D IDWT applies a 2D filter recursively, each time combining the
+4 lowest frequency subbands into a single subband until only 1 subband
+remains.
+The 2D filter is done by first applying a 1D filter in the vertical direction
+and then applying it in the horizontal one.
+ --------------- --------------- --------------- ---------------
+|LL0|HL0| | | | | | | | | | | |
+|---+---| HL1 | | L0|H0 | HL1 | | LL1 | HL1 | | | |
+|LH0|HH0| | | | | | | | | | | |
+|-------+-------|->|-------+-------|->|-------+-------|->| L1 | H1 |->...
+| | | | | | | | | | | |
+| LH1 | HH1 | | LH1 | HH1 | | LH1 | HH1 | | | |
+| | | | | | | | | | | |
+ --------------- --------------- --------------- ---------------
+
+
+1D Filter:
+----------
+1. interleave the samples of the low and high frequency subbands like
+s={L0, H0, L1, H1, L2, H2, L3, H3, ... }
+note, this can end with an L or an H, the number of elements shall be w
+s[-1] shall be considered equivalent to s[1 ]
+s[w ] shall be considered equivalent to s[w-2]
+
+2. perform the lifting steps in order as described below
+
+5/3 Integer filter:
+1. s[i] -= (s[i-1] + s[i+1] + 2)>>2; for all even i < w
+2. s[i] += (s[i-1] + s[i+1] )>>1; for all odd i < w
+
+\ | /|\ | /|\ | /|\ | /|\
+ \|/ | \|/ | \|/ | \|/ |
+ + | + | + | + | -1/4
+ /|\ | /|\ | /|\ | /|\ |
+/ | \|/ | \|/ | \|/ | \|/
+ | + | + | + | + +1/2
+
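+A non-normative C sketch (added for illustration) of one inverse 5/3 lifting
+pass over an already interleaved buffer s[0..w-1], with the mirroring rules
+s[-1]==s[1] and s[w]==s[w-2] applied explicitly (assumes w >= 2):
+
+static void snow_53_1d_inverse(int *s, int w)
+{
+    int i;
+    for (i = 0; i < w; i += 2) {                    /* step 1, even samples */
+        int left  = s[i > 0     ? i - 1 : 1    ];
+        int right = s[i < w - 1 ? i + 1 : w - 2];
+        s[i] -= (left + right + 2) >> 2;
+    }
+    for (i = 1; i < w; i += 2) {                    /* step 2, odd samples */
+        int left  = s[i - 1];
+        int right = s[i < w - 1 ? i + 1 : w - 2];
+        s[i] += (left + right) >> 1;
+    }
+}
+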
+
+Snow's 9/7 Integer filter:
+1. s[i] -= (3*(s[i-1] + s[i+1]) + 4)>>3; for all even i < w
+2. s[i] -= s[i-1] + s[i+1] ; for all odd i < w
+3. s[i] += ( s[i-1] + s[i+1] + 4*s[i] + 8)>>4; for all even i < w
+4. s[i] += (3*(s[i-1] + s[i+1]) )>>1; for all odd i < w
+
+\ | /|\ | /|\ | /|\ | /|\
+ \|/ | \|/ | \|/ | \|/ |
+ + | + | + | + | -3/8
+ /|\ | /|\ | /|\ | /|\ |
+/ | \|/ | \|/ | \|/ | \|/
+ (| + (| + (| + (| + -1
+\ + /|\ + /|\ + /|\ + /|\ +1/4
+ \|/ | \|/ | \|/ | \|/ |
+ + | + | + | + | +1/16
+ /|\ | /|\ | /|\ | /|\ |
+/ | \|/ | \|/ | \|/ | \|/
+ | + | + | + | + +3/2
+
+optimization tips:
+following are exactly identical
+(3a)>>1 == a + (a>>1)
+(a + 4b + 8)>>4 == ((a>>2) + b + 2)>>2
+
+16bit implementation note:
+The IDWT can be implemented with 16 bits, but this requires some care to
+prevent overflows; the following list gives the minimum number of bits needed
+for some terms
+1. lifting step
+A= s[i-1] + s[i+1] 16bit
+3*A + 4 18bit
+A + (A>>1) + 2 17bit
+
+3. lifting step
+s[i-1] + s[i+1] 17bit
+
+4. lifting step
+3*(s[i-1] + s[i+1]) 17bit
+
+
+TODO:
+=====
+Important:
+finetune initial contexts
+flip wavelet?
+try to use the wavelet transformed predicted image (motion compensated image) as context for coding the residual coefficients
+try the MV length as context for coding the residual coefficients
+use extradata for stuff which is in the keyframes now?
+the MV median predictor is patented IIRC
+implement per picture halfpel interpolation
+try different range coder state transition tables for different contexts
+
+Not Important:
+compare the 6 tap and 8 tap hpel filters (psnr/bitrate and subjective quality)
+spatial_scalability b vs u (!= 0 breaks syntax anyway so we can add a u later)
+
+
+Credits:
+========
+Michael Niedermayer
+Loren Merritt
+
+
+Copyright:
+==========
+GPL + GFDL + whatever is needed to make this a RFC
diff --git a/lib/ffmpeg/doc/soc.txt b/lib/ffmpeg/doc/soc.txt
new file mode 100644
index 0000000000..8b4a86db80
--- /dev/null
+++ b/lib/ffmpeg/doc/soc.txt
@@ -0,0 +1,24 @@
+Google Summer of Code and similar project guidelines
+
+Summer of Code is a project by Google in which students are paid to implement
+some nice new features for various participating open source projects ...
+
+This text is a collection of things to take care of for the next soc as
+it's a little late for this year's soc (2006).
+
+The Goal:
+Our goal with respect to soc is, and must be, exactly one thing: to improve
+FFmpeg. To reach this goal, code must
+* conform to the svn policy and patch submission guidelines
+* improve FFmpeg somehow (faster, smaller, "better",
+  more codecs supported, fewer bugs, cleaner, ...)
+
+For mentors and other developers to help students reach that goal it is
+essential that changes to their codebase are publicly visible, clean and
+easily reviewable, which again leads us to:
+* use of a revision control system like svn
+* separation of cosmetic from non-cosmetic changes (this was almost entirely
+  ignored by mentors and students in soc 2006, which might lead to a surprise
+  when the code is reviewed at the end before a possible inclusion in
+  FFmpeg; individual changes were generally not reviewable due to cosmetics).
+* frequent commits, so that comments can be provided early
diff --git a/lib/ffmpeg/doc/swscale.txt b/lib/ffmpeg/doc/swscale.txt
new file mode 100644
index 0000000000..4c62e67321
--- /dev/null
+++ b/lib/ffmpeg/doc/swscale.txt
@@ -0,0 +1,99 @@
+ The official guide to swscale for confused developers.
+ ========================================================
+
+Current (simplified) Architecture:
+---------------------------------
+ Input
+ v
+ _______OR_________
+ / \
+ / \
+ special converter [Input to YUV converter]
+ | |
+ | (8bit YUV 4:4:4 / 4:2:2 / 4:2:0 / 4:0:0 )
+ | |
+ | v
+ | Horizontal scaler
+ | |
+ | (15bit YUV 4:4:4 / 4:2:2 / 4:2:0 / 4:1:1 / 4:0:0 )
+ | |
+ | v
+ | Vertical scaler and output converter
+ | |
+ v v
+ output
+
+
+Swscale has 2 scaler paths. Each side must be capable of handling
+slices, that is, consecutive non-overlapping rectangles of dimension
+(0,slice_top) - (picture_width, slice_bottom).
+
+special converter
+ These generally are unscaled converters of common
+ formats, like YUV 4:2:0/4:2:2 -> RGB12/15/16/24/32. Though it could also
+ in principle contain scalers optimized for specific common cases.
+
+Main path
+ The main path is used when no special converter can be used. The code
+ is designed as a destination line pull architecture. That is, for each
+ output line the vertical scaler pulls lines from a ring buffer. When
+ the ring buffer does not contain the wanted line, then it is pulled from
+ the input slice through the input converter and horizontal scaler.
+ The result is also stored in the ring buffer to serve future vertical
+ scaler requests.
+ When no more output can be generated because lines from a future slice
+ would be needed, then all remaining lines in the current slice are
+ converted, horizontally scaled and put in the ring buffer.
+ [This is done for luma and chroma, each with possibly different numbers
+ of lines per picture.]
+
+Input to YUV Converter
+ When the input to the main path is not planar 8 bits per component YUV or
+ 8-bit gray, it is converted to planar 8-bit YUV. Two sets of converters
+ exist for this currently: One performs horizontal downscaling by 2
+ before the conversion, the other leaves the full chroma resolution,
+ but is slightly slower. The scaler will try to preserve full chroma
+ when the output uses it. It is possible to force full chroma with
+ SWS_FULL_CHR_H_INP even for cases where the scaler thinks it is useless.
+
+Horizontal scaler
+ There are several horizontal scalers. A special case worth mentioning is
+ the fast bilinear scaler that is made of runtime-generated MMX2 code
+ using specially tuned pshufw instructions.
+ The remaining scalers are specially-tuned for various filter lengths.
+ They scale 8-bit unsigned planar data to 16-bit signed planar data.
+ Future >8 bits per component inputs will need to add a new horizontal
+ scaler that preserves the input precision.
+
+Vertical scaler and output converter
+ There is a large number of combined vertical scalers + output converters.
+ Some are:
+ * unscaled output converters
+ * unscaled output converters that average 2 chroma lines
+ * bilinear converters (C, MMX and accurate MMX)
+ * arbitrary filter length converters (C, MMX and accurate MMX)
+ And
+ * Plain C 8-bit 4:2:2 YUV -> RGB converters using LUTs
+ * Plain C 17-bit 4:4:4 YUV -> RGB converters using multiplies
+ * MMX 11-bit 4:2:2 YUV -> RGB converters
+ * Plain C 16-bit Y -> 16-bit gray
+ ...
+
+ RGB with less than 8 bits per component uses dither to improve the
+ subjective quality and low-frequency accuracy.
+
+
+Filter coefficients:
+--------------------
+There are several different scalers (bilinear, bicubic, lanczos, area,
+sinc, ...). Their coefficients are calculated in initFilter().
+Horizontal filter coefficients have a 1.0 point at 1 << 14, vertical ones at
+1 << 12. The 1.0 points have been chosen to maximize precision while leaving
+a little headroom for convolutional filters like sharpening filters and
+minimizing SIMD instructions needed to apply them.
+It would be trivial to use a different 1.0 point if some specific scaler
+would benefit from it.
+Also, as already hinted at, initFilter() accepts an optional convolutional
+filter as input that can be used for contrast, saturation, blur, sharpening,
+shift, chroma vs. luma shift, ...
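+
+As an added illustration (a sketch, not initFilter() itself) of what the 1.0
+point means in practice for a horizontal coefficient:
+
+#include <stdint.h>
+
+int main(void)
+{
+    int     coeff = (int)(0.25 * (1 << 14) + 0.5); /* 0.25 in Q14 == 4096 */
+    uint8_t px    = 200;
+    int     contribution = (px * coeff) >> 14;     /* 200 * 0.25 == 50    */
+    return contribution == 50 ? 0 : 1;
+}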
+
diff --git a/lib/ffmpeg/doc/tablegen.txt b/lib/ffmpeg/doc/tablegen.txt
new file mode 100644
index 0000000000..4c4f036e6a
--- /dev/null
+++ b/lib/ffmpeg/doc/tablegen.txt
@@ -0,0 +1,70 @@
+Writing a table generator
+
+This documentation is preliminary.
+Parts of the API are not good and should be changed.
+
+Basic concepts
+
+A table generator consists of two files, *_tablegen.c and *_tablegen.h.
+The .h file will provide the variable declarations and initialization
+code for the tables; the .c file calls the initialization code and then prints
+the tables as a header file using the tableprint.h helpers.
+Both of these files will be compiled for the host system, so to avoid
+breakage with cross-compilation neither of them may include, directly
+or indirectly, config.h or avconfig.h.
+This means that e.g. libavutil/mathematics.h is ok but libavutil/libm.h is not.
+Due to this, the .c file or Makefile may have to provide additional defines
+or stubs, though if possible this should be avoided.
+In particular, CONFIG_HARDCODED_TABLES should always be defined to 0.
+
+The .c file
+
+This file should include the *_tablegen.h and tableprint.h files and
+anything else it needs as long as it does not depend on config.h or
+avconfig.h.
+In addition to that it must contain a main() function which initializes
+all tables by calling the init functions from the .h file and then prints
+them.
+The printing code typically looks like this:
+ write_fileheader();
+ printf("static const uint8_t my_array[100] = {\n");
+ write_uint8_t_array(my_array, 100);
+ printf("};\n");
+
+This is the more generic form, in case you need to do something special.
+Usually you should instead use the short form:
+ write_fileheader();
+ WRITE_ARRAY("static const", uint8_t, my_array);
+
+write_fileheader() adds some minor things like a "this is a generated file"
+comment and some standard includes.
+tableprint.h defines some write functions for one- and two-dimensional arrays
+of standard types - they print only the "core" parts so they are easier
+to reuse for multi-dimensional arrays, which is why the outermost {} must be
+printed separately.
+If there's no standard function for printing the type you need, the
+WRITE_1D_FUNC_ARGV macro is a very quick way to create one.
+See libavcodec/dv_tablegen.c for an example.
+
+
+The .h file
+
+This file should contain:
+ - one or more initialization functions
+ - the table variable declarations
+If CONFIG_HARDCODED_TABLES is set, the initialization functions should
+not do anything, and instead of the variable declarations the
+generated *_tables.h file should be included.
+Since that will be generated in the build directory, the path must be
+included, i.e.
+#include "libavcodec/example_tables.h"
+not
+#include "example_tables.h"
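+
+Putting the above together, a hypothetical (not an existing FFmpeg file)
+example_tablegen.h and example_tablegen.c could look roughly like this:
+
+ example_tablegen.h:
+  #ifndef AVCODEC_EXAMPLE_TABLEGEN_H
+  #define AVCODEC_EXAMPLE_TABLEGEN_H
+
+  #include <stdint.h>
+
+  #if CONFIG_HARDCODED_TABLES
+  #include "libavcodec/example_tables.h"
+  static void example_tablegen(void) { }
+  #else
+  static uint8_t example_table[256];
+
+  static void example_tablegen(void)
+  {
+      int i;
+      for (i = 0; i < 256; i++)
+          example_table[i] = i * i / 255;   /* some simple rule */
+  }
+  #endif
+
+  #endif /* AVCODEC_EXAMPLE_TABLEGEN_H */
+
+ example_tablegen.c:
+  #define CONFIG_HARDCODED_TABLES 0
+  #include "example_tablegen.h"
+  #include "tableprint.h"
+
+  int main(void)
+  {
+      example_tablegen();
+      write_fileheader();
+      WRITE_ARRAY("static const", uint8_t, example_table);
+      return 0;
+  }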
+
+Makefile changes
+
+To make the automatic table creation work, you must manually declare the
+new dependency.
+For this add a line similar to this:
+$(SUBDIR)example.o: $(SUBDIR)example_tables.h
+under the "ifdef CONFIG_HARDCODED_TABLES" section in the Makefile.
diff --git a/lib/ffmpeg/doc/texi2pod.pl b/lib/ffmpeg/doc/texi2pod.pl
new file mode 100755
index 0000000000..fd3f02059d
--- /dev/null
+++ b/lib/ffmpeg/doc/texi2pod.pl
@@ -0,0 +1,423 @@
+#! /usr/bin/perl -w
+
+# Copyright (C) 1999, 2000, 2001 Free Software Foundation, Inc.
+
+# This file is part of GNU CC.
+
+# GNU CC is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+
+# GNU CC is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with GNU CC; see the file COPYING. If not, write to
+# the Free Software Foundation, 51 Franklin Street, Fifth Floor,
+# Boston, MA 02110-1301 USA
+
+# This does trivial (and I mean _trivial_) conversion of Texinfo
+# markup to Perl POD format. It's intended to be used to extract
+# something suitable for a manpage from a Texinfo document.
+
+$output = 0;
+$skipping = 0;
+%sects = ();
+@sects_sequence = ();
+$section = "";
+@icstack = ();
+@endwstack = ();
+@skstack = ();
+@instack = ();
+$shift = "";
+%defs = ();
+$fnno = 1;
+$inf = "";
+$ibase = "";
+
+while ($_ = shift) {
+ if (/^-D(.*)$/) {
+ if ($1 ne "") {
+ $flag = $1;
+ } else {
+ $flag = shift;
+ }
+ $value = "";
+ ($flag, $value) = ($flag =~ /^([^=]+)(?:=(.+))?/);
+ die "no flag specified for -D\n"
+ unless $flag ne "";
+ die "flags may only contain letters, digits, hyphens, dashes and underscores\n"
+ unless $flag =~ /^[a-zA-Z0-9_-]+$/;
+ $defs{$flag} = $value;
+ } elsif (/^-/) {
+ usage();
+ } else {
+ $in = $_, next unless defined $in;
+ $out = $_, next unless defined $out;
+ usage();
+ }
+}
+
+if (defined $in) {
+ $inf = gensym();
+ open($inf, "<$in") or die "opening \"$in\": $!\n";
+ $ibase = $1 if $in =~ m|^(.+)/[^/]+$|;
+} else {
+ $inf = \*STDIN;
+}
+
+if (defined $out) {
+ open(STDOUT, ">$out") or die "opening \"$out\": $!\n";
+}
+
+while(defined $inf) {
+while(<$inf>) {
+ # Certain commands are discarded without further processing.
+ /^\@(?:
+ [a-z]+index # @*index: useful only in complete manual
+ |need # @need: useful only in printed manual
+ |(?:end\s+)?group # @group .. @end group: ditto
+ |page # @page: ditto
+ |node # @node: useful only in .info file
+ |(?:end\s+)?ifnottex # @ifnottex .. @end ifnottex: use contents
+ )\b/x and next;
+
+ chomp;
+
+ # Look for filename and title markers.
+ /^\@setfilename\s+([^.]+)/ and $fn = $1, next;
+ /^\@settitle\s+([^.]+)/ and $tl = postprocess($1), next;
+
+ # Identify a man title but keep only the one we are interested in.
+ /^\@c\s+man\s+title\s+([A-Za-z0-9-]+)\s+(.+)/ and do {
+ if (exists $defs{$1}) {
+ $fn = $1;
+ $tl = postprocess($2);
+ }
+ next;
+ };
+
+ /^\@include\s+(.+)$/ and do {
+ push @instack, $inf;
+ $inf = gensym();
+
+ # Try cwd and $ibase.
+ open($inf, "<" . $1)
+ or open($inf, "<" . $ibase . "/" . $1)
+ or die "cannot open $1 or $ibase/$1: $!\n";
+ next;
+ };
+
+ # Look for blocks surrounded by @c man begin SECTION ... @c man end.
+ # This really oughta be @ifman ... @end ifman and the like, but such
+ # would require rev'ing all other Texinfo translators.
+ /^\@c\s+man\s+begin\s+([A-Za-z ]+)/ and $sect = $1, push (@sects_sequence, $sect), $output = 1, next;
+ /^\@c\s+man\s+end/ and do {
+ $sects{$sect} = "" unless exists $sects{$sect};
+ $sects{$sect} .= postprocess($section);
+ $section = "";
+ $output = 0;
+ next;
+ };
+
+ # handle variables
+ /^\@set\s+([a-zA-Z0-9_-]+)\s*(.*)$/ and do {
+ $defs{$1} = $2;
+ next;
+ };
+ /^\@clear\s+([a-zA-Z0-9_-]+)/ and do {
+ delete $defs{$1};
+ next;
+ };
+
+ next unless $output;
+
+ # Discard comments. (Can't do it above, because then we'd never see
+ # @c man lines.)
+ /^\@c\b/ and next;
+
+ # End-block handler goes up here because it needs to operate even
+ # if we are skipping.
+ /^\@end\s+([a-z]+)/ and do {
+ # Ignore @end foo, where foo is not an operation which may
+ # cause us to skip, if we are presently skipping.
+ my $ended = $1;
+ next if $skipping && $ended !~ /^(?:ifset|ifclear|ignore|menu|iftex)$/;
+
+ die "\@end $ended without \@$ended at line $.\n" unless defined $endw;
+ die "\@$endw ended by \@end $ended at line $.\n" unless $ended eq $endw;
+
+ $endw = pop @endwstack;
+
+ if ($ended =~ /^(?:ifset|ifclear|ignore|menu|iftex)$/) {
+ $skipping = pop @skstack;
+ next;
+ } elsif ($ended =~ /^(?:example|smallexample|display)$/) {
+ $shift = "";
+ $_ = ""; # need a paragraph break
+ } elsif ($ended =~ /^(?:itemize|enumerate|[fv]?table)$/) {
+ $_ = "\n=back\n";
+ $ic = pop @icstack;
+ } else {
+ die "unknown command \@end $ended at line $.\n";
+ }
+ };
+
+ # We must handle commands which can cause skipping even while we
+ # are skipping, otherwise we will not process nested conditionals
+ # correctly.
+ /^\@ifset\s+([a-zA-Z0-9_-]+)/ and do {
+ push @endwstack, $endw;
+ push @skstack, $skipping;
+ $endw = "ifset";
+ $skipping = 1 unless exists $defs{$1};
+ next;
+ };
+
+ /^\@ifclear\s+([a-zA-Z0-9_-]+)/ and do {
+ push @endwstack, $endw;
+ push @skstack, $skipping;
+ $endw = "ifclear";
+ $skipping = 1 if exists $defs{$1};
+ next;
+ };
+
+ /^\@(ignore|menu|iftex)\b/ and do {
+ push @endwstack, $endw;
+ push @skstack, $skipping;
+ $endw = $1;
+ $skipping = 1;
+ next;
+ };
+
+ next if $skipping;
+
+ # Character entities. First the ones that can be replaced by raw text
+ # or discarded outright:
+ s/\@copyright\{\}/(c)/g;
+ s/\@dots\{\}/.../g;
+ s/\@enddots\{\}/..../g;
+ s/\@([.!? ])/$1/g;
+ s/\@[:-]//g;
+ s/\@bullet(?:\{\})?/*/g;
+ s/\@TeX\{\}/TeX/g;
+ s/\@pounds\{\}/\#/g;
+ s/\@minus(?:\{\})?/-/g;
+ s/\\,/,/g;
+
+ # Now the ones that have to be replaced by special escapes
+ # (which will be turned back into text by unmunge())
+ s/&/&amp;/g;
+ s/\@\{/&lbrace;/g;
+ s/\@\}/&rbrace;/g;
+ s/\@\@/&at;/g;
+
+ # Inside a verbatim block, handle @var specially.
+ if ($shift ne "") {
+ s/\@var\{([^\}]*)\}/<$1>/g;
+ }
+
+ # POD doesn't interpret E<> inside a verbatim block.
+ if ($shift eq "") {
+ s/</&lt;/g;
+ s/>/&gt;/g;
+ } else {
+ s/</&LT;/g;
+ s/>/&GT;/g;
+ }
+
+ # Single line command handlers.
+
+ /^\@(?:section|unnumbered|unnumberedsec|center)\s+(.+)$/
+ and $_ = "\n=head2 $1\n";
+ /^\@subsection\s+(.+)$/
+ and $_ = "\n=head3 $1\n";
+
+ # Block command handlers:
+ /^\@itemize\s*(\@[a-z]+|\*|-)?/ and do {
+ push @endwstack, $endw;
+ push @icstack, $ic;
+ $ic = $1 ? $1 : "*";
+ $_ = "\n=over 4\n";
+ $endw = "itemize";
+ };
+
+ /^\@enumerate(?:\s+([a-zA-Z0-9]+))?/ and do {
+ push @endwstack, $endw;
+ push @icstack, $ic;
+ if (defined $1) {
+ $ic = $1 . ".";
+ } else {
+ $ic = "1.";
+ }
+ $_ = "\n=over 4\n";
+ $endw = "enumerate";
+ };
+
+ /^\@([fv]?table)\s+(\@[a-z]+)/ and do {
+ push @endwstack, $endw;
+ push @icstack, $ic;
+ $endw = $1;
+ $ic = $2;
+ $ic =~ s/\@(?:samp|strong|key|gcctabopt|option|env)/B/;
+ $ic =~ s/\@(?:code|kbd)/C/;
+ $ic =~ s/\@(?:dfn|var|emph|cite|i)/I/;
+ $ic =~ s/\@(?:file)/F/;
+ $_ = "\n=over 4\n";
+ };
+
+ /^\@((?:small)?example|display)/ and do {
+ push @endwstack, $endw;
+ $endw = $1;
+ $shift = "\t";
+ $_ = ""; # need a paragraph break
+ };
+
+ /^\@itemx?\s*(.+)?$/ and do {
+ if (defined $1) {
+ # Entity escapes prevent munging by the <> processing below.
+ $_ = "\n=item $ic\&LT;$1\&GT;\n";
+ } else {
+ $_ = "\n=item $ic\n";
+ $ic =~ y/A-Ya-y/B-Zb-z/;
+ $ic =~ s/(\d+)/$1 + 1/eg;
+ }
+ };
+
+ $section .= $shift.$_."\n";
+}
+# End of current file.
+close($inf);
+$inf = pop @instack;
+}
+
+die "No filename or title\n" unless defined $fn && defined $tl;
+
+$sects{NAME} = "$fn \- $tl\n";
+$sects{FOOTNOTES} .= "=back\n" if exists $sects{FOOTNOTES};
+
+unshift @sects_sequence, "NAME";
+for $sect (@sects_sequence) {
+ if(exists $sects{$sect}) {
+ $head = $sect;
+ $head =~ s/SEEALSO/SEE ALSO/;
+ print "=head1 $head\n\n";
+ print scalar unmunge ($sects{$sect});
+ print "\n";
+ }
+}
+
+sub usage
+{
+ die "usage: $0 [-D toggle...] [infile [outfile]]\n";
+}
+
+sub postprocess
+{
+ local $_ = $_[0];
+
+ # @value{foo} is replaced by whatever 'foo' is defined as.
+ while (m/(\@value\{([a-zA-Z0-9_-]+)\})/g) {
+ if (! exists $defs{$2}) {
+ print STDERR "Option $2 not defined\n";
+ s/\Q$1\E//;
+ } else {
+ $value = $defs{$2};
+ s/\Q$1\E/$value/;
+ }
+ }
+
+ # Formatting commands.
+ # Temporary escape for @r.
+ s/\@r\{([^\}]*)\}/R<$1>/g;
+ s/\@(?:dfn|var|emph|cite|i)\{([^\}]*)\}/I<$1>/g;
+ s/\@(?:code|kbd)\{([^\}]*)\}/C<$1>/g;
+ s/\@(?:gccoptlist|samp|strong|key|option|env|command|b)\{([^\}]*)\}/B<$1>/g;
+ s/\@sc\{([^\}]*)\}/\U$1/g;
+ s/\@file\{([^\}]*)\}/F<$1>/g;
+ s/\@w\{([^\}]*)\}/S<$1>/g;
+ s/\@(?:dmn|math)\{([^\}]*)\}/$1/g;
+
+ # Cross references are thrown away, as are @noindent and @refill.
+ # (@noindent is impossible in .pod, and @refill is unnecessary.)
+ # @* is also impossible in .pod; we discard it and any newline that
+ # follows it. Similarly, our macro @gol must be discarded.
+
+ s/\(?\@xref\{(?:[^\}]*)\}(?:[^.<]|(?:<[^<>]*>))*\.\)?//g;
+ s/\s+\(\@pxref\{(?:[^\}]*)\}\)//g;
+ s/;\s+\@pxref\{(?:[^\}]*)\}//g;
+ s/\@noindent\s*//g;
+ s/\@refill//g;
+ s/\@gol//g;
+ s/\@\*\s*\n?//g;
+
+ # @uref can take one, two, or three arguments, with different
+ # semantics each time. @url and @email are just like @uref with
+ # one argument, for our purposes.
+ s/\@(?:uref|url|email)\{([^\},]*)\}/&lt;B<$1>&gt;/g;
+ s/\@uref\{([^\},]*),([^\},]*)\}/$2 (C<$1>)/g;
+ s/\@uref\{([^\},]*),([^\},]*),([^\},]*)\}/$3/g;
+
+ # Turn B<blah I<blah> blah> into B<blah> I<blah> B<blah> to
+ # match Texinfo semantics of @emph inside @samp. Also handle @r
+ # inside bold.
+ s/&LT;/</g;
+ s/&GT;/>/g;
+ 1 while s/B<((?:[^<>]|I<[^<>]*>)*)R<([^>]*)>/B<$1>${2}B</g;
+ 1 while (s/B<([^<>]*)I<([^>]+)>/B<$1>I<$2>B</g);
+ 1 while (s/I<([^<>]*)B<([^>]+)>/I<$1>B<$2>I</g);
+ s/[BI]<>//g;
+ s/([BI])<(\s+)([^>]+)>/$2$1<$3>/g;
+ s/([BI])<([^>]+?)(\s+)>/$1<$2>$3/g;
+
+ # Extract footnotes. This has to be done after all other
+ # processing because otherwise the regexp will choke on formatting
+ # inside @footnote.
+ while (/\@footnote/g) {
+ s/\@footnote\{([^\}]+)\}/[$fnno]/;
+ add_footnote($1, $fnno);
+ $fnno++;
+ }
+
+ return $_;
+}
+
+sub unmunge
+{
+ # Replace escaped symbols with their equivalents.
+ local $_ = $_[0];
+
+ s/&lt;/E<lt>/g;
+ s/&gt;/E<gt>/g;
+ s/&lbrace;/\{/g;
+ s/&rbrace;/\}/g;
+ s/&at;/\@/g;
+ s/&amp;/&/g;
+ return $_;
+}
+
+sub add_footnote
+{
+ unless (exists $sects{FOOTNOTES}) {
+ $sects{FOOTNOTES} = "\n=over 4\n\n";
+ }
+
+ $sects{FOOTNOTES} .= "=item $fnno.\n\n"; $fnno++;
+ $sects{FOOTNOTES} .= $_[0];
+ $sects{FOOTNOTES} .= "\n\n";
+}
+
+# stolen from Symbol.pm
+{
+ my $genseq = 0;
+ sub gensym
+ {
+ my $name = "GEN" . $genseq++;
+ my $ref = \*{$name};
+ delete $::{$name};
+ return $ref;
+ }
+}
diff --git a/lib/ffmpeg/doc/viterbi.txt b/lib/ffmpeg/doc/viterbi.txt
new file mode 100644
index 0000000000..d9d924f621
--- /dev/null
+++ b/lib/ffmpeg/doc/viterbi.txt
@@ -0,0 +1,110 @@
+This is a quick description of the Viterbi (aka dynamic programming)
+algorithm.
+
+Its reason for existence is that Wikipedia has become very poor at
+describing algorithms in a way that makes them usable for understanding
+them, or for much else actually. It now tends to describe the very same
+algorithm under 50 different names and pages, with few of them understandable
+even by people who fully understand the algorithm and the theory behind it.
+
+Problem description: (that is what it can solve)
+assume we have a 2d table, or you could call it a graph or matrix if you
+prefer
+
+ O O O O O O O
+
+ O O O O O O O
+
+ O O O O O O O
+
+ O O O O O O O
+
+
+That table has edges connecting points from each column to the next column
+and each edge has a score like: (only some edges and scores shown to keep it
+readable)
+
+
+ O--5--O-----O-----O-----O-----O
+ 2 / 7 / \ / \ / \ /
+ \ / \ / \ / \ / \ /
+ O7-/--O--/--O--/--O--/--O--/--O
+ \/ \/ 1/ \/ \/ \/ \/ \/ \/ \/
+ /\ /\ 2\ /\ /\ /\ /\ /\ /\ /\
+ O3-/--O--/--O--/--O--/--O--/--O
+ / \ / \ / \ / \ / \
+ 1 \ 9 \ / \ / \ / \
+ O--2--O--1--O--5--O--3--O--8--O
+
+
+
+Our goal is to find a path from left to right through it which
+minimizes the sum of the score of all edges.
+(and of course left/right is just a convention here, it could be top down too)
+Similarly, the minimum could be the maximum by just flipping the sign.
+Example of a path with scores:
+
+ O O O O O O O
+
+>---O. O O .O-2-O O O
+ 5. .7 .
+ O O-1-O O O 8 O O
+ .
+ O O O O O O-1-O---> (sum here is 24)
+
+
+The Viterbi algorithm now solves this simply column by column.
+For the previous column each point has a best path and an associated
+score:
+
+ O-----5 O
+ \
+ \
+ O \ 1 O
+ \/
+ /\
+ O / 2 O
+ /
+ /
+ O-----2 O
+
+
+To move one column forward we just need to find the best path and associated
+scores for the next column. Here are some edges we could choose from:
+
+
+ O-----5--3--O
+ \ \8
+ \ \
+ O \ 1--9--O
+ \/ \3
+ /\ \
+ O / 2--1--O
+ / \2
+ / \
+ O-----2--4--O
+
+Finding the new best paths and scores for each point of our new column is
+trivial, given that we know the previous column's best paths and scores:
+
+ O-----0-----8
+ \
+ \
+ O \ 0----10
+ \/
+ /\
+ O / 0-----3
+ / \
+ / \
+ O 0 4
+
+
+The Viterbi algorithm continues exactly like this, column by column, until
+the end, and then just picks the path with the best score (above, that would
+be the one with score 3).
+
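+To make the column-by-column idea concrete, here is a small added sketch (not
+part of the original text) in C: cost[c][i][j] is a hypothetical edge score
+from point i in column c to point j in column c+1, and prev[][] records where
+each best path came from so the winning path can be backtracked afterwards.
+
+#include <limits.h>
+
+#define COLS 7
+#define ROWS 4
+
+static int viterbi(const int cost[COLS - 1][ROWS][ROWS], int prev[COLS][ROWS])
+{
+    int score[ROWS] = { 0 };   /* best score to reach each point so far */
+    int next[ROWS];
+    int c, i, j, best;
+
+    for (c = 0; c + 1 < COLS; c++) {
+        for (j = 0; j < ROWS; j++) {
+            next[j] = INT_MAX;
+            for (i = 0; i < ROWS; i++) {
+                int s = score[i] + cost[c][i][j];
+                if (s < next[j]) {
+                    next[j]        = s;
+                    prev[c + 1][j] = i;   /* remember the best predecessor */
+                }
+            }
+        }
+        for (j = 0; j < ROWS; j++)
+            score[j] = next[j];
+    }
+
+    best = score[0];
+    for (j = 1; j < ROWS; j++)
+        if (score[j] < best)
+            best = score[j];
+    return best;   /* backtracking through prev[][] gives the actual path */
+}
+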
+
+Author: Michael Niedermayer
+Copyright LGPL
+