FFmpeg

From ArchWiki

Latest revision as of 21:03, 19 May 2018
[[Category:Multimedia]]
[[ja:FFmpeg]]
[[zh-hans:FFmpeg]]
From the project [http://www.ffmpeg.org/ home page]:

:''FFmpeg is a complete, cross-platform solution to record, convert and stream audio and video. It includes libavcodec - the leading audio/video codec library.''
== Package installation ==

{{Note|You may encounter FFmpeg forks like ''libav'' and ''avconv''; see [http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html The FFmpeg/Libav situation] for a blog article about the differences between the projects and the current status of FFmpeg.}}

[[Install]] the {{Pkg|ffmpeg}} package.

For the development version, install the {{AUR|ffmpeg-git}} package. There is also {{AUR|ffmpeg-full}}, which is built with as many optional features enabled as possible.
=== Manual installation ===

{{Expansion|Why would you want to do this?}}

Some users may wish to skip the package or AUR and compile FFmpeg themselves from its git master directly, adding {{ic|/usr/local/lib}} to {{ic|/etc/ld.so.conf}}. This can cause the {{ic|/usr/bin/ffmpeg}} tool provided by {{Pkg|ffmpeg}} to crash or work improperly due to the library version mismatch. (Frequently {{Pkg|ffmpeg}} is still installed as a dependency.) One fix for this is to add {{ic|/usr/lib}} to the rpath of {{ic|/usr/bin/ffmpeg}} with a tool such as {{Pkg|patchelf}}:

 # patchelf --set-rpath /usr/lib /usr/bin/ffmpeg

This change will be undone whenever ffmpeg is reinstalled or upgraded, so adding a pacman hook might be prudent:

{{hc|/etc/pacman.d/hooks/ffmpeg.hook|2=
[Trigger]
Operation=Install
Operation=Upgrade
Type=Package
Target=ffmpeg

[Action]
Description=Adding /usr/lib to ffmpeg rpath...
Depends=patchelf
When=PostTransaction
Exec=/usr/bin/patchelf --set-rpath /usr/lib /usr/bin/ffmpeg
}}
  
 
== Encoding examples ==

{{Note|
* It is important that parameters are specified in the correct order (e.g. input, video, filters, audio, output); failing to do so may cause parameters to be skipped or prevent FFmpeg from executing.
* FFmpeg should automatically choose the number of CPU threads available. However, you may want to force a specific number of threads with the {{ic|-threads <number>}} parameter.
}}

=== Screen capture ===

FFmpeg includes the [http://www.ffmpeg.org/ffmpeg-devices.html#x11grab x11grab] and [http://www.ffmpeg.org/ffmpeg-devices.html#alsa-1 ALSA] virtual devices that enable capturing the entire user display and audio input.
  
To take a screenshot {{ic|screen.png}}:

 $ ffmpeg -f x11grab -video_size 1920x1080 -i $DISPLAY -vframes 1 screen.png

where {{ic|-video_size}} specifies the size of the area to capture.

To take a screencast {{ic|screen.mkv}} with lossless encoding and without audio:

 $ ffmpeg -f x11grab -video_size 1920x1080 -framerate 25 -i $DISPLAY -c:v ffvhuff screen.mkv

Here, the Huffyuv codec is used, which is fast but produces huge files.

To take a screencast {{ic|screen.mp4}} with lossy encoding and with audio:

 $ ffmpeg -f x11grab -video_size 1920x1080 -framerate 25 -i $DISPLAY -f alsa -i default -c:v libx264 -preset ultrafast -c:a aac screen.mp4

Here, the x264 codec with the fastest possible encoding speed is used. Other codecs can be used; if writing each frame is too slow (either due to inadequate disk performance or slow encoding), then frames will be dropped and video output will be choppy.

See also the [https://trac.ffmpeg.org/wiki/Capture/Desktop#Linux official documentation].
  
 
=== Recording webcam ===

FFmpeg includes the [http://www.ffmpeg.org/ffmpeg-devices.html#video4linux2_002c-v4l2 video4linux2] and [http://www.ffmpeg.org/ffmpeg-devices.html#alsa-1 ALSA] input devices that enable capturing webcam and audio input.

The following command will record a video {{ic|webcam.mp4}} from the webcam without audio, assuming that the webcam is correctly recognized under {{ic|/dev/video0}}:

 $ ffmpeg -f v4l2 -video_size 640x480 -i /dev/video0 -c:v libx264 -preset ultrafast webcam.mp4

where {{ic|-video_size}} specifies the largest allowed image size from the webcam.

The above produces a silent video. To record a video {{ic|webcam.mp4}} from the webcam with audio:

 $ ffmpeg -f v4l2 -video_size 640x480 -i /dev/video0 -f alsa -i default -c:v libx264 -preset ultrafast -c:a aac webcam.mp4

Here, the x264 codec with the fastest possible encoding speed is used. Other codecs can be used; if writing each frame is too slow (either due to inadequate disk performance or slow encoding), then frames will be dropped and video output will be choppy.

See also the [https://trac.ffmpeg.org/wiki/Capture/Webcam#Linux official documentation].
 
=== VOB to any container ===

Concatenate the desired [[Wikipedia:VOB|VOB]] files into a single stream and mux them to MPEG-2:

 $ cat f0.VOB f1.VOB f2.VOB | ffmpeg -i - out.mp2
 
=== x264 lossless ===

The ''ultrafast'' preset will provide the fastest encoding and is useful for quick capturing (such as screencasting):

 $ ffmpeg -i input -c:v libx264 -preset ultrafast -qp 0 -c:a copy output

On the opposite end of the preset spectrum is ''veryslow'', which will encode slower than ''ultrafast'' but provide a smaller output file size:

 $ ffmpeg -i input -c:v libx264 -preset veryslow -qp 0 -c:a copy output

Both examples will provide the same quality output.

{{Tip|If your computer is able to handle {{ic|-preset superfast}} in realtime, you should use that instead of {{ic|-preset ultrafast}}. Ultrafast compresses ''far'' less efficiently than superfast.}}
=== x265 ===

When encoding x265 files, you may need to specify the aspect ratio of the file via {{ic|-aspect <width:height>}}. For example:

{{bc|<nowiki>$ ffmpeg -i input -c:v libx265 -aspect 1920:1080 -preset veryslow -x265-params crf=20 output</nowiki>}}
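If only the pixel dimensions are known, the ratio passed to {{ic|-aspect}} can be derived by reducing the width and height by their greatest common divisor ({{ic|-aspect}} also accepts reduced forms such as {{ic|16:9}}). A small shell sketch, using the 1920x1080 dimensions from the example above:

```shell
# Reduce 1920x1080 to its simplest aspect ratio by dividing out the GCD.
w=1920 h=1080
a=$w b=$h
# Euclidean algorithm: after the loop, $a holds gcd(w, h).
while [ "$b" -ne 0 ]; do
    t=$((a % b)); a=$b; b=$t
done
echo "$((w / a)):$((h / a))"   # prints 16:9
```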
  
 
=== Single-pass MPEG-2 (near lossless) ===

Allow FFmpeg to automatically set DVD standardized parameters. Encode to DVD [[Wikipedia:MPEG-2|MPEG-2]] at ~30 FPS:

 $ ffmpeg -i ''video''.VOB -target ntsc-dvd ''output''.mpg

Encode to DVD MPEG-2 at ~24 FPS:

 $ ffmpeg -i ''video''.VOB -target film-dvd ''output''.mpg
 
=== x264: constant rate factor ===

Used when you want a specific quality output. General usage is to use the highest {{ic|-crf}} value that still provides an acceptable quality. Lower values are higher quality; 0 is lossless, 18 is visually lossless, and 23 is the default value. A sane range is between 18 and 28. Use the slowest {{ic|-preset}} you have patience for. See the [https://ffmpeg.org/trac/ffmpeg/wiki/x264EncodingGuide x264 Encoding Guide] for more information.

 $ ffmpeg -i ''video'' -c:v libx264 -tune film -preset slow -crf 22 -x264opts fast_pskip=0 -c:a libmp3lame -aq 4 ''output''.mkv

The {{ic|-tune}} option can be used to [http://forum.doom9.org/showthread.php?t=149394 match the type and content of the media being encoded].
 
=== YouTube ===

FFmpeg is very useful for encoding videos and reducing their size before uploading them to YouTube. The following command takes an input file and outputs an mkv container:

 $ ffmpeg -i ''video'' -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a copy ''output''.mkv

For more information see the [https://bbs.archlinux.org/viewtopic.php?pid=1200667#p1200667 forums]. You can also create a custom alias {{ic|ytconvert}} which takes the name of the input file as its first argument and the name of the .mkv container as its second argument. To do so, add the following to your {{ic|~/.bashrc}}:

{{bc|<nowiki>
youtubeConvert(){
        ffmpeg -i "$1" -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a copy "$2".mkv
}
alias ytconvert=youtubeConvert
</nowiki>}}

See also the [https://bbs.archlinux.org/viewtopic.php?pid=1200542#p1200542 Arch Linux forum thread].
 
  
 
=== Two-pass x264 (very high-quality) ===

Audio deactivated as only video statistics are recorded during the first of multiple pass runs:

 $ ffmpeg -i ''video''.VOB -an -vcodec libx264 -pass 1 -preset veryslow \
 -threads 0 -b 3000k -x264opts frameref=15:fast_pskip=0 -f rawvideo -y /dev/null

Container format is automatically detected and muxed into from the output file extension (.mkv):

 $ ffmpeg -i ''video''.VOB -acodec libvo-aacenc -ab 256k -ar 96000 -vcodec libx264 \
 -pass 2 -preset veryslow -threads 0 -b 3000k -x264opts frameref=15:fast_pskip=0 ''video''.mkv

{{Tip|If you receive an {{ic|Unknown encoder 'libvo-aacenc'}} error (given that your ffmpeg is compiled with libvo-aacenc enabled), try {{ic|-acodec libvo_aacenc}}, with an underscore instead of a hyphen.}}

=== Two-pass MPEG-4 (very high-quality) ===

Audio deactivated as only video statistics are logged during the first of multiple pass runs:

 $ ffmpeg -i ''video''.VOB -an -vcodec mpeg4 -pass 1 -mbd 2 -trellis 2 -flags +cbp+mv0 \
 -pre_dia_size 4 -dia_size 4 -precmp 4 -cmp 4 -subcmp 4 -preme 2 -qns 2 -b 3000k \
 -f rawvideo -y /dev/null

Container format is automatically detected and muxed into from the output file extension (.avi):

 $ ffmpeg -i ''video''.VOB -acodec copy -vcodec mpeg4 -vtag DX50 -pass 2 -mbd 2 -trellis 2 \
 -flags +cbp+mv0 -pre_dia_size 4 -dia_size 4 -precmp 4 -cmp 4 -subcmp 4 -preme 2 -qns 2 \
 -b 3000k ''video''.avi

* Introducing {{ic|1=threads=n>1}} for {{ic|-vcodec mpeg4}} may skew the effects of motion estimation and lead to reduced video quality and compression efficiency.
* The two-pass MPEG-4 example above also supports output to the MP4 container (replace .avi with .mp4).

=== Determining bitrates with fixed output file sizes ===

* (Desired File Size in MB - Audio File Size in MB) '''x''' 8192 kb/MB '''/''' Length of Media in Seconds (s) = Bitrate in kb/s
:* (3900 MB - 275 MB) = 3625 MB '''x''' 8192 kb/MB '''/''' 8830 s = 3363 kb/s required to achieve an approximate total output file size of 3900 MB
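The arithmetic above is easy to script. A sketch in shell, plugging in the example figures (3900 MB target size, 275 MB audio, 8830 s duration):

```shell
# Bitrate in kb/s = (target size MB - audio size MB) * 8192 kb/MB / duration in s
target_mb=3900
audio_mb=275
duration_s=8830
echo "$(( (target_mb - audio_mb) * 8192 / duration_s )) kb/s"   # prints 3363 kb/s
```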
  
=== x264 video stabilization ===

Video stabilization using the vid.stab plugin entails two passes.

==== First pass ====

The first pass records stabilization parameters to a file and/or a test video for visual analysis.

* Record stabilization parameters to a file only:

 $ ffmpeg -i input -vf vidstabdetect=stepsize=4:mincontrast=0:result=transforms.trf -f null -

* Record stabilization parameters to a file and create a test video {{ic|output-stab}} for visual analysis:

 $ ffmpeg -i input -vf vidstabdetect=stepsize=4:mincontrast=0:result=transforms.trf output-stab

==== Second pass ====

The second pass parses the stabilization parameters generated by the first pass and applies them to produce {{ic|output-stab_final}}. You will want to apply any additional filters at this point so as to avoid subsequent transcoding and preserve as much video quality as possible. The following example performs the following in addition to video stabilization:

* {{ic|unsharp}} is recommended by the author of vid.stab. Here we simply use the defaults of 5:5:1.0:5:5:1.0.
* {{ic|1=fade=t=in:st=0:d=4}} fades in from black starting at the beginning of the file for four seconds.
* {{ic|1=fade=t=out:st=60:d=4}} fades out to black starting sixty seconds into the video for four seconds.
* {{ic|-c:a pcm_s16le}}: the XAVC-S codec records in pcm_s16be, which is losslessly transcoded to pcm_s16le.

 $ ffmpeg -i input -vf vidstabtransform=smoothing=30:interpol=bicubic:input=transforms.trf,unsharp,fade=t=in:st=0:d=4,fade=t=out:st=60:d=4 -c:v libx264 -tune film -preset veryslow -crf 8 -x264opts fast_pskip=0 -c:a pcm_s16le output-stab_final
=== Subtitles ===

==== Extracting ====

Subtitles embedded in container files, such as MPEG-2 and Matroska, can be extracted and converted into SRT, SSA, and other subtitle formats.

* Inspect a file to determine if it contains a subtitle stream:

{{hc|$ ffprobe foo.mkv|
...
Stream #0:0(und): Video: h264 (High), yuv420p, 1920x800 [SAR 1:1 DAR 12:5], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
    Metadata:
      CREATION_TIME   : 2012-06-05 05:04:15
      LANGUAGE        : und
Stream #0:1(und): Audio: aac, 44100 Hz, stereo, fltp (default)
    Metadata:
      CREATION_TIME   : 2012-06-05 05:10:34
      LANGUAGE        : und
      HANDLER_NAME    : GPAC ISO Audio Handler
'''Stream #0:2: Subtitle: ssa (default)'''}}

* {{ic|foo.mkv}} has an embedded SSA subtitle which can be extracted into an independent file:

 $ ffmpeg -i foo.mkv foo.ssa

When dealing with multiple subtitles, you may need to specify the stream that needs to be extracted using the {{ic|-map <key>:<stream>}} parameter:

 $ ffmpeg -i foo.mkv -map 0:2 foo.ssa

==== Hardsubbing ====

(instructions based on an [http://trac.ffmpeg.org/wiki/How%20to%20burn%20subtitles%20into%20the%20video FFmpeg wiki article])

[[Wikipedia:Hardsub|Hardsubbing]] entails merging subtitles with the video. Hardsubs cannot be disabled, nor can the language be switched.

* Overlay {{ic|foo.mpg}} with the subtitles in {{ic|foo.ssa}} (the video must be re-encoded to burn in the subtitles, so only the audio stream can be copied):

 $ ffmpeg -i foo.mpg -c:a copy -vf subtitles=foo.ssa out.mpg
  
 
=== Volume gain ===

To double the volume '''(512 = 200%)''' of an [[Wikipedia:Mp3|MP3]] file:

 $ ffmpeg -i ''file''.mp3 -vol 512 ''louder_file''.mp3

To quadruple the volume '''(1024 = 400%)''' of an [[Wikipedia:Ogg|Ogg]] file:

 $ ffmpeg -i ''file''.ogg -vol 1024 ''louder_file''.ogg

Note that gain metadata is only written to the output file. Unlike mp3gain or ogggain, the source sound file is untouched.
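Since 512 corresponds to 200% and 1024 to 400%, the {{ic|-vol}} scale is linear with 256 meaning 100%. A value for any other gain percentage can be computed as a quick sketch:

```shell
# -vol value = 256 * desired gain percent / 100 (256 = 100% on the linear scale)
percent=150
echo $(( 256 * percent / 100 ))   # prints 384, i.e. use -vol 384 for 150% volume
```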
{{Note|Removing undesired audio streams allows for additional bits to be allocated towards improving video quality.}}

=== Splitting files ===

You can use the {{ic|copy}} codec to perform operations on a file without changing the encoding. For example, this allows you to easily split any kind of media file into two:

 $ ffmpeg -i file.ext -t 00:05:30 -c copy part1.ext -ss 00:05:30 -c copy part2.ext
=== Hardware acceleration ===

{{Expansion|Missing VDPAU, Intel QSV.}}

{{Warning|Encoding may fail when using hardware acceleration; use software encoding as a fallback.}}

Encoding performance may be improved by using [https://trac.ffmpeg.org/wiki/HWAccelIntro hardware acceleration APIs]; however, only certain codecs are supported, and the result may not always be identical to software encoding.

==== VA-API ====

[[VA-API]] can be used for encoding and decoding on Intel CPUs (requires {{Pkg|libva-intel-driver}}) and on certain AMD GPUs when using the open-source [[AMDGPU]] driver (requires {{Pkg|libva-mesa-driver}}).

See the following [https://gist.github.com/Brainiarc7/95c9338a737aa36d9bb2931bed379219 GitHub gist] and [https://wiki.libav.org/Hardware/vaapi Libav documentation] for information about available parameters and supported platforms.

An example of encoding using the supported H.264 codec:

 $ ffmpeg -threads 1 -i file.ext -vaapi_device /dev/dri/renderD128 -vcodec h264_vaapi -vf format='nv12|vaapi,hwupload' output.mp4

==== Nvidia NVENC ====

NVENC can be used for encoding when using the proprietary [[NVIDIA]] driver with the {{Pkg|nvidia-utils}} package installed. Minimum supported GPUs are from the 600 series (see [[w:Nvidia NVENC]] and [[Hardware video acceleration#Formats]]).

See [https://gist.github.com/Brainiarc7/8b471ff91319483cdb725f615908286e this gist] for some techniques. NVENC is somewhat similar to [[CUDA]], thus it works even from a terminal session. Depending on the hardware, NVENC is several times faster than Intel's VA-API encoders.

To print available options, execute ({{ic|hevc_nvenc}} may also be available):

 $ ffmpeg -help encoder=h264_nvenc

Example usage:

 $ ffmpeg -i source.ext -c:v h264_nvenc -rc constqp -qp 28 output.mkv

==== Nvidia NVDEC ====

NVDEC can be used for decoding when using the proprietary [[NVIDIA]] driver with the {{Pkg|nvidia-utils}} package installed. Minimum supported GPUs are from the 600 series (see [[w:Nvidia NVDEC]] and [[Hardware video acceleration#Formats]]).
  
 
== Preset files ==

Populate {{ic|~/.ffmpeg}} with the default [http://ffmpeg.org/ffmpeg.html#Preset-files preset files]:

 $ cp -iR /usr/share/ffmpeg ~/.ffmpeg
:* {{ic|-aq 8}} = 256 kb/s

* [http://www.geocities.jp/aoyoume/aotuv/ aoTuV] is generally preferred over [http://vorbis.com/ libvorbis] provided by [http://www.xiph.org/ Xiph.Org] and is provided by {{AUR|libvorbis-aotuv}}.
== FFserver ==

The FFmpeg package includes FFserver, which can be used to stream media over a network. To use it, you first need to create the config file {{ic|/etc/ffserver.conf}} to define your ''feeds'' and ''streams''. Each feed specifies how the media will be sent to ffserver, and each stream specifies how a particular feed will be transcoded for streaming over the network. You can start with the [https://www.ffmpeg.org/sample.html sample configuration file] or check {{man|1|ffserver}}{{Dead link|2018|05|19}} for feed and stream examples. Here is a simple configuration file for streaming flash video:

{{hc|/etc/ffserver.conf|<nowiki>HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000
CustomLog -

<Feed av_feed.ffm>
        File /tmp/av_feed.ffm
        FileMaxSize 1G
        ACL allow 127.0.0.1
</Feed>

<Stream av_stream.flv>
        Feed av_feed.ffm
        Format flv

        VideoCodec libx264
        VideoFrameRate 25
        VideoSize hd1080
        VideoBitRate 400
        AVOptionVideo qmin 10
        AVOptionVideo qmax 42
        AVOptionVideo flags +global_header

        AudioCodec libmp3lame
        AVOptionAudio flags +global_header

        Preroll 15
</Stream>

<Stream stat.html>
        Format status
        ACL allow localhost
        ACL allow 192.168.0.0 192.168.255.255
</Stream>

<Redirect index.html>
        URL http://www.ffmpeg.org/
</Redirect></nowiki>}}

Once you have created your config file, you can start the server and send media to your feeds. For the previous config example, this would look like:

 $ ffserver &
 $ ffmpeg -i myvideo.mkv <nowiki>http://localhost:8090/av_feed.ffm</nowiki>

You can then stream your media using the URL {{ic|<nowiki>http://yourserver.net:8090/av_stream.flv</nowiki>}}.

== Tips and tricks ==

=== Output the duration of a video ===

$ ffprobe -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 file.ext
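ffprobe prints the duration in seconds. If HH:MM:SS output is preferred, the value can be reformatted with plain shell arithmetic; a sketch, with 5025 standing in for the probed value:

```shell
# Convert a duration in whole seconds to HH:MM:SS.
s=5025   # in practice: the integer part of the ffprobe output above
printf '%02d:%02d:%02d\n' $((s / 3600)) $((s % 3600 / 60)) $((s % 60))   # prints 01:23:45
```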
 +
 
 +
=== Output stream information as JSON ===
 +
 
 +
$ ffprobe -v quiet -print_format json -show_format -show_streams file.ext
  
=== Create a screenshot of the video every X seconds ===

$ ffmpeg -i file.ext -an -s 319x180 -vf fps=1/'''100''' -qscale:v 75 %03d.jpg
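With {{ic|1=fps=1/100}}, one image is emitted per 100 seconds of video, so the number of files to expect is roughly the duration divided by that interval. A sketch, assuming a hypothetical 5025-second video:

```shell
# Expected number of thumbnails = ceil(duration / interval)
duration_s=5025
interval_s=100
echo $(( (duration_s + interval_s - 1) / interval_s ))   # prints 51
```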
  
 
== See also ==
Two-pass x264 (very high-quality)

Audio deactivated as only video statistics are recorded during the first of multiple pass runs:

$ ffmpeg -i video.VOB -an -vcodec libx264 -pass 1  -preset veryslow \
-threads 0 -b 3000k -x264opts frameref=15:fast_pskip=0 -f rawvideo -y /dev/null

Container format is automatically detected and muxed into from the output file extenstion (.mkv):

$ ffmpeg -i video.VOB -acodec libvo-aacenc -ab 256k -ar 96000 -vcodec libx264 \
-pass 2 -preset veryslow -threads 0 -b 3000k -x264opts frameref=15:fast_pskip=0 video.mkv
Tip: If you receive Unknown encoder 'libvo-aacenc' error (given the fact that your ffmpeg is compiled with libvo-aacenc enabled), you may want to try -acodec libvo_aacenc, an underscore instead of hyphen.

Two-pass MPEG-4 (very high-quality)

Audio deactivated as only video statistics are logged during the first of multiple pass runs:

$ ffmpeg -i video.VOB -an -vcodec mpeg4 -pass 1 -mbd 2 -trellis 2 -flags +cbp+mv0 \
-pre_dia_size 4 -dia_size 4 -precmp 4 -cmp 4 -subcmp 4 -preme 2 -qns 2 -b 3000k \
-f rawvideo -y /dev/null

Container format is automatically detected and muxed into from the output file extenstion (.avi):

$ ffmpeg -i video.VOB -acodec copy -vcodec mpeg4 -vtag DX50 -pass 2 -mbd 2 -trellis 2 \
-flags +cbp+mv0 -pre_dia_size 4 -dia_size 4 -precmp 4 -cmp 4 -subcmp 4 -preme 2 -qns 2 \
-b 3000k video.avi
  • Introducing threads=n>1 for -vcodec mpeg4 may skew the effects of motion estimation and lead to reduced video quality and compression efficiency.
  • The two-pass MPEG-4 example above also supports output to the MP4 container (replace .avi with .mp4).

Determining bitrates with fixed output file sizes

  • (Desired File Size in MB - Audio File Size in MB) x 8192 kb/MB / Length of Media in Seconds (s) = Bitrate in kb/s
  • Example: (3900 MB - 275 MB) x 8192 kb/MB / 8830 s = 3363 kb/s, required to achieve an approximate total output file size of 3900 MB
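The calculation above can be scripted; this is a minimal sketch using awk for the arithmetic, with the example figures from this section:

```shell
# Worked example of the bitrate formula:
# (desired size MB - audio size MB) x 8192 kb/MB / duration s = bitrate kb/s
total_mb=3900   # desired total output size in MB
audio_mb=275    # size of the audio stream in MB
length_s=8830   # media duration in seconds

bitrate_kbps=$(awk -v t="$total_mb" -v a="$audio_mb" -v s="$length_s" \
    'BEGIN { printf "%d", (t - a) * 8192 / s }')
echo "target video bitrate: ${bitrate_kbps} kb/s"
```

The result (3363 kb/s) can then be passed to ffmpeg as, e.g., -b 3363k.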

x264 video stabilization

Video stabilization using the vid.stab plugin entails two passes.

First pass

The first pass records stabilization parameters to a file and/or a test video for visual analysis.

  • Records stabilization parameters to a file only
$ ffmpeg -i input -vf vidstabdetect=stepsize=4:mincontrast=0:result=transforms.trf -f null -
  • Records stabilization parameters to a file and creates a test video "output-stab.mkv" for visual analysis
$ ffmpeg -i input -vf vidstabdetect=stepsize=4:mincontrast=0:result=transforms.trf output-stab.mkv

Second pass

The second pass parses the stabilization parameters generated during the first pass and applies them to produce "output-stab_final". You will want to apply any additional filters at this point, to avoid subsequent transcoding and preserve as much video quality as possible. The following example performs these operations in addition to video stabilization:

  • unsharp is recommended by the author of vid.stab. Here we are simply using the defaults of 5:5:1.0:5:5:1.0
  • Tip: fade=t=in:st=0:d=4
    fade in from black starting from the beginning of the file for four seconds
  • Tip: fade=t=out:st=60:d=4
    fade out to black starting from sixty seconds into the video for four seconds
  • -c:a pcm_s16le: the XAVC-S codec records audio as pcm_s16be, which is losslessly transcoded here to pcm_s16le
$ ffmpeg -i input -vf vidstabtransform=smoothing=30:interpol=bicubic:input=transforms.trf,unsharp,fade=t=in:st=0:d=4,fade=t=out:st=60:d=4 -c:v libx264 -tune film -preset veryslow -crf 8 -x264opts fast_pskip=0 -c:a pcm_s16le output-stab_final

Subtitles

Extracting

Subtitles embedded in container files, such as MPEG-2 and Matroska, can be extracted and converted into SRT, SSA, or other subtitle formats.

  • Inspect a file to determine if it contains a subtitle stream:
$ ffprobe foo.mkv
...
Stream #0:0(und): Video: h264 (High), yuv420p, 1920x800 [SAR 1:1 DAR 12:5], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
  Metadata:
  CREATION_TIME   : 2012-06-05 05:04:15
  LANGUAGE        : und
Stream #0:1(und): Audio: aac, 44100 Hz, stereo, fltp (default)
 Metadata:
 CREATION_TIME   : 2012-06-05 05:10:34
 LANGUAGE        : und
 HANDLER_NAME    : GPAC ISO Audio Handler
Stream #0:2: Subtitle: ssa (default)
  • foo.mkv has an embedded SSA subtitle which can be extracted into an independent file:
$ ffmpeg -i foo.mkv foo.ssa

When dealing with multiple subtitles, you may need to specify the stream that needs to be extracted using the -map <key>:<stream> parameter:

$ ffmpeg -i foo.mkv -map 0:2 foo.ssa

Hardsubbing

(instructions based on an FFmpeg wiki article)

Hardsubbing entails merging subtitles with the video. Hardsubs cannot be disabled, nor can the language be switched.

  • Overlay foo.mpg with the subtitles in foo.ssa:
$ ffmpeg -i foo.mpg -c:a copy -vf subtitles=foo.ssa out.mpg

Volume gain

Change the audio volume in multiples of 256, where 256 = 100% (normal) volume. Intermediate values such as 400 are also valid.

-vol 256  = 100%
-vol 512  = 200%
-vol 768  = 300%
-vol 1024 = 400%
-vol 2048 = 800%

To double the volume (512 = 200%) of an MP3 file:

$ ffmpeg -i file.mp3 -vol 512 louder_file.mp3

To quadruple the volume (1024 = 400%) of an Ogg file:

$ ffmpeg -i file.ogg -vol 1024 louder_file.ogg

Note that, unlike mp3gain or ogggain, the volume change is applied by re-encoding the audio into the output file; the source sound file is untouched.
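Since the -vol scale is simply 256 = 100%, a desired percentage can be converted with shell arithmetic; this is a small sketch (the percentage value is an example):

```shell
# Convert a desired volume percentage into the 256-based -vol scale.
percent=200                     # desired volume as a percentage of normal
vol=$(( percent * 256 / 100 ))  # 256 = 100%, so 200% -> 512
echo "ffmpeg -i file.mp3 -vol $vol louder_file.mp3"
```

The printed command line matches the "double the volume" example above.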

Extracting audio

$ ffmpeg -i video.mpg
...
Input #0, avi, from 'video.mpg':
  Duration: 01:58:28.96, start: 0.000000, bitrate: 3000 kb/s
    Stream #0.0: Video: mpeg4, yuv420p, 720x480 [PAR 1:1 DAR 16:9], 29.97 tbr, 29.97 tbn, 29.97 tbc
    Stream #0.1: Audio: ac3, 48000 Hz, stereo, s16, 384 kb/s
    Stream #0.2: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
    Stream #0.3: Audio: dts, 48000 Hz, 5.1 768 kb/s
...

Extract the first (-map 0:1) AC-3 encoded audio stream exactly as it was multiplexed into the file:

$ ffmpeg -i video.mpg -map 0:1 -acodec copy -vn video.ac3

Convert the third (-map 0:3) DTS audio stream to an AAC file with a bitrate of 192 kb/s and a sampling rate of 96000 Hz:

$ ffmpeg -i video.mpg -map 0:3 -acodec libvo-aacenc -ab 192k -ar 96000 -vn output.aac

-vn disables the processing of the video stream.

Extract audio stream with certain time interval:

$ ffmpeg -ss 00:01:25 -t 00:00:05 -i video.mpg -map 0:1 -acodec copy -vn output.ac3

-ss specifies the start point, and -t specifies the duration.

Stripping audio

  1. Copy the first video stream (-map 0:0) along with the second AC-3 audio stream (-map 0:2).
  2. Convert the AC-3 audio stream to two-channel MP3 with a bitrate of 128 kb/s and a sampling rate of 48000 Hz.
$ ffmpeg -i video.mpg -map 0:0 -map 0:2 -vcodec copy -acodec libmp3lame \
-ab 128k -ar 48000 -ac 2 video.mkv
$ ffmpeg -i video.mkv
...
Input #0, avi, from 'video.mpg':
  Duration: 01:58:28.96, start: 0.000000, bitrate: 3000 kb/s
    Stream #0.0: Video: mpeg4, yuv420p, 720x480 [PAR 1:1 DAR 16:9], 29.97 tbr, 29.97 tbn, 29.97 tbc
    Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s
Note: Removing undesired audio streams allows for additional bits to be allocated towards improving video quality.

Splitting files

You can use the copy codec to perform operations on a file without changing the encoding. For example, this allows you to easily split any kind of media file into two:

$ ffmpeg -i file.ext -t 00:05:30 -c copy part1.ext -ss 00:05:30 -c copy part2.ext

Hardware acceleration

This article or section needs expansion. (Reason: Missing VDPAU, Intel QSV.)
Warning: Encoding may fail when using hardware acceleration, use software encoding as a fallback.

Encoding performance may be improved by using hardware acceleration APIs. However, only specific codecs are supported, and the result may not always match that of software encoding.

VA-API

VA-API can be used for encoding and decoding on Intel CPUs (requires libva-intel-driver) and on certain AMD GPUs when using the open-source AMDGPU driver (requires libva-mesa-driver). See the following GitHub gist and Libav documentation for information about available parameters and supported platforms.

An example of encoding using the supported H.264 codec:

$ ffmpeg -threads 1 -i file.ext -vaapi_device /dev/dri/renderD128 -vcodec h264_vaapi -vf format='nv12|vaapi,hwupload' output.mp4

Nvidia NVENC

NVENC can be used for encoding when using the proprietary NVIDIA driver with the nvidia-utils package installed. Minimum supported GPUs are from 600 series (see w:Nvidia NVENC and Hardware video acceleration#Formats).

See this gist for some techniques. NVENC is somewhat similar to CUDA, so it works even from a terminal session. Depending on the hardware, NVENC can be several times faster than Intel's VA-API encoders.

To print available options execute (hevc_nvenc may also be available):

$ ffmpeg -help encoder=h264_nvenc

Example usage:

$ ffmpeg -i source.ext -c:v h264_nvenc -rc constqp -qp 28 output.mkv

Nvidia NVDEC

NVDEC can be used for decoding when using the proprietary NVIDIA driver with the nvidia-utils package installed. Minimum supported GPUs are from 600 series (see w:Nvidia NVDEC and Hardware video acceleration#Formats).

Preset files

Populate ~/.ffmpeg with the default preset files:

$ cp -iR /usr/share/ffmpeg ~/.ffmpeg

Create new and/or modify the default preset files:

~/.ffmpeg/libavcodec-vhq.ffpreset
vtag=DX50
mbd=2
trellis=2
flags=+cbp+mv0
pre_dia_size=4
dia_size=4
precmp=4
cmp=4
subcmp=4
preme=2
qns=2

Using preset files

Specify the -vpre option after declaring the desired -vcodec. Preset file names follow this pattern:

libavcodec-vhq.ffpreset

  • libavcodec = Name of the vcodec/acodec
  • vhq = Name of specific preset to be called out
  • ffpreset = FFmpeg preset filetype suffix
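Following the naming scheme described above, a custom preset can be created by hand; the preset name myhq below is a hypothetical example, and the option values are taken from the libavcodec-vhq preset shown earlier:

```shell
# Hypothetical example: create a custom preset file named "myhq",
# following the <codec>-<preset>.ffpreset naming scheme.
mkdir -p ~/.ffmpeg
cat > ~/.ffmpeg/libavcodec-myhq.ffpreset <<'EOF'
mbd=2
trellis=2
EOF
# It could then be selected with: -vcodec mpeg4 -vpre myhq
```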
Two-pass MPEG-4 (very high quality)

First pass of a multipass (bitrate) ratecontrol transcode:

$ ffmpeg -i video.mpg -an -vcodec mpeg4 -pass 1 -vpre vhq -f rawvideo -y /dev/null

Ratecontrol based on the video statistics logged from the first pass:

$ ffmpeg -i video.mpg -acodec libvorbis -aq 8 -ar 48000 -vcodec mpeg4 \
-pass 2 -vpre vhq -b 3000k output.mp4
  • libvorbis quality settings (VBR)
  • -aq 4 = 128 kb/s
  • -aq 5 = 160 kb/s
  • -aq 6 = 192 kb/s
  • -aq 7 = 224 kb/s
  • -aq 8 = 256 kb/s

FFserver

The FFmpeg package includes FFserver, which can be used to stream media over a network. To use it, you first need to create the config file /etc/ffserver.conf to define your feeds and streams. Each feed specifies how the media will be sent to ffserver and each stream specifies how a particular feed will be transcoded for streaming over the network. You can start with the sample configuration file or check ffserver(1)[dead link 2018-05-19] for feed and stream examples. Here is a simple configuration file for streaming flash video:

/etc/ffserver.conf
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000
CustomLog -

<Feed av_feed.ffm>
        File /tmp/av_feed.ffm
        FileMaxSize 1G
        ACL allow 127.0.0.1
</Feed>

<Stream av_stream.flv>
        Feed av_feed.ffm
        Format flv

        VideoCodec libx264
        VideoFrameRate 25
        VideoSize hd1080
        VideoBitRate 400
        AVOptionVideo qmin 10
        AVOptionVideo qmax 42
        AVOptionVideo flags +global_header

        AudioCodec libmp3lame
        AVOptionAudio flags +global_header

        Preroll 15
</Stream>

<Stream stat.html>
        Format status
        ACL allow localhost
        ACL allow 192.168.0.0 192.168.255.255
</Stream>

<Redirect index.html>
        URL http://www.ffmpeg.org/
</Redirect>

Once you have created your config file, you can start the server and send media to your feeds. For the previous config example, this would look like

$ ffserver &
$ ffmpeg -i myvideo.mkv http://localhost:8090/av_feed.ffm

You can then stream your media using the URL http://yourserver.net:8090/av_stream.flv.

Tips and tricks

Output the duration of a video

$ ffprobe -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 file.ext

Output stream information as JSON

$ ffprobe -v quiet -print_format json -show_format -show_streams file.ext

Create a screenshot of the video every X frames

$ ffmpeg -i file.ext -an -s 319x180 -vf fps=1/100 -qscale:v 75 %03d.jpg

See also