
Cnblog original address

0. Know FFmpeg

0.1 What is FFmpeg?

FFmpeg is a leading open-source multimedia framework that can handle almost any multimedia file in a wide variety of ways. It includes libavcodec, the audio/video codec library used by many other projects, and libavformat, a library for reading and writing audio/video container formats.

0.2 composition of FFmpeg

Command-line tools:

  • ffmpeg: converts video or audio files between formats.
  • ffplay: a simple media player built on SDL and the FFmpeg libraries.
  • ffprobe: displays information about media files.

Libraries:

  • libswresample: audio resampling library
  • libavresample: legacy audio resampling library (superseded by libswresample)
  • libavcodec: contains all the FFmpeg audio/video encoders and decoders
  • libavformat: contains the demuxers and muxers for container formats
  • libavutil: shared utility routines
  • libpostproc: video post-processing library
  • libswscale: image scaling and pixel-format conversion library
  • libavfilter: audio/video filtering library

0.3 Installing FFmpeg on Linux

$ sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next   # PPA needed only on older Ubuntu releases
$ sudo apt update
$ sudo apt install ffmpeg

Check whether the installation is successful

$ ffmpeg -version

If information similar to the following is displayed, the installation is successful:

ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)

1. Basic use

1.1 Basic Concepts

Before you can use FFmpeg, there are a few very basic concepts that need to be understood. These concepts will help you understand and use FFmpeg as a tool.

Multimedia file

At the computer level, a multimedia file consists of a container and streams. Streams fall into several categories, such as audio streams and video streams, which is why multimedia was historically also called streaming media. Streams are encoded (compressed) when they are produced and decoded when they are read. There are many codecs tuned for different trade-offs (file size, video quality, and so on), each with its own strengths and weaknesses.

A container is what holds these streams together. It exposes a single interface through which a media player reads and writes the file. There are many container implementations, and some can hold several streams at once: subtitles, audio, and video in the same file, for example.

When people talk about audio/video format conversion, they are really converting containers.

1.2 Basic Syntax

$ ffmpeg [global_options] {[input_file_options] -i input_url} ... {[output_file_options] output_url} ...

This is FFmpeg's basic command syntax. Run ffmpeg -h for the options, or start with the overview in section 3 (More) below.

FFmpeg is pretty smart and ships with many sensible defaults: even if you don't specify all kinds of complicated parameters, it can usually pick the correct codecs and container for you behind the scenes.

For example, suppose you want to convert an MP3 file to an OGG file, with the following basic instructions:

$ ffmpeg -i input.mp3 output.ogg

Or convert MP4 to WEBM

$ ffmpeg -i input.mp4 output.webm

This works because for strictly defined formats like WebM, FFmpeg knows which audio and video codecs are allowed and how to handle them, and can convert the streams into a valid WebM file automatically.

Other containers are different. Matroska (.mkv), for example, is designed to hold almost any kind of stream, so FFmpeg has no way to know what you are trying to accomplish, and the following command may not produce the .mkv file you want:

$ ffmpeg -i input.mp4 output.mkv

So you need to configure some processing so that FFmpeg knows what to do.

1.3 Choosing Your Codecs

FFmpeg provides the -c option for specifying codecs. It lets you choose a codec for each stream individually. For example:

$ ffmpeg -i input.mp3 -c:a libvorbis output.ogg
$ ffmpeg -i input.mp4 -c:v vp9 -c:a libvorbis output.mkv

The second command makes a Matroska container whose video stream is encoded with VP9 and whose audio stream is encoded with Vorbis.

Running ffmpeg -codecs lists all the codecs FFmpeg supports.

1.4 Modifying a Single Stream

As mentioned earlier, containers generally support multiple types of streams. FFmpeg supports modifying a stream individually, for example:

$ ffmpeg -i input.webm -c:v copy -c:a flac output.mkv

This instruction copies the video stream directly from input.webm into the new container output.mkv, and re-encodes the audio stream as FLAC.

1.5 Modifying Container (Container/Format)

As mentioned above, we can simply change the container, in effect, converting the format:

$ ffmpeg -i input.webm -c copy output.mkv

This copies the audio and video streams of input.webm unchanged into the new container output.mkv. The conversion is lossless, because no stream-level changes are made.

1.6 Set the quality for each stream

When converting multimedia files, we often care about the quality of the result, so how do we control the quality of a stream?

Bit rate

The simplest knob is the bitrate (also written "bit rate"): the number of bits used to store each second of media, which is also the rate at which the data must be read back.

Suppose a 90-minute movie has a video bitrate of 600 kbps. Its video size is then about 395 MB: 600 kbit/s = 75 KB/s, 90 min = 5400 s, and 75 KB/s × 5400 s = 405,000 KB ≈ 395 MB, so roughly 400+ MB once audio is added. At the same resolution, a higher bitrate (and thus a larger file) generally means better video quality, though other factors matter too. Beyond a certain value, raising the bitrate barely improves the picture, so choosing an appropriate value is important.
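The arithmetic above can be sketched in a few lines of shell (pure POSIX arithmetic; the numbers are the example's, not measured from a real file):

```shell
# Estimate video size from bitrate and duration (illustrative numbers).
bitrate_kbps=600            # 600 kbit/s
duration_s=$((90 * 60))     # 90 minutes = 5400 s
bytes_per_s=$((bitrate_kbps / 8))        # 600 kbit/s = 75 KB/s
size_kb=$((bytes_per_s * duration_s))    # 75 KB/s * 5400 s = 405000 KB
size_mb=$((size_kb / 1024))              # ~395 MB
echo "${size_kb} KB = ~${size_mb} MB"
```

Swap in your own bitrate and duration to predict an output size before transcoding.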

To set the bitrate of a stream, use the -b option. As with -c, the stream specifier follows after a colon. For example:

$ ffmpeg -i input.webm -c:a copy -c:v vp9 -b:v 1M output.mkv

This copies the audio encoding from input.webm (-c:a copy), re-encodes the video as VP9 (-c:v vp9) at a bitrate of 1 Mbit/s (-b:v 1M), and packages the output into a Matroska container (output.mkv). To set the audio bitrate instead, use -b:a.

Frame rate

$ ffmpeg -i input.webm -c:a copy -c:v vp9 -r 30 output.mkv

This copies the audio encoding from input.webm, re-encodes the video as VP9 at a frame rate of 30 fps (-r 30), and writes the result to output.mkv.

Dimensions (size/resolution)

You can use FFmpeg to resize your video. The easiest way is to use a predefined video size, for example converting a video to 720p:

$ ffmpeg -i input.mkv -c:a copy -s hd720 output.mkv

You can also customize the screen width and height:

$ ffmpeg -i input.mkv -c:a copy -s 1280x720 output.mkv

:warning: The screen size is specified with -s in the format -s <width>x<height>.

FFmpeg has many predefined video sizes; see Attachment 1 at the end of this article.

1.7 Cutting (trimming) multimedia files

If you need to cut a video precisely, it is probably easier to use video-editing software, but if you already know where to cut, FFmpeg can do it with a single command.

$ ffmpeg -i input.mkv -c copy -ss 00:01:00 -t 10 output.mkv

This copies both stream encodings from input.mkv (-c copy), starts at 00:01:00 (-ss 00:01:00), keeps the following 10 seconds (-t 10), and writes that 10-second clip to output.mkv.
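To see where an -ss plus -t cut will end, converting the timestamp to seconds is simple shell arithmetic. This is a hypothetical helper, not part of FFmpeg:

```shell
# Hypothetical helper: HH:MM:SS -> seconds, to compute where -ss + -t ends.
to_seconds() {
  IFS=: read -r h m s <<EOF
$1
EOF
  # strip one leading zero so components like 08 are not read as octal
  echo $(( ${h#0} * 3600 + ${m#0} * 60 + ${s#0} ))
}

start=$(to_seconds 00:01:00)   # -ss 00:01:00 -> 60 s
end=$(( start + 10 ))          # -t 10 -> the clip ends at 70 s
echo "clip runs from ${start}s to ${end}s"
```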

1.8 Audio Extraction

$ ffmpeg -i input.mkv -vn outputaudio.ogg

The -vn option disables video, so only audio is processed. No audio codec is specified here, so the Vorbis encoder is used by default for the .ogg output: the audio from input.mkv is written to outputaudio.ogg as Vorbis.

1.9 Video to GIF

One fun use of FFmpeg is converting a video into a GIF.

$ ffmpeg -i input.mkv output.gif

When converting to GIF, the file size deserves attention. The simple command above preserves as much detail of the source video as possible. In my test, a 15-minute, 70 MB video became a 513 MB GIF that only Firefox could play smoothly; ordinary image viewers could not handle it. In general, GIF conversion is best suited to short clips.


2. Common examples

2.0 Viewing file information

View file details:

$ ffprobe -i abc.MOV -hide_banner
$ ffmpeg -i abc.MOV -hide_banner

2.1 Viewing supported decoders and encoders

$ ffmpeg -decoders
$ ffmpeg -encoders

2.2 Common format conversion

2.2.1 video:

mp4 -> webm

$ ffmpeg -i input.mp4 output.webm

mov -> mp4

$ ffmpeg -i input.mov output.mp4

Specify encoding format:

$ ffmpeg -i input.mov -vcodec h264 -acodec aac output.mp4

mp4 -> flv

$ ffmpeg -i input.mp4 -acodec copy -vcodec copy -f flv output.flv

2.2.2 audio:

$ ffmpeg -i input.mp3 output.ogg

Specify encoding format:

$ ffmpeg -i input.mp3 -c:a libopus output.ogg

2.3 Video rotation

$ ffmpeg -i input.mp4 -metadata:s:v rotate="90" -c copy output.mp4

2.4 Setting video bit rate

$ ffmpeg -i input.avi -b:v 64k  output.avi

2.5 Setting the Video Frame Rate

$ ffmpeg -i input.avi -r 24 output.avi

Force the frame rate of the input file (raw format only) to be 1 FPS and the frame rate of the output file to be 24 FPS:

$ ffmpeg -r 1 -i input.m2v -r 24 output.avi

2.6 Modify the screen size

$ ffmpeg -i input.mp4 -vcodec mpeg4 -b:v 1000k -r 10 -g 300 -s 800x600 output.mp4

2.7 Limit file size

$ ffmpeg -i input.mp4 -fs 70M output.mp4

This capability looks impressive, but it can be very problematic. It does produce a file of the requested size, but if the value is too small, FFmpeg simply stops writing once the limit is reached and truncates the output. For an 80 MB file, specifying a 10 MB output might shrink a 15-minute video to a few dozen seconds. If you want to reduce a video file's size, the effective levers are resolution, bitrate, and frame rate; see section 1.6 above.
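Instead of letting -fs truncate the file, you can work backwards from the target size to a video bitrate and pass that to -b:v. A rough sketch (decimal units, illustrative numbers, audio overhead ignored):

```shell
# Back out the video bitrate needed to land near a target file size.
target_mb=70
duration_s=$((15 * 60))                   # a 15-minute video
target_kbit=$((target_mb * 8 * 1000))     # 70 MB -> 560000 kbit
bitrate_kbps=$((target_kbit / duration_s))
echo "try: ffmpeg -i input.mp4 -b:v ${bitrate_kbps}k output.mp4"
```

In practice, leave some headroom below the computed value so the audio stream and container overhead fit within the target.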

3. More (Learn more about FFMPEG and use)

FFmpeg is very powerful and can perform almost any file-level operation on multimedia files. That also makes it complicated: the official documentation alone is divided into:

  • Command Line Tools Documentation

  • Libraries Documentation

  • API Documentation

  • Components Documentation

  • General Documentation

  • Community Contributed Documentation

But for the average non-professional user, the most common task is video transcoding, so I will focus on the basic command-line operations related to video. The relevant parts of the documentation are:

Command Line Tools Documentation
└── ffmpeg Documentation
    ├── 1. Synopsis
    └── 5. Options
        ├── 5.2 Generic Options
        ├── 5.4 Main Options
        ├── 5.5 Video Options
        └── 5.6 Advanced Video Options

3.1 Syntax Format

$ ffmpeg [global_options] {[input_file_options] -i input_url} ... {[output_file_options] output_url} ...

Brief description: contents wrapped in [] are optional.

3.2 Transcoding process Diagram:

 _______              ______________
|       |            |              |
| input |  demuxer   | encoded data |   decoder
| file  | ---------> | packets      | -----+
|_______|            |______________|      |
                                           v
                                       _________
                                      |         |
                                      | decoded |
                                      | frames  |
                                      |_________|
 ________             ______________       |
|        |           |              |      |
| output | <-------- | encoded data | <----+
| file   |   muxer   | packets      |   encoder
|________|           |______________|



Brief description: FFmpeg calls the libavformat library (which contains the demuxers and muxers; a demuxer separates the audio and video streams) to read the input file and obtain packets of encoded data. If there are multiple input files, FFmpeg tries to read them in sync based on their timestamps.

The encoded packets are then passed to the decoder, which produces uncompressed frames. After optional filtering, the frames go to the encoder, which compresses them back into encoded packets. Finally, the muxer recombines the audio and video packets and writes them to the output file.

About filters

Before re-encoding, the decoded frames can be filtered. Filtering applies additional processing described by a filtergraph; depending on its inputs and outputs, a filtergraph is either simple or complex. Filters come from the libavfilter library.

About stream replication and stream selection

An important concept in FFmpeg is the stream: processing video really means processing the video stream, and processing audio means processing the audio stream.

Stream copy is selected by passing copy as the "codec" to the -codec option. It makes FFmpeg skip decoding and encoding for the selected streams, performing only demuxing and muxing: the audio and video are split apart on read and repackaged on write. This is useful when you only need to change the container format or container-level metadata. In FFmpeg, a video file's format is its container format: abc.mp4 and abc.mov may hold identical streams in different containers, so no decoding or encoding is needed at all; demuxing on read and muxing on write into the new container completes the format conversion.

Something like this:

 _______              ______________            ________
|       |            |              |          |        |
| input |  demuxer   | encoded data |  muxer   | output |
| file  | ---------> | packets      | -------> | file   |
|_______|            |______________|          |________|

Since there is no decode -> encode step, stream copy is very fast and loses no quality. However, it can fail for many reasons, and filters cannot be used with it, because filters operate on decoded (uncompressed) data.

Stream selection: as mentioned, FFmpeg really operates on streams, so it usually needs to select which streams to process first. FFmpeg provides the -map option for manually selecting the streams for each output file; without -map, FFmpeg selects streams automatically. The -vn / -an / -sn / -dn options disable automatic selection of video, audio, subtitle, and data streams respectively.

3.3 Options

Generally, numeric options accept a number as input, optionally followed by a unit suffix such as 'K', 'M', or 'G' (with binary variants like 'Ki' and 'Mi'), for example 200M.
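To get a feel for these suffixes, here is a toy expansion of a value like 200M (SI, powers of ten; FFmpeg's 'i' variants such as Mi use powers of two, which this sketch does not handle):

```shell
# Toy expansion of K/M/G numeric suffixes (SI, powers of ten only).
expand_suffix() {
  case $1 in
    *K) echo $(( ${1%K} * 1000 )) ;;
    *M) echo $(( ${1%M} * 1000000 )) ;;
    *G) echo $(( ${1%G} * 1000000000 )) ;;
    *)  echo "$1" ;;            # no suffix: pass through unchanged
  esac
}

expand_suffix 200M    # 200000000
expand_suffix 128K    # 128000
```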

If an option takes no argument, it is boolean: specifying it sets it to true. To set it to false, prefix the option name with "no". For example, -foo sets foo = true, and -nofoo sets foo = false.

3.3.1 Stream specifiers

Some options are applied per stream, such as bitrate or codec. A stream specifier states exactly which stream(s) a given option applies to.

A stream specifier is usually a colon-delimited string appended to the option name.

For example, -codec:a:1 ac3 contains the stream specifier a:1, which matches the second audio stream; the option assigns it the ac3 encoder.

A stream specifier can match multiple streams, in which case the option applies to all of them. For example, -b:a 128k matches every audio stream.

An empty stream specifier matches all streams; for example, -codec copy (or -codec: copy) copies every stream without re-encoding.

The possible form of a stream specifier is as follows:

  • stream_index: matches the stream with that index. For example, -threads:1 4 sets the thread count of the second stream to 4. If stream_index is combined with another stream specifier (see below), it selects the stream numbered stream_index from among the matching streams. Stream numbering is based on the order in which the streams are detected.

  • Stream_type [:additional_stream_specifier]: Stream_type is one of the following:

    stream_type  meaning
    v or V       video; v matches all video streams, while V matches only video streams that are not attached pictures, thumbnails, or cover art
    a            audio
    s            subtitle
    d            data
    t            attachment

    [:additional_stream_specifier] refers to optional

    If additional_stream_specifier is given, a stream matches only if it has the specified type and also satisfies the additional_stream_specifier condition. Otherwise, all streams of the specified type match.

    For example, -codec:a:1 ac3 contains the stream specifier a:1, which matches the second audio stream; the option assigns it the ac3 encoder.

  • And a few that are used less:

    • p:program_id[:additional_stream_specifier]

    • stream_id or i:stream_id

    • u
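Putting the pieces together: a per-stream option is just the option name with a colon-joined specifier appended. A small sketch (the helper name is made up, not an FFmpeg feature):

```shell
# Made-up helper: compose "<option>:<type>:<index> <value>".
opt_for_stream() {
  printf '%s:%s:%s %s\n' "$1" "$2" "$3" "$4"
}

opt_for_stream -b a 1 128k   # -b:a:1 128k  (bitrate of the second audio stream)
opt_for_stream -c v 0 vp9    # -c:v:0 vp9   (codec of the first video stream)
```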

3.3.2 General Options

There are many common options, and these options are shared between ff * tools. Most of the options are for help or assistance. Here are some possible options and some brief information.

  • -h, –help

    $ ffmpeg -h decoder=decoder_name
    $ ffmpeg -h protocol=protocol_name
  • -hide_banner Hide banner:

    By default, every ff* tool prints the FFmpeg copyright notice, version, and a pile of build information when it runs. Use -hide_banner to suppress this. For example:

    $ ffprobe -i abc.MOV # banners will be displayed
    $ ffprobe -i abc.MOV -hide_banner # Do not display banner information

    ffprobe is used to view file details.

# Simple list section
-L             Show license.
-h, -?, -help, --help [arg]
               Show help.
-version       Show version.
-buildconf     Show the build configuration, one option per line.
-formats       Show available formats (including devices).
-demuxers      Show available demuxers.
-muxers        Show available muxers.
-devices       Show available devices.
-codecs        Show all codecs known to libavcodec.
-decoders      Show available decoders.
-encoders      Show all available encoders.
-bsfs          Show available bitstream filters.
-protocols     Show available protocols.
-filters       Show available libavfilter filters.
-pix_fmts      Show available pixel formats.
-sample_fmts   Show available sample formats.
-layouts       Show channel names and standard channel layouts.
-colors        Show recognized color names.
-hide_banner   Suppress printing banner.
-nohide_banner Show banner.
...

For the rest of the generic options, see the official documentation; I won't go into more detail here.

3.3.3 Main options

  • -i: Sets the input file name.
  • -f: Sets the output format.
  • -y: Overwrites the output file if it already exists.
  • -fs: The conversion ends when the specified file size is exceeded.
  • -t: Specifies the duration of the output file, in seconds.
  • -ss: Converts from the specified time, in seconds.
  • -t used together with -ss: the duration counted from the -ss start time (e.g. -ss 00:00:01.00 -t 00:00:10.00 cuts from 00:00:01.00 to 00:00:11.00).
  • -title: Sets the title.
  • -timestamp: Sets the timestamp.
  • -vsync: Add or subtract Frame to synchronize audio and video.
  • -c: Specifies the encoding of the output file.
  • -metadata: Changes the metadata of the output file.

3.3.4 Video options

Image parameters:

  • -b:v: Sets the video bitrate; the default is 200 kbit/s.
  • -r[:stream_specifier] fps (input/output,per-stream): Sets the frame rate. The default is 25.
  • -fpsmax[:stream_specifier] fps (output,per-stream): Sets the maximum frame rate
  • -s[:stream_specifier] size (input/output,per-stream): Set the width and height of the screen.
  • -aspect[:stream_specifier] aspect (output,per-stream): Sets the scale of the screen.
  • -vn: Disables video; used when processing audio only.
  • -vcodec( -c:v ): Sets the image codec. If not, use the same codec as the input file.

Attachment 1

Predefined video size  Actual resolution
sqcif 128×96
qcif 176×144
cif 352×288
4cif 704×576
16cif 1408×1152
qqvga 160×120
qvga 320×240
vga 640×480
svga 800×600
xga 1024×768
uxga 1600×1200
qxga 2048×1536
sxga 1280×1024
qsxga 2560×2048
hsxga 5120×4096
wvga 852×480
wxga 1366×768
wsxga 1600×1024
wuxga 1920×1200
woxga 2560×1600
wqsxga 3200×2048
wquxga 3840×2400
whsxga 6400×4096
whuxga 7680×4800
cga 320×200
ega 640×350
hd480 852×480
hd720 1280×720
hd1080 1920×1080

Reference:

ffmpeg.org/ffmpeg.html

www.cnblogs.com/yuancr/p/72…

Opensource.com/article/17/…

Gist.github.com/ViktorNova/…

en.wikipedia.org/wiki/FFmpeg

Unix.stackexchange.com/questions/2…

Zhidao.baidu.com/question/17…

Zhidao.baidu.com/question/32…

Linux.die.net/man/1/ffmpe…
