FFmpeg 4.2.1 series

  • FFmpeg 4.2.1: CLion integration, Xcode integration, Android integration
  • FFmpeg 4.2.1: Decode - extract H264 and YUV video data

It's almost 2020, so this article is based on FFmpeg 4.2.1 and uses the latest version of the API. To hell with av_register_all()! The FFmpeg articles out there overwhelmingly target 3.x, and many of the methods they use are deprecated. Run that old code and the screen fills with yellow deprecation warnings; it's all dross. This article is adapted from the examples in the FFmpeg source code and simplified: null checks, return-value checks, error printing and the like are left out, so take care when you use it and add your own handling.


1. Tell a story

To give you an idea of what this article is about, here’s a quick story:

Jett has two guardian gods: the White Emperor and the Black Emperor, each a thousand feet long with wings that blot out the sun. The Black Emperor has good eyes; whatever it sees is recorded in its brain forever. But you can't carry around a monster tall enough to hold a cloud in its hand, and the hero won't call it out unless he has been beaten down to his last sliver of health. So how do you make the leads stand out? In the anime adaptation the two are moe-fied into a white-dressed girl and a black-dressed girl who protect the hero. Still, two girls hanging around are not easy to take home either, so both are sealed inside a pendant and summoned when needed. How to hit the BOSS: Jett holds the pendant --> summons the black and white girls --> the girls grow giant --> hit the BOSS.

The story above maps onto the following audio and video data concepts:

  • Very large raw data: audio PCM, video YUV --> the giant beasts, the White Emperor and the Black Emperor
  • Small data after encoding: audio AAC --> the white girl; video H264 --> the black girl
  • MP4, TS, AVI and other container formats: AAC + H264 --> the sealing pendant
  • How a player plays it: demux the container --> find the streams --> decode the streams --> render

The goal of this post is to summon the black girl (sh.h264) out of the pendant (sh.ts) and transform her into the Black Emperor (sh_768x432.yuv).

Why YUV counts as a monster can be seen from the numbers: 3 minutes and 30 seconds of video balloons to about 2.48 GB of YUV data. You might wonder why something as bloated as YUV exists at all. The answer is that the rendering layer needs YUV; it knows nothing about compressed data. Video is only decoded at playback time, and the fact that you can store hundreds of movies on your phone is thanks to the compressed formats.
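As a rough sanity check (the clip's frame rate is not given above, so 25 fps is an assumption here), raw yuv420p video costs 1.5 bytes per pixel per frame:

#include <cstdio>

int main() {
    double bytes_per_frame = 768 * 432 * 1.5;              // yuv420p: Y + U/4 + V/4 = 1.5 bytes per pixel
    double total = bytes_per_frame * 25 * (3 * 60 + 30);   // assumed 25 fps, for 3 min 30 s
    printf("%.2f GiB\n", total / (1024.0 * 1024 * 1024));  // prints about 2.43, close to the 2.48 GB above
}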


2. Minimal code

This section introduces the main flow. The comments are written in plain words; if you read them again and still don't understand, there is nothing more I can do.

#include <iostream>
#include <cstdio>

extern "C"
{
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
};
using namespace std;

int main() {
    const char *rec_path = "/Volumes/coder/Projects/Media/TolyFF/res/sh.ts";

    AVFormatContext *fmt_ctx = avformat_alloc_context();        // declare and allocate - the pendant (format context)
    avformat_open_input(&fmt_ctx, rec_path, nullptr, nullptr);  // open the seal
    avformat_find_stream_info(fmt_ctx, nullptr);                // read the stream info

    // find the index of the video stream
    int v_idx = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);

    AVCodecParameters *c_par;  // declare - the summoner's parameters
    AVCodecContext *cc_ctx;    // declare - the summoner's environment
    const AVCodec *codec;      // declare - the summoner

    c_par = fmt_ctx->streams[v_idx]->codecpar;      // take the parameters from the video stream
    codec = avcodec_find_decoder(c_par->codec_id);  // find the decoder by its id

    // instantiate the codec context, fill it with c_par, and open the codec
    cc_ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(cc_ctx, c_par);   // fill the summoner's environment with the parameters
    avcodec_open2(cc_ctx, codec, nullptr);          // open the decoder

    AVPacket *pkt = av_packet_alloc();   // allocate - the black girl (compressed packet)
    AVFrame *frame = av_frame_alloc();   // allocate - the Black Emperor (decoded frame)

    FILE *dst_f = fopen("/Volumes/coder/Projects/Media/TolyFF/res/sh.h264", "wb+");             // black girl entity: sh.h264
    FILE *dst_f_yuv = fopen("/Volumes/coder/Projects/Media/TolyFF/res/sh_768x432.yuv", "wb+");  // Black Emperor entity: sh_768x432.yuv

    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == v_idx) {
            avcodec_send_packet(cc_ctx, pkt);        // feed the compressed packet to the decoder
            fwrite(pkt->data, 1, pkt->size, dst_f);  // write the compressed H264 data
            if (avcodec_receive_frame(cc_ctx, frame) == 0) {  // a decoded frame is ready (yuv420p, no row padding assumed)
                fwrite(frame->data[0], 1, cc_ctx->width * cc_ctx->height, dst_f_yuv);          // Y plane
                fwrite(frame->data[1], 1, cc_ctx->width / 2 * cc_ctx->height / 2, dst_f_yuv);  // U plane
                fwrite(frame->data[2], 1, cc_ctx->width / 2 * cc_ctx->height / 2, dst_f_yuv);  // V plane
            }
        }
        av_packet_unref(pkt);  // release the packet data before reading the next one
    }

    fclose(dst_f);
    fclose(dst_f_yuv);
    avcodec_free_context(&cc_ctx);   // close the decoding environment
    av_frame_free(&frame);           // free the frame
    av_packet_free(&pkt);            // free the packet
    avformat_close_input(&fmt_ctx);  // close the input and free the format context
}

After running, two files are generated: the H264 file and the YUV file, both of which can be played with ffplay. They are a pure compressed video stream and a pure raw video stream, so there is no sound during playback. You may ask why bother decoding out these two streams: there is a magic trick called merging, and an operation called conversion. Young man, you know nothing about power.

ffplay sh.h264
ffplay -video_size 768x432 sh_768x432.yuv

3. Several obsolete methods and their replacements

Here is the old-versus-new comparison:

The registration method can simply be dropped:

attribute_deprecated
void av_register_all(void);

Getting the AVCodecContext from AVStream's codec field is also deprecated. Instead, take the AVCodecParameters from codecpar and create a context from those parameters, as in the code above.

cc_ctx = fmt_ctx->streams[v_idx]->codec;

    /**
     * @deprecated use the codecpar struct instead
     */
    attribute_deprecated
    AVCodecContext *codec;
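For contrast, this is the replacement pattern, the same calls already used in the minimal code above (variable names reused from there):

AVCodecParameters *c_par = fmt_ctx->streams[v_idx]->codecpar;  // parameters instead of the deprecated codec field
const AVCodec *codec = avcodec_find_decoder(c_par->codec_id);  // look up a decoder by id
AVCodecContext *cc_ctx = avcodec_alloc_context3(codec);        // allocate a fresh context
avcodec_parameters_to_context(cc_ctx, c_par);                  // fill it from the parameters
avcodec_open2(cc_ctx, codec, nullptr);                         // open the decoder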

The avcodec_decode_video2 method has been split in two: avcodec_send_packet() and avcodec_receive_frame(), which handle the packet and the frame respectively. In story terms: the packet carries the cute girl in, the frame is the awakened giant beast.

/**
 * @deprecated Use avcodec_send_packet() and avcodec_receive_frame().
 */
attribute_deprecated
int avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture,
                          int *got_picture_ptr,
                          const AVPacket *avpkt);
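The minimal code above calls avcodec_receive_frame() once per packet to stay short. A slightly more careful sketch (error handling still omitted) keeps receiving until the decoder asks for more input, since one packet can yield zero or several frames:

avcodec_send_packet(cc_ctx, pkt);                    // feed one compressed packet
while (avcodec_receive_frame(cc_ctx, frame) == 0) {  // 0 = a frame is ready; AVERROR(EAGAIN) = feed more input
    // use the decoded frame here, e.g. write its YUV planes
}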

4. Several major structures

AVFormatContext: the container (demuxing) format context, containing some general information and, most importantly, the streams, which are the source of the data. Its AVInputFormat records information about the container format.
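A quick way to see what it holds, reusing fmt_ctx and rec_path from the minimal code above (just a sketch):

av_dump_format(fmt_ctx, 0, rec_path, 0);  // last argument 0 means "this is an input"
printf("streams: %u, duration: %lld us\n",
       fmt_ctx->nb_streams, (long long) fmt_ctx->duration);  // duration is in AV_TIME_BASE (microsecond) units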


AVCodecParameters: the codec parameters, taken from AVStream's codecpar field, which replaces the deprecated codec field. From it you can read stream parameters such as the video width, height, codec type, codec id, and so on.
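For example, a small sketch reading a few of those parameters (v_idx is the video stream index found above; for this ts file the codec id should be AV_CODEC_ID_H264):

AVCodecParameters *par = fmt_ctx->streams[v_idx]->codecpar;
printf("video %dx%d, codec_id=%d, codec_type=%d\n",
       par->width, par->height, (int) par->codec_id, (int) par->codec_type);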


AVCodecContext: the codec context, which can be populated from AVCodecParameters via avcodec_parameters_to_context. It also records the video stream information; the difference is that it is bound to a codec object.


AVCodec: the codec itself, the equivalent of Daiku's magic light stick, the thing that turns Daiku into the giant of light.


AVPacket: a chunk of the encoded bitstream; for the video stream here, its data field holds the compressed H264 data. It also carries other information besides the data itself, such as the stream index and timestamps.


AVFrame: the raw stream after decoding; its YUV planes combine to form the huge YUV beast.
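The minimal code writes each plane with a single fwrite, which assumes linesize equals the width (no row padding). A safer sketch walks each plane row by row using linesize:

for (int y = 0; y < frame->height; y++)        // Y plane, full resolution
    fwrite(frame->data[0] + y * frame->linesize[0], 1, frame->width, dst_f_yuv);
for (int y = 0; y < frame->height / 2; y++)    // U plane, quarter size in yuv420p
    fwrite(frame->data[1] + y * frame->linesize[1], 1, frame->width / 2, dst_f_yuv);
for (int y = 0; y < frame->height / 2; y++)    // V plane, quarter size in yuv420p
    fwrite(frame->data[2] + y * frame->linesize[2], 1, frame->width / 2, dst_f_yuv);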


@Zhang Fengjie Telie, 2019.11.26. Reproduction without permission is not allowed. My public account: King of Programming. ~ END ~