This article is participating in the Java Theme Month – Java Debug Notes Event

Preface

A month ago I used FFmpeg to record RTSP stream video. However, recording via Frame consumes a lot of CPU and memory (about 200%+ CPU and roughly 2.3 GB of memory). Digging in, grabber.grabFrame() decodes the stream to produce each Frame, and record(Frame) re-encodes it to write the video file. If AVPacket is used instead (remuxing, i.e. only changing the container), and multithreading is added on top to handle pulling and pushing separately, a 20-minute video takes only about 5% CPU and 200 MB of memory.

It took about a week, but I finally got it done. The implementation methods and ideas are noted below~

Process monitoring screenshot: the performance improvement is quite obvious!


One, what are JavaCV and FFmpeg?

JavaCV: a Java vision-processing library that bundles many tools, including the audio/video framework FFmpeg, whose methods can be called directly through JNI. FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much anything that humans and machines have created. And FFmpeg is open source!

Two, how to record and push streams?

An RTSP stream is used as the example for recording and pushing below.

  • Record: pull the stream -> record it, converting RTSP to an MP4 or AVI video file.
  • Push: pull the stream -> push RTMP to an nginx streaming server, so the live video can be played in the browser via flv.js.

A few notes:

  1. Nginx does not support streaming media out of the box, so install the nginx-http-flv-module.
  2. Try not to do time-consuming work while pulling the stream; it can drag the FPS down badly.
  3. Both Frame and AVPacket work when pushing with FFmpeg's recorder. Frame is convenient and simple (no need to care about PTS, DTS or frame types) and can be pushed directly, but performance is poor because every frame goes through the codec. AVPacket performs well, but the PTS and DTS of each packet must be kept aligned, otherwise the stream cannot be pushed correctly. A Frame-based sketch follows this list.
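For contrast with the AVPacket approach used in the rest of this post, here is a minimal Frame-based push sketch (note 3 above). It is only an illustration under assumptions: the RTSP and RTMP addresses are placeholders, and the imports assume a recent JavaCV release (older versions expose avcodec under org.bytedeco.javacpp instead of org.bytedeco.ffmpeg.global).

    import org.bytedeco.ffmpeg.global.avcodec;
    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.Frame;

    public class FramePushDemo {
        public static void main(String[] args) throws Exception {
            // placeholder RTSP source
            FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("rtsp://example/stream");
            grabber.setOption("rtsp_transport", "tcp"); // TCP is usually more stable than UDP
            grabber.start();

            // placeholder RTMP endpoint on the nginx streaming server
            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
                    "rtmp://example/live/stream", grabber.getImageWidth(), grabber.getImageHeight());
            recorder.setFormat("flv"); // RTMP carries FLV
            recorder.setFrameRate(grabber.getFrameRate());
            recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
            recorder.start();

            Frame frame;
            while ((frame = grabber.grab()) != null) {
                // record(Frame) re-encodes every frame: simple (no PTS/DTS bookkeeping),
                // but this decode + encode round trip is exactly where the CPU goes
                recorder.record(frame);
            }
            recorder.stop();
            grabber.stop();
        }
    }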

Three, problems encountered

1. The recorded video cannot be played

A: Most likely the Grabber or the Recorder was not closed correctly. Make sure that after recording finishes the Grabber is closed first and the Recorder is closed afterwards.
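In code form, the close order described above looks roughly like this (the grabber and recorder variable names are assumptions):

    // after recording: close the grabber first, then the recorder
    grabber.stop();
    grabber.release();
    recorder.stop();    // writes the container trailer; the file only plays after this
    recorder.release();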

2. "non monotonically increasing dts to muxer in stream" error when pushing

First method: call grabber.flush() after grabber.start().

Looking at the source code, flush() is really just grab() called several times to initialize the grabber:

    public void flush() throws FrameGrabber.Exception {
        for (int i = 0; i < this.numBuffers + 1; ++i) {
            this.grab();
        }
    }
Each grab() here actually calls grabFrame(true, true, true, false, true) in FFmpegFrameGrabber, which consumes the first I-frame (key frame), so that frame is lost and the picture breaks up.

Second method: after getting the AVPacket, handle PTS and DTS yourself. This is the approach I currently use; the concrete implementation is in the code below.
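The checkPacket(...) helper called in the pull loop of the next section is not included in the original listing. The following is only a sketch of what such alignment might look like, forcing DTS to increase monotonically per stream while preserving the PTS-DTS gap; the map key and fix-up policy are assumptions:

    /**
     * Sketch (not the original helper): keep PTS/DTS monotonically
     * increasing per stream so the muxer accepts every packet.
     */
    private void checkPacket(Map<Long, Long> timestampMap, AVPacket packet) {
        long streamIndex = packet.stream_index();
        long lastDts = timestampMap.getOrDefault(streamIndex, -1L);
        if (packet.dts() <= lastDts) {
            long gap = packet.pts() - packet.dts(); // preserve the PTS/DTS distance
            long fixedDts = lastDts + 1;
            packet.dts(fixedDts);
            packet.pts(fixedDts + Math.max(gap, 0));
        }
        timestampMap.put(streamIndex, packet.dts());
    }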

Four, how to implement it

I think the code is fairly standard and should be readable even without comments~~ To keep frame handling from slowing down the pull loop, multithreading is used here as an optimization.

1. Member variables

    ExecutorService threadPool = Executors.newFixedThreadPool(3);
    Semaphore semaphore = new Semaphore(0);
    // buffer between the pull thread and the record thread
    private static final ArrayBlockingQueue<AVPacket> blockingQueue = new ArrayBlockingQueue<>(60);
    private volatile boolean stopRecording = false;
    private volatile boolean stopPull = false;

2. Start recording

    public void startSave(String userName, String psw, String ip, String videoOutPath) {
        this.stopRecording = false;
        this.stopPull = false;
        String url = Tools.generateRtspUrl(userName, psw, ip);
        // pull first!
        threadPool.execute(() -> startPull(url, videoOutPath));
        threadPool.execute(this::startRecord);
    }

    /**
     * start pulling the stream
     */
    private void startPull(String url, String outPath) {
        Map<Long, Long> timestampMap = new HashMap<>();
        try {
            VideoUtil.packetGrabberInit(url, outPath);
            AVPacket packet = null;
            int errIndex = 0;
            while (!stopPull && errIndex < 10) {
                packet = VideoUtil.grabPacket();
                // skip empty packet
                if (packet == null || packet.size() <= 0 || packet.data() == null) {
                    log.info("discard empty packet");
                    errIndex++;
                    continue;
                }
                // check
                checkPacket(timestampMap, packet);
                // take a new reference so the packet stays valid after the original is unreffed
                AVPacket retPacket = avcodec.av_packet_alloc();
                avcodec.av_packet_ref(retPacket, packet);
                blockingQueue.put(retPacket);
                log.trace(String.format("Got a packet: queue size is %s, pts: %s, dts: %s, duration: %s", blockingQueue.size(), packet.pts(), packet.dts(), packet.duration()));
                avcodec.av_packet_unref(packet);
            }
            avcodec.av_packet_free(packet);
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
            throw new DHCameraException("pull rtsp error:"+ e.getMessage()); }}private void startRecord( ) {
        AVPacket avPacket;
        try {
            while (!stopRecording) {
                avPacket = blockingQueue.poll(500, TimeUnit.MILLISECONDS);
                if (avPacket == null) {
                    log.trace("queue is empty...");
                    continue;
                }
                log.trace("add one frame"); VideoUtil.recordPacket(avPacket); }}catch (InterruptedException | IOException e) {
            e.printStackTrace();
            throw new DHCameraException("record error:" + e.getMessage());
        } finally {
            try {
                VideoUtil.release();
                semaphore.release();
                log.trace("discard frame size:" + blockingQueue.size());
            } catch (FrameRecorder.Exception | FrameGrabber.Exception e) {
                e.printStackTrace();
            }
        }
    }
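VideoUtil itself is not shown in the original post. Below is a hedged sketch of what it could look like on top of JavaCV's packet API (FFmpegFrameGrabber.grabPacket() and FFmpegFrameRecorder.recordPacket(), available in recent JavaCV releases); the class shape is an assumption. Starting the recorder from the grabber's AVFormatContext copies the input stream parameters, which is what makes this pure remuxing with no re-encoding:

    import java.io.IOException;
    import org.bytedeco.ffmpeg.avcodec.AVPacket;
    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.FrameGrabber;
    import org.bytedeco.javacv.FrameRecorder;

    /** Sketch of a VideoUtil built on JavaCV's packet (remux) API; not the original class. */
    public class VideoUtil {
        private static FFmpegFrameGrabber grabber;
        private static FFmpegFrameRecorder recorder;

        public static void packetGrabberInit(String url, String outPath) throws IOException {
            grabber = new FFmpegFrameGrabber(url);
            grabber.setOption("rtsp_transport", "tcp");
            grabber.start();
            recorder = new FFmpegFrameRecorder(outPath,
                    grabber.getImageWidth(), grabber.getImageHeight());
            // copy stream parameters from the input: remux only, no re-encoding
            recorder.start(grabber.getFormatContext());
        }

        public static AVPacket grabPacket() throws FrameGrabber.Exception {
            return grabber.grabPacket();
        }

        public static void recordPacket(AVPacket packet) throws FrameRecorder.Exception {
            recorder.recordPacket(packet);
        }

        public static void release() throws FrameRecorder.Exception, FrameGrabber.Exception {
            // grabber first, then recorder (see "Problems encountered")
            grabber.stop();
            grabber.release();
            recorder.stop();
            recorder.release();
        }
    }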

3. Stop recording

    public void stopSave() {
        // stop pull first
        this.stopPull = true;
        try {
            log.trace("acquire semaphore");
            this.stopRecording = true;
            semaphore.acquire();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            blockingQueue.clear();
        }
    }
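Note the shutdown handshake here: stopSave() sets both flags and then blocks on semaphore.acquire() until the recording thread reaches its finally block, where it calls VideoUtil.release() followed by semaphore.release(). This guarantees the recorder has been closed (and the file trailer written) before the queue is cleared and stopSave() returns.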

4. Test (record for 20 minutes)

    @Test
    public void saveVideoTest() throws InterruptedException {
        c.startSave(userName, psw, ip, new Date() + "_rtsp.mp4");
        Thread.sleep(20 * 60 * 1000);
        c.stopSave();
    }

5. Actual effect (20-minute recording)

The actual test works well. A screenshot of the recorded video is shown below (pixelated).

Five, summary

Thanks to the following blog posts:

  1. www.banmajio.com/post/5b6e30…
  2. www.cnblogs.com/yangxiayi19…
  3. blog.csdn.net/leixiaohua1…
  4. zhuanlan.zhihu.com/p/61747783
  5. blog.csdn.net/u012587637/…
  6. blog.csdn.net/BrookIcv/ar…
  7. blog.csdn.net/eguid_1/art…
  8. blog.csdn.net/asd54090/ar…
  9. juejin.cn/post/684490…