A video is composed of an audio track and a video track. The audio track data is usually small, so audio decoding is rarely a concern. The video track data is much larger, and decoding it is the bottleneck of the whole playback pipeline.

Renderer execution flow: audio and video decoding both use MediaCodec by default. Video decoding is handled by the renderer MediaCodecVideoRenderer, and audio decoding by MediaCodecAudioRenderer.

  • 1. MediaCodec has synchronous and asynchronous modes. Which mode does ExoPlayer use?

ExoPlayer uses the synchronous mode: it does not register a callback via mediaCodec.setCallback(…). Instead, it relies on codec.dequeueInputBuffer and codec.dequeueOutputBuffer to obtain the input and output buffers. Compressed data is continuously fed into the input buffers, and decoded data is continuously taken from the output buffers and sent out for display.
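To make the synchronous mode concrete, below is a minimal sketch of a synchronous MediaCodec decode loop (illustrative only, not ExoPlayer's actual code; error handling, format changes and the INFO_* return codes are ignored):

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import java.nio.ByteBuffer;

    final class SyncDecodeExample {
      // codec is assumed to be configured and started; extractor is positioned on the target track.
      static void decodeLoop(MediaCodec codec, MediaExtractor extractor) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        boolean outputDone = false;
        while (!outputDone) {
          if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(10_000); // timeout in microseconds
            if (inIndex >= 0) {
              ByteBuffer inBuf = codec.getInputBuffer(inIndex);
              int size = extractor.readSampleData(inBuf, 0);
              if (size < 0) {
                codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                inputDone = true;
              } else {
                codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
              }
            }
          }
          int outIndex = codec.dequeueOutputBuffer(info, 10_000);
          if (outIndex >= 0) {
            // For video, render == true sends the decoded frame to the Surface passed to configure().
            codec.releaseOutputBuffer(outIndex, /* render= */ true);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
              outputDone = true;
            }
          }
        }
      }
    }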

  • 2. The core processing method is the render function in MediaCodecRenderer:
    public void render(long positionUs, long elapsedRealtimeUs) throws ExoPlaybackException {
      if (pendingOutputEndOfStream) {
        pendingOutputEndOfStream = false;
        processEndOfStream();
      }
      try {
        if (outputStreamEnded) {
          renderToEndOfStream();
          return;
        }
        if (inputFormat == null && !readToFlagsOnlyBuffer(/* requireFormat= */ true)) {
          // We still don't have a format and can't make progress without one.
          return;
        }
        // We have a format.
        maybeInitCodec();
        if (codec != null) {
          long drainStartTimeMs = SystemClock.elapsedRealtime();
          TraceUtil.beginSection("drainAndFeed");
          while (drainOutputBuffer(positionUs, elapsedRealtimeUs)) {}
          while (feedInputBuffer() && shouldContinueFeeding(drainStartTimeMs)) {}
          TraceUtil.endSection();
        } else {
          decoderCounters.skippedInputBufferCount += skipSource(positionUs);
          // We need to read any format changes despite not having a codec so that drmSession can be
          // updated, and so that we have the most recent format should the codec be initialized. We
          // may also reach the end of the stream. Note that readSource will not read a sample into a
          // flags-only buffer.
          readToFlagsOnlyBuffer(/* requireFormat= */ false);
        }
        decoderCounters.ensureUpdated();
      } catch (IllegalStateException e) {
        if (isMediaCodecException(e)) {
          throw createRendererException(e, inputFormat);
        }
        throw e;
      }
    }

maybeInitCodec creates the codec based on the MimeType and configures the Surface into the codec; subsequent output buffer data is then continuously sent to that Surface. drainOutputBuffer consumes data from the codec's output buffer queue; feedInputBuffer fills the codec's input buffer queue.

shouldContinueFeeding(drainStartTimeMs) is used for audio-video synchronization, which we will analyze in the next chapter. In this chapter we focus only on the rendering flow and its key points; frame dropping is also covered in the audio-video synchronization analysis.

For video playback, the Surface can be set in MediaCodec.configure. For audio playback, AudioTrack or OpenSL ES can be used; ExoPlayer uses AudioTrack.
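As a rough illustration of the video path, the following sketch creates a decoder from the MimeType and attaches a Surface in configure(). This mirrors what maybeInitCodec does conceptually, but it is not ExoPlayer's actual code:

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;

    final class VideoCodecSetupExample {
      // format is assumed to come from the extractor, surface from the UI layer.
      static MediaCodec createVideoDecoder(MediaFormat format, Surface surface) throws IOException {
        String mimeType = format.getString(MediaFormat.KEY_MIME); // e.g. "video/avc"
        MediaCodec codec = MediaCodec.createDecoderByType(mimeType);
        // Passing the Surface here is what allows releaseOutputBuffer(index, true)
        // to push decoded frames straight to the display.
        codec.configure(format, surface, /* crypto= */ null, /* flags= */ 0);
        codec.start();
        return codec;
      }
    }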

Audio playback

The buildAudioRenderers function in DefaultRenderersFactory initializes the audio renderer:

    out.add(
        new MediaCodecAudioRenderer(
            context,
            mediaCodecSelector,
            drmSessionManager,
            playClearSamplesWithoutKeys,
            enableDecoderFallback,
            eventHandler,
            eventListener,
            new DefaultAudioSink(AudioCapabilities.getCapabilities(context), audioProcessors)));

Focus on the last parameter, the DefaultAudioSink object. AudioProcessor is the base class for audio processing; by extending it, many kinds of audio processing can be implemented. DefaultAudioSink.initialize creates an AudioTrack object:

    audioTrack =
        Assertions.checkNotNull(configuration)
            .buildAudioTrack(tunneling, audioAttributes, audioSessionId);
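For context, constructing a platform AudioTrack for PCM output generally looks like the sketch below, using the public AudioTrack.Builder API (API 23+). The parameter values are illustrative; DefaultAudioSink derives the real ones from the decoded Format, so this is not its internal code:

    import android.media.AudioAttributes;
    import android.media.AudioFormat;
    import android.media.AudioTrack;

    final class AudioTrackSetupExample {
      // Illustrative PCM configuration, not DefaultAudioSink's implementation.
      static AudioTrack buildPcmTrack(int sampleRate, int channelMask, int bufferSizeInBytes) {
        return new AudioTrack.Builder()
            .setAudioAttributes(
                new AudioAttributes.Builder()
                    .setUsage(AudioAttributes.USAGE_MEDIA)
                    .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                    .build())
            .setAudioFormat(
                new AudioFormat.Builder()
                    .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                    .setSampleRate(sampleRate)
                    .setChannelMask(channelMask)
                    .build())
            .setBufferSizeInBytes(bufferSizeInBytes)
            .setTransferMode(AudioTrack.MODE_STREAM)
            .build();
      }
    }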

The same function also sets up an AudioTrackPositionTracker object, a helper class for audio synchronization that is very important:

    audioTrackPositionTracker.setAudioTrack(
        audioTrack,
        configuration.outputEncoding,
        configuration.outputPcmFrameSize,
        configuration.bufferSize);
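The basic idea behind position tracking can be sketched as follows: the playback position is derived from how many frames the AudioTrack has actually played. This is a simplified sketch, not the real AudioTrackPositionTracker, which additionally smooths the value and uses AudioTimestamp where available:

    import android.media.AudioTrack;

    final class PositionTrackerExample {
      private final AudioTrack audioTrack;
      private final int sampleRate;

      PositionTrackerExample(AudioTrack audioTrack, int sampleRate) {
        this.audioTrack = audioTrack;
        this.sampleRate = sampleRate;
      }

      // Current playback position in microseconds, based on frames already played out.
      long getPlaybackPositionUs() {
        // getPlaybackHeadPosition() returns the frame count as a signed 32-bit value that can wrap.
        long playedFrames = 0xFFFFFFFFL & audioTrack.getPlaybackHeadPosition();
        return playedFrames * 1_000_000L / sampleRate;
      }
    }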

MediaCodecAudioRenderer.processOutputBuffer passes the decoded raw audio data to the AudioSink for rendering. This is a loop that continuously takes decoded raw data out of the output buffers:

      if (audioSink.handleBuffer(buffer, bufferPresentationTimeUs)) {
        codec.releaseOutputBuffer(bufferIndex, false);
        decoderCounters.renderedOutputBufferCount++;
        return true;
      }

The core of the buffer handling in DefaultAudioSink.handleBuffer is:

    if (configuration.processingEnabled) {
      processBuffers(presentationTimeUs);
    } else {
      writeBuffer(inputBuffer, presentationTimeUs);
    }
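Conceptually, when processing is enabled the decoded buffer is pushed through a chain of audio processors before it is written to the track; otherwise it is written directly. The sketch below only illustrates that idea; the interfaces are hypothetical and are not ExoPlayer's actual AudioProcessor API:

    import java.nio.ByteBuffer;

    final class ProcessingChainExample {
      // Hypothetical processor: consumes an input buffer and exposes processed output.
      interface SimpleAudioProcessor {
        void queueInput(ByteBuffer input);
        ByteBuffer getOutput();
      }

      // Hypothetical stand-in for writeBuffer(...) / AudioTrack.write(...).
      interface AudioWriter {
        void write(ByteBuffer processed);
      }

      // Push one decoded buffer through the chain, then hand the result to the writer.
      static void process(ByteBuffer input, SimpleAudioProcessor[] chain, AudioWriter writer) {
        ByteBuffer buffer = input;
        for (SimpleAudioProcessor processor : chain) {
          processor.queueInput(buffer);
          buffer = processor.getOutput();
        }
        writer.write(buffer);
      }
    }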

Write the decoded raw data to AudioTrack:

    if (Util.SDK_INT < 21) { // isInputPcm == true
      // Work out how many bytes we can write without the risk of blocking.
      int bytesToWrite = audioTrackPositionTracker.getAvailableBufferSize(writtenPcmBytes);
      if (bytesToWrite > 0) {
        bytesToWrite = Math.min(bytesRemaining, bytesToWrite);
        bytesWritten = audioTrack.write(preV21OutputBuffer, preV21OutputBufferOffset, bytesToWrite);
        if (bytesWritten > 0) {
          preV21OutputBufferOffset += bytesWritten;
          buffer.position(buffer.position() + bytesWritten);
        }
      }
    } else if (tunneling) {
      Assertions.checkState(avSyncPresentationTimeUs != C.TIME_UNSET);
      bytesWritten =
          writeNonBlockingWithAvSyncV21(audioTrack, buffer, bytesRemaining, avSyncPresentationTimeUs);
    } else {
      bytesWritten = writeNonBlockingV21(audioTrack, buffer, bytesRemaining);
    }

    // On API 23 and above, the AV-sync write path ultimately calls:
    audioTrack.write(buffer, size, WRITE_NON_BLOCKING, presentationTimeUs * 1000);

The write mode used for AudioTrack is WRITE_NON_BLOCKING, so writing does not block the playback thread.

When writing a buffer, AudioTrack explicitly tells the system the timestamp of that buffer, and playback progress is controlled according to this timestamp during playback.

  • There’s a lot of industry code in The Render section of the ExoPlayer, but that doesn’t stop us from understanding the overall flow
  • Audio and video this piece of the core function or audio and video synchronization

Q: How do we introduce other Audio or Video Renderer modules? Think about it; we'll find out later.