Preface

During real-time audio transmission, the SDK's default audio module is normally used for capture and rendering. However, in the following scenarios, the default audio module may not meet your development requirements:

  1. The app already has its own audio module.
  2. Custom audio pre-processing libraries are required.
  3. Some audio capture devices are exclusive to the system, so flexible device-management policies are needed to avoid conflicts with other services.
  4. Multiple SDKs are used together and need to share a single audio capture module.

To cover these scenarios, the SDK supports custom audio sources. This article describes how to implement custom audio capture.

Implementation method

The steps are as follows:

  1. If you need to use custom audio capture, call the enableCustomAudioCapture(true) interface before calling enterRoom to enter the room.
  2. After this function is enabled, you collect and process the audio data yourself.
  3. After the audio data is processed, call sendCustomAudioData to send it to the SDK for subsequent operations.

Note: sendCustomAudioData must be called at an even pace, one frame every 20 ms; otherwise the audio will stutter (see the sketch below).
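For orientation, the call order and 20 ms pacing can be sketched as follows. Only enableCustomAudioCapture, enterRoom, and sendCustomAudioData are taken from this article; the engine object, roomParams, the AudioFrame wrapper, and the pullProcessedPcm() helper are placeholders for your own integration, and the real method signatures may differ between SDK versions.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: "engine", "roomParams", "AudioFrame", and pullProcessedPcm()
// are placeholders; check the real method signatures of your SDK version.
void startCustomAudioPipeline() {
    // 1. Enable custom capture before entering the room.
    engine.enableCustomAudioCapture(true);
    engine.enterRoom(roomParams);

    // 2. Capture and process audio yourself, then feed it to the SDK every 20 ms.
    //    At 48 kHz, 16-bit mono PCM, 20 ms is 48000 * 0.02 * 2 = 1920 bytes.
    final int frameBytes = 1920;
    ScheduledExecutorService sender = Executors.newSingleThreadScheduledExecutor();
    sender.scheduleAtFixedRate(() -> {
        byte[] pcm = pullProcessedPcm(frameBytes); // your own capture/processing pipeline
        if (pcm == null) {
            return;                                // no frame ready yet
        }
        AudioFrame frame = new AudioFrame();       // hypothetical frame wrapper
        frame.data = pcm;
        frame.sampleRate = 48000;
        frame.channels = 1;
        // 3. Hand the processed frame to the SDK at an even 20 ms pace.
        engine.sendCustomAudioData(frame);
    }, 0, 20, TimeUnit.MILLISECONDS);
}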

Call sequence diagram

Data flow diagram

Reference code

Refer to the following code to implement custom audio capture in your project. This example uses Android's AudioRecord to perform custom audio capture.

/** Capture thread based on Android AudioRecord. */
private class AudioRecordThread extends Thread {
    private AudioRecord audioRecord;
    private int bufferSize;

    // Parameter initialization
    AudioRecordThread() {
        bufferSize = AudioRecord.getMinBufferSize(currentConfig.getSampleRate(),
                currentConfig.getChannelConfig(), currentConfig.getEncodingConfig());
        Log.d(TAG, "record buffer size = " + bufferSize);
        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                currentConfig.getSampleRate(), currentConfig.getChannelConfig(),
                currentConfig.getEncodingConfig(), bufferSize);
    }

    @Override
    public void run() {
        super.run();
        startPcmRecorder();
    }

    private void startPcmRecorder() {
        state = RecordState.RECORDING;
        notifyState();
        Log.d(TAG, "Start recording PCM");
        try {
            audioRecord.startRecording();
            byte[] byteBuffer = new byte[bufferSize];
            while (state == RecordState.RECORDING) {
                audioRecord.read(byteBuffer, 0, byteBuffer.length);
                // Here the data is sent to the SDK
                notifyData(byteBuffer);
            }
            audioRecord.stop();
            if (state == RecordState.STOP) {
                Log.i(TAG, "Stop recording!");
            } else {
                Log.i(TAG, "Pause!");
            }
        } catch (Exception e) {
            Log.e(TAG, Log.getStackTraceString(e));
            notifyError("recording failure");
        }
        if (state != RecordState.PAUSE) {
            state = RecordState.IDLE;
            notifyState();
            Log.d(TAG, "End of recording");
        }
    }
}
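In the loop above, notifyData() is where the captured PCM is handed to the SDK. One possible implementation of that bridge in the enclosing recorder class is sketched below; the engine object, the AudioFrame wrapper, and its fields are assumptions for illustration and must be adapted to the actual sendCustomAudioData signature of the SDK you use.

// Hypothetical bridge from AudioRecordThread to the SDK. "engine" and
// "AudioFrame" are placeholder names, not the SDK's real types.
private void notifyData(byte[] pcm) {
    AudioFrame frame = new AudioFrame();
    frame.data = pcm.clone();                         // copy: the capture loop reuses its buffer
    frame.sampleRate = currentConfig.getSampleRate();
    frame.channels = 1;                               // assuming mono capture
    engine.sendCustomAudioData(frame);
}

Because AudioRecord.read() blocks until the requested number of bytes is available, a bufferSize that holds roughly 20 ms of PCM lets the loop approximate the required cadence; if it does not, re-chunk the data into 20 ms frames before calling sendCustomAudioData.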

Detailed code reference