This article was written for the Yugang Said writing platform. The copyright belongs to the Yugang Said WeChat public account. All rights reserved.

Today we look at audio capture, encoding, file generation and transcoding. We will generate files in three formats, PCM, WAV and AAC, and finally play the result back with AudioTrack.

In this article you will learn:

  • Capturing audio with AudioRecord
  • Generating a PCM file
  • Converting PCM to a WAV file
  • Converting PCM to an AAC file
  • A working demo with source code is attached

Use AudioRecord to generate PCM files

AudioRecord is a function class provided by the Android system for recording. To understand the specific description and usage of this class, we can go to the official documentation:

The main function of the AudioRecord class is to let Java applications manage the audio they capture from the platform's recording hardware. Capture works by "pulling" (reading) the sound data out of the AudioRecord object. During recording, all the application needs to do is fetch the recorded data from the AudioRecord object in time, using one of three methods: read(byte[], int, int), read(short[], int, int) or read(ByteBuffer, int). Whichever method you choose, the storage format for the sound data must be set up in advance.
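For instance, with 16-bit PCM it is often convenient to read into a short[] so that each array element is exactly one sample. A minimal sketch (not from the original demo; `record` and `minBufSize` stand for the AudioRecord instance and minimum buffer size that are set up later in this article):

short[] samples = new short[minBufSize / 2];          // 2 bytes per 16-bit sample
int read = record.read(samples, 0, samples.length);   // blocks until data has been captured
if (read > 0) {
    // process `read` samples here, e.g. compute amplitude or write them to a file
}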

When recording starts, AudioRecord initializes an associated sound buffer into which new sound data is written. The size of this buffer can be specified during construction; it determines how long an AudioRecord object can record (that is, how much sound can be held at once) before the data is read out. Audio data is read from the audio hardware in chunks that do not exceed the total amount of buffered recording data, i.e. the buffer is filled once and can be read from multiple times.

1.1 First declare some global variables and constant parameters

First we declare the parameters we will use; see the comments for details.

// Specify the audio source. This is the same as for MediaRecorder; MediaRecorder.AudioSource.MIC refers to the microphone

private static final int mAudioSource = MediaRecorder.AudioSource.MIC;

// Specify the sampling rate (MediaRecorder normally uses 8000 Hz; AAC usually uses 44100 Hz). We use 44100 Hz, the most common rate; the official documentation says this value is guaranteed to work on all devices

private static final int mSampleRateInHz = 44100;

// Specify the number of channels for capture. The AudioFormat class defines constants for this; here we use mono

private static final int mChannelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO; 

// Specify the audio quantization format. The AudioFormat class defines the possible constants; we usually choose ENCODING_PCM_16BIT or ENCODING_PCM_8BIT. PCM stands for pulse code modulation, i.e. the raw audio samples.

// Each sample can therefore be stored with 16 or 8 bits of resolution; 16 bits takes more space and processing power, but the audio sounds closer to the original.

private static final int mAudioFormat = AudioFormat.ENCODING_PCM_16BIT;

// Specify the buffer size. You can get this by calling the getMinBufferSize method of the AudioRecord class.

private int mBufferSizeInBytes;



// Declare an AudioRecord object

private AudioRecord mAudioRecord = null;


1.2 Get the buffer size and create an AudioRecord

// Initialize the data and calculate the minimum buffer

mBufferSizeInBytes = AudioRecord.getMinBufferSize(mSampleRateInHz, mChannelConfig, mAudioFormat);

// Create an AudioRecord object

mAudioRecord = new AudioRecord(mAudioSource, mSampleRateInHz, mChannelConfig,

                               mAudioFormat, mBufferSizeInBytes);

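Before recording it is worth validating both values; a small defensive sketch (an addition, not part of the original demo):

if (mBufferSizeInBytes == AudioRecord.ERROR_BAD_VALUE || mBufferSizeInBytes == AudioRecord.ERROR) {
    throw new IllegalStateException("getMinBufferSize: unsupported recording parameters");
}
if (mAudioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    throw new IllegalStateException("AudioRecord failed to initialize");
}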

1.3 Create a subthread to record and write the data to a file

 @Override

    public void run() {

        // mark the start of collection

        isRecording = true;

        // Create a file

        createFile();



        try {



            // If the AudioRecord was released when a previous recording stopped, its state is STATE_UNINITIALIZED, so re-initialize it

            if (mAudioRecord.getState() == AudioRecord.STATE_UNINITIALIZED) {

                initData();

            }



            // Minimum buffer

            byte[] buffer = new byte[mBufferSizeInBytes];

            // Get the data stream of the file

            mDataOutputStream = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(mRecordingFile)));



            // Start recording

            mAudioRecord.startRecording();

            // getRecordingState reports whether the AudioRecord is currently capturing data

            while (isRecording && mAudioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {

                int bufferReadResult = mAudioRecord.read(buffer, 0, mBufferSizeInBytes);

                for (int i = 0; i < bufferReadResult; i++) {

                    mDataOutputStream.write(buffer[i]);

                }

            }

        } catch (Exception e) {

            Log.e(TAG, "Recording Failed");

        } finally {

            // Stop recording

            stopRecord();

            IOUtil.close(mDataOutputStream);

        }

    }

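The run() method above calls stopRecord(), whose body is not shown. One plausible implementation, consistent with the STATE_UNINITIALIZED check at the top of run(), might be:

private void stopRecord() {
    isRecording = false;
    if (mAudioRecord != null && mAudioRecord.getState() == AudioRecord.STATE_INITIALIZED) {
        mAudioRecord.stop();     // stop capturing
        mAudioRecord.release();  // state becomes STATE_UNINITIALIZED until initData() runs again
    }
}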

1.4 Permissions and recording summary

Note: Permission requirements are WRITE_EXTERNAL_STORAGE and RECORD_AUDIO
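Both permissions must be declared in AndroidManifest.xml, and on Android 6.0+ RECORD_AUDIO (and WRITE_EXTERNAL_STORAGE on older targets) also has to be requested at runtime. A minimal sketch, where `activity` and the request code 1 are assumptions for illustration:

String[] perms = {Manifest.permission.RECORD_AUDIO, Manifest.permission.WRITE_EXTERNAL_STORAGE};
if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(activity, perms, 1 /* arbitrary request code */);
}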

So far the basic recording flow has been covered, but two questions arise:

1) I wrote all of the audio data to a file exactly as described. After stopping the recording I open the file and find it cannot be played. Why?

Answer: the data was written, but the file contains nothing except raw audio samples (so-called "raw" data). When a player opens it, it has no way to know what format the data is in or how to decode it, so of course it cannot play it.

2) How can I play back what I recorded in a player?

Answer: add a file header, for example a WAV header or AAC (ADTS) headers, at the beginning of the data. Only with that header can a player know exactly what the content is and then parse and play it correctly.

This section describes PCM, WAV, and AAC file headers

Below is a brief introduction to the three formats; follow the attached links for details.

PCM: PCM (Pulse Code Modulation) recording converts an analog signal such as sound into a train of pulses, i.e. a digital signal composed of 1s and 0s, recorded without any encoding or compression. Compared with an analog signal it is less susceptible to noise and distortion in the transmission system, has a wide dynamic range, and can deliver very good sound quality.

WAV: WAV is a lossless audio file format that complies with the RIFF (Resource Interchange File Format) specification. Every WAV file has a header containing the encoding parameters of the audio stream. WAV imposes no hard rules on how the audio stream is encoded; besides PCM, almost every encoding that supports the ACM specification can be used in a WAV audio stream.

Simply put: WAV is a lossless audio container, and PCM is uncompressed raw encoding.

AAC: AAC (Advanced Audio Coding) appeared in 1997 and is based on MPEG-2 audio coding technology. It was developed by Fraunhofer IIS, Dolby Laboratories, AT&T, Sony and others to replace the MP3 format. In 2000, after the MPEG-4 standard emerged, AAC was extended with SBR and PS technology; to distinguish it from the traditional MPEG-2 AAC it is also known as MPEG-4 AAC. It is a compression format designed for audio data, similar to MP3. With AAC, a sound file can be significantly reduced in size without any perceptible loss of sound quality.

PCM is converted to WAV

We only need to add a WAVE header at the beginning of the data. With that header a player knows exactly what the content is and can parse and play it normally. The resulting WAV file can then be played with AudioTrack.

public class WAVUtil {



    /**
     * Convert a PCM file to a WAV file
     *
     * @param inPcmFilePath  path of the input PCM file
     * @param outWavFilePath path of the output WAV file
     * @param sampleRate     sample rate, for example 44100
     * @param channels       number of channels: mono = 1, stereo = 2
     * @param bitNum         bits per sample, 8 or 16
     */
    public static void convertPcm2Wav(String inPcmFilePath, String outWavFilePath, int sampleRate, int channels, int bitNum) {



        FileInputStream in = null;

        FileOutputStream out = null;

        byte[] data = new byte[1024];



        try {

            // Byte rate = sampleRate * channels * bitsPerSample / 8

            long byteRate = sampleRate * channels * bitNum / 8;



            in = new FileInputStream(inPcmFilePath);

            out = new FileOutputStream(outWavFilePath);



            //PCM file size

            long totalAudioLen = in.getChannel().size();



            // Total data length in the RIFF header: the 44-byte header minus the 8 bytes of the RIFF chunk ID and size field (44 - 8 = 36), plus the PCM data size

            long totalDataLen = totalAudioLen + 36;



            writeWaveFileHeader(out, totalAudioLen, totalDataLen, sampleRate, channels, byteRate);



            int length = 0;

            while ((length = in.read(data)) > 0) {

                out.write(data, 0, length);

            }

        } catch (Exception e) {

            e.printStackTrace();

        } finally {

            IOUtil.close(in,out);

        }

    }





    /**
     * Write the WAV file header
     *
     * @param out           output stream of the WAV file
     * @param totalAudioLen total size of the PCM audio data
     * @param totalDataLen  total data size
     * @param sampleRate    sample rate
     * @param channels      number of channels
     * @param byteRate      byte rate
     * @throws IOException
     */
    private static void writeWaveFileHeader(FileOutputStream out, long totalAudioLen, long totalDataLen, int sampleRate, int channels, long byteRate) throws IOException {

        byte[] header = new byte[44];

        header[0] = 'R'; // RIFF

        header[1] = 'I';

        header[2] = 'F';

        header[3] = 'F';

        header[4] = (byte) (totalDataLen & 0xff);// Data size

        header[5] = (byte) ((totalDataLen >> 8) & 0xff);

        header[6] = (byte) ((totalDataLen >> 16) & 0xff);

        header[7] = (byte) ((totalDataLen >> 24) & 0xff);

        header[8] = 'W';//WAVE

        header[9] = 'A';

        header[10] = 'V';

        header[11] = 'E';

        //FMT Chunk

        header[12] = 'f'; // 'fmt '

        header[13] = 'm';

        header[14] = 't';

        header[15] = ' ';// Transition bytes

        // Size of the 'fmt ' sub-chunk

        header[16] = 16; // 4 bytes: size of 'fmt ' chunk (16 for PCM)

        header[17] = 0;

        header[18] = 0;

        header[19] = 0;

        // Audio format: 1 = PCM (uncompressed)

        header[20] = 1; // format = 1

        header[21] = 0;

        // Number of channels

        header[22] = (byte) channels;

        header[23] = 0;

        // Sampling rate, playback speed per channel

        header[24] = (byte) (sampleRate & 0xff);

        header[25] = (byte) ((sampleRate >> 8) & 0xff);

        header[26] = (byte) ((sampleRate >> 16) & 0xff);

        header[27] = (byte) ((sampleRate >> 24) & 0xff);

        // Audio data transfer rate, sampling rate * number of channels * sampling depth /8

        header[28] = (byte) (byteRate & 0xff);

        header[29] = (byte) ((byteRate >> 8) & 0xff);

        header[30] = (byte) ((byteRate >> 16) & 0xff);

        header[31] = (byte) ((byteRate >> 24) & 0xff);

        // Determine how many such bytes of data the system needs to process at a time, determine the buffer, number of channels * number of sample bits

        header[32] = (byte) (channels * 16 / 8);

        header[33] = 0;

        // The number of bits per sample

        header[34] = 16;

        header[35] = 0;

        //Data chunk

        header[36] = 'd';//data

        header[37] = 'a';

        header[38] = 't';

        header[39] = 'a';

        header[40] = (byte) (totalAudioLen & 0xff);

        header[41] = (byte) ((totalAudioLen >> 8) & 0xff);

        header[42] = (byte) ((totalAudioLen >> 16) & 0xff);

        header[43] = (byte) ((totalAudioLen >> 24) & 0xff);

        out.write(header, 0, 44);

    }

}

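Calling the utility with the recording parameters used earlier (44100 Hz, mono, 16-bit) might look like this; the file paths are placeholders:

WAVUtil.convertPcm2Wav("/sdcard/record.pcm", "/sdcard/record.wav", 44100, 1, 16);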

As shown in the figure below, we generated the corresponding WAV file, and it plays normally in the phone's built-in player. However, its size is relatively large: even a recording of only a few minutes is already this big, while in daily use we rely on compressed formats such as MP3 and AAC. Next we look at how the AAC format can be generated.
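A quick back-of-the-envelope calculation, using the 44100 Hz, 16-bit mono settings from above, shows why the uncompressed file grows so fast:

long bytesPerSecond = 44100L * 1 * 16 / 8;   // 88,200 bytes per second
long bytesPerMinute = bytesPerSecond * 60;   // roughly 5 MB for every minute of audio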

PCM is converted to AAC file format

Generate AAC file for playback

public class AACUtil {



    // ...



    /**
     * Initialize the AAC encoder
     */
    private void initAACMediaEncode() {

        try {



            // The parameters correspond to -> MIME type, sample rate, and number of channels

            MediaFormat encodeFormat = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 16000, 1);

            encodeFormat.setInteger(MediaFormat.KEY_BIT_RATE, 64000);// Bit rate

            encodeFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);

            encodeFormat.setInteger(MediaFormat.KEY_CHANNEL_MASK, AudioFormat.CHANNEL_IN_MONO);

            encodeFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);

            encodeFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 1024);// The size of the inputBuffer



            mediaEncode = MediaCodec.createEncoderByType(encodeType);

            mediaEncode.configure(encodeFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

        } catch (IOException e) {

            e.printStackTrace();

        }



        if (mediaEncode == null) {

            LogUtil.e("create mediaEncode failed");

            return;

        }



        mediaEncode.start();

        encodeInputBuffers = mediaEncode.getInputBuffers();

        encodeOutputBuffers = mediaEncode.getOutputBuffers();

        encodeBufferInfo = new MediaCodec.BufferInfo();

    }



    private boolean codeOver = false;



    /**
     * Start transcoding.
     * The source audio is decoded to PCM, and the PCM data is then encoded to
     * MediaFormat.MIMETYPE_AUDIO_AAC: mp3 -> PCM -> AAC
     */
    public void startAsync() {

        LogUtil.w("start");

        new Thread(new DecodeRunnable()).start();

    }





    /**
     * Read the audio file at {@link #srcPath} and feed its PCM blocks to the encoder
     */
    private void srcAudioFormatToPCM() {

        File file = new File(srcPath);// Specify the file to read

        FileInputStream fio = null;

        try {

            fio = new FileInputStream(file);

            byte[] bb = new byte[1024];

            while (!codeOver) {

                if (fio.read(bb) != -1) {

                    LogUtil.e("============ putPCMData ============" + bb.length);

                    dstAudioFormatFromPCM(bb);

                } else {

                    codeOver = true;

                }

            }



            fio.close();

        } catch (Exception e) {

            e.printStackTrace();

        }



    }





    private byte[] chunkAudio = new byte[0];



    /**
     * Encode a block of PCM data into AAC
     */
    private void dstAudioFormatFromPCM(byte[] pcmData) {



        int inputIndex;

        ByteBuffer inputBuffer;

        int outputIndex;

        ByteBuffer outputBuffer;



        int outBitSize;

        int outPacketSize;

        byte[] PCMAudio;

        PCMAudio = pcmData;



        encodeInputBuffers = mediaEncode.getInputBuffers();

        encodeOutputBuffers = mediaEncode.getOutputBuffers();

        encodeBufferInfo = new MediaCodec.BufferInfo();





        inputIndex = mediaEncode.dequeueInputBuffer(0);

        inputBuffer = encodeInputBuffers[inputIndex];

        inputBuffer.clear();

        inputBuffer.limit(PCMAudio.length);

        inputBuffer.put(PCMAudio);//PCM data is populated to inputBuffer

        mediaEncode.queueInputBuffer(inputIndex, 0, PCMAudio.length, 0, 0);// Notify the encoder to start encoding





        outputIndex = mediaEncode.dequeueOutputBuffer(encodeBufferInfo, 0);



        while (outputIndex >= 0) { // index 0 is a valid output buffer, so loop while >= 0



            outBitSize = encodeBufferInfo.size;

            outPacketSize = outBitSize + 7;// 7 is the size of the ADTS header

            outputBuffer = encodeOutputBuffers[outputIndex];// Get the output Buffer

            outputBuffer.position(encodeBufferInfo.offset);

            outputBuffer.limit(encodeBufferInfo.offset + outBitSize);

            chunkAudio = new byte[outPacketSize];

            addADTStoPacket(chunkAudio, outPacketSize);// Add the ADTS header

            outputBuffer.get(chunkAudio, 7, outBitSize);// Fetch the encoded AAC data into byte[]



            try {

                // Write the encoded AAC frame to the output file on storage

                bos.write(chunkAudio, 0, chunkAudio.length);

                bos.flush();

            } catch (FileNotFoundException e) {

                e.printStackTrace();

            } catch (IOException e) {

                e.printStackTrace();

            }



            outputBuffer.position(encodeBufferInfo.offset);

            mediaEncode.releaseOutputBuffer(outputIndex, false);

            outputIndex = mediaEncode.dequeueOutputBuffer(encodeBufferInfo, 0);



        }



    }



    /**
     * Add an ADTS header to each AAC frame
     *
     * @param packet
     * @param packetLen
     */
    private void addADTStoPacket(byte[] packet, int packetLen) {
        int profile = 2;  // AAC LC
        int freqIdx = 8;  // 16 kHz, matching the encoder's sample rate
        int chanCfg = 1;  // mono



        // fill in ADTS data

        packet[0] = (byte) 0xFF;

        packet[1] = (byte) 0xF1;

        packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));

        packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));

        packet[4] = (byte) ((packetLen & 0x7FF) >> 3);

        packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);

        packet[6] = (byte) 0xFC;



    }



    /**
     * Release resources
     */
    public void release() {
        // ...

    }



    /**
     * Decoding thread
     */
    private class DecodeRunnable implements Runnable {

        @Override
        public void run() {

            srcAudioFormatToPCM();

        }

    }



}

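Because every AAC frame in the output now carries an ADTS header, the file can be handed straight to MediaPlayer to verify that it plays; a quick check (the .aac path is a placeholder for wherever `bos` writes the encoded stream):

MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource("/sdcard/record.aac");
    player.prepare();
    player.start();
} catch (IOException e) {
    e.printStackTrace();
}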

Playing with AudioTrack

The AudioTrack class handles audio data output on the Android platform. AudioTrack has two data loading modes, MODE_STREAM and MODE_STATIC, which correspond to two completely different usage scenarios.

  • MODE_STREAM: In this mode, audio data is written to the AudioTrack again and again via write(). This is similar to writing data to a file with a write system call, but each call copies the data from the user-provided buffer into the buffer inside AudioTrack, which introduces a certain amount of latency. To address this, AudioTrack offers a second mode.
  • MODE_STATIC: In this mode, all data is passed to the AudioTrack's internal buffer in a single write() call before play(), and no further data is written afterwards. This mode suits sounds with a small memory footprint and tight latency requirements, such as ringtones (a minimal sketch follows this list). One drawback is that you cannot write too much data at once, or the system may be unable to allocate enough memory to hold it all.
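A minimal MODE_STATIC sketch (the clip-loading helper is hypothetical; it simply stands for a short, fully decoded PCM sound such as a ringtone):

byte[] clip = loadClipSomehow();  // hypothetical helper, not part of this article
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        clip.length, AudioTrack.MODE_STATIC);
track.write(clip, 0, clip.length);  // the single write must happen before play()
track.play();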

Sound can be played with either MediaPlayer or AudioTrack, both of which provide Java APIs for developers. Although both play sound, there are significant differences. The biggest one is that MediaPlayer can play sound files in many formats, such as MP3, AAC, WAV, OGG and MIDI, because it creates the corresponding audio decoder in the framework layer, while AudioTrack can only play decoded PCM streams. In terms of file formats, AudioTrack only supports WAV, because most WAV files carry a PCM stream; AudioTrack does not create a decoder, so it can only play WAV files that need no decoding.

3.1 Types of audio streams

In the AudioTrack constructor we encounter the AudioManager.STREAM_MUSIC parameter. Its meaning relates to how the Android system manages and classifies audio streams.

Android classifies system sounds into several stream types. Here are a few common ones:

· STREAM_ALARM: alarm sound

· STREAM_MUSIC: music, e.g. media playback

· STREAM_RING: ringtone

· STREAM_SYSTEM: system sounds, such as the low-battery alert or the screen-lock sound

· STREAM_VOICE_CALL: call sound

Note: this classification has nothing to do with the audio data itself. For example, both the MUSIC and RING types can carry the same MP3 song. There is also no fixed standard for choosing a stream type; a ringtone preview, for instance, may well use the MUSIC type. The stream type classification exists for the Audio system's audio management policy.

3.2 Concepts of Buffer allocation and Frame

One method we often use to calculate the size of a Buffer is getMinBufferSize. This function determines the size of the Buffer allocated by the application layer.

AudioTrack.getMinBufferSize(8000,                            // 8000 samples per second

        AudioFormat.CHANNEL_CONFIGURATION_STEREO,            // stereo

        AudioFormat.ENCODING_PCM_16BIT);


Tracing the code down from AudioTrack.getMinBufferSize, we find an important concept at the bottom: the frame. A frame is a unit describing an amount of data: one frame equals the byte size of one sample multiplied by the number of channels (for 16-bit PCM in stereo, one frame is 2 x 2 = 4 bytes). A single sample belongs to one channel only, while in practice there may be one or more channels, so no single-sample unit can describe the data captured across all channels at once; that is why the frame was introduced. The frame size is the byte size of one sample multiplied by the number of channels. In addition, current sound-card drivers also allocate and manage their internal buffers in units of frames.

getMinBufferSize takes hardware factors into account, such as which sample rates are supported and the latency of the hardware itself, to arrive at a minimum buffer size. We usually allocate a buffer that is an integer multiple of it.
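Putting the two ideas together, a small sizing sketch (the values are assumptions, just to illustrate the arithmetic):

int minBuf = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT);
int frameBytes = 2 * 2;          // 16-bit PCM (2 bytes per sample) * 2 channels = 4 bytes per frame
int bufferSize = minBuf * 2;     // e.g. twice the minimum, an arbitrary but common choice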

3.3 Construction Process

Each audio stream corresponds to an instance of the AudioTrack class. Every AudioTrack registers itself with AudioFlinger when it is created; AudioFlinger mixes all the AudioTracks and sends the result to AudioHardware for playback. Currently Android can create at most 32 audio streams at the same time, which means the mixer can process at most 32 AudioTrack data streams simultaneously.

3.4 Show Me The Code

public class AudioTrackManager {

    // ...

    // Audio stream type

    private static final int mStreamType = AudioManager.STREAM_MUSIC;

    // Specify the sampling rate (MediaRecorder normally uses 8000 Hz; AAC usually uses 44100 Hz). We use 44100 Hz, the most common rate; the official documentation says this value is guaranteed to work on all devices

    private static final int mSampleRateInHz = 44100;

    // Specify the number of channels. The AudioFormat class defines constants for this

    private static final int mChannelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO; // mono

    // Specify the audio quantization format. The AudioFormat class defines the possible constants; we usually choose ENCODING_PCM_16BIT or ENCODING_PCM_8BIT. PCM stands for pulse code modulation, i.e. the raw audio samples.

    // Each sample can therefore be stored with 16 or 8 bits of resolution; 16 bits takes more space and processing power, but the audio sounds closer to the original.

    private static final int mAudioFormat = AudioFormat.ENCODING_PCM_16BIT;

    // Specify the buffer size. You can get this by calling the getMinBufferSize method of the AudioTrack class.

    private int mMinBufferSize;

    // MODE_STREAM means the application writes data into the AudioTrack again and again with write(), similar to sending data on a socket:

    // the application obtains data from somewhere, e.g. PCM produced by a codec, and writes it to the AudioTrack piece by piece.

    private static int mMode = AudioTrack.MODE_STREAM;



    private void initData() {

        // Determine the size of the frame according to the sampling rate, sampling accuracy, single-channel and dual-channel.

        mMinBufferSize = AudioTrack.getMinBufferSize(mSampleRateInHz, mChannelConfig, mAudioFormat);// Calculate the minimum buffer

        // Note: this is the minimum buffer size AudioTrack requires for these parameters

        // Create the AudioTrack

        mAudioTrack = new AudioTrack(mStreamType, mSampleRateInHz, mChannelConfig,

                mAudioFormat, mMinBufferSize, mMode);

    }





    /**
     * Start the playback thread
     */
    private void startThread() {

        destroyThread();

        isStart = true;

        if (mRecordThread == null) {

            mRecordThread = new Thread(recordRunnable);

            mRecordThread.start();

        }

    }



    /**
     * Playback runnable
     */
    private Runnable recordRunnable = new Runnable() {
        @Override
        public void run() {

            try {

                // Set the priority of the thread

             android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

                byte[] tempBuffer = new byte[mMinBufferSize];

                int readCount = 0;

                while (mDis.available() > 0) {

                    readCount = mDis.read(tempBuffer);

                    if (readCount == AudioTrack.ERROR_INVALID_OPERATION || readCount == AudioTrack.ERROR_BAD_VALUE) {

                        continue;

                    }

                    // Write voice data while playing

                    if (readCount != 0 && readCount != -1) {

                        // If the AudioTrack was released when playback stopped, its state is STATE_UNINITIALIZED, so re-initialize it

                        if (mAudioTrack.getState() == AudioTrack.STATE_UNINITIALIZED) {

                            initData();

                        }

                        mAudioTrack.play();

                        mAudioTrack.write(tempBuffer, 0, readCount);

                    }

                }

                // Stop playing when it is finished

                stopPlay();

            } catch (Exception e) {

                e.printStackTrace();

            }

        }



    };



    /**
     * Start playback
     *
     * @param path
     */
    public void startPlay(String path) {

        try {

            setPath(path);

            startThread();

        } catch (Exception e) {

            e.printStackTrace();

        }

    }



    /**
     * Stop playback
     */
    public void stopPlay() {

        try {

            destroyThread();// Destroy the thread

            if (mAudioTrack != null) {

                if (mAudioTrack.getState() == AudioTrack.STATE_INITIALIZED) {// Initialization succeeded

                    mAudioTrack.stop();// Stop playing

                }

                if (mAudioTrack != null) {

                    mAudioTrack.release();// Release the audioTrack resource

                }

            }

            if (mDis != null) {

                mDis.close();// Close the data input stream

            }

        } catch (Exception e) {

            e.printStackTrace();

        }

    }



}

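The class above relies on setPath(path) to open the data stream mDis, which is not shown. One way it might look, assuming mDis is a DataInputStream field:

private void setPath(String path) throws IOException {
    mDis = new DataInputStream(new BufferedInputStream(new FileInputStream(path)));
}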

The source code address

Welcome to fork or star.