[Android Audio/Video Learning Series (7)] The Recording Pipeline in Practice

[Android Audio/Video Learning Series (1)] How to Draw an Image on Android

[Android Audio/Video Learning Series (2)] The AudioRecord API Explained and Applied

[Android Audio/Video Learning Series (3)] AudioTrack Usage and Details

[Android Audio/Video Learning Series (4)] Using the Camera

[Android Audio/Video Learning Series (5)] MediaExtractor and MediaMuxer Explained

[Android Audio/Video Learning Series (6)] A Brief Look at MediaCodec

Preface

In the previous articles we covered AudioRecord, AudioTrack, Camera, MediaExtractor, MediaMuxer, and MediaCodec. With those pieces in hand, we can now string everything together and do something a little more complex.

1. Process Analysis

1.1 Requirements

What we need to do: wire up the entire audio/video recording pipeline, covering capture, encoding, and packaging the result into an MP4 file.

1.2 Approach

Use MediaCodec to compress the video to H.264 and the audio to AAC, then use MediaMuxer to combine the two streams into an MP4.

1.3 Data Flow

The Camera delivers raw NV21 preview frames, which we convert to NV12 and feed to a MediaCodec H.264 encoder; AudioRecord delivers PCM, which goes to a MediaCodec AAC encoder; MediaMuxer then writes the encoded streams into an MP4 container.

2. Implementation

2.1 Collecting Camera data and encoding it to H.264 in a file

The example here uses the legacy Camera API because it is the simplest to use; Camera2 and CameraX follow the same idea, just with different APIs, so the code only needs minor changes.

Before collecting data, set a few Camera parameters so the frames are easy to process later:

    val parameter = camera?.parameters
    parameter?.previewFormat = ImageFormat.NV21
    parameter?.setPreviewSize(1280, 720)
    camera?.parameters = parameter // apply the changed parameters back to the camera

Then set a PreviewCallback to receive the Camera's raw NV21 frames:

 camera?.setPreviewCallback { bytes, camera ->
     // bytes holds one raw NV21 preview frame; hand it to the encoder here
 }

Next, create an H264VideoEncoder class that performs the encoding and writes the encoded data to a file:

class H264VideoEncoder(private val width: Int,
                       private val height: Int,
                       private val frameRate: Int) {

    private val mediaCodec: MediaCodec
    private val mediaMuxer: MediaMuxer
    private val yuv420Queue = ArrayBlockingQueue<ByteArray>(10)
    private var videoTrack = -1


    var isRunning = false

    init {
        val mediaFormat = MediaFormat.createVideoFormat("video/avc", width, height)
        mediaFormat.setInteger(
            MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar
        )
        mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, width * height * 5)
        mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)

        mediaCodec = MediaCodec.createEncoderByType("video/avc")
        mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        mediaCodec.start()

        val path =
            Environment.getExternalStorageDirectory().absolutePath + File.separator + "temp_video.mp4"

        mediaMuxer = MediaMuxer(path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4) // output container: MP4
    }

    /**
     * Start encoding
     */
    fun startEncoder() {
        GlobalScope.launch(Dispatchers.IO) {
            isRunning = true
            var pts = 0L
            var generateIndex = 0L

            while (isRunning) {
                val input = yuv420Queue.take()
                val yuv420sp = ByteArray(width * height * 3 / 2)
                // The format conversion is required; otherwise the recording plays back as a green screen
                NV21ToNV12(input, yuv420sp, width, height)

                try {
                    // Dequeue an input buffer (timeout 0 = return immediately)
                    val inputBufferIndex = mediaCodec.dequeueInputBuffer(0)
                    if (inputBufferIndex >= 0) {
                        val inputBuffer = mediaCodec.getInputBuffer(inputBufferIndex)
                        if (inputBuffer != null) {
                            pts = computePresentationTime(generateIndex)
                            // Queue the frame for encoding
                            inputBuffer.put(yuv420sp)
                            mediaCodec.queueInputBuffer(
                                inputBufferIndex,
                                0,
                                yuv420sp.size,
                                pts,
                                0
                            )
                            generateIndex += 1
                        }
                    }

                    // Drain encoded output (TIMEOUT_USEC = how long to wait for a buffer)
                    val bufferInfo = MediaCodec.BufferInfo()
                    var outputBufferIndex =
                        mediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC)

                    if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                        videoTrack = mediaMuxer.addTrack(mediaCodec.outputFormat)
                        ALog.e("xiao", "output format changed, videoTrack: $videoTrack")
                        if (videoTrack >= 0) {
                            mediaMuxer.start()
                            ALog.e("xiao", "muxer started")
                        }
                    }
                    while (outputBufferIndex >= 0) {
                        if (videoTrack >= 0) {
                            mediaCodec.getOutputBuffer(outputBufferIndex)?.let {
                                mediaMuxer.writeSampleData(videoTrack, it, bufferInfo)
                            }
                            mediaCodec.releaseOutputBuffer(outputBufferIndex, false)
                        }
                        outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC)
                    }
                } catch (e: Exception) {
                    ALog.e("xiao", "error: ${e.message}")
                    e.printStackTrace()
                }
            }

            try {
                mediaMuxer.stop()
                mediaMuxer.release()
            } catch (e: Exception) {
                e.printStackTrace()
            }

            try {
                mediaCodec.stop()
                mediaCodec.release()
            } catch (e: Exception) {
                e.printStackTrace()
            }

            ALog.e("xiao", "finished")
        }
    }

    /**
     * Stop encoding
     */
    fun stopEncoder() {
        isRunning = false
    }

    /**
     * Generate a timestamp from the frame index
     */
    private fun computePresentationTime(frameIndex: Long): Long {
        return 132 + frameIndex * 1000000 / frameRate
    }

    private fun NV21ToNV12(nv21: ByteArray, nv12: ByteArray, width: Int, height: Int) {
        val frameSize = width * height
        // The Y plane is identical in both layouts
        System.arraycopy(nv21, 0, nv12, 0, frameSize)
        // NV21 stores chroma as V,U,V,U...; NV12 expects U,V,U,V..., so swap each byte pair
        var j = frameSize
        while (j + 1 < nv21.size) {
            nv12[j] = nv21[j + 1]
            nv12[j + 1] = nv21[j]
            j += 2
        }
    }


    fun putData(buffer: ByteArray) {
        // Drop the oldest frame when the queue is full so the camera callback never blocks
        if (yuv420Queue.size >= 10) {
            yuv420Queue.poll()
        }
        yuv420Queue.put(buffer)
    }


    companion object {
        private const val TIMEOUT_USEC = 12000L
    }
}
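The chroma swap in NV21ToNV12 is the easiest part to get wrong, so it is worth checking in isolation. Below is a minimal, self-contained sketch of the same idea (hypothetical function name, plain JVM Kotlin with no Android dependencies): the Y plane is copied unchanged, and every V,U byte pair after it is swapped to U,V.

```kotlin
// Standalone NV21 -> NV12 conversion: copy the Y plane, swap each V,U pair to U,V.
fun nv21ToNv12(nv21: ByteArray, width: Int, height: Int): ByteArray {
    val frameSize = width * height
    require(nv21.size >= frameSize * 3 / 2) { "buffer too small for ${width}x$height" }
    val nv12 = ByteArray(nv21.size)
    // Y plane is identical in both layouts
    System.arraycopy(nv21, 0, nv12, 0, frameSize)
    // Chroma: NV21 is V,U,V,U...; NV12 is U,V,U,V...
    var j = frameSize
    while (j + 1 < nv21.size) {
        nv12[j] = nv21[j + 1]
        nv12[j + 1] = nv21[j]
        j += 2
    }
    return nv12
}
```

On a 2×2 frame with Y = [1, 2, 3, 4] and chroma [V = 9, U = 8], the output keeps the Y plane and yields chroma [8, 9].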

Next we capture audio and then merge it with the video.

Recording the audio:

class H264AudioEncoder(
    private val sampleRateInHz: Int,
    private val channelConfig: Int,
    private val audioFormat: Int
) {

    private val recordBufSize: Int =
        AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat)
    private var audioRecord = AudioRecord(
        MediaRecorder.AudioSource.MIC,
        sampleRateInHz,
        channelConfig,
        audioFormat,
        recordBufSize
    )
    private var mediaCodec: MediaCodec =
        MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC)
    private val path =
        Environment.getExternalStorageDirectory().absolutePath + File.separator + "temp_audio.mp4"
    private val mediaMuxer = MediaMuxer(path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    private var audioTrack = -1

    private var isRunning = false

    /**
     * Start encoding
     */
    fun startEncoder() {
        isRunning = true
        GlobalScope.launch(IO) {
            val audioFormat =
                MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, sampleRateInHz, 1)
            audioFormat.setInteger(
                MediaFormat.KEY_AAC_PROFILE,
                MediaCodecInfo.CodecProfileLevel.AACObjectLC
            )
            audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1000)
            audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1)
            audioFormat.setInteger(MediaFormat.KEY_SAMPLE_RATE, sampleRateInHz)
            mediaCodec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
            mediaCodec.start()

            audioRecord.startRecording()
            val buffer = ByteArray(recordBufSize)
            while (isRunning) {
                val readBytes = audioRecord.read(buffer, 0, recordBufSize)
                ALog.e("xiao", "read $readBytes bytes of audio")
                try {
                    encode(buffer, readBytes)
                } catch (e: java.lang.Exception) {
                    ALog.e("xiao", "failed to encode audio data")
                    e.printStackTrace()
                }
            }

            try {
                audioRecord.stop()
                audioRecord.release()
            } catch (e: Exception) {
                ALog.e("xiao", e.message)
                e.printStackTrace()
            }

            try {
                mediaMuxer.stop()
                mediaMuxer.release()
            } catch (e: Exception) {
                ALog.e("xiao", e.message)
                e.printStackTrace()
            }

            try {
                mediaCodec.stop()
                mediaCodec.release()
            } catch (e: Exception) {
                ALog.e("xiao", e.message)
                e.printStackTrace()
            }

            ALog.e("xiao", "finished")
        }
    }

    private val bufferInfo = MediaCodec.BufferInfo()
    private fun encode(byteArray: ByteArray, readBytes: Int) {
        // Dequeue an input buffer (TIMEOUT_USEC = how long to wait; 0 would return immediately)
        val inputBufferIndex = mediaCodec.dequeueInputBuffer(TIMEOUT_USEC)
        ALog.e("xiao", "inputBufferIndex: $inputBufferIndex")
        if (inputBufferIndex < 0) return
        val inputBuffer = mediaCodec.getInputBuffer(inputBufferIndex)
        if (inputBuffer != null) {
            if (readBytes <= 0) {
                // AudioRecord returned no data: signal end of stream
                ALog.e("xiao", "send BUFFER_FLAG_END_OF_STREAM")
                mediaCodec.queueInputBuffer(
                    inputBufferIndex,
                    0,
                    0,
                    System.nanoTime() / 1000,
                    MediaCodec.BUFFER_FLAG_END_OF_STREAM
                )
            } else {
                // Queue only the bytes actually read from AudioRecord
                inputBuffer.put(byteArray, 0, readBytes)
                mediaCodec.queueInputBuffer(
                    inputBufferIndex,
                    0,
                    readBytes,
                    System.nanoTime() / 1000,
                    0
                )
            }
        }

        // Drain the encoded output
        var outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC)

        if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            audioTrack = mediaMuxer.addTrack(mediaCodec.outputFormat)
            ALog.e("xiao", "output format changed, audioTrack: $audioTrack")
            if (audioTrack >= 0) {
                mediaMuxer.start()
                ALog.e("xiao", "muxer started")
            }
        }
        while (outputBufferIndex >= 0) {
            if (audioTrack >= 0) {
                val outBuffer = mediaCodec.getOutputBuffer(outputBufferIndex)
                if (bufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG != 0) {
                    bufferInfo.size = 0
                }
                if (bufferInfo.size != 0 && outBuffer != null) {
                    mediaMuxer.writeSampleData(audioTrack, outBuffer, bufferInfo)
                }
                mediaCodec.releaseOutputBuffer(outputBufferIndex, false)
            }
            outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC)
        }
    }

    /**
     * Stop encoding
     */
    fun stopEncoder() {
        isRunning = false
    }

    /**
     * Generate a timestamp from the frame index
     * (unused here: the input buffers are stamped with System.nanoTime() instead)
     */
    private val frameRate = 30
    private fun computePresentationTime(frameIndex: Long): Long {
        return 132 + frameIndex * 1000000 / frameRate
    }

    companion object {
        private const val TIMEOUT_USEC = 12000L
    }
}
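Both encoder classes define a computePresentationTime helper (here it ends up unused, since the input buffers are stamped with System.nanoTime() instead). The formula simply spaces frames 1,000,000 / frameRate microseconds apart with a small fixed offset; a standalone sketch (hypothetical name) makes the arithmetic easy to verify:

```kotlin
// Presentation timestamp in microseconds for a frame index at a fixed frame rate.
// Mirrors computePresentationTime: consecutive frames sit 1_000_000 / frameRate µs apart.
fun ptsForFrame(frameIndex: Long, frameRate: Int): Long =
    132 + frameIndex * 1_000_000L / frameRate
```

At 30 fps consecutive frames are 33,333 µs apart and frame 30 lands exactly one second after frame 0.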

Now we have two MP4 clips, one audio-only and one video-only; mixing them with MediaMuxer finishes the job (see Part 5):

class H264Muxer(
    private val videoPath: String = Environment.getExternalStorageDirectory().absolutePath + File.separator + "temp_video.mp4",
    private val audioPath: String = Environment.getExternalStorageDirectory().absolutePath + File.separator + "temp_audio.mp4"
) {

    private val outputPath =
        Environment.getExternalStorageDirectory().absolutePath + File.separator + "video_output.mp4"
    private val videoExtractor = MediaExtractor()
    private val audioExtractor = MediaExtractor()
    private var mediaMuxer:MediaMuxer? = null

    // Start muxing
    @Throws(RuntimeException::class)
    fun muxer() {
        mediaMuxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
        val muxer = mediaMuxer ?: return

        // Audio track info
        var audioTrackIndex = -1
        var audioMuxerTrackIndex = -1
        var audioMaxInputSize = 0
        audioExtractor.setDataSource(audioPath)
        val audioTrackCount = audioExtractor.trackCount
        for (i in 0 until audioTrackCount) {
            val format = audioExtractor.getTrackFormat(i)
            val mime = format.getString(MediaFormat.KEY_MIME) ?: continue
            if (mime.startsWith("audio/")) {
                ALog.e("xiao", "audio track found")
                audioTrackIndex = i
                audioMuxerTrackIndex = muxer.addTrack(format) // add the audio track to the muxer; returns its track index
                audioMaxInputSize = format.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE) // largest expected sample size
                break
            }
        }

        // Video track info
        var videoTrackIndex = -1
        var videoMuxerTrackIndex = -1
        var videoMaxInputSize = 0
        var videoFrameRate = 0
        videoExtractor.setDataSource(videoPath)
        val videoTrackCount = videoExtractor.trackCount
        for (i in 0 until videoTrackCount) {
            val format = videoExtractor.getTrackFormat(i)
            val mime = format.getString(MediaFormat.KEY_MIME) ?: continue
            if (mime.startsWith("video/")) {
                ALog.e("xiao", "video track found")
                videoTrackIndex = i
                videoMuxerTrackIndex = muxer.addTrack(format) // add the video track to the muxer; returns its track index
                videoMaxInputSize = format.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE) // largest expected sample size
                videoFrameRate = format.getInteger(MediaFormat.KEY_FRAME_RATE) // the video's frame rate
                break
            }
        }

        if (audioTrackIndex == -1) throw RuntimeException("audio track not found")
        if (videoTrackIndex == -1) throw RuntimeException("video track not found")

        muxer.start() // start muxing

        audioExtractor.selectTrack(audioTrackIndex) // select the audio track on the extractor
        val audioMediaInfo = MediaCodec.BufferInfo()
        val audioBuffer = ByteBuffer.allocate(audioMaxInputSize)
        while (true) {
            val sampleSize = audioExtractor.readSampleData(audioBuffer, 0) // read the current encoded sample into the buffer
            if (sampleSize <= 0) {
                audioExtractor.unselectTrack(audioTrackIndex)
                break
            }

            // Copy the sample metadata
            audioMediaInfo.offset = 0
            audioMediaInfo.size = sampleSize
            audioMediaInfo.flags = audioExtractor.sampleFlags
            audioMediaInfo.presentationTimeUs = audioExtractor.sampleTime

            muxer.writeSampleData(audioMuxerTrackIndex, audioBuffer, audioMediaInfo)
            audioExtractor.advance()
        }

        videoExtractor.selectTrack(videoTrackIndex) // select the video track on the extractor
        val videoMediaInfo = MediaCodec.BufferInfo()
        val videoBuffer = ByteBuffer.allocate(videoMaxInputSize)
        while (true) {
            val sampleSize = videoExtractor.readSampleData(videoBuffer, 0) // read the current encoded sample into the buffer
            if (sampleSize <= 0) {
                videoExtractor.unselectTrack(videoTrackIndex)
                break
            }

            // Copy the sample metadata from the extractor rather than synthesizing it,
            // so keyframe flags and timestamps survive the remux intact
            videoMediaInfo.offset = 0
            videoMediaInfo.size = sampleSize
            videoMediaInfo.flags = videoExtractor.sampleFlags
            videoMediaInfo.presentationTimeUs = videoExtractor.sampleTime

            muxer.writeSampleData(videoMuxerTrackIndex, videoBuffer, videoMediaInfo)
            videoExtractor.advance()
        }

        audioExtractor.release()
        videoExtractor.release()

        muxer.stop()
        muxer.release()
        ALog.e("xiao", "done")
    }
}

Why not output a complete MP4 directly?

The approach above may look roundabout: we record two separate MP4 files and then merge them. Outputting a single complete MP4 directly is entirely possible; it just needs some changes to the code.

The idea is to buffer the encoded output of both the audio and the video codec in a queue and let a single muxer consume it. Here is the code:

class H264Encode {

    // Video parameters
    private val width = 1280
    private val height = 720
    private val frame = 30

    private var videoCodec: MediaCodec? = null
    private val yuv420Queue = ArrayBlockingQueue<ByteArray>(10)

    // Audio parameters
    private val sampleRateInHz: Int = 44100
    private val channelConfig: Int = AudioFormat.CHANNEL_CONFIGURATION_MONO
    private val encodingBitRate: Int = AudioFormat.ENCODING_PCM_16BIT
    private var audioCodec: MediaCodec? = null
    private var audioRecord: AudioRecord? = null

    private var isRunning = false
    private var outputPath =
        Environment.getExternalStorageDirectory().absolutePath + File.separator + "xiao.mp4"
    private val muxerDateQueue = LinkedBlockingQueue<MuxerData>()
    private var mediaMuxer: MediaMuxer? = null

    @Volatile
    private var isVideoTrackAdd = false
    @Volatile
    private var isAudioTrackAdd = false

    fun startEncoder() {
        isRunning = true
        muxerDateQueue.clear()
        isVideoTrackAdd = false
        isAudioTrackAdd = false
        GlobalScope.launch(IO) {
            mediaMuxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
            launch { catch({ onVideoEncoder() }, "onVideoEncoder") }
            launch { catch({ onAudioEncoder() }, "onAudioEncoder") }
            launch { catch({ onMuxer()}, "onMuxer") }
        }
    }

    fun stopEncoder() {
        isRunning = false
    }

    fun putVideoData(buffer: ByteArray) {
        if (yuv420Queue.size >= 10) {
            yuv420Queue.poll()
        }
        yuv420Queue.put(buffer)
    }

    private fun onVideoEncoder() {
        val muxer = mediaMuxer ?: return

        var videoTrack = -1
        val mediaFormat =
            MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height)
        mediaFormat.setInteger(
            MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar
        )
        mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, width * height * 5)
        mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)

        videoCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
        val videoCodec = videoCodec ?: throw NullPointerException("videoCodec not initialized")
        videoCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        videoCodec.start()

        while (isRunning) {
            val input = yuv420Queue.take()
            val yuv420sp = ByteArray(width * height * 3 / 2)
            // The format conversion is required; otherwise playback is a green screen
            NV21ToNV12(input, yuv420sp, width, height)

            try {
                // Dequeue an input buffer (timeout 0 = return immediately)
                val inputBufferIndex = videoCodec.dequeueInputBuffer(0)
                if (inputBufferIndex >= 0) {
                    val inputBuffer = videoCodec.getInputBuffer(inputBufferIndex)
                    if (inputBuffer != null) {
                        inputBuffer.put(yuv420sp)
                        val nanoTime = System.nanoTime() / 1000
                        videoCodec.queueInputBuffer(inputBufferIndex, 0, yuv420sp.size, nanoTime, 0)
                    }
                }

                // Drain encoded output
                val bufferInfo = MediaCodec.BufferInfo()
                var outputBufferIndex = videoCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC)
                if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    videoTrack = muxer.addTrack(videoCodec.outputFormat)
                    isVideoTrackAdd = true
                }
                while (outputBufferIndex >= 0) {
                    if (videoTrack >= 0) {
                        videoCodec.getOutputBuffer(outputBufferIndex)?.let {
                            if (isMuxerStart()) muxerDateQueue.put(MuxerData(MuxerData.Type.Video, videoTrack, it, bufferInfo))
                        }
                        videoCodec.releaseOutputBuffer(outputBufferIndex, false)
                    }
                    outputBufferIndex = videoCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC)
                }
            } catch (e: Exception) {
                ALog.e("xiao", "onVideoEncoder error: ${e.message}")
                e.printStackTrace()
            }
        }

        try {
            videoCodec.stop()
            videoCodec.release()
            this.videoCodec = null
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }

    private fun onAudioEncoder() {
        val muxer = mediaMuxer ?: return
        var audioTrack = -1

        // Note: createAudioFormat's third argument is the channel count (1 = mono),
        // not an AudioFormat channel-mask constant
        val audioFormat =
            MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, sampleRateInHz, 1)
        audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC)
        audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1000)
        audioCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC)
        audioCodec?.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        audioCodec?.start()

        val recordBufSize = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, encodingBitRate)
        audioRecord = AudioRecord(MediaRecorder.AudioSource.MIC, sampleRateInHz, channelConfig, encodingBitRate, recordBufSize)
        audioRecord?.startRecording()

        val audioCodec = audioCodec ?: throw NullPointerException()
        val audioRecord = audioRecord ?: throw NullPointerException()

        val bufferInfo = MediaCodec.BufferInfo()
        val buffer = ByteArray(recordBufSize)
        while (isRunning) {
            val readBytes = audioRecord.read(buffer, 0, recordBufSize)
            try {
                val inputBufferIndex = audioCodec.dequeueInputBuffer(TIMEOUT_USEC)
                if (inputBufferIndex < 0) continue // no input buffer free yet; keep reading
                val inputBuffer = audioCodec.getInputBuffer(inputBufferIndex)
                if (inputBuffer != null) {
                    if (readBytes <= 0) {
                        ALog.e("xiao", "send BUFFER_FLAG_END_OF_STREAM")
                        audioCodec.queueInputBuffer(inputBufferIndex, 0, 0, System.nanoTime() / 1000, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                    } else {
                        // Queue only the bytes actually read from AudioRecord
                        inputBuffer.put(buffer, 0, readBytes)
                        audioCodec.queueInputBuffer(inputBufferIndex, 0, readBytes, System.nanoTime() / 1000, 0)
                    }
                }

                // Drain the encoded output
                var outputBufferIndex = audioCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC)
                if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    audioTrack = muxer.addTrack(audioCodec.outputFormat)
                    ALog.e("xiao", "output format changed, audioTrack: $audioTrack")
                    if (audioTrack >= 0) {
                        isAudioTrackAdd = true
                    }
                }

                while (outputBufferIndex >= 0) {
                    if (audioTrack >= 0) {
                        val outBuffer = audioCodec.getOutputBuffer(outputBufferIndex)
                        if (bufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG != 0) {
                            bufferInfo.size = 0
                        }
                        if (bufferInfo.size != 0 && outBuffer != null) {
                            if (isMuxerStart()) muxerDateQueue.put(MuxerData(MuxerData.Type.Audio, audioTrack, outBuffer, bufferInfo))
                        }
                        audioCodec.releaseOutputBuffer(outputBufferIndex, false)
                    }
                    outputBufferIndex = audioCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC)
                }

            } catch (e: Exception) {
                ALog.e("xiao", "failed to encode audio data: ${e.message}")
                e.printStackTrace()
            }
        }

        try {
            audioRecord.stop()
            audioRecord.release()
            this.audioRecord = null
        } catch (e: Exception) {
            ALog.e("xiao", e.message)
            e.printStackTrace()
        }

        try {
            audioCodec.stop()
            audioCodec.release()
            this.audioCodec = null
        } catch (e: Exception) {
            ALog.e("xiao", e.message)
            e.printStackTrace()
        }
    }

    private suspend fun onMuxer() {
        // Wait until both tracks have been added before starting the muxer
        while (!isMuxerStart()) { delay(100) }
        val muxer = mediaMuxer ?: throw NullPointerException()
        muxer.start()

        ALog.e("xiao", "muxer started")
        while (isRunning || muxerDateQueue.size > 0) {
            // Poll with a timeout so the loop can exit once encoding stops
            val data = muxerDateQueue.poll(100, TimeUnit.MILLISECONDS) ?: continue
            muxer.writeSampleData(data.trackIndex, data.buffer, data.bufferInfo)
        }

        try {
            muxer.stop()
            muxer.release()
        } catch (e: Exception) {
            ALog.e("xiao", e.message)
            e.printStackTrace()
        }
        mediaMuxer = null
        ALog.e("xiao", "muxer finished")
    }

    private fun NV21ToNV12(nv21: ByteArray, nv12: ByteArray, width: Int, height: Int) {
        val frameSize = width * height
        // The Y plane is identical in both layouts
        System.arraycopy(nv21, 0, nv12, 0, frameSize)
        // NV21 stores chroma as V,U,V,U...; NV12 expects U,V,U,V..., so swap each byte pair
        var j = frameSize
        while (j + 1 < nv21.size) {
            nv12[j] = nv21[j + 1]
            nv12[j + 1] = nv21[j]
            j += 2
        }
    }

    private fun isMuxerStart() = isAudioTrackAdd && isVideoTrackAdd

    private suspend fun catch(action: (suspend () -> Unit), key: String) {
        try {
            action.invoke()
        } catch (e: Exception) {
            ALog.e("xiao", "$key ${e.message}")
            e.printStackTrace()
        }
    }

    data class MuxerData(
        val type: Type,
        val trackIndex: Int,
        val buffer: ByteBuffer,
        val bufferInfo: MediaCodec.BufferInfo
    ) {
        enum class Type {
            Video,
            Audio
        }
    }

    companion object {
        private const val TIMEOUT_USEC = 12000L
    }
}
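The heart of H264Encode is a producer/consumer handoff: two encoder coroutines push tagged samples into a blocking queue, and a single loop drains it so that MediaMuxer is only ever written from one place. Stripped of the Android parts, the pattern looks roughly like this (hypothetical names; plain JVM threads used for illustration):

```kotlin
import java.util.concurrent.LinkedBlockingQueue
import kotlin.concurrent.thread

// Two producers ("encoders") tag their samples; one consumer ("muxer") drains them serially.
fun runPipeline(videoFrames: Int, audioFrames: Int): Map<String, Int> {
    val queue = LinkedBlockingQueue<Pair<String, Int>>()
    val producers = listOf(
        thread { repeat(videoFrames) { queue.put("video" to it) } },
        thread { repeat(audioFrames) { queue.put("audio" to it) } }
    )
    producers.forEach { it.join() } // both encoders have finished
    val written = mutableMapOf("video" to 0, "audio" to 0)
    while (queue.isNotEmpty()) {    // the muxer drains the backlog
        val (type, _) = queue.take()
        written[type] = written.getValue(type) + 1
    }
    return written
}
```

runPipeline(5, 3) reports 5 video and 3 audio samples written, regardless of how the two producers interleave.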

2.2 Using Camera2/CameraX to collect data and output MP4

The legacy Camera API outputs NV21 directly, while Camera2/CameraX deliver YUV_420_888 frames as three separate planes, so the data needs some massaging first:

fun yuv420ToNv21(image: ImageProxy): ByteArray {
    val planes = image.planes
    val yBuffer: ByteBuffer = planes[0].buffer
    val uBuffer: ByteBuffer = planes[1].buffer
    val vBuffer: ByteBuffer = planes[2].buffer
    val ySize: Int = yBuffer.remaining()
    val uSize: Int = uBuffer.remaining()
    val vSize: Int = vBuffer.remaining()
    val nv21 = ByteArray(ySize + vSize + 1)
    yBuffer.get(nv21, 0, ySize)
    vBuffer.get(nv21, ySize, vSize)
    val u = ByteArray(uSize)
    uBuffer.get(u)

    // Drop each U byte into every other slot so chroma alternates as V,U,V,U...
    // Note: this shortcut assumes the U/V planes have pixelStride 2 (interleaved chroma),
    // which is common on devices but not guaranteed by the API.
    var pos = ySize + 1
    for (i in 0 until uSize) {
        if (i % 2 == 0) {
            nv21[pos] = u[i]
            pos += 2
        }
    }
    return nv21
}
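The trick in yuv420ToNv21 is that, with pixelStride 2, the V plane already reads as V,U,V,U... (missing only the final U byte), so after copying it wholesale only the U bytes at even indices need to be dropped into the odd chroma slots. The interleave step in isolation (hypothetical helper, no Android types):

```kotlin
// Writes every other byte of `u` into the odd chroma positions of `nv21`,
// producing the alternating V,U,V,U... layout NV21 expects.
fun interleaveU(nv21: ByteArray, ySize: Int, u: ByteArray) {
    var pos = ySize + 1
    for (i in u.indices step 2) {
        nv21[pos] = u[i]
        pos += 2
    }
}
```

With ySize = 4 and one V byte already in place at index 4, the single U byte lands at index 5.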

We'll go through the relevant YUV background in the next article.

Source code
