

An Analysis of Several Approaches to Audio/Video Muxing on Android

2019-10-22 18:21:32
Source: reprint | Contributed by: a reader

Preface

I recently ran into audio/video processing requirements at work. For muxing audio and video on Android, the approaches I surveyed fall into three broad categories: the MediaMuxer hardware-codec path, mp4parser, and FFmpeg. All three can do the job, but each has its own limitations and problems. I am recording the implementations and the issues here for later study. Without further ado, here is the detailed walkthrough.

Method 1 (Fail)

Use MediaMuxer to mux the audio and video streams.

Result: the merge itself works. Playback is normal in Android's native VideoView and SurfaceView, and in most players. However, after uploading to YouTube a problem appears: the audio is discontinuous. YouTube re-compresses uploads, and the audio timing apparently breaks during that re-compression.

Analysis: the presentationTimeUs timestamps set in MediaCodec.BufferInfo are wrong, so YouTube's re-compression scrambles the audio.
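The code below accumulates a single interval, measured from two video frames, onto the presentationTimeUs of *both* tracks. Audio frames have their own duration, determined by the codec's frame size and sample rate, not by the video frame rate. A minimal, non-Android sketch of the arithmetic (the AAC frame size of 1024 samples and the frame rates used here are illustrative assumptions):

```java
// Each track's PTS must come from its own clock. Reusing the video frame
// interval for audio drifts the audio timeline, which is the bug analyzed above.
class PtsMath {
    // PTS of audio frame n, in microseconds, assuming AAC (1024 PCM samples per frame)
    public static long audioPtsUs(int frameIndex, int sampleRate) {
        return frameIndex * 1024L * 1_000_000L / sampleRate;
    }

    // PTS of video frame n at a fixed frame rate
    public static long videoPtsUs(int frameIndex, int fps) {
        return frameIndex * 1_000_000L / fps;
    }
}
```

At 44.1 kHz an AAC frame lasts about 23.2 ms, while a 25 fps video frame lasts 40 ms; stepping the audio PTS by 40 ms per frame (as the failing code effectively does) stretches the audio timeline by almost 2x.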

```java
public static void muxVideoAndAudio(String videoPath, String audioPath, String muxPath) {
    try {
        MediaExtractor videoExtractor = new MediaExtractor();
        videoExtractor.setDataSource(videoPath);
        MediaFormat videoFormat = null;
        int videoTrackIndex = -1;
        int videoTrackCount = videoExtractor.getTrackCount();
        for (int i = 0; i < videoTrackCount; i++) {
            videoFormat = videoExtractor.getTrackFormat(i);
            String mimeType = videoFormat.getString(MediaFormat.KEY_MIME);
            if (mimeType.startsWith("video/")) {
                videoTrackIndex = i;
                break;
            }
        }
        MediaExtractor audioExtractor = new MediaExtractor();
        audioExtractor.setDataSource(audioPath);
        MediaFormat audioFormat = null;
        int audioTrackIndex = -1;
        int audioTrackCount = audioExtractor.getTrackCount();
        for (int i = 0; i < audioTrackCount; i++) {
            audioFormat = audioExtractor.getTrackFormat(i);
            String mimeType = audioFormat.getString(MediaFormat.KEY_MIME);
            if (mimeType.startsWith("audio/")) {
                audioTrackIndex = i;
                break;
            }
        }
        videoExtractor.selectTrack(videoTrackIndex);
        audioExtractor.selectTrack(audioTrackIndex);
        MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
        MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();
        MediaMuxer mediaMuxer = new MediaMuxer(muxPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        int writeVideoTrackIndex = mediaMuxer.addTrack(videoFormat);
        int writeAudioTrackIndex = mediaMuxer.addTrack(audioFormat);
        mediaMuxer.start();

        ByteBuffer byteBuffer = ByteBuffer.allocate(500 * 1024);
        long sampleTime = 0;
        {
            // Estimate the video frame interval from the gap between two consecutive samples.
            videoExtractor.readSampleData(byteBuffer, 0);
            if (videoExtractor.getSampleFlags() == MediaExtractor.SAMPLE_FLAG_SYNC) {
                videoExtractor.advance();
            }
            videoExtractor.readSampleData(byteBuffer, 0);
            long secondTime = videoExtractor.getSampleTime();
            videoExtractor.advance();
            long thirdTime = videoExtractor.getSampleTime();
            sampleTime = Math.abs(thirdTime - secondTime);
        }
        videoExtractor.unselectTrack(videoTrackIndex);
        videoExtractor.selectTrack(videoTrackIndex);

        while (true) {
            int readVideoSampleSize = videoExtractor.readSampleData(byteBuffer, 0);
            if (readVideoSampleSize < 0) {
                break;
            }
            videoBufferInfo.size = readVideoSampleSize;
            videoBufferInfo.presentationTimeUs += sampleTime;
            videoBufferInfo.offset = 0;
            //noinspection WrongConstant
            videoBufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME; // videoExtractor.getSampleFlags()
            mediaMuxer.writeSampleData(writeVideoTrackIndex, byteBuffer, videoBufferInfo);
            videoExtractor.advance();
        }
        while (true) {
            int readAudioSampleSize = audioExtractor.readSampleData(byteBuffer, 0);
            if (readAudioSampleSize < 0) {
                break;
            }
            audioBufferInfo.size = readAudioSampleSize;
            // Note: the audio PTS is stepped by the VIDEO frame interval -- the source of the bug above.
            audioBufferInfo.presentationTimeUs += sampleTime;
            audioBufferInfo.offset = 0;
            //noinspection WrongConstant
            audioBufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME; // videoExtractor.getSampleFlags()
            mediaMuxer.writeSampleData(writeAudioTrackIndex, byteBuffer, audioBufferInfo);
            audioExtractor.advance();
        }
        mediaMuxer.stop();
        mediaMuxer.release();
        videoExtractor.release();
        audioExtractor.release();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```

Method 2 (Success)

```java
public static void muxVideoAudio(String videoFilePath, String audioFilePath, String outputFile) {
    try {
        MediaExtractor videoExtractor = new MediaExtractor();
        videoExtractor.setDataSource(videoFilePath);
        MediaExtractor audioExtractor = new MediaExtractor();
        audioExtractor.setDataSource(audioFilePath);
        MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        videoExtractor.selectTrack(0);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
        int videoTrack = muxer.addTrack(videoFormat);
        audioExtractor.selectTrack(0);
        MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
        int audioTrack = muxer.addTrack(audioFormat);
        LogUtil.d(TAG, "Video Format " + videoFormat.toString());
        LogUtil.d(TAG, "Audio Format " + audioFormat.toString());

        boolean sawEOS = false;
        int frameCount = 0;
        int offset = 100;
        int sampleSize = 256 * 1024;
        ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
        ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
        MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
        MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();
        videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
        audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
        muxer.start();

        // Copy every video sample, keeping the extractor's own timestamps and flags.
        while (!sawEOS) {
            videoBufferInfo.offset = offset;
            videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);
            if (videoBufferInfo.size < 0) {
                sawEOS = true;
                videoBufferInfo.size = 0;
            } else {
                videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
                //noinspection WrongConstant
                videoBufferInfo.flags = videoExtractor.getSampleFlags();
                muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
                videoExtractor.advance();
                frameCount++;
            }
        }

        // Then copy every audio sample the same way.
        boolean sawEOS2 = false;
        int frameCount2 = 0;
        while (!sawEOS2) {
            frameCount2++;
            audioBufferInfo.offset = offset;
            audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);
            if (audioBufferInfo.size < 0) {
                sawEOS2 = true;
                audioBufferInfo.size = 0;
            } else {
                audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
                //noinspection WrongConstant
                audioBufferInfo.flags = audioExtractor.getSampleFlags();
                muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
                audioExtractor.advance();
            }
        }
        muxer.stop();
        muxer.release();
        LogUtil.d(TAG, "Output: " + outputFile);
    } catch (IOException e) {
        LogUtil.d(TAG, "Mixer Error 1 " + e.getMessage());
    } catch (Exception e) {
        LogUtil.d(TAG, "Mixer Error 2 " + e.getMessage());
    }
}
```

(The original EOS checks tested `videoBufferInfo.size < 0 || audioBufferInfo.size < 0` in both loops, an apparent copy-paste slip; each loop only needs to check its own extractor's result.)
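Method 2 writes the entire video track and then the entire audio track; MediaMuxer sorts samples per track by timestamp, so this works, but an interleaved write (always emitting the sample with the smallest pending presentationTimeUs) keeps buffering lower. Stripped of the Android APIs, the interleaving decision is just a merge by timestamp. A minimal sketch, with hypothetical timestamp arrays standing in for the two extractors:

```java
import java.util.ArrayList;
import java.util.List;

// Merge two per-track timestamp streams into one write order.
// Each output entry is {trackIndex (0 = video, 1 = audio), presentationTimeUs}.
class SampleInterleaver {
    public static List<long[]> interleave(long[] videoPts, long[] audioPts) {
        List<long[]> order = new ArrayList<>();
        int v = 0, a = 0;
        while (v < videoPts.length || a < audioPts.length) {
            // Take the video sample when audio is exhausted or video is not later.
            boolean takeVideo = a >= audioPts.length
                    || (v < videoPts.length && videoPts[v] <= audioPts[a]);
            if (takeVideo) {
                order.add(new long[]{0, videoPts[v++]});
            } else {
                order.add(new long[]{1, audioPts[a++]});
            }
        }
        return order;
    }
}
```

In the real method, "take the video sample" would mean one readSampleData/writeSampleData/advance step on the video extractor, and likewise for audio.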

Method 3

Implemented with mp4parser

mp4parser is an open-source toolbox for video processing. Its methods depend on other parts of the toolbox, so those pieces need to be packaged as a jar and added to your project (or pulled in as a dependency) before you can call the mp4parser APIs.

compile "com.googlecode.mp4parser:isoparser:1.1.21"

Problem: after YouTube's re-compression, most of the video data is lost. Typically only about one second of footage survives, effectively turning the video into a still image.

```java
public boolean mux(String videoFile, String audioFile, final String outputFile) {
    if (isStopMux) {
        return false;
    }
    Movie video;
    try {
        video = MovieCreator.build(videoFile);
    } catch (RuntimeException e) {
        e.printStackTrace();
        return false;
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }
    Movie audio;
    try {
        audio = MovieCreator.build(audioFile);
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    } catch (NullPointerException e) {
        e.printStackTrace();
        return false;
    }
    Track audioTrack = audio.getTracks().get(0);
    video.addTrack(audioTrack);
    Container out = new DefaultMp4Builder().build(video);
    FileOutputStream fos;
    try {
        fos = new FileOutputStream(outputFile);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        return false;
    }
    BufferedWritableFileByteChannel byteBufferByteChannel = new BufferedWritableFileByteChannel(fos);
    try {
        out.writeContainer(byteBufferByteChannel);
        byteBufferByteChannel.close();
        fos.close();
        if (isStopMux) {
            return false;
        }
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mCustomeProgressDialog.setProgress(100);
                goShareActivity(outputFile);
                // FileUtils.insertMediaDB(AddAudiosActivity.this, outputFile);
            }
        });
    } catch (IOException e) {
        e.printStackTrace();
        if (mCustomeProgressDialog.isShowing()) {
            mCustomeProgressDialog.dismiss();
        }
        ToastUtil.showShort(getString(R.string.process_failed));
        return false;
    }
    return true;
}

private static class BufferedWritableFileByteChannel implements WritableByteChannel {
    private static final int BUFFER_CAPACITY = 2000000;
    private boolean isOpen = true;
    private final OutputStream outputStream;
    private final ByteBuffer byteBuffer;
    private final byte[] rawBuffer = new byte[BUFFER_CAPACITY];

    private BufferedWritableFileByteChannel(OutputStream outputStream) {
        this.outputStream = outputStream;
        this.byteBuffer = ByteBuffer.wrap(rawBuffer);
    }

    @Override
    public int write(ByteBuffer inputBuffer) throws IOException {
        int inputBytes = inputBuffer.remaining();
        if (inputBytes > byteBuffer.remaining()) {
            dumpToFile();
            byteBuffer.clear();
            if (inputBytes > byteBuffer.remaining()) {
                throw new BufferOverflowException();
            }
        }
        byteBuffer.put(inputBuffer);
        return inputBytes;
    }

    @Override
    public boolean isOpen() {
        return isOpen;
    }

    @Override
    public void close() throws IOException {
        dumpToFile();
        isOpen = false;
    }

    private void dumpToFile() {
        try {
            outputStream.write(rawBuffer, 0, byteBuffer.position());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Method 4

Using FFmpeg

FFmpeg has a rich set of codec plugins and thorough documentation, and debugging a single ffmpeg command line is far simpler than debugging the large, complex encode/decode code you otherwise have to write (yes, the MediaCodec implementation is tedious and verbose).

Merge video/audio and retain both audio tracks

This works and has broad compatibility, but because the decoding is done in software, the merge is unbearably slow, and I don't yet know enough about FFmpeg optimization to speed it up.
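The original article does not include the exact command, so as a sketch (file paths and the choice of the amix filter are my assumptions): a typical merge that keeps the video's own audio and mixes in a second audio track might look like this.

```shell
# Mux a music track onto a video while keeping the video's original audio:
# [0:a] and [1:a] are the two audio streams, amix blends them into [aout],
# the video stream is stream-copied (-c:v copy) and the mix is re-encoded to AAC.
ffmpeg -i input.mp4 -i music.mp3 \
  -filter_complex "[0:a][1:a]amix=inputs=2:duration=first[aout]" \
  -map 0:v -map "[aout]" -c:v copy -c:a aac -shortest output.mp4
```

Because the video is stream-copied, only the audio is re-encoded here; the slowness described above bites hardest when the video itself must go through FFmpeg's software encoder.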

Summary

That is everything in this article. I hope the content offers some reference value for your study or work; if you have questions, feel free to leave a comment. Thank you for supporting VEVB.

