
Android 7.0 Audio: The Resample Process in Detail

Qidi 2017.02.23 (Markdown & Haroopad)


【Preface】

Any engineer who has worked with audio files knows that audio data is characterized by a sample rate. For a given bit depth, the higher the sample rate, the finer the played-back sound is in theory and the more faithful the recording, and vice versa.
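To put numbers on that trade-off, the raw PCM data rate is simply sample rate × bytes per sample × channel count. The small C++ snippet below is an illustration added here (it is not part of the Android sources) that computes it for two typical configurations:

    #include <cstdint>
    #include <cstdio>

    // Raw PCM byte rate = sample rate (Hz) * bytes per sample * channel count.
    static uint64_t pcmBytesPerSecond(uint32_t sampleRateHz, uint32_t bitDepth, uint32_t channels)
    {
        return static_cast<uint64_t>(sampleRateHz) * (bitDepth / 8) * channels;
    }

    int main()
    {
        // CD-quality playback vs. a typical 8 kHz telephony capture.
        printf("44.1 kHz / 16-bit / stereo: %llu bytes/s\n",
               (unsigned long long)pcmBytesPerSecond(44100, 16, 2));   // 176400
        printf("8 kHz / 16-bit / mono: %llu bytes/s\n",
               (unsigned long long)pcmBytesPerSecond(8000, 16, 1));    // 16000
        return 0;
    }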

On earlier Android releases, however, no matter what sample rate an audio file originally had, everything was resampled to 44.1 kHz for playback, and recording was fixed at an 8 kHz sample rate. Audiophiles widely criticized this approach, but at the time it was an effective way to keep devices compatible.

As an Android Audio BSP engineer, it is worth understanding how the system implements resampling. Android has now reached version 7.0, so let us look at how the Resample process is implemented in this latest release.

【Background】

We know that in Android, when an app plays an audio file, the framework's AudioPolicyService (APS) receives the audio parameters passed down by the app (such as format, channel mask and sample rate), calls AudioFlinger's createTrack() to create a corresponding Track, then calls openOutput() to open an outputStream, and uses that outputStream to create the matching playback thread (an OffloadThread, DirectOutputThread or MixerThread depending on the use case). Finally the playback thread is matched with the previously created Track, and data starts flowing from the app down to the HAL.

【Resample Process Analysis】

Our analysis of the Android Audio Resample process therefore starts from AudioFlinger. In AudioFlinger::openOutput_l() we can see that once the playback thread has been created successfully, it is added to the mPlaybackThreads vector for management. The code is as follows:

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    ......
    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(   // open an outputStream
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());

    mHardwareStatus = AUDIO_HW_IDLE;

    if (status == NO_ERROR) {
        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
        } else {
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
            // by default, a MixerThread is created as the playback thread
            ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
        }
        mPlaybackThreads.add(*output, thread);      // add the newly created thread to the vector
        return thread;
    }

    return 0;
}

The playback thread then runs and its AudioFlinger::PlaybackThread::threadLoop() method executes. Inside it, prepareTracks_l() is called, which for a mixer thread resolves to AudioFlinger::MixerThread::prepareTracks_l(). The threadLoop() code is as follows:

bool AudioFlinger::PlaybackThread::threadLoop()
{
    Vector< sp<Track> > tracksToRemove;
    ......
    while (!exitPending()) {
        cpuStats.sample(myName);

        Vector< sp<EffectChain> > effectChains;

        { // scope for mLock
            Mutex::Autolock _l(mLock);

            processConfigEvents_l();
            ......
            saveOutputTracks();
            ......
            if ((!mActiveTracks.size() && systemTime() > mStandbyTimeNs) || isSuspended()) {
                // put audio hardware into standby after short delay
                if (shouldStandby_l()) {
                    threadLoop_standby();
                    mStandby = true;
                }
                ......
            }

            // mMixerStatusIgnoringFastTracks is also updated internally
            mMixerStatus = prepareTracks_l(&tracksToRemove);   // call prepareTracks_l() to match the registered Tracks to this playback thread
            ......
            // prevent any changes in effect chain list and in each effect chain
            // during mixing and effect process as the audio buffers could be deleted
            // or modified if an effect is created or deleted
            lockEffectChains_l(effectChains);
        } // mLock scope ends
        ......
        // enable changes in effect chain
        unlockEffectChains(effectChains);

        // Finally let go of removed track(s), without the lock held
        // since we can't guarantee the destructors won't acquire that
        // same lock. This will also mutate and push a new fast mixer state.
        threadLoop_removeTracks(tracksToRemove);
        tracksToRemove.clear();

        // FIXME I don't understand the need for this here;
        //       it was in the original code but maybe the
        //       assignment in saveOutputTracks() makes this unnecessary?
        clearOutputTracks();

        // Effect chains will be actually deleted here if they were removed from
        // mEffectChains list during mixing or effects processing
        effectChains.clear();

        // FIXME Note that the above .clear() is no longer necessary since effectChains
        // is now local to this block, but will keep it for now (at least until merge done).
    }

    threadLoop_exit();

    if (!mStandby) {
        threadLoop_standby();
        mStandby = true;
    }
    ......
    return false;
}

The Resample setup happens in prepareTracks_l(), so let us read it carefully. The function walks through all tracks in the active state with a for loop, and for each track performs two steps:

1. call reqSampleRate = track->mAudioTrackServerProxy->getSampleRate() to obtain the track's own sample rate (the rate of the audio data itself; it is then clamped against the output rate);
2. call mAudioMixer->setParameter(name, AudioMixer::RESAMPLE, AudioMixer::SAMPLE_RATE, (void*)(uintptr_t)reqSampleRate), which compares the track's sample rate with the output device's sample rate and decides whether a new Resampler object has to be created or the existing one can keep being used.

The prepareTracks_l() code is as follows:

AudioFlinger::PlaybackThread::mixer_state AudioFlinger::MixerThread::prepareTracks_l(
        Vector< sp<Track> > *tracksToRemove)
{
    ......
    // find out which tracks need to be processed
    size_t count = mActiveTracks.size();            // number of tracks in the active state
    ......
    for (size_t i=0 ; i<count ; i++) {
        const sp<Track> t = mActiveTracks[i].promote();
        if (t == 0) {
            continue;
        }

        // this const just means the local variable doesn't change
        Track* const track = t.get();               // get the corresponding track
        ......
        audio_track_cblk_t* cblk = track->cblk();

        // The first time a track is added we wait
        // for all its buffers to be filled before processing it
        int name = track->name();
        ......
        if ((framesReady >= minFrames) && track->isReady() &&
                !track->isPaused() && !track->isTerminated())
        {
            ......
            int param = AudioMixer::VOLUME;
            if (track->mFillingUpStatus == Track::FS_FILLED) {
                // no ramp for the first volume setting
                track->mFillingUpStatus = Track::FS_ACTIVE;
                if (track->mState == TrackBase::RESUMING) {
                    track->mState = TrackBase::ACTIVE;
                    param = AudioMixer::RAMP_VOLUME;
                }
                mAudioMixer->setParameter(name, AudioMixer::RESAMPLE, AudioMixer::RESET, NULL);
            // FIXME should not make a decision based on mServer
            } else if (cblk->mServer != 0) {
                // If the track is stopped before the first frame was mixed,
                // do not apply ramp
                param = AudioMixer::RAMP_VOLUME;
            }

            // compute volume for this track
            ......
            // Delegate volume control to effect in track effect chain if needed
            ......
            // XXX: these things DON'T need to be done each time
            mAudioMixer->setBufferProvider(name, track);
            mAudioMixer->enable(name);

            mAudioMixer->setParameter(name, param, AudioMixer::VOLUME0, &vlf);      // set left channel volume
            mAudioMixer->setParameter(name, param, AudioMixer::VOLUME1, &vrf);      // set right channel volume
            mAudioMixer->setParameter(name, param, AudioMixer::AUXLEVEL, &vaf);     // set aux send volume
            mAudioMixer->setParameter(
                name,
                AudioMixer::TRACK,
                AudioMixer::FORMAT, (void *)track->format());                       // set the audio data format
            mAudioMixer->setParameter(
                name,
                AudioMixer::TRACK,
                AudioMixer::CHANNEL_MASK, (void *)(uintptr_t)track->channelMask()); // set the track channel mask
            mAudioMixer->setParameter(
                name,
                AudioMixer::TRACK,
                AudioMixer::MIXER_CHANNEL_MASK, (void *)(uintptr_t)mChannelMask);

            // limit track sample rate to 2 x output sample rate, which changes at re-configuration
            uint32_t maxSampleRate = mSampleRate * AUDIO_RESAMPLER_DOWN_RATIO_MAX;
            uint32_t reqSampleRate = track->mAudioTrackServerProxy->getSampleRate(); // get the track's own sample rate
            if (reqSampleRate == 0) {
                reqSampleRate = mSampleRate;
            } else if (reqSampleRate > maxSampleRate) {
                reqSampleRate = maxSampleRate;
            }
            mAudioMixer->setParameter(
                name,
                AudioMixer::RESAMPLE,
                AudioMixer::SAMPLE_RATE,                                            // set the sample rate (a resampler is set up if needed)
                (void *)(uintptr_t)reqSampleRate);

            AudioPlaybackRate playbackRate = track->mAudioTrackServerProxy->getPlaybackRate();
            mAudioMixer->setParameter(
                name,
                AudioMixer::TIMESTRETCH,
                AudioMixer::PLAYBACK_RATE,                                          // set the playback (time-stretch) rate
                &playbackRate);

            /*
             * Select the appropriate output buffer for the track.
             *
             * Tracks with effects go into their own effects chain buffer
             * and from there into either mEffectBuffer or mSinkBuffer.
             *
             * Other tracks can use mMixerBuffer for higher precision
             * channel accumulation. If this buffer is enabled
             * (mMixerBufferEnabled true), then selected tracks will accumulate
             * into it.
             */
            if (mMixerBufferEnabled
                    && (track->mainBuffer() == mSinkBuffer
                            || track->mainBuffer() == mMixerBuffer)) {
                mAudioMixer->setParameter(
                        name,
                        AudioMixer::TRACK,
                        AudioMixer::MIXER_FORMAT, (void *)mMixerBufferFormat);      // set the mixer buffer data format
                mAudioMixer->setParameter(
                        name,
                        AudioMixer::TRACK,
                        AudioMixer::MAIN_BUFFER, (void *)mMixerBuffer);             // assign the main buffer
                // TODO: override track->mainBuffer()?
                mMixerBufferValid = true;
            } else {
                ......
            }
            mAudioMixer->setParameter(
                name,
                AudioMixer::TRACK,
                AudioMixer::AUX_BUFFER, (void *)track->auxBuffer());                // assign the aux buffer

            // reset retry count
            track->mRetryCount = kMaxTrackRetries;

            // If one track is ready, set the mixer ready if:
            //  - the mixer was not ready during previous round OR
            //  - no other track is not ready
            if (mMixerStatusIgnoringFastTracks != MIXER_TRACKS_READY ||
                    mixerStatus != MIXER_TRACKS_ENABLED) {
                mixerStatus = MIXER_TRACKS_READY;
            }
        } else {
            // underrun detected; handled here
            ......
        }
    }

    // Push the new FastMixer state if necessary
    ......
    // Now perform the deferred reset on fast tracks that have stopped
    ......
    // remove all the tracks that need to be...
    removeTracks_l(*tracksToRemove);
    ......
    // sink or mix buffer must be cleared if all tracks are connected to an
    // effect chain as in this case the mixer will not write to the sink or mix buffer
    // and track effects will accumulate into it
    ......
    // if any fast tracks, then status is ready
    ......
    return mixerStatus;
}

Once it is confirmed that a Resampler object is in place, invalidateState(1 << name) is called so the setting takes effect and resampling can begin. invalidateState() installs AudioMixer::process__validate() as the processing hook; when it runs, it first obtains the per-track resampling function with t.hook = getTrackHook(TRACKTYPE_RESAMPLE, t.mMixerChannelCount, t.mMixerInFormat, t.mMixerFormat); and sets state->hook = process__genericResampling;, which in turn invokes the track hook through t.hook(&t, outTemp, numFrames, state->resampleTemp, aux). The setParameter() code is as follows:

void AudioMixer::setParameter(int name, int target, int param, void *value)
{
    ......
    int valueInt = static_cast<int>(reinterpret_cast<uintptr_t>(value));
    int32_t *valueBuf = reinterpret_cast<int32_t*>(value);

    switch (target) {
    ......
    case RESAMPLE:
        switch (param) {
        case SAMPLE_RATE:
            ALOG_ASSERT(valueInt > 0, "bad sample rate %d", valueInt);
            if (track.setResampler(uint32_t(valueInt), mSampleRate)) {  // create a Resampler object or keep the existing one
                ALOGV("setParameter(RESAMPLE, SAMPLE_RATE, %u)", uint32_t(valueInt));
                invalidateState(1 << name);                             // make the setting take effect and trigger the resampling path
            }
            break;
        case RESET:
            track.resetResampler();
            invalidateState(1 << name);
            break;
        case REMOVE:
            delete track.resampler;
            track.resampler = NULL;
            track.sampleRate = mSampleRate;
            invalidateState(1 << name);
            break;
        default:
            LOG_ALWAYS_FATAL("setParameter resample: bad param %d", param);
        }
        break;
    }
}
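The decision itself is made inside track.setResampler(), whose body is not quoted above. The following self-contained sketch uses hypothetical stand-in types (FakeResampler, TrackState are not the real AOSP classes) to reproduce the gist of that decision: a resampler is only created when the track's sample rate differs from the output sample rate, and true is returned whenever the mixer state changed so that the caller goes on to call invalidateState():

    #include <cstdint>
    #include <cstdio>

    // Hypothetical stand-ins for the real AudioResampler/track_t types,
    // used only to illustrate the decision flow shown in setParameter() above.
    struct FakeResampler { uint32_t inRate, outRate; };

    struct TrackState {
        uint32_t sampleRate = 0;                // current source (track) sample rate
        FakeResampler* resampler = nullptr;

        // Returns true when the mixer state changed and invalidateState() should run.
        bool setResampler(uint32_t trackSampleRate, uint32_t devSampleRate) {
            if (trackSampleRate == devSampleRate && resampler == nullptr) {
                return false;                   // rates already match, no resampler needed
            }
            if (sampleRate != trackSampleRate) {
                sampleRate = trackSampleRate;   // remember the new source rate
                if (resampler == nullptr) {
                    // Create a resampler converting trackSampleRate -> devSampleRate.
                    resampler = new FakeResampler{trackSampleRate, devSampleRate};
                }
                return true;                    // state changed
            }
            return false;
        }
    };

    int main() {
        TrackState t;
        printf("48k track on 44.1k output -> changed=%d\n", t.setResampler(48000, 44100)); // 1: resampler created
        printf("same call again           -> changed=%d\n", t.setResampler(48000, 44100)); // 0: nothing new
        delete t.resampler;
        return 0;
    }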

The invalidateState() code is as follows:

void AudioMixer::invalidateState(uint32_t mask)
{
    if (mask != 0) {
        mState.needsChanged |= mask;
        mState.hook = process__validate;    // make the configuration take effect on the next mix cycle
    }
}

The process__validate() code is as follows:

void AudioMixer::process__validate(state_t* state)
{
    ......
    uint32_t en = state->enabledTracks;
    while (en) {
        ......
        if (n & NEEDS_MUTE) {
            ......
        } else {
            ......
            if (n & NEEDS_RESAMPLE) {
                all16BitsStereoNoResample = false;
                resampling = true;
                t.hook = getTrackHook(TRACKTYPE_RESAMPLE, t.mMixerChannelCount,
                        t.mMixerInFormat, t.mMixerFormat);
                // get the per-track function used for resampling
                // (looking at getTrackHook() shows that the function returned is track__genericResample())
                ALOGV_IF((n & NEEDS_CHANNEL_COUNT__MASK) > NEEDS_CHANNEL_2,
                        "Track %d needs downmix + resample", i);
            } else {
                ......
            }
        }
    }

    // select the processing hooks
    state->hook = process__nop;
    if (countActiveTracks > 0) {
        if (resampling) {
            if (!state->outputTemp) {
                state->outputTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            if (!state->resampleTemp) {
                state->resampleTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            state->hook = process__genericResampling;   // when resampling is needed, use process__genericResampling()
        } else {
            ......
        }
    }
    ......
    // Now that the volume ramp has been done, set optimal state and
    // track hooks for subsequent mixer process
    ......
}

The process__genericResampling() code is as follows:

// generic code with resampling
void AudioMixer::process__genericResampling(state_t* state)
{
    ......
    uint32_t e0 = state->enabledTracks;
    while (e0) {
        // process by group of tracks with same output buffer
        // to optimize cache use
        ......
        while (e1) {
            ......
            // this is a little goofy, on the resampling case we don't
            // acquire/release the buffers because it's done by
            // the resampler.
            if (t.needs & NEEDS_RESAMPLE) {
                t.hook(&t, outTemp, numFrames, state->resampleTemp, aux);   // calls track__genericResample() to perform the resampling
            } else {
                ......
            }
        }
        convertMixerFormat(out, t1.mMixerFormat, outTemp, t1.mMixerInFormat,
                numFrames * t1.mMixerChannelCount);
    }
}

With that, the analysis of the Resample process during audio playback on Android is complete.

The resampling itself is essentially digital signal processing, i.e. a mathematical operation. The algorithms provided in Android are linear interpolation, cubic interpolation and FIR filtering. Interested engineers can consult the relevant books and papers; the signal-processing details are not discussed here.
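As a taste of the simplest of those three approaches, here is a minimal linear-interpolation resampler in C++. It is purely illustrative, not the AudioResampler implementation, and it skips the anti-aliasing filtering and fixed-point phase arithmetic that real resamplers use:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Resample a mono float buffer from inRate to outRate by linear interpolation.
    std::vector<float> resampleLinear(const std::vector<float>& in, uint32_t inRate, uint32_t outRate)
    {
        if (in.empty() || inRate == 0 || outRate == 0) return {};
        const double step = static_cast<double>(inRate) / outRate;       // input frames per output frame
        const size_t outFrames = static_cast<size_t>(in.size() / step);
        std::vector<float> out(outFrames);

        double pos = 0.0;
        for (size_t i = 0; i < outFrames; ++i, pos += step) {
            const size_t idx = static_cast<size_t>(pos);
            const double frac = pos - idx;
            const float s0 = in[idx];
            const float s1 = (idx + 1 < in.size()) ? in[idx + 1] : s0;   // clamp at the last sample
            out[i] = static_cast<float>(s0 + (s1 - s0) * frac);          // weighted average of neighbours
        }
        return out;
    }

    int main()
    {
        // Upsample a short 8 kHz ramp to 44.1 kHz and report the frame counts.
        std::vector<float> in(8000);
        for (size_t i = 0; i < in.size(); ++i) in[i] = static_cast<float>(i) / in.size();
        std::vector<float> out = resampleLinear(in, 8000, 44100);
        printf("in: %zu frames, out: %zu frames\n", in.size(), out.size());   // roughly 44100 output frames
        return 0;
    }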

