9. h264 RTP Transmission in Detail (1)
In the earlier chapters on the server side, one fairly important question was left unexplored: how a media file is opened and its SDP information obtained. Let's start there. When the RTSPServer receives a DESCRIBE request for a given medium, it finds the corresponding ServerMediaSession and calls ServerMediaSession::generateSDPDescription(). generateSDPDescription() iterates over every ServerMediaSubsession contained in the ServerMediaSession, obtains each subsession's SDP via subsession->sdpLines(), and concatenates them into one complete SDP description, which it returns. We can be almost certain that the file is opened and parsed inside each subsession's sdpLines() function, so let's look at that function:
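As a rough illustration of that concatenation step, here is a simplified sketch, not the actual live555 implementation (the real session-level header also contains o=, a=range: and other lines), showing how a session-level SDP could be assembled from the strings that each subsession's sdpLines() returns:

#include <string>
#include <vector>

// Simplified sketch only -- not live555 code.  generateSDPDescription()
// conceptually writes the session-level header, then appends the
// media-level lines that each subsession's sdpLines() returned.
std::string generateSdpSketch(const std::string& sessionName,
                              const std::vector<std::string>& subsessionSdp) {
  std::string sdp;
  sdp += "v=0\r\n";
  sdp += "s=" + sessionName + "\r\n";
  sdp += "t=0 0\r\n";
  for (std::vector<std::string>::const_iterator it = subsessionSdp.begin();
       it != subsessionSdp.end(); ++it) {
    sdp += *it;  // each entry: m=..., c=..., b=..., a=rtpmap:..., a=control:...
  }
  return sdp;
}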
char const* OnDemandServerMediaSubsession::sdpLines() {
  if (fSDPLines == NULL) {
    // We need to construct a set of SDP lines that describe this
    // subsession (as a unicast stream).  To do so, we first create
    // dummy (unused) source and "RTPSink" objects,
    // whose parameters we use for the SDP lines:
    unsigned estBitrate;
    FramedSource* inputSource = createNewStreamSource(0, estBitrate);
    if (inputSource == NULL) return NULL; // file not found

    struct in_addr dummyAddr;
    dummyAddr.s_addr = 0;
    Groupsock dummyGroupsock(envir(), dummyAddr, 0, 0);
    unsigned char rtpPayloadType = 96 + trackNumber() - 1; // if dynamic
    RTPSink* dummyRTPSink
      = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);

    setSDPLinesFromRTPSink(dummyRTPSink, inputSource, estBitrate);
    Medium::close(dummyRTPSink);
    closeStreamSource(inputSource);
  }

  return fSDPLines;
}

What it does: the subsession caches the SDP for its media file in fSDPLines, but on the first request fSDPLines is NULL, so it has to be built first. The way it is built is surprisingly heavyweight: a temporary source and RTPSink are created, wired together into a stream, and only after "playing" them for a while is fSDPLines obtained. createNewStreamSource() and createNewRTPSink() are both virtual functions, so the source and sink created here are whatever the derived class specifies. Since we are analyzing H264, that derived class is H264VideoFileServerMediaSubsession; let's look at its two functions:

FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/, unsigned& estBitrate) {
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(envir(), fFileName);
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}

RTPSink* H264VideoFileServerMediaSubsession::createNewRTPSink(
    Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic,
    FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}

As you can see, they create an H264VideoStreamFramer and an H264VideoRTPSink respectively. H264VideoStreamFramer is certainly a source too, but internally it makes use of yet another source, a ByteStreamFileSource. Why it is layered this way will be analyzed later; ignore it for now. We still have not seen the code that actually opens the file, so keep digging into setSDPLinesFromRTPSink():

void OnDemandServerMediaSubsession::setSDPLinesFromRTPSink(
    RTPSink* rtpSink, FramedSource* inputSource, unsigned estBitrate) {
  if (rtpSink == NULL) return;

  char const* mediaType = rtpSink->sdpMediaType();
  unsigned char rtpPayloadType = rtpSink->rtpPayloadType();
  struct in_addr serverAddrForSDP;
  serverAddrForSDP.s_addr = fServerAddressForSDP;
  char* const ipAddressStr = strDup(our_inet_ntoa(serverAddrForSDP));
  char* rtpmapLine = rtpSink->rtpmapLine();
  char const* rangeLine = rangeSDPLine();
  char const* auxSDPLine = getAuxSDPLine(rtpSink, inputSource);
  if (auxSDPLine == NULL) auxSDPLine = "";

  char const* const sdpFmt =
    "m=%s %u RTP/AVP %d\r\n"
    "c=IN IP4 %s\r\n"
    "b=AS:%u\r\n"
    "%s"
    "%s"
    "%s"
    "a=control:%s\r\n";
  unsigned sdpFmtSize = strlen(sdpFmt)
    + strlen(mediaType) + 5 /* max short len */ + 3 /* max char len */
    + strlen(ipAddressStr)
    + 20 /* max int len */
    + strlen(rtpmapLine)
    + strlen(rangeLine)
    + strlen(auxSDPLine)
    + strlen(trackId());
  char* sdpLines = new char[sdpFmtSize];
  sprintf(sdpLines, sdpFmt,
          mediaType,      // m= <media>
          fPortNumForSDP, // m= <port>
          rtpPayloadType, // m= <fmt list>
          ipAddressStr,   // c= address
          estBitrate,     // b=AS:<bandwidth>
          rtpmapLine,     // a=rtpmap:... (if present)
          rangeLine,      // a=range:... (if present)
          auxSDPLine,     // optional extra SDP line
          trackId());     // a=control:<track-id>
  delete[] (char*)rangeLine;
  delete[] rtpmapLine;
  delete[] ipAddressStr;

  fSDPLines = strDup(sdpLines);
  delete[] sdpLines;
}

This function obtains the subsession's SDP and stores it in fSDPLines. The actual opening of the file must already have been done inside rtpSink->rtpmapLine(), or even earlier when the source was created. Let's set that aside for now and first make the whole SDP acquisition process clear, so the focus shifts to getAuxSDPLine():

char const* OnDemandServerMediaSubsession::getAuxSDPLine(
    RTPSink* rtpSink, FramedSource* /*inputSource*/) {
  // Default implementation:
  return rtpSink == NULL ? NULL : rtpSink->auxSDPLine();
}

Very simple: it calls rtpSink->auxSDPLine(), so the next stop would be H264VideoRTPSink::auxSDPLine(). There is actually no need to read it; it just takes the PPS, SPS, etc. saved in the source and turns them into the a=fmtp line. In reality, though, it is not that simple, because H264VideoFileServerMediaSubsession overrides getAuxSDPLine()! Had it not been overridden, the aux SDP line would already have been available from parsing the file earlier; the fact that it is overridden means it was not obtained earlier and can only be produced inside this function. Here is the override in H264VideoFileServerMediaSubsession:

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(
    RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)

  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and
    // "sprop-parameter-sets") isn't known until we start reading the file.
    // This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink;

    // Start reading the file:
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

    // Check whether the sink's 'auxSDPLine()' is ready:
    checkForAuxSDPLine(this);
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag);

  return fAuxSDPLine;
}

The comment explains it well: for H264 the PPS/SPS cannot be read from a file header; the file has to be "played" for a bit first (naturally, since it is a raw elementary-stream file, there is no header at all). In other words, they cannot be taken from the rtpSink right away. To guarantee the aux SDP line is available before this function returns, the big event loop is moved in here. afterPlayingDummy() is executed when playback ends, i.e. after the aux SDP has been obtained. So what does checkForAuxSDPLine(), called before the event loop, do?

void H264VideoFileServerMediaSubsession::checkForAuxSDPLine1() {
  char const* dasl;

  if (fAuxSDPLine != NULL) {
    // Signal the event loop that we're done:
    setDoneFlag();
  } else if (fDummyRTPSink != NULL
             && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
    fAuxSDPLine = strDup(dasl);
    fDummyRTPSink = NULL;

    // Signal the event loop that we're done:
    setDoneFlag();
  } else {
    // try again after a brief delay:
    int uSecsToDelay = 100000; // 100 ms
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay,
                   (TaskFunc*)checkForAuxSDPLine, this);
  }
}

It checks whether the aux SDP line has already been obtained; if so, it sets the done flag and returns at once. If not, it checks whether the sink now has the aux SDP line; if it does, it likewise sets the done flag and returns. If it still isn't available, the check function is rescheduled as a delayed task, so the check repeats every 100 ms, and each check essentially amounts to one call to fDummyRTPSink->auxSDPLine(). The big event loop stops as soon as it detects that fDoneFlag has changed, at which point the aux SDP has been obtained. If, however, the end of the file is reached without ever getting the aux SDP, afterPlayingDummy() is executed and stops the event loop instead. Afterwards the parent subsession class closes these temporary source and sink objects; they are created anew when real playback starts.
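The blocking-until-ready trick above is live555's general "watch variable" pattern: doEventLoop() keeps running the scheduler until the char it was given becomes non-zero, and a periodic delayed task flips that char once the data is ready. Below is a minimal sketch of that pattern. The AuxSDPWaiter type is a hypothetical stand-in, not part of live555; it assumes startPlaying() has already been called on the dummy sink (as in getAuxSDPLine() above) and, unlike the real code, has no afterPlaying fallback if the file ends first:

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

// Hypothetical illustration of the watch-variable pattern used by
// H264VideoFileServerMediaSubsession::getAuxSDPLine().
struct AuxSDPWaiter {
  UsageEnvironment& env;
  RTPSink* sink;          // the dummy sink that is already playing
  char doneFlag;          // watch variable; any non-zero value stops doEventLoop()
  char const* result;

  AuxSDPWaiter(UsageEnvironment& e, RTPSink* s)
    : env(e), sink(s), doneFlag(0), result(NULL) {}

  static void poll(void* clientData) {
    AuxSDPWaiter* w = (AuxSDPWaiter*)clientData;
    char const* dasl = w->sink->auxSDPLine();
    if (dasl != NULL) {
      w->result = strDup(dasl); // copy it; the sink owns the original
      w->doneFlag = ~0;         // signal the event loop to return
    } else {
      // Not ready yet; check again in 100 ms:
      w->env.taskScheduler().scheduleDelayedTask(100000, poll, w);
    }
  }

  char const* waitForAuxSDPLine() {
    poll(this);                                 // start polling
    env.taskScheduler().doEventLoop(&doneFlag); // blocks until doneFlag != 0
    return result;
  }
};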
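For reference, the media-level description that all of this machinery eventually yields for an H264 track looks roughly like the following (illustrative values only: the port is 0 because concrete ports are negotiated later at SETUP time, and profile-level-id / sprop-parameter-sets depend on the actual SPS/PPS read from the file):

m=video 0 RTP/AVP 96
c=IN IP4 192.168.1.10
b=AS:500
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=42C01E;sprop-parameter-sets=Z0LAHtkDxWhAAAADAEAAAAwDxYuS,aMuMsg==
a=control:track1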