WebRTC VideoEngine Local Video Data Processing: VideoCaptureInput
In the previous post we analyzed the implementation of VideoCaptureInputTest, which showed the basic usage of VideoCaptureInput. This post looks at the implementation of VideoCaptureInput in detail.
1. VideoCaptureInput class member analysis

Key member variables (their roles follow from the constructor and the code analyzed below):

- encoder_thread_: the worker thread that processes incoming video data.
- capture_event_: the EventWrapper used to wake the encoder thread when a new frame arrives.
- captured_frame_: the most recently captured frame, waiting to be delivered; protected by capture_cs_.
- stop_: flag that tells the encoder thread to exit.
- last_captured_timestamp_: last accepted NTP timestamp, used to guarantee that incoming timestamps are strictly increasing (older or equal frames are dropped).
- delta_ntp_internal_ms_: offset between NTP time and the internal clock, used when stamping frames for network transmission.
- overuse_detector_: monitors CPU usage based on how long each frame takes to process.
- frame_callback_: the VideoCaptureCallback (implemented by ViEEncoder) that receives frames for encoding.
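Taken together, these members implement a single-slot producer/consumer handoff: IncomingCapturedFrame() writes one frame into captured_frame_ under capture_cs_ and signals capture_event_, and the encoder thread consumes it. The following is a minimal sketch of that pattern only, not WebRTC code; it substitutes std::mutex, std::condition_variable, std::thread and a plain Frame struct for CriticalSectionWrapper, EventWrapper, ThreadWrapper and VideoFrame.

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

struct Frame {            // stands in for VideoFrame
  int id = 0;
  bool empty = true;
};

class SingleSlotInput {
 public:
  SingleSlotInput() : worker_([this] { Loop(); }) {}

  ~SingleSlotInput() {
    stop_ = true;                  // like setting stop_ in VideoCaptureInput
    event_.notify_one();
    worker_.join();
  }

  // Producer side, like IncomingCapturedFrame(): overwrite the slot, signal.
  void Incoming(const Frame& frame) {
    {
      std::lock_guard<std::mutex> lock(mutex_);   // CriticalSectionScoped
      slot_ = frame;
      slot_.empty = false;
    }
    event_.notify_one();                          // capture_event_->Set()
  }

 private:
  // Consumer side, like EncoderProcess(): wait, take the frame, process it.
  void Loop() {
    while (!stop_) {
      Frame frame;
      {
        std::unique_lock<std::mutex> lock(mutex_);
        // corresponds to capture_event_->Wait(kThreadWaitTimeMs)
        event_.wait_for(lock, std::chrono::milliseconds(100));
        if (stop_)
          return;
        if (slot_.empty)
          continue;              // timeout or spurious wakeup: nothing to do
        frame = slot_;
        slot_ = Frame();         // like captured_frame_.Reset()
      }
      std::printf("processing frame %d\n", frame.id);  // DeliverFrame()
    }
  }

  std::mutex mutex_;               // plays the role of capture_cs_
  std::condition_variable event_;  // plays the role of capture_event_
  Frame slot_;                     // plays the role of captured_frame_
  std::atomic<bool> stop_{false};  // plays the role of stop_
  std::thread worker_;             // plays the role of encoder_thread_
};

int main() {
  SingleSlotInput input;
  for (int i = 1; i <= 3; ++i) {
    Frame frame;
    frame.id = i;
    input.Incoming(frame);
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
  }
  return 0;
}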
2. Creating the VideoCaptureInput object

The constructor wires up the members listed above and starts the encoder thread:
VideoCaptureInput::VideoCaptureInput(
    ProcessThread* module_process_thread,
    VideoCaptureCallback* frame_callback,
    VideoRenderer* local_renderer,
    SendStatisticsProxy* stats_proxy,
    CpuOveruseObserver* overuse_observer,
    EncodingTimeObserver* encoding_time_observer)
    : capture_cs_(CriticalSectionWrapper::CreateCriticalSection()),
      module_process_thread_(module_process_thread),
      frame_callback_(frame_callback),
      local_renderer_(local_renderer),
      stats_proxy_(stats_proxy),
      incoming_frame_cs_(CriticalSectionWrapper::CreateCriticalSection()),
      // Create the encoder thread.
      encoder_thread_(ThreadWrapper::CreateThread(EncoderThreadFunction,
                                                  this,
                                                  "EncoderThread")),
      // Create the EventWrapper used to drive the encoder thread.
      capture_event_(EventWrapper::Create()),
      stop_(0),
      // Guarantees that incoming timestamps are increasing (otherwise the
      // frame is dropped).
      last_captured_timestamp_(0),
      // Offset between NTP time and the internal clock; used when stamping
      // frames for network transmission.
      delta_ntp_internal_ms_(
          Clock::GetRealTimeClock()->CurrentNtpInMilliseconds() -
          TickTime::MillisecondTimestamp()),
      // Monitors CPU usage based on how long each frame takes to process.
      overuse_detector_(new OveruseFrameDetector(Clock::GetRealTimeClock(),
                                                 CpuOveruseOptions(),
                                                 overuse_observer,
                                                 stats_proxy)),
      // Observer of encoding time.
      encoding_time_observer_(encoding_time_observer) {
  // Start the thread and set its priority.
  encoder_thread_->Start();
  encoder_thread_->SetPriority(kHighPriority);
  // Register the overuse detector with the module process thread
  // (I have not fully worked this part out yet).
  module_process_thread_->RegisterModule(overuse_detector_.get());
}

3. Creating the encoder thread: ThreadWrapper::CreateThread

rtc::scoped_ptr<ThreadWrapper> ThreadWrapper::CreateThread(
    ThreadRunFunction func, void* obj, const char* thread_name) {
  return rtc::scoped_ptr<ThreadWrapper>(
      // On non-Windows platforms ThreadType is ThreadPosix.
      new ThreadType(func, obj, thread_name)).Pass();
}

ThreadPosix::ThreadPosix(ThreadRunFunction func, void* obj,
                         const char* thread_name)
    : run_function_(func),
      obj_(obj),
      stop_event_(false, false),
      name_(thread_name ? thread_name : "webrtc"),
      thread_(0) {
  RTC_DCHECK(name_.length() < 64);
}

ThreadWrapper::Start() is implemented in ThreadPosix; it creates a pthread that executes the static StartThread function:

bool ThreadPosix::Start() {
  ....
  RTC_CHECK_EQ(0, pthread_create(&thread_, &attr, &StartThread, this));
  return true;
}

ThreadPosix::StartThread simply calls Run() on the ThreadPosix object:

void* ThreadPosix::StartThread(void* param) {
  static_cast<ThreadPosix*>(param)->Run();
  return 0;
}

Run() calls run_function_(obj_) in a loop; this is the EncoderThreadFunction that was passed in when encoder_thread_ was created:

void ThreadPosix::Run() {
  ....
  // It's a requirement that for successful thread creation that the run
  // function be called at least once (see RunFunctionIsCalled unit test),
  // so to fullfill that requirement, we use a |do| loop and not |while|.
  do {
    if (!run_function_(obj_))
      break;
  } while (!stop_event_.Wait(0));
}
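To make the contract of the run function concrete, here is a minimal sketch (not the WebRTC source) of the same pattern: a pthread whose entry point keeps calling a bool-returning run function until it returns false or the owner requests a stop, mirroring ThreadPosix::Run() and the EncoderThreadFunction analyzed next.

#include <pthread.h>
#include <cstdio>

typedef bool (*RunFunction)(void* obj);

struct SimpleThread {
  RunFunction run;        // like run_function_ in ThreadPosix
  void* obj;              // like obj_
  volatile bool stop;     // simplified stand-in for stop_event_
  pthread_t handle;
};

// Like ThreadPosix::StartThread() + Run(): a do/while so the run function is
// called at least once, then loop until it returns false or stop is set.
static void* ThreadEntry(void* param) {
  SimpleThread* t = static_cast<SimpleThread*>(param);
  do {
    if (!t->run(t->obj))
      break;
  } while (!t->stop);
  return nullptr;
}

// Example run function: counts to three, then returns false to end the loop,
// just as EncoderProcess() returns false when stop_ is set.
static bool CountToThree(void* obj) {
  int* counter = static_cast<int*>(obj);
  std::printf("iteration %d\n", ++*counter);
  return *counter < 3;
}

int main() {
  int counter = 0;
  SimpleThread t;
  t.run = CountToThree;
  t.obj = &counter;
  t.stop = false;
  pthread_create(&t.handle, nullptr, &ThreadEntry, &t);
  pthread_join(t.handle, nullptr);
  return 0;
}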
4. Encode processing

EncoderProcess() hands the frame to ViEEncoder, which in turn passes it to the VideoCodingModule. The method performs a single iteration of work; the actual looping is implemented in ThreadWrapper as shown above.

bool VideoCaptureInput::EncoderThreadFunction(void* obj) {
  return static_cast<VideoCaptureInput*>(obj)->EncoderProcess();
}

bool VideoCaptureInput::EncoderProcess() {
  // Maximum time to wait between two incoming frames.
  static const int kThreadWaitTimeMs = 100;
  int64_t capture_time = -1;
  // Wait for the signal that a frame has arrived.
  if (capture_event_->Wait(kThreadWaitTimeMs) == kEventSignaled) {
    if (rtc::AtomicOps::AcquireLoad(&stop_))
      return false;  // terminate the thread

    int64_t encode_start_time = -1;
    VideoFrame deliver_frame;
    {
      CriticalSectionScoped cs(capture_cs_.get());
      if (!captured_frame_.IsZeroSize()) {
        // Take the pending frame and reset captured_frame_ (clear its buffer
        // and timestamps).
        deliver_frame = captured_frame_;
        captured_frame_.Reset();
      }
    }
    if (!deliver_frame.IsZeroSize()) {
      capture_time = deliver_frame.render_time_ms();
      encode_start_time = Clock::GetRealTimeClock()->TimeInMilliseconds();
      // Where does the frame go? ViEEncoder inherits from
      // VideoCaptureCallback and overrides DeliverFrame().
      frame_callback_->DeliverFrame(deliver_frame);
    }
    // Update the overuse detector with the duration.
    if (encode_start_time != -1) {
      int encode_time_ms = static_cast<int>(
          Clock::GetRealTimeClock()->TimeInMilliseconds() - encode_start_time);
      overuse_detector_->FrameEncoded(encode_time_ms);
      stats_proxy_->OnEncodedFrame(encode_time_ms);
      if (encoding_time_observer_) {
        encoding_time_observer_->OnReportEncodedTime(
            deliver_frame.ntp_time_ms(), encode_time_ms);
      }
    }
  }
  // We're done!
  if (capture_time != -1) {
    overuse_detector_->FrameSent(capture_time);
  }
  return true;
}

As the code shows, the thread blocks in capture_event_->Wait() and is woken by capture_event_->Set(). Searching for callers of Set() reveals that the whole encoder thread is driven by IncomingCapturedFrame().

5. Video data input

IncomingCapturedFrame() stores the frame in captured_frame_ and signals EncoderProcess() to handle it:

void VideoCaptureInput::IncomingCapturedFrame(const VideoFrame& video_frame) {
  // TODO(pbos): Remove local rendering, it should be handled by the client
  // code if required.
  if (local_renderer_)
    // Render locally; RenderFrame() has platform-specific implementations.
    local_renderer_->RenderFrame(video_frame, 0);

  // Update the send-side statistics (bitrate, frame counts, RTCP counts, etc.).
  stats_proxy_->OnIncomingFrame(video_frame.width(), video_frame.height());

  VideoFrame incoming_frame = video_frame;

  if (incoming_frame.ntp_time_ms() != 0) {
    // If an NTP time stamp is set, this is the time stamp we will use.
    // Whether it is set depends on the video source; a local source normally
    // does not carry an NTP timestamp.
    incoming_frame.set_render_time_ms(incoming_frame.ntp_time_ms() -
                                      delta_ntp_internal_ms_);
  } else {  // NTP time stamp not set.
    int64_t render_time = incoming_frame.render_time_ms() != 0
                              ? incoming_frame.render_time_ms()
                              : TickTime::MillisecondTimestamp();
    incoming_frame.set_render_time_ms(render_time);
    incoming_frame.set_ntp_time_ms(render_time + delta_ntp_internal_ms_);
  }

  // Convert NTP time, in ms, to RTP timestamp.
  const int kMsToRtpTimestamp = 90;
  incoming_frame.set_timestamp(
      kMsToRtpTimestamp * static_cast<uint32_t>(incoming_frame.ntp_time_ms()));

  CriticalSectionScoped cs(capture_cs_.get());
  if (incoming_frame.ntp_time_ms() <= last_captured_timestamp_) {
    // We don't allow the same capture time for two frames, drop this one.
    LOG(LS_WARNING) << "Same/old NTP timestamp ("
                    << incoming_frame.ntp_time_ms()
                    << " <= " << last_captured_timestamp_
                    << ") for incoming frame. Dropping.";
    return;
  }

  // ShallowCopy creates a new VideoFrame without allocating a new video
  // buffer, so the copy shares the buffer with the original (as opposed to a
  // deep copy).
  captured_frame_.ShallowCopy(incoming_frame);
  last_captured_timestamp_ = incoming_frame.ntp_time_ms();

  // Called when a frame has been captured; feeds the overuse detector.
  overuse_detector_->FrameCaptured(captured_frame_.width(),
                                   captured_frame_.height(),
                                   captured_frame_.render_time_ms());

  TRACE_EVENT_ASYNC_BEGIN1("webrtc", "Video", video_frame.render_time_ms(),
                           "render_time", video_frame.render_time_ms());

  // Signal EncoderProcess() to handle captured_frame_.
  capture_event_->Set();
}

The factor kMsToRtpTimestamp = 90 comes from the 90 kHz RTP clock used for video: one millisecond of wall-clock time corresponds to 90 RTP timestamp ticks.

Question: who creates VideoCaptureInput, and who calls IncomingCapturedFrame? My understanding is that WebRTC only provides the interface here, with the unit tests showing its basic usage; the rest is up to the application developer. For example, to use the WebRTC VideoEngine on Android, I would grab frame data from the camera, create a VideoCaptureInput object, and pass each frame to WebRTC's encode module through IncomingCapturedFrame().
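To make that concrete, here is a rough sketch of the application-side flow just described, under the assumption that the application itself creates VideoCaptureInput (as the unit test does). The accessors AppProcessThread(), AppEncoder(), AppStatsProxy(), AppOveruseObserver() and the frame helper MakeVideoFrame() are hypothetical placeholders, and the required includes and namespaces are omitted; the exact setup calls depend on the WebRTC revision in use, so treat this as an outline rather than working code.

// Hypothetical application-provided accessors (placeholders, not WebRTC API).
ProcessThread* AppProcessThread();
ViEEncoder* AppEncoder();                 // ViEEncoder implements VideoCaptureCallback
SendStatisticsProxy* AppStatsProxy();
CpuOveruseObserver* AppOveruseObserver();
// Hypothetical helper that wraps raw I420 camera data in a VideoFrame.
VideoFrame MakeVideoFrame(const uint8_t* i420, int width, int height);

static VideoCaptureInput* g_capture_input = nullptr;

void InitVideoInput() {
  // Constructing the object starts the encoder thread internally (section 2).
  g_capture_input = new VideoCaptureInput(
      AppProcessThread(), AppEncoder(), nullptr /* local_renderer */,
      AppStatsProxy(), AppOveruseObserver(),
      nullptr /* encoding_time_observer */);
}

// Called from the platform camera callback, e.g. the Android camera thread.
void OnCameraFrame(const uint8_t* i420, int width, int height) {
  VideoFrame frame = MakeVideoFrame(i420, width, height);
  // render_time_ms()/ntp_time_ms() can be left at 0; IncomingCapturedFrame()
  // fills them in from the local clock, converts to an RTP timestamp, stores
  // the frame in captured_frame_ and signals the encoder thread (section 5).
  g_capture_input->IncomingCapturedFrame(frame);
}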