
c – How to get good ORB results using OpenCV 2.4.9?

Published: 2020-12-16 09:56:05 · Source: web
int method = 0;

std::vector<cv::KeyPoint> keypoints_object,keypoints_scene;
cv::Mat descriptors_object,descriptors_scene;

cv::ORB orb;

int minHessian = 500;
//cv::OrbFeatureDetector detector(500);
//ORB orb(25,1.0f,2,10,10);
cv::OrbFeatureDetector detector(25,10);
//cv::OrbFeatureDetector detector(500,1.20000004768,8,31,ORB::HARRIS_SCORE,31);
cv::OrbDescriptorExtractor extractor;

//-- object
if( method == 0 ) { //-- ORB
    orb.detect(img_object,keypoints_object);
    //cv::drawKeypoints(img_object,keypoints_object,img_object,cv::Scalar(0,255,255));
    //cv::imshow("template",img_object);

    orb.compute(img_object,keypoints_object,descriptors_object);
} else { //-- SURF test
    detector.detect(img_object,keypoints_object);
    extractor.compute(img_object,keypoints_object,descriptors_object);
}
// https://stackoverflow.com/a/11798593
//if(descriptors_object.type() != CV_32F)
//    descriptors_object.convertTo(descriptors_object,CV_32F);


//for(;;) {
    double t = (double)cv::getTickCount();
    cv::Mat frame = cv::imread("E:\\Projects\\Images\\2-134-2.bmp",1);
    cv::Mat img_scene = cv::Mat(frame.size(),CV_8UC1);
    cv::cvtColor(frame,img_scene,cv::COLOR_BGR2GRAY);
    //frame.copyTo(img_scene);
    if( method == 0 ) { //-- ORB
        orb.detect(img_scene,keypoints_scene);
        orb.compute(img_scene,keypoints_scene,descriptors_scene);
    } else { //-- SURF
        detector.detect(img_scene,keypoints_scene);
        extractor.compute(img_scene,keypoints_scene,descriptors_scene);
    }

    //-- matching descriptor vectors using FLANN matcher
    cv::BFMatcher matcher;
    std::vector<cv::DMatch> matches;
    cv::Mat img_matches;
    if(!descriptors_object.empty() && !descriptors_scene.empty()) {
        matcher.match (descriptors_object,descriptors_scene,matches);

        double max_dist = 0; double min_dist = 100;

        //-- Quick calculation of max and min distance between keypoints
        for( int i = 0; i < descriptors_object.rows; i++)
        { double dist = matches[i].distance;
            if( dist < min_dist ) min_dist = dist;
            if( dist > max_dist ) max_dist = dist;
        }
        //printf("-- Max dist : %f\n",max_dist );
        //printf("-- Min dist : %f\n",min_dist );
        //-- Draw only good matches (i.e. whose distance is less than 3*min_dist)
        std::vector< cv::DMatch >good_matches;

        for( int i = 0; i < descriptors_object.rows; i++ )

        { if( matches[i].distance < (max_dist/1.6) )
            { good_matches.push_back( matches[i]); }
        }

        cv::drawMatches(img_object,keypoints_object,img_scene,keypoints_scene,
                good_matches,img_matches,cv::Scalar::all(-1),cv::Scalar::all(-1),
                std::vector<char>(),cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

        //-- localize the object
        std::vector<cv::Point2f> obj;
        std::vector<cv::Point2f> scene;

        for( size_t i = 0; i < good_matches.size(); i++) {
            //-- get the keypoints from the good matches
            obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
            scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
        }
        if( !obj.empty() && !scene.empty() && good_matches.size() >= 4) {
            cv::Mat H = cv::findHomography( obj,scene,cv::RANSAC );

            //-- get the corners from the object to be detected
            std::vector<cv::Point2f> obj_corners(4);
            obj_corners[0] = cv::Point(0,0);
            obj_corners[1] = cv::Point(img_object.cols,0);
            obj_corners[2] = cv::Point(img_object.cols,img_object.rows);
            obj_corners[3] = cv::Point(0,img_object.rows);

            std::vector<cv::Point2f> scene_corners(4);

            cv::perspectiveTransform( obj_corners,scene_corners,H);

            //-- Draw lines between the corners (the mapped object in the scene - image_2 )
            cv::line( img_matches, scene_corners[0] + cv::Point2f(img_object.cols,0), scene_corners[1] + cv::Point2f(img_object.cols,0), cv::Scalar(255,0,0), 4 );
            cv::line( img_matches, scene_corners[1] + cv::Point2f(img_object.cols,0), scene_corners[2] + cv::Point2f(img_object.cols,0), cv::Scalar(255,0,0), 4 );
            cv::line( img_matches, scene_corners[2] + cv::Point2f(img_object.cols,0), scene_corners[3] + cv::Point2f(img_object.cols,0), cv::Scalar(255,0,0), 4 );
            cv::line( img_matches, scene_corners[3] + cv::Point2f(img_object.cols,0), scene_corners[0] + cv::Point2f(img_object.cols,0), cv::Scalar(255,0,0), 4 );

        }
    }

    t = (double)cv::getTickCount() - t;
    printf("Time : %f ms\n",(double)(t*1000./cv::getTickFrequency()));

    cv::imshow("match result",img_matches );
    cv::waitKey();


return 0;

I am performing template matching between two images here. I extract keypoints with the ORB algorithm and match them with a BF matcher, but I am not getting good results. I have added an image here to illustrate the problem.

Here you can see the dark blue line on the teddy bear; it is actually a rectangle that should be drawn around the object in the frame image once the object has been located from the object keypoints.
I am using OpenCV 2.4.9 here. What should I change to get good results?

Solution

There are many parameters you can tune for homography estimation after any feature detection and extraction. The key point to realize, however, is that it is almost always a question of computation time vs. accuracy.

The most critical failure point in your code is the ORB initialization:

cv::OrbFeatureDetector detector(25,10);

>The first parameter tells the extractor to keep only the top 25 results from the detector. For a reliable estimation of an 8-DOF homography with no constraints on its parameters, you should have an order of magnitude more features than parameters, i.e. 80, or just round it up to 100.
>The second parameter is the factor by which the image (or detector patch) is scaled down between octaves (or levels). Using 1.0f means you do not change scale between octaves, which makes no sense, especially since your third parameter is the number of levels, which is 2 and not 1. The defaults are a scale of 1.2f and 8 levels; for less computation, use a scale of 1.5f and 4 levels (again, just a suggestion; other parameters will work too).
>Your fourth and last parameters say that the patch size to compute is 10×10. That is quite small, but it is fine if you are working at low resolution.
>Your score type (the parameter before the last one) can change the runtime slightly: you can use ORB::FAST_SCORE instead of ORB::HARRIS_SCORE, but it does not matter much.
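Putting the suggestions above together, the OpenCV 2.4.x `cv::ORB` constructor could be configured as below. The exact values (100 features, 1.5f scale, 4 levels) are the ones suggested above, not the only valid choice; the remaining parameters are kept at their 2.4.x defaults.

```cpp
#include <opencv2/features2d/features2d.hpp>

// OpenCV 2.4.x ORB configuration following the advice above:
//   nfeatures = 100      -> an order of magnitude more features than the 8 DOF
//   scaleFactor = 1.5f   -> coarser pyramid than the 1.2f default, less computation
//   nlevels = 4          -> fewer octaves than the default 8
//   scoreType = FAST_SCORE -> slightly faster keypoint ranking than HARRIS_SCORE
cv::ORB orb(100,                  // nfeatures
            1.5f,                 // scaleFactor
            4,                    // nlevels
            31,                   // edgeThreshold (default)
            0,                    // firstLevel (default)
            2,                    // WTA_K (default)
            cv::ORB::FAST_SCORE,  // scoreType
            31);                  // patchSize (default; only shrink it for low-resolution input)
```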

Last but not least, when you initialize the BruteForce matcher object, you should remember to use the cv::NORM_HAMMING type, since ORB is a binary feature; this makes the norm computed during matching actually mean something.
