12/29/2014

OpenCV meanShiftFiltering example source code ( cpu: pyrMeanShiftFiltering, gpu:meanShiftFiltering, gpu:meanShiftSegmentation )


'meanshift' is a clustering algorithm. It can be used for color segmentation, color tracking, and so on.
This article is about color segmentation using the meanShiftFiltering function in OpenCV.

The source code contains two examples, a CPU version and a GPU version.
Note that the input image for the GPU version must be of CV_8UC4 type.

Please refer to this page for the input parameters,
and this webpage -> http://seiya-kumada.blogspot.kr/2013/05/mean-shift-filtering-practice-by-opencv.html
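
For quick reference, the parameter meanings (from the OpenCV 2.4 documentation) are:

...
// cpu: pyrMeanShiftFiltering(src, dst, sp, sr, maxLevel)
//      sp = spatial window radius, sr = color window radius,
//      maxLevel = number of pyramid levels (3 in the example below)
// gpu: meanShiftFiltering(src, dst, sp, sr)   -- src must be CV_8UC4
// gpu: meanShiftSegmentation(src, dst, sp, sr, minsize)
//      minsize = minimum segment size in pixels; smaller segments are merged
...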

thank you.






...
#include < time.h>   
#include < opencv2\opencv.hpp>   
#include < opencv2\gpu\gpu.hpp>   
#include < string>   
#include < stdio.h>   
  
  
#ifdef _DEBUG           
#pragma comment(lib, "opencv_core249d.lib")   
#pragma comment(lib, "opencv_imgproc249d.lib")   //MAT processing   
#pragma comment(lib, "opencv_gpu249d.lib")   
#pragma comment(lib, "opencv_highgui249d.lib")   
#else   
#pragma comment(lib, "opencv_core249.lib")   
#pragma comment(lib, "opencv_imgproc249.lib")   
#pragma comment(lib, "opencv_gpu249.lib")   
#pragma comment(lib, "opencv_highgui249.lib")   
#endif  

using namespace cv;
using namespace std;


void ProccTimePrint( unsigned long Atime , string msg)   
{   
 unsigned long Btime=0;   
 float sec, fps;   
 Btime = getTickCount();   
 sec = (Btime - Atime)/getTickFrequency();   
 fps = 1/sec;   
 printf("%s %.4lf(sec) / %.4lf(fps) \n", msg.c_str(),  sec, fps );   
} 




void main()
{
 unsigned long AAtime=0;
 
 //image load
 Mat img = imread("image2.jpg");
 Mat outImg, outimg2;

 //cpu version meanshift
 AAtime = getTickCount();
 pyrMeanShiftFiltering(img, outImg, 30, 30, 3);
 ProccTimePrint(AAtime , "cpu");


 //gpu version meanshift
 gpu::GpuMat pimgGpu, imgGpu, outImgGpu;
 AAtime = getTickCount();
 pimgGpu.upload(img);
 //gpu meanshift only supports the CV_8UC4 type.
 gpu::cvtColor(pimgGpu, imgGpu, CV_BGR2BGRA);
 gpu::meanShiftFiltering(imgGpu, outImgGpu, 30, 30);
 outImgGpu.download(outimg2);
 ProccTimePrint(AAtime , "gpu");

 //show image
 imshow("origin", img);
 imshow("MeanShift Filter cpu", outImg);
 imshow("MeanShift Filter gpu", outimg2);


 waitKey();
}


...


The source code below is about gpu::meanShiftSegmentation.
In this function, we can set the minimum segment size in pixel count;
smaller segments are merged.

...
Mat outImg3;
 AAtime = getTickCount();
 gpu::meanShiftSegmentation(imgGpu, outImg3, 30, 30, 300);
 ProccTimePrint(AAtime , "gpu segment");
 imshow("MeanShift segmentation gpu", outImg3);
...


Related content: k-means
->http://feelmare.blogspot.kr/search/label/K-means

12/23/2014

yuv422(YUYV) to RGB and RGB to yuv422(YUYV), (Using OpenCV and TBB)

In the past, I wrote an article introducing the YUV 444, 422, 411 formats with YUV <-> RGB converting example code.
Refer to this page -> http://feelmare.blogspot.kr/2012/11/yuv-color-format-444-422-411-simple.html

In this article, I will introduce a method using OpenCV and TBB.
TBB is an acronym for Threading Building Blocks;
it enables parallel processing using multiple threads.
See this page -> http://feelmare.blogspot.kr/2014/12/opencv-tbb-utility-parallelfor.html


This is YUV422 to RGB example source code.
In my case the YUV422 data is ordered as YUYV,
and the input type of the YUYV data is unsigned char *.

So the example converts
unsigned char * yuyv to Mat rgb.
Here, m_stride is the real row length (in bytes) of the yuyv data.

....
Mat yuyv(m_height, m_width, CV_8UC2);
memcpy( yuyv.data, yuyv_buffer, sizeof(unsigned char) * (m_stride * m_height) );
Mat rgb(m_height, m_width, CV_8UC3);
cvtColor(yuyv, rgb, CV_YUV2BGR_YUYV);
....



The next example is RGB to YUYV using TBB.
....
//clamp helpers used by the conversion below
#define MMIN(a,b) ((a) < (b) ? (a) : (b))
#define MMAX(a,b) ((a) > (b) ? (a) : (b))

class Parallel_process : public cv::ParallelLoopBody
{

private:
 cv::Mat& inImg;
 unsigned char* outImg;
 int widthStep; //bytes per row of the BGR input
 int m_stride;  //bytes per row of the YUYV output

public:
 Parallel_process(cv::Mat& inputImage,  unsigned char* outImage)
  : inImg(inputImage), outImg(outImage){

   widthStep = inputImage.size().width * 3; 
   m_stride = inputImage.size().width * 2;

 }

 virtual void operator()(const cv::Range& range) const
 {
  //thread
  for(int i = range.start; i < range.end; i++)
  {

   int s1 = i*widthStep;

   for(int iw=0; iw< inImg.size().width; iw=iw+2)
   {
    int s2 = iw*3;

    int mc = s1+s2;
    int B1 = (unsigned char)(inImg.data[mc + 0]);
    int G1 = (unsigned char)(inImg.data[mc + 1]);
    int R1 = (unsigned char)(inImg.data[mc + 2]);
    int B2 = (unsigned char)(inImg.data[mc + 3]);
    int G2 = (unsigned char)(inImg.data[mc + 4]);
    int R2 = (unsigned char)(inImg.data[mc + 5]);


    int Y = (0.257*R1) + (0.504*G1) + (0.098*B1) +16;
    int U = -(0.148*R1) - (0.291*G1) + (0.439*B1) + 128;
    int V = (0.439*R1 ) - (0.368*G1) - (0.071*B1) + 128;
    int Y2 = (0.257*R2) + (0.504*G2) + (0.098*B2) +16;

    Y = MMIN(255, MMAX(0, Y));
    U = MMIN(255, MMAX(0, U));
    V = MMIN(255, MMAX(0, V));
    Y2 = MMIN(255, MMAX(0, Y2)); 

    mc = i*m_stride + iw*2;
    outImg[mc + 0] = Y;
    outImg[mc + 1] = U;
    outImg[mc + 2] = Y2;
    outImg[mc + 3] = V;

   }
  }
 }
};



//in the main routine
cv::parallel_for_(cv::Range(0, (OriginMat).rows), Parallel_process((OriginMat), inP_OriginImg));

....


In OpenCV's cvtColor function, a YUYV to RGB option exists (-> CV_YUV2BGR_YUYV),
but an RGB to YUYV option does not.

thank you.

11/26/2014

OpenCV SVM learning method and xml convert method to use in Hog.SetSVMDetector() function

This is an example of the SVM learning method.
I have already explained this example in the past;
see this page -> http://feelmare.blogspot.kr/search/label/SVM

But new content is added in this example:
converting the trained xml file so it can be used with the Hog.detectMultiScale function.

The converting process starts after the SVM learning.
See the "Second Save option" comment in the source code.

Thank you.



---
#include < stdio.h>
#include < opencv2\opencv.hpp>
//#include < opencv2\gpu\gpu.hpp>

using namespace cv;
using namespace std;


#ifdef _DEBUG        
#pragma comment(lib, "opencv_core247d.lib")         
#pragma comment(lib, "opencv_imgproc247d.lib")   //MAT processing        
#pragma comment(lib, "opencv_objdetect247d.lib") //HOGDescriptor
//#pragma comment(lib, "opencv_gpu247d.lib")        
//#pragma comment(lib, "opencv_features2d247d.lib")        
#pragma comment(lib, "opencv_highgui247d.lib")        
#pragma comment(lib, "opencv_ml247d.lib")      
//#pragma comment(lib, "opencv_stitching247d.lib");      
//#pragma comment(lib, "opencv_nonfree247d.lib");      
  
#else        
#pragma comment(lib, "opencv_core247.lib")        
#pragma comment(lib, "opencv_imgproc247.lib")        
#pragma comment(lib, "opencv_objdetect247.lib")        
//#pragma comment(lib, "opencv_gpu247.lib")        
//#pragma comment(lib, "opencv_features2d247.lib")        
#pragma comment(lib, "opencv_highgui247.lib")        
#pragma comment(lib, "opencv_ml247.lib")        
//#pragma comment(lib, "opencv_stitching247.lib");      
//#pragma comment(lib, "opencv_nonfree247.lib");      
#endif 

class MySvm: public CvSVM  
{  
public:  
 int get_alpha_count(){
  return this->sv_total;}

 int get_sv_dim(){
  return this->var_all;}

 int get_sv_count(){
  return this->decision_func->sv_count;}
 
 double* get_alpha(){
  return this->decision_func->alpha;}
 
 float** get_sv(){
  return this->sv;}
 
 float get_rho(){
  return this->decision_func->rho;}
};


void main()
{
 
 //Read Hog feature from XML file
 ///////////////////////////////////////////////////////////////////////////
 printf("1. Feature data xml load\n");
 //create xml to read
 FileStorage read_PositiveXml("C:\\POSCO\\Learned\\Positive1643_64_64.xml", FileStorage::READ);
 FileStorage read_NegativeXml("C:\\POSCO\\Learned\\Negative16064_64_64.xml", FileStorage::READ);
 char SVMSaveFile[100] = "C:\\POSCO\\Learned\\trainedSVM_1643_16064_64_64.xml";
 char SVM_HOGDetectorFile[100] = "C:\\POSCO\\Learned\\HogDetectorXML_1643_16064_64_64.xml";
 //Positive Mat
 Mat pMat; 
 read_PositiveXml["Descriptor_of_images"] >> pMat;
 //Read Row, Cols
 int pRow,pCol;
 pRow = pMat.rows; pCol = pMat.cols;

 //Negative Mat
 Mat nMat;
 read_NegativeXml["Descriptor_of_images"] >> nMat;
 //Read Row, Cols
 int nRow,nCol;
 nRow = nMat.rows; nCol = nMat.cols;

 //Rows, Cols printf
 printf("   pRow=%d pCol=%d, nRow=%d nCol=%d\n", pRow, pCol, nRow, nCol );
 //release
 read_PositiveXml.release();
 //release
 read_NegativeXml.release();
 /////////////////////////////////////////////////////////////////////////////////

 //Make training data for SVM
 /////////////////////////////////////////////////////////////////////////////////
 printf("2. Make training data for SVM\n");
 //descriptor data set
 Mat PN_Descriptor_mtx( pRow + nRow, pCol, CV_32FC1 ); //pCol and nCol are the descriptor dimensions, so the two values must be the same
 memcpy(PN_Descriptor_mtx.data, pMat.data, sizeof(float) * pMat.cols * pMat.rows );
 int startP = sizeof(float) * pMat.cols * pMat.rows;
 memcpy(&(PN_Descriptor_mtx.data[ startP ]), nMat.data, sizeof(float) * nMat.cols * nMat.rows );
 //data labeling
 Mat labels( pRow + nRow, 1, CV_32FC1, Scalar(-1.0) );
    labels.rowRange( 0, pRow ) = Scalar( 1.0 );
 /////////////////////////////////////////////////////////////////////////////////

 //Set svm parameter
 /////////////////////////////////////////////////////////////////////////////////
 printf("4. SVM training\n");
 MySvm svm; //CvSVM svm;
 CvSVMParams params;
 params.svm_type = CvSVM::C_SVC;
    params.kernel_type = CvSVM::LINEAR;
    params.term_crit = cvTermCriteria( CV_TERMCRIT_ITER, 10000, 1e-6 );
 /////////////////////////////////////////////////////////////////////////////////

 //Training
 /////////////////////////////////////////////////////////////////////////////////
 svm.train(PN_Descriptor_mtx, labels, Mat(), Mat(), params);
 //Trained data save
 /////////////////////////////////////////////////////////////////////////////////
 printf("5. SVM xml save\n");
 svm.save( SVMSaveFile );
 
 //////////////////////////////////////////////////////////////////////////////////
 //Second Save option
 //This saved file is for the Hog.SetSVMDetector() function.
 //Once we can use SetSVMDetector, we can also use the detectMultiScale function,
 //which makes target detection very easy and can also run with the GPU option.

 //the inherited class (MySvm) above gives access to the alpha vector and rho value
 int svmVectorSize = svm.get_support_vector_count();
 int featureSize = pCol;
 //prepare, variables 
 
 
 Mat sv = Mat(svmVectorSize, featureSize, CV_32FC1, 0.0);
 Mat alp = Mat(1, svmVectorSize, CV_32FC1, 0.0);
 Mat re = Mat(1, featureSize, CV_32FC1, 0.0);
 Mat re2 = Mat(1, featureSize+1, CV_32FC1, 0.0);

 
 
 //set value to variables
 for(int i=0; i< svmVectorSize; ++i)
  memcpy( (sv.data + i*featureSize), svm.get_support_vector(i), featureSize*sizeof(float) ); //ok

 
 double * alphaArr = svm.get_alpha();
 int alphaCount = svm.get_alpha_count();

 for(int i=0; i< svmVectorSize; ++i)
 { 
  alp.at< float>(0, i) = (float)alphaArr[i];
  //printf("alpha[%d] = %lf \n", i, (float)alphaArr[i] );
 }
 
 //cvMatMul(alp, sv, re);
 re = alp * sv;

 for(int i=0; i< featureSize; ++i)
  re2.at< float>(0,i) =  re.at< float>(0,i) * -1;
 re2.at< float>(0,featureSize) = svm.get_rho();

 //save to 1d vector to XML format!!
 FileStorage svmSecondXML(SVM_HOGDetectorFile, FileStorage::WRITE);
 svmSecondXML << "SecondSVMd" << re2 ; 

 svmSecondXML.release();
 
 
// FileStorage hogXml("testXML.xml", FileStorage::WRITE); //FileStorage::READ
// write(hogXml, "Data", PN_Descriptor_mtx);
// write(hogXml, "Label", labels);
// hogXml.release();
}



...
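
For reference, here is a minimal sketch of the load-and-use side. It assumes the same file and the "SecondSVMd" key saved above; the HOGDescriptor window/block/cell parameters and the test image name are placeholders that must match how the features were actually extracted.

...
//load the converted detector vector
FileStorage read_SVM_HOG("C:\\POSCO\\Learned\\HogDetectorXML_1643_16064_64_64.xml", FileStorage::READ);
Mat detectorMat;
read_SVM_HOG["SecondSVMd"] >> detectorMat;
read_SVM_HOG.release();

vector<float> detector;
detector.assign((float*)detectorMat.datastart, (float*)detectorMat.dataend);

//window size must match the training patch size (64x64 here);
//block/stride/cell/nbins must match the descriptor computation
HOGDescriptor hog(Size(64,64), Size(16,16), Size(8,8), Size(8,8), 9);
hog.setSVMDetector(detector);

//detect on a test image (file name is a placeholder)
Mat testImg = imread("test.jpg");
vector<Rect> found;
hog.detectMultiScale(testImg, found);
for (size_t i = 0; i < found.size(); ++i)
 rectangle(testImg, found[i], CV_RGB(255,0,0), 2);
imshow("detections", testImg);
waitKey(0);
...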

11/14/2014

cvCalcBackProjectPatch example source code


...
#include< cv.h>  
#include< highgui.h>  
  
void GetHSV (const IplImage* image, IplImage** h, IplImage** s, IplImage** v);  
  
int main()  
{  
    IplImage* src = cvLoadImage ("bluecup.jpg", 1);  
    IplImage* h_src = NULL;  
    IplImage* s_src = NULL;  
    GetHSV (src, &h_src, &s_src, NULL);  
    IplImage *images[] = {h_src,s_src};  
    CvHistogram* hist_src = NULL;  
  
    /* compute the 2D histogram */  
    int dims = 2;  
    int size[] = {30, 32};  
    float range_h[] = {0, 180};  
    float range_s[] = {0, 256};  
    float* ranges[] = {range_h, range_s};  
    hist_src = cvCreateHist (dims, size, CV_HIST_ARRAY, ranges);  
    cvCalcHist (images, hist_src);  
    cvNormalizeHist (hist_src, 1);  
  
    IplImage* dst = cvLoadImage ("adrian1.jpg", 1);  
    IplImage* h_dst = NULL;  
    IplImage* s_dst = NULL;  
    GetHSV (dst, &h_dst, &s_dst, NULL);  
    images[0] = h_dst;  
    images[1] = s_dst;  
  
    CvSize patch_size = cvSize (src->width, src->height);  
    IplImage* result = cvCreateImage (cvSize(h_dst->width - patch_size.width + 1, h_dst->height - patch_size.height + 1),  
        IPL_DEPTH_32F, 1);  
    cvCalcBackProjectPatch (images, result, patch_size, hist_src, CV_COMP_CORREL, 1);  
    cvShowImage ("result", result);  
      

    CvPoint max_location;  
    cvMinMaxLoc(result, NULL, NULL, NULL, &max_location, NULL);  
    max_location.x += cvRound (patch_size.width / 2);  
    max_location.y += cvRound (patch_size.height / 2);  
  

    CvPoint top = cvPoint(max_location.x - patch_size.width / 2,max_location.y - patch_size.height / 2);  
    CvPoint bottom = cvPoint(max_location.x + patch_size.width / 2, max_location.y + patch_size.height / 2);  
    cvRectangle (dst, top, bottom, CV_RGB(255, 0, 0), 1, 8, 0);  
    cvShowImage ("dst", dst);  
  
    cvWaitKey (0);  
  
    cvReleaseImage(&src);    
    cvReleaseImage(&dst);    
    cvReleaseImage(&h_src);    
    cvReleaseImage(&h_dst);    
    cvReleaseImage(&s_dst);    
    cvReleaseImage(&s_src);    
    cvReleaseHist(&hist_src);    
    cvReleaseImage(&result);    
    cvDestroyAllWindows();  
}  
  
void GetHSV (const IplImage* image, IplImage** h, IplImage** s, IplImage** v)  
{  
    IplImage* hsv = cvCreateImage (cvGetSize (image), 8, 3);  
    cvCvtColor (image, hsv, CV_BGR2HSV);  
      
    if ((h != NULL) && (*h == NULL))  
        *h = cvCreateImage (cvGetSize(image), 8, 1);  
    if ((s != NULL) && (*s == NULL))  
        *s = cvCreateImage (cvGetSize(image), 8, 1);  
    if ((v != NULL) && (*v == NULL))  
        *v = cvCreateImage (cvGetSize(image), 8, 1);  
  
    cvSplit (hsv, *h, (s == NULL)?NULL:*s, (v == NULL)?NULL:*v, NULL);  
    cvReleaseImage (&hsv);  
}  


---

11/05/2014

Opencv gpu MOG2_GPU example source code (background subtraction)



Refer to the example source code below.
I have also introduced another background subtraction method here:
http://feelmare.blogspot.kr/2014/04/opencv-study-background-subtractor-mog.html

..
#include < time.h>
#include < opencv2\opencv.hpp>
#include < opencv2\gpu\gpu.hpp>
#include < string>
#include < stdio.h>


#ifdef _DEBUG        
#pragma comment(lib, "opencv_core249d.lib")
#pragma comment(lib, "opencv_imgproc249d.lib")   //MAT processing
#pragma comment(lib, "opencv_gpu249d.lib")
#pragma comment(lib, "opencv_highgui249d.lib")
#else
#pragma comment(lib, "opencv_core249.lib")
#pragma comment(lib, "opencv_imgproc249.lib")
#pragma comment(lib, "opencv_gpu249.lib")
#pragma comment(lib, "opencv_highgui249.lib")
#endif   


#define RWIDTH 800
#define RHEIGHT 600

using namespace std;
using namespace cv;

int main()
{
 /////////////////////////////////////////////////////////////////////////
 gpu::MOG2_GPU pMOG2_g(30);
 pMOG2_g.history = 3000; //300;
 pMOG2_g.varThreshold =64; //128; //64; //32;//; 
 pMOG2_g.bShadowDetection = true;
 Mat Mog_Mask;
 gpu::GpuMat Mog_Mask_g;
 /////////////////////////////////////////////////////////////////////////


 VideoCapture cap("C:\\videoSample\\tracking\\sample.avi");//0);
 /////////////////////////////////////////////////////////////////////////
 Mat o_frame;
 gpu::GpuMat o_frame_gpu;
 gpu::GpuMat r_frame_gpu;
 gpu::GpuMat rg_frame_gpu;
 gpu::GpuMat r_frame_blur_gpu;
 /////////////////////////////////////////////////////////////////////////

 cap >> o_frame;
 if( o_frame.empty() )
   return 0; 
 vector< gpu::GpuMat> gpurgb(3);
 vector< gpu::GpuMat> gpurgb2(3);
 /////////////////////////////////////////////////////////////////////////


 unsigned long AAtime=0, BBtime=0;

 //Mat rFrame;
 Mat showMat_r_blur;
 Mat showMat_r;

 while(1)
 {
  /////////////////////////////////////////////////////////////////////////
  cap >> o_frame;
  if( o_frame.empty() )
   return 0;

  
  o_frame_gpu.upload(o_frame);
  gpu::resize(o_frame_gpu, r_frame_gpu, Size(RWIDTH, RHEIGHT) );
  AAtime = getTickCount();
  

  gpu::split(r_frame_gpu, gpurgb);
  gpu::blur(gpurgb[0], gpurgb2[0], Size(3,3) );
  gpu::blur(gpurgb[1], gpurgb2[1], Size(3,3) );
  gpu::blur(gpurgb[2], gpurgb2[2], Size(3,3) );
  gpu::merge(gpurgb2, r_frame_blur_gpu);
  //
  pMOG2_g.operator()(r_frame_blur_gpu, Mog_Mask_g,-1);
  //
  Mog_Mask_g.download(Mog_Mask);

  BBtime = getTickCount(); 
  float pt = (BBtime - AAtime)/getTickFrequency(); 
  float fpt = 1/pt;
  printf("gpu %.4lf / %.4lf \n",  pt, fpt );

  
  r_frame_gpu.download(showMat_r);
  //rg_frame_gpu.download(showMat_rg);
  r_frame_blur_gpu.download(showMat_r_blur);
  imshow("origin", showMat_r);
  //imshow("gray", showMat_rg);
  imshow("blur", showMat_r_blur);
  imshow("mog_mask", Mog_Mask);
  
  
  /////////////////////////////////////////////////////////////////////////

  if( waitKey(10) > 0)
   break;
 }

 return 0;
}
..

Opencv gpu 3 channel blur example

There is no 3-channel blur among the gpu functions.

gpu::blur supports CV_8UC1 and CV_8UC4 images only,
and gpu::GaussianBlur is often not suitable either.

So one idea is to split the channels:
split the 3 channels, run the blur function on each channel,
and then merge them back into a blurred 3-channel image.

This is faster than the cpu code (the larger the image, the larger the gain).

In my case, the process takes cpu: 0.0126 sec, gpu: 0.0035 sec on an 800x600 image.

Refer to the example source code.


...
//gpu case
gpu::resize(o_frame_gpu, r_frame_gpu, Size(RWIDTH, RHEIGHT) );
vector< gpu::GpuMat> gpurgb(3);
vector< gpu::GpuMat> gpurgb2(3);
gpu::split(r_frame_gpu, gpurgb);
gpu::blur(gpurgb[0], gpurgb2[0], Size(3,3) );
gpu::blur(gpurgb[1], gpurgb2[1], Size(3,3) );
gpu::blur(gpurgb[2], gpurgb2[2], Size(3,3) );
gpu::merge(gpurgb2, r_frame_blur_gpu);

//cpu case
resize(o_frame, rFrame, Size(RWIDTH, RHEIGHT) );
blur(rFrame, blurFrame, Size(3,3));



...


11/03/2014

opencv randn(...) example

opencv's randn is like randn in matlab.

The randn function fills an array with normally distributed random values.

In matlab,
randn is used like this:

randn()
>> 0.4663

randn(10,1)'
>>   -0.1465    1.0143    0.4669    1.5750   -1.1900    0.2689   -0.2967   -0.4877    0.5671    0.5632

To use mean 5 and standard deviation 3:
5+3*randn(10,1)
>> 6.2932   12.5907    6.6214    1.6941    4.8522    3.1484    6.1745    4.5230    5.2183    5.6888


OK, now consider the OpenCV case.
We will generate normally distributed random values with mean 10 and standard deviation 2, and fill a 2x10 matrix.

example 1)
randn
..
cv::Mat matrix2xN(2, 10, CV_32FC1);
randn(matrix2xN, 10, 2);
for (int i = 0; i < 10; ++i)
{
  cout << matrix2xN.at<float>(0, i) << " ";
  cout << matrix2xN.at<float>(1, i) << endl;
}
..

example 2)
randn and randu
..
cv::Mat matrix2xN(2, 10, CV_32FC1);
randn(matrix2xN, 10, 2);
for (int i = 0; i < 10; ++i)
{
    cout << matrix2xN.at< float>(0, i) << " ";
    cout << matrix2xN.at< float>(1, i) << endl;
}

//gaussian generation example
Mat Gnoise = Mat(5, 5, CV_8SC1);
randn(Gnoise, 5, 10); //mean, stddev
cout << Gnoise << endl;
//
Mat Unoise = Mat(5, 5, CV_8SC1);
randu(Unoise, 5, 10); //low, high
cout << Unoise << endl;


//noise adapt
Mat Gaussian_noise = Mat(img.size(), img.type());
double mean = 0;
double std = 10;
randn(Gaussian_noise, mean, std); //mean, std
Mat colorNoise = img + Gaussian_noise;
..

OpenCV EMD(earth mover distance) example source code

The EMD (earth mover's distance) method is a very good method for comparing image similarity,
but its processing time is slow.
To use the EMD comparison, we have to build signature values;
the EMD method compares two signatures.

First, we prepare the histograms of the 2 images,
and then convert the histogram values into a signature.

The layout of the signature values is very simple, one row per bin:

bin value, x index, y index
bin value, x index, y index
bin value, x index, y index
...

Of course, this layout is for the case of a 2D histogram.
For more detail, see the source code.

I cannot explain the earth mover's distance algorithm itself here;
please refer to information on the internet.

thank you.


origin images
 
result


...
#include < iostream>
#include < vector>

#include < stdio.h>      
#include < opencv2\opencv.hpp>    


#ifdef _DEBUG           
#pragma comment(lib, "opencv_core249d.lib")   
#pragma comment(lib, "opencv_imgproc249d.lib")   //MAT processing   
#pragma comment(lib, "opencv_highgui249d.lib")   
#else   
#pragma comment(lib, "opencv_core249.lib")   
#pragma comment(lib, "opencv_imgproc249.lib")      
#pragma comment(lib, "opencv_highgui249.lib")   
#endif   


using namespace cv;   
using namespace std;   
  
  
  
int main()   
{   

 //read 2 images for histogram comparing   
 ///////////////////////////////////////////////////////////////////////////////////////////////////////////////   
 Mat imgA, imgB;   
 imgA = imread(".\\image1.jpg");   
 imgB = imread(".\\image2.jpg");   


 imshow("img1", imgA);
 imshow("img2", imgB);


 //variables preparing   
 ///////////////////////////////////////////////////////////////////////////////////////////////////////////////   
 int hbins = 30, sbins = 32;    
 int channels[] = {0,  1};   
 int histSize[] = {hbins, sbins};   
 float hranges[] = { 0, 180 };   
 float sranges[] = { 0, 255 };   
 const float* ranges[] = { hranges, sranges};    

 Mat patch_HSV;   
 MatND HistA, HistB;   

 //cal histogram & normalization   
 ///////////////////////////////////////////////////////////////////////////////////////////////////////////////   
 cvtColor(imgA, patch_HSV, CV_BGR2HSV);   
 calcHist( &patch_HSV, 1, channels,  Mat(), // do not use mask   
  HistA, 2, histSize, ranges,   
  true, // the histogram is uniform   
  false );   
 normalize(HistA, HistA,  0, 1, CV_MINMAX);   


 cvtColor(imgB, patch_HSV, CV_BGR2HSV);   
 calcHist( &patch_HSV, 1, channels,  Mat(),// do not use mask   
  HistB, 2, histSize, ranges,   
  true, // the histogram is uniform   
  false );   
 normalize(HistB, HistB, 0, 1, CV_MINMAX);   

 //compare histogram   
 ///////////////////////////////////////////////////////////////////////////////////////////////////////////////   
 int numrows = hbins * sbins;

 //make signature
 Mat sig1(numrows, 3, CV_32FC1);
 Mat sig2(numrows, 3, CV_32FC1);

 //fill value into signature
 for(int h=0; h< hbins; h++)
 {
  for(int s=0; s< sbins; ++s)
  {
   float binval = HistA.at< float>(h,s);
   sig1.at< float>( h*sbins + s, 0) = binval;
   sig1.at< float>( h*sbins + s, 1) = h;
   sig1.at< float>( h*sbins + s, 2) = s;

   binval = HistB.at< float>(h,s);
   sig2.at< float>( h*sbins + s, 0) = binval;
   sig2.at< float>( h*sbins + s, 1) = h;
   sig2.at< float>( h*sbins + s, 2) = s;
  }
 }

 //compare similarity of 2images using emd.
 float emd = cv::EMD(sig1, sig2, CV_DIST_L2); //emd 0 is best matching. 
 printf("similarity %5.5f %%\n", (1-emd)*100 );
 
 waitKey(0);   

 return 0;   
}  

...

10/31/2014

example source code - Region of interest image capture from video using opencv

The 'p' key pauses the video so you can capture an ROI image patch;
the ESC key stops the program.


..
#include < stdio.h>
#include < iostream>

#include < opencv2\opencv.hpp>

#ifdef _DEBUG        
#pragma comment(lib, "opencv_core249d.lib")
#pragma comment(lib, "opencv_highgui249d.lib")
#else
#pragma comment(lib, "opencv_core249.lib")
#pragma comment(lib, "opencv_highgui249.lib")
#endif 

using namespace std;
using namespace cv;

bool selectObject = false;
Rect selection;
Point origin;
Mat image;
bool pause =false;

Rect PatchRect;
Mat PatchImg;

static void onMouse( int event, int x, int y, int, void* )
{
 if( selectObject && pause)
 {
  
  selection.x = MIN(x, origin.x);
  selection.y = MIN(y, origin.y);
  selection.width = std::abs(x - origin.x);
  selection.height = std::abs(y - origin.y);
  selection &= Rect(0, 0, image.cols, image.rows);
 }

 switch( event )
 {
 case CV_EVENT_LBUTTONDOWN:
  origin = Point(x,y);
  selection = Rect(x,y,0,0);
  selectObject = true;
  break;
 case CV_EVENT_LBUTTONUP:
  if(selectObject && pause)
  {
   if(selection.width > 30 && selection.height > 30 )
   {
    PatchRect = selection;
    image( PatchRect ).copyTo( PatchImg );
    imshow("Selected Img", PatchImg );
   }else
    selection = Rect(0,0,0,0);
  }
  selectObject = false;
  pause = false;
  
  break;
 }
}


int main (void)  
{  


 VideoCapture cap(0);
 Mat frame;
 namedWindow( "Demo", 0 );
 setMouseCallback( "Demo", onMouse, 0 );
 printf("P key is pause, ESC key is exit.\n");

 for(;;)
 {
  if(!pause)
   cap >> frame;
  if( frame.empty() )
   break;
  frame.copyTo(image);


  if( pause && selection.width > 0 && selection.height > 0 )
  {
   rectangle(image, Point(selection.x-1, selection.y-1), Point(selection.x+selection.width+1, selection.y+selection.height+1), CV_RGB(255,0,0) );
  }
  
  imshow( "Demo", image );

  char k = waitKey(10);

  if( k == 27 )
   break;
  else if(k == 'p' || k=='P' )
   pause=!pause;
 }

 return 0;  
}  
--

9/26/2014

opencv tip, Rect bounding

Sometimes we get a memory error or an unexpected result when we set a rect larger than the image.

It is very annoying to check the bounds every time,
but this tip is very simple and easy.

Please see the code and result.


cv::Rect bounds(0,0,100,100);
cv::Rect roi(10,10,40,40);
Rect boundedRect = (roi & bounds);
cout << "x = " << boundedRect.x << endl;
cout << "y = " << boundedRect.y << endl;
cout << "width = " << boundedRect.width << endl;
cout << "height = " << boundedRect.height << endl;
 
cv::Rect roi2(-10,10,40,40);
Rect boundedRect2 = (roi2 & bounds);
cout << boundedRect << endl;

cv::Rect roi3(-10,-10,400,40);
cout << (roi3 & bounds) << endl;
...
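
Since the intersection operator is pure arithmetic, the expected output can be worked out by hand (cv::Rect prints as [width x height from (x, y)]):

x = 10
y = 10
width = 40
height = 40
[30 x 40 from (0, 10)]
[100 x 30 from (0, 0)]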


9/24/2014

google urls of each countries

usa : http://www.google.com/webhp?hl=en
uk: http://www.google.co.uk/webhp?hl=en
australia  :  http://www.google.com.au/webhp?hl=en

france by french   http://www.google.fr/webhp?hl=fr
france by english   http://www.google.fr/webhp?hl=en

japan   http://www.google.co.jp/webhp?hl=ja
japan by english   http://www.google.co.jp/webhp?hl=en

OpenCV face detection using adaboost example source code and cpu vs gpu detection speed compare (CascadeClassifier, CascadeClassifier_GPU, detectMultiScale)

OpenCV has AdaBoost algorithm functions,
and a gpu version is also provided.

To use detection, we first prepare a trained xml file.
Although we can train our own target using the adaboost functions in OpenCV, several pre-trained xml files already ship in the OpenCV folder (mostly in opencv/sources/data/haarcascades).

I will use the "haarcascade_frontalface_alt.xml" file for this face detection example.

Both the gpu and cpu versions use the xml file.

For more detail, refer to the source code;
it includes both the cpu and gpu versions.

The result is:
gpu is faster than the cpu version (although the conditions may not be exactly the same).
Blue boxes are the cpu results,
red boxes are the gpu results.
The detection results themselves are not important, because they can differ depending on the parameter values.



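The full tested version is on GitHub below; as a rough sketch (the parameter values and image name here are illustrative, not the ones used for the comparison above), the CPU/GPU setup looks like this:

...
CascadeClassifier cascade_cpu;
gpu::CascadeClassifier_GPU cascade_gpu;
cascade_cpu.load("haarcascade_frontalface_alt.xml");
cascade_gpu.load("haarcascade_frontalface_alt.xml");

Mat img = imread("face.jpg"), gray; //image name is a placeholder
cvtColor(img, gray, CV_BGR2GRAY);

//cpu detection (blue boxes)
vector<Rect> faces;
cascade_cpu.detectMultiScale(gray, faces, 1.1, 3, 0, Size(30,30));
for (size_t i = 0; i < faces.size(); ++i)
 rectangle(img, faces[i], CV_RGB(0,0,255), 2);

//gpu detection (red boxes)
gpu::GpuMat grayGpu(gray), facesBuf;
int nDetections = cascade_gpu.detectMultiScale(grayGpu, facesBuf, 1.1, 3, Size(30,30));
if (nDetections > 0)
{
 Mat facesHost;
 facesBuf.colRange(0, nDetections).download(facesHost);
 Rect* faceRects = facesHost.ptr<Rect>();
 for (int i = 0; i < nDetections; ++i)
  rectangle(img, faceRects[i], CV_RGB(255,0,0), 2);
}

imshow("cpu(blue) vs gpu(red)", img);
waitKey(0);
...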

Github
https://github.com/MareArts/AdaBoost-Face-Detection-test-using-OpenCV


#Tags
cvtColor, CascadeClassifier, CascadeClassifier_GPU, detectMultiScale,

9/23/2014

C/C++, option parameter / argument parser (using wingetopt.h, wingetopt.c)

This is an option parameter / argument parsing example.
When we execute a command with options, this parser extracts the value of each option.

ex) facedetection.exe -o B.avi -p 1000 -l A.avi

In the source, the parameters are parsed as "B.avi", "1000", "" (the -l flag is present), "A.avi".

See the example source code for easier understanding.


main.cpp
#include < iostream>
#include < string>
#include "wingetopt.h"

using namespace std;

struct Options 
{
 Options():Number(10),use_A(false),infile(),outfile()
 {}

 int Number;
 bool use_A;
 string infile;
 string outfile;
};

void parse_command_line(int argc, char** argv, Options& o)
{
 int c = -1;
 while( (c = getopt(argc, argv, "lo:p:")) != -1 )
 {
  switch(c)
  {
  case 'l':
   o.use_A = true;
   break;
  case 'o':
   o.outfile = optarg;
   break;
  case 'p':
   o.Number = atoi(optarg);
   break;
  default:
   cout << "error message" << endl;
   exit(1);
  }
 }

 if( optind < argc )
 {
  o.infile = argv[optind];
 }

 cout << "Num : " << o.Number << endl;
 cout << "Input file: " << o.infile << endl;
 cout << "Output file: " << o.outfile << endl;
 cout << "Use A: " << o.use_A << endl;
}

int main(int argc, char** argv)
{
 Options o;
 parse_command_line(argc, argv, o);
 


}
... wingetopt.h
/*
POSIX getopt for Windows

AT&T Public License

Code given out at the 1985 UNIFORUM conference in Dallas.  
*/

#ifdef __GNUC__
#include < getopt.h>
#endif
#ifndef __GNUC__

#ifndef _WINGETOPT_H_
#define _WINGETOPT_H_

#ifdef __cplusplus
extern "C" {
#endif

extern int opterr;
extern int optind;
extern int optopt;
extern char *optarg;
extern int getopt(int argc, char **argv, char *opts);

#ifdef __cplusplus
}
#endif

#endif  /* _GETOPT_H_ */
#endif  /* __GNUC__ */
... wingetopt.c
/*
POSIX getopt for Windows

AT&T Public License

Code given out at the 1985 UNIFORUM conference in Dallas.  
*/

#ifndef __GNUC__

#include "wingetopt.h"
#include < stdio.h>
#include < string.h>   //for strcmp and strchr

#define NULL 0
#define EOF (-1)
#define ERR(s, c) if(opterr){\
 char errbuf[2];\
 errbuf[0] = c; errbuf[1] = '\n';\
 fputs(argv[0], stderr);\
 fputs(s, stderr);\
 fputc(c, stderr);}
 //(void) write(2, argv[0], (unsigned)strlen(argv[0]));\
 //(void) write(2, s, (unsigned)strlen(s));\
 //(void) write(2, errbuf, 2);}

int opterr = 1;
int optind = 1;
int optopt;
char *optarg;

int
getopt(argc, argv, opts)
int argc;
char **argv, *opts;
{
 static int sp = 1;
 register int c;
 register char *cp;

 if(sp == 1)
  if(optind >= argc ||
     argv[optind][0] != '-' || argv[optind][1] == '\0')
   return(EOF);
  else if(strcmp(argv[optind], "--") == 0) {
   optind++;
   return(EOF);
  }
 optopt = c = argv[optind][sp];
 if(c == ':' || (cp=strchr(opts, c)) == NULL) {
  ERR(": illegal option -- ", c);
  if(argv[optind][++sp] == '\0') {
   optind++;
   sp = 1;
  }
  return('?');
 }
 if(*++cp == ':') {
  if(argv[optind][sp+1] != '\0')
   optarg = &argv[optind++][sp+1];
  else if(++optind >= argc) {
   ERR(": option requires an argument -- ", c);
   sp = 1;
   return('?');
  } else
   optarg = argv[optind++];
  sp = 1;
 } else {
  if(argv[optind][++sp] == '\0') {
   sp = 1;
   optind++;
  }
  optarg = NULL;
 }
 return(c);
}

#endif  /* __GNUC__ */

...




One more tip:
if you want to pass argument values inside Visual Studio, you can enter them here -> project properties -> Debugging -> Command Arguments.
Refer to this image~~





9/17/2014

opencv, simple source code Video frames to jpeg files (VideoCapture, imwrite)

Simple source code:
read an avi file and save the frames as jpeg files.

Set the video file name and the save directory to match your environment.

#include < opencv2\opencv.hpp>
#include < stdio.h>


#ifdef _DEBUG        
#pragma comment(lib, "opencv_core249d.lib")
#pragma comment(lib, "opencv_imgproc249d.lib")   
#pragma comment(lib, "opencv_highgui249d.lib")
#else
#pragma comment(lib, "opencv_core249.lib")
#pragma comment(lib, "opencv_imgproc249.lib")
#pragma comment(lib, "opencv_highgui249.lib")
#endif  

using namespace std;
using namespace cv;

void main()
{
 VideoCapture stream1("./bigBugs1.avi");  //file name

 if (!stream1.isOpened()) { //check if video file has been initialised   
  cout << "cannot open the file";   
 }   
 //window name
 namedWindow("Origin");   

 //string 
 char str[256];
 int frameCount=0;
 //unconditional loop   
 while (true) {   
  Mat Frame;   
  if( stream1.read(Frame) == 0) //get one frame form video   
   break;
  imshow("Origin", Frame);   

  sprintf_s(str,".\\frames1\\%d_frames.jpg", frameCount++);
  imwrite(str,Frame);
  
  

  if (waitKey(30) >= 0)   
   break;   
 }   

 destroyAllWindows();   


}

..


9/03/2014

Python mail read example, using imaplib

import imaplib
import email
import mimetypes
from email import header


def decodeHeader( headerMsg ):
    L = header.decode_header(headerMsg)
    s = ''
    for s1, chset in L:
        if(type(s1) == bytes):
            s += s1.decode(chset) if chset else s1.decode()
        else:
            s += s1
    return s



host = 'imap.xxx.com'
userid = 'myaccount@xxx.com'
passwd = 'password'

imap = imaplib.IMAP4_SSL(host)
imap.login(userid, passwd)

imap.select('specific_folder') #or use INBOX
status, email_ids = imap.search(None, '(ALL)') #or use , '(UNSEEN)' )


for num in email_ids[0].split():
    type1, data = imap.fetch(num, '(RFC822)')
    raw_email = data[0][1]
    email_msg = email.message_from_bytes( raw_email )

    print( 'Subject: ', decodeHeader( email_msg['Subject'] ) )
    print( 'From: ', decodeHeader( email_msg['From'] ) )
    print( 'To: ', decodeHeader( email_msg['To'] ) )
    print( 'Date: ', decodeHeader( email_msg['Date'] ) )


    type1, data = imap.fetch(num, '(UID BODY[TEXT])')
    raw_email = data[0][1]
    print('contents: ', raw_email )
    print('----\n\n')


9/01/2014

Python send mail example, smtplib


import smtplib
from email.mime.text import MIMEText
from email.header import Header

host = 'smtp.gmail.com:587'
me = 'me@gmail.com'
you = 'you@daum.net'
subject = 'I love 파이썬'
contents = 'It is contents'

msg = MIMEText(contents.encode('utf-8'), _subtype='plain', _charset='utf-8')
msg['Subject'] = Header(subject.encode('utf-8'), 'utf-8')
msg['From'] = me
msg['To'] = you

s = smtplib.SMTP(host)
s.starttls()
#print( s.ehlo() )
s.login('ID','PASS')
problems = s.sendmail(me, [you], msg.as_string() )
#print( problems )
s.quit()





8/27/2014

python byte to string, string to byte example

Refer to this code.
thank you.

s = b.decode(encoding='UTF-8')    # bytes -> str
b2 = s.encode(encoding='UTF-8')   # str -> bytes
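
A quick round-trip with a non-ASCII string, to check both directions:

b = '파이썬'.encode(encoding='UTF-8')    # str -> bytes
s = b.decode(encoding='UTF-8')           # bytes -> str
print(b)   # b'\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac'
print(s)   # 파이썬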

8/12/2014

Opencv float array to Mat


You can wrap float array data in a Mat this way:

float H[9]={1,2,3,4,5,6,7,8,9};
Mat TT = Mat(3,3,CV_32FC1, &H);

But be careful: the Mat shares the array's memory, so if an H value is changed, the TT value also changes.

See the example source code.



float H[9];
 for(int i=0; i< 9; ++i)
  H[i] = i;
 
 for(int i=0; i< 9; ++i)
  cout << i << " : " << H[i] << endl;

 Mat TT = Mat(3,3,CV_32FC1, &H);
 cout << TT << endl; 

 for(int i=0; i< 9; ++i)
  H[i] = i*10;

 cout << "TT values are chaned.." << endl;
 cout << TT << endl;

 cout << endl;
 
 
If we modify the code like this..
 
Mat TT = Mat(3,3,CV_32FC1, &H);

-> Mat TT = Mat(3,3,CV_32FC1, &H).clone();

then TT gets its own copy and is not affected by the float array.


8/11/2014

CUDA Link2005 error.

I met this error when compiling CUDA code:

error LNK2005: "int __cdecl XXXX" (?XXXXX@@YAHXZ) already defined in XXX.cu.obj

I solved the error with help from this web page -> http://stackoverflow.com/questions/5295503/cuda-lnk2005-error-on-device-function-used-in-header-file

In my case, I added the "inline" keyword in front of the cuda function name.
ex)
 __device__ void matrix_set_identity(GPU_Matrix *A)

->

inline  __device__ void matrix_set_identity(GPU_Matrix *A)


Thank you.

proof, lim x->0 sin(x)/x = 1
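
The standard squeeze-theorem argument, sketched in LaTeX:

Comparing areas on the unit circle for $0 < x < \pi/2$ (inner triangle, circular sector, outer triangle):

\[ \tfrac{1}{2}\sin x \;\le\; \tfrac{1}{2}x \;\le\; \tfrac{1}{2}\tan x . \]

Dividing through by $\tfrac{1}{2}\sin x > 0$ and taking reciprocals gives

\[ \cos x \;\le\; \frac{\sin x}{x} \;\le\; 1 . \]

Since $\sin(x)/x$ is even, the same bounds hold for $-\pi/2 < x < 0$, and since $\cos x \to 1$ as $x \to 0$, the squeeze theorem gives $\lim_{x \to 0} \frac{\sin x}{x} = 1$.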


8/06/2014

nvcc : fatal error : Could not set up the environment for Microsoft Visual Studio using ...

I met this error when I compiled opencv + cuda.



I have tried hundreds of times to solve this problem.
I tried opencv 2.4.8 + cuda 5.5 + vs 2012
opencv 2.4.8 + cuda 6 + vs 2012
opencv 2.4.9 + cuda 5.5 + vs 2012
opencv 2.4.9 + cuda 6 + vs 2012
opencv 2.4.9 + cuda 6 + vs 2013.

When I was almost dead, I found the solution.
Here it is:
open "nvcc.profile" with a text editor.
This file may be located in "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.0\bin" (in my case).
Then add this sentence:
CUDA_NVCC_FLAGS += --compiler-bindir = "-IE:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin"

The figure shows the full contents of nvcc.profile.




I am happy to share this tip with the world.
^^



7/29/2014

no duplication random, (stl random_shuffle example source code)

Sometimes we need a random sequence with no duplicate values.
A simple method is to use random_shuffle from the STL.

Refer to this source code.

#include < stdio.h>
#include < vector>
#include < algorithm>
#include < cstdlib>
#include < ctime>
#include < iostream>
using namespace std;


void main()
{
 //set size and initialize
 vector< int > A(10);
 for(int i=0; i< A.size(); ++i)
  A[i] = i;

 //confirm
 printf("----origin data \n");
 for(int i=0; i< A.size(); ++i)
  printf("[%d] - %d \n", i, A[i] );
 printf("----\n");

 //random 
 srand( unsigned (time(0) ) );
 random_shuffle( A.begin(), A.end() );

 //confirm
 printf("---- After shuffle \n");
 for(int i=0; i< A.size(); ++i)
  printf("[%d] - %d \n", i, A[i] );
 printf("----\n");
}


..

7/28/2014

(OpenCV Study) OpticalFlow Gpu feature extraction and matching (GoodFeaturesToTrackDetector_GPU, gpu:: PyrLKOpticalFlow example source code)

This is example source code for GPU-mode optical flow and matching.
The source code is a little bit complex in the matching refine part.


The BruteForceMatcher_GPU matcher is added for more accurate matching.
The result looks like the figure.


The source code is here..
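
As a rough sketch of the feature-extraction and tracking part named in the title (assuming the OpenCV 2.4.x gpu module; the image names are placeholders and the matching-refine part is omitted):

...
gpu::GpuMat img1(imread("prev.jpg", CV_LOAD_IMAGE_GRAYSCALE));
gpu::GpuMat img2(imread("next.jpg", CV_LOAD_IMAGE_GRAYSCALE));

//detect corners on the first image
gpu::GoodFeaturesToTrackDetector_GPU detector(1000, 0.01, 5.0);
gpu::GpuMat prevPts;
detector(img1, prevPts);

//track the corners into the second image with pyramidal Lucas-Kanade
gpu::PyrLKOpticalFlow pyrLK;
pyrLK.winSize = Size(21, 21);
pyrLK.maxLevel = 3;
gpu::GpuMat nextPts, status;
pyrLK.sparse(img1, img2, prevPts, nextPts, status);

//download results (assumes some corners were found);
//status == 1 means the point was tracked successfully
vector<Point2f> prev(prevPts.cols), next(nextPts.cols);
vector<uchar> ok(status.cols);
Mat prevHost(1, prevPts.cols, CV_32FC2, &prev[0]);  prevPts.download(prevHost);
Mat nextHost(1, nextPts.cols, CV_32FC2, &next[0]);  nextPts.download(nextHost);
Mat okHost(1, status.cols, CV_8UC1, &ok[0]);        status.download(okHost);
...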

...
Mat Ma = Mat::eye(3, 3, CV_64FC1);
 cout << Ma << endl;
 double dm[3][3] = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
 Mat Mb = Mat(3, 3, CV_64F, dm);
 cout << Mb << endl;
 
 //Matrix - matrix operations :
 Mat Mc;
 cv::add(Ma, Mb, Mc); // Ma+Mb   -> Mc
 cout << Ma+Mb << endl;
 cout << Mc << endl;
 cv::subtract(Ma, Mb, Mc);      // Ma-Mb   -> Mc
 cout << Ma - Mb << endl;
 cout << Mc << endl;
 Mc = Ma*Mb; //Ma*Mb;
 cout << Mc << endl;
 
 //Elementwise matrix operations :
 cv::multiply(Ma, Mb, Mc);   // Ma.*Mb   -> Mc
 cout << Mc << endl;
 Mc = Ma.mul(Mb);
 cout << Mc << endl;
 cv::divide(Ma, Mb, Mc);      // Ma./Mb  -> Mc
 cout << Mc << endl;
 Mc = Ma + 10; //Ma + 10 = Mc
 cout << Mc << endl;

 //Vector products :
 double va[] = { 1, 2, 3 };
 double vb[] = { 0, 0, 1 };
 double vc[3];

 Mat Va(3, 1, CV_64FC1, va);
 Mat Vb(3, 1, CV_64FC1, vb);
 Mat Vc(3, 1, CV_64FC1, vc);

 double res = Va.dot(Vb); // dot product:   Va . Vb -> res
 Vc = Va.cross(Vb);    // cross product: Va x Vb -> Vc
 cout << res << " " << Vc << endl;


 //Single matrix operations :
 Mc = Mb.t();      // transpose(Ma) -> Mb (cannot transpose onto self)
 cout << Mc << endl;
 cv::Scalar t = trace(Ma); // trace(Ma) -> t.val[0] 
 cout << t.val[0] << endl; 
 double d = determinant(Ma); // det(Ma) -> d
 cout << d << endl;
 Mc = Ma.inv();         // inv(Mb) -> Mc
 invert(Ma, Mc);
 cout << Mc << endl;


 //Inhomogeneous linear system solver :
 double dm2[3][3] = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
 Mat A(3, 3, CV_64FC1, dm2);
 Mat x(3, 1, CV_64FC1);
 double vvb[] = { 14, 32, 52 };
 Mat b(3, 1, CV_64FC1, vvb);
 cv::solve(A, b, x, DECOMP_SVD); //// solve (Ax=b) for x
 cout << x << endl;


 //Eigen analysis(of a symmetric matrix) :
 float f11[] = { 1, 0.446, -0.56, 0.446, 1, -0.239, -0.56, 0.239, 1 };
 Mat data(3, 3, CV_32F, f11);
 Mat value, vector;
 eigen(data, value, vector);
 cout << "Eigenvalues" << value << endl;
 cout << "Eigenvectors" << endl;
 cout << vector << endl;


 //Singular value decomposition :
 Mat w, u, v;
 SVDecomp(data, w, u, v); // A = U W V^T
 //The flags cause U and V to be returned transposed(does not work well without the transpose flags).
 cout << w << endl;
 cout << u << endl;
 cout << v << endl;
...


refer to this posting
optical flow and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-opticalflow-gpu-feature.html
orb feature and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-orb-gpu-feature-extraction.html
surf feature and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-surf-gpu-and-matching.html

(Opencv Study) Orb gpu feature extraction and Matching (ORB_GPU, BruteForceMatcher_GPU example source code)

This is example source code for ORB_GPU feature detection and matching.
ORB features are known to be faster to extract than SURF and SIFT.
By the way, in my test case, the speed was not so fast.
But SURF and SIFT are non-free algorithms, while ORB is free to use in commercial projects.

This figure is the matching result of the ORB example.
Note, this is the gpu version result.



The example source code is here.
Note that ORB uses the Hamming matching method; L1 and L2 cannot be used.

#include < stdio.h>
#include < iostream>

#include < opencv2\opencv.hpp>
#include < opencv2/core/core.hpp>
#include < opencv2/highgui/highgui.hpp>
//#include < opencv2/video/background_segm.hpp>
#include < opencv2\gpu\gpu.hpp>
#include < opencv2\stitching\detail\matchers.hpp >  
//#include < opencv2\nonfree\features2d.hpp >    


#ifdef _DEBUG        
#pragma comment(lib, "opencv_core247d.lib")
#pragma comment(lib, "opencv_gpu247d.lib")
#pragma comment(lib, "opencv_features2d247d.lib")
#pragma comment(lib, "opencv_highgui247d.lib")
#pragma comment(lib, "opencv_nonfree247d.lib")
#else
#pragma comment(lib, "opencv_core247.lib")
#pragma comment(lib, "opencv_gpu247.lib")
#pragma comment(lib, "opencv_features2d247.lib")
#pragma comment(lib, "opencv_highgui247.lib")
#pragma comment(lib, "opencv_nonfree247.lib");
#endif 



using namespace cv;
using namespace std;



void main()
{

 

 gpu::GpuMat img1(imread("C:\\videoSample\\Image\\Picture6.jpg", CV_LOAD_IMAGE_GRAYSCALE)); 
    gpu::GpuMat img2(imread("C:\\videoSample\\Image\\Picture7.jpg", CV_LOAD_IMAGE_GRAYSCALE)); 


 unsigned long t_AAtime=0, t_BBtime=0;
 float t_pt;
 float t_fpt;
 t_AAtime = getTickCount(); 


 
 //extractFeatures
 gpu::ORB_GPU orb(2000);
 gpu::GpuMat keypoints1GPU, keypoints2GPU; 
    gpu::GpuMat descriptors1GPU, descriptors2GPU; 

 
 orb(img1, gpu::GpuMat(), keypoints1GPU, descriptors1GPU);
 orb(img2, gpu::GpuMat(), keypoints2GPU, descriptors2GPU);

 cout << "FOUND " << keypoints1GPU.cols << " keypoints on first image" << endl; 
    cout << "FOUND " << keypoints2GPU.cols << " keypoints on second image" << endl; 

 gpu::BruteForceMatcher_GPU< Hamming > matcher;    
 vector< vector< DMatch> > matches; 
 matcher.knnMatch(descriptors1GPU, descriptors2GPU, matches, 2); 
 
 //matching
 std::vector< DMatch > good_matches;
 for(int k = 0; k < min(descriptors1GPU.rows-1,(int) matches.size()); k++) 
    {
        if((matches[k][0].distance < 0.6*(matches[k][1].distance)) && ((int) matches[k].size()<=2 && (int) matches[k].size()>0))
        {
            good_matches.push_back(matches[k][0]);
        }
    }    


 t_BBtime = getTickCount();
 t_pt = (t_BBtime - t_AAtime)/getTickFrequency();
 t_fpt = 1/t_pt;
 printf("%.4lf sec/ %.4lf fps\n",  t_pt, t_fpt );


 vector< KeyPoint> keypoints1, keypoints2;
    vector< float> descriptors1, descriptors2;
    orb.downloadKeyPoints(keypoints1GPU, keypoints1);
    orb.downloadKeyPoints(keypoints2GPU, keypoints2);
 printf("%d %d\n", keypoints1.size(), keypoints2.size() );

 
 Mat img_matches; 
 Mat img11, img22;
 img1.download(img11);
 img2.download(img22);
 Mat outImg;

    drawMatches(img11, keypoints1, img22, keypoints2, good_matches, img_matches);
 //drawKeypoints(img11, kp1, outImg);

 namedWindow("matches", 0);
    imshow("matches", img_matches);

    waitKey(0);
}


//
refer to this posting
optical flow and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-opticalflow-gpu-feature.html
orb feature and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-orb-gpu-feature-extraction.html
surf feature and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-surf-gpu-and-matching.html

7/27/2014

(OpenCV Study) Surf GPU and Matching (SURF_GPU, BruteForceMatcher_GPU example source code)

This is example source code for matching using the gpu versions of SURF and BruteForceMatcher.
I think this simple example source code will be useful for your gpu-mode feature matching project.

This is the source image.
The RC car is my favorite machine.




The figure shows the result of SURF matching.



Processing time is 0.4028 seconds, i.e. 2.4828 fps.
Image size is 1920x1080.


#include < stdio.h>
#include < iostream>

#include < opencv2\opencv.hpp>
#include < opencv2/core/core.hpp>
#include < opencv2/highgui/highgui.hpp>
#include < opencv2\gpu\gpu.hpp>
#include < opencv2\stitching\detail\matchers.hpp >  


#ifdef _DEBUG        
#pragma comment(lib, "opencv_core247d.lib")
#pragma comment(lib, "opencv_gpu247d.lib")
#pragma comment(lib, "opencv_features2d247d.lib")
#pragma comment(lib, "opencv_highgui247d.lib")
#pragma comment(lib, "opencv_nonfree247d.lib")
#else
#pragma comment(lib, "opencv_core247.lib")
#pragma comment(lib, "opencv_gpu247.lib")
#pragma comment(lib, "opencv_features2d247.lib")
#pragma comment(lib, "opencv_highgui247.lib")
#pragma comment(lib, "opencv_nonfree247.lib")
#endif 


using namespace cv;
using namespace std;

void main()
{
gpu::GpuMat img1(imread("C:\\videoSample\\Image\\Picture6.jpg", CV_LOAD_IMAGE_GRAYSCALE)); 
    gpu::GpuMat img2(imread("C:\\videoSample\\Image\\Picture7.jpg", CV_LOAD_IMAGE_GRAYSCALE)); 

 
 /////////////////////////////////////////////////////////////////////////////////////////
 unsigned long t_AAtime=0, t_BBtime=0;
 float t_pt;
 float t_fpt;
 t_AAtime = getTickCount(); 
 /////////////////////////////////////////////////////////////////////////////////////////

    gpu::SURF_GPU surf(400);
    // detecting keypoints & computing descriptors 
    gpu::GpuMat keypoints1GPU, keypoints2GPU; 
    gpu::GpuMat descriptors1GPU, descriptors2GPU; 
    surf(img1, gpu::GpuMat(), keypoints1GPU, descriptors1GPU); 
    surf(img2, gpu::GpuMat(), keypoints2GPU, descriptors2GPU); 
    
    cout << "FOUND " << keypoints1GPU.cols << " keypoints on first image" << endl; 
    cout << "FOUND " << keypoints2GPU.cols << " keypoints on second image" << endl; 

    // matching descriptors 
    gpu::BruteForceMatcher_GPU< L2< float> > matcher;    
 vector< vector< DMatch> > matches; 
 matcher.knnMatch(descriptors1GPU, descriptors2GPU, matches, 2); 
    
    // downloading results  Gpu -> Cpu
    vector< KeyPoint> keypoints1, keypoints2; 
    vector< float> descriptors1, descriptors2; 
    surf.downloadKeypoints(keypoints1GPU, keypoints1);
    surf.downloadKeypoints(keypoints2GPU, keypoints2);
    //surf.downloadDescriptors(descriptors1GPU, descriptors1); 
    //surf.downloadDescriptors(descriptors2GPU, descriptors2);

 vector< KeyPoint> matchingKey1, matchingKey2;
 std::vector< DMatch > good_matches;
 for(int k = 0; k < min(descriptors1GPU.rows-1,(int) matches.size()); k++) 
    {
        if((matches[k][0].distance < 0.6*(matches[k][1].distance)) && ((int) matches[k].size()<=2 && (int) matches[k].size()>0))
        {
            good_matches.push_back(matches[k][0]);

        }
    }
 


 
 t_BBtime = getTickCount();
 t_pt = (t_BBtime - t_AAtime)/getTickFrequency();
 t_fpt = 1/t_pt;
 printf("feature extraction = %.4lf / %.4lf \n",  t_pt, t_fpt );



 
 
 
    // drawing the results 
    Mat img_matches; 
 Mat img11, img22;
 img1.download(img11);
 img2.download(img22);
 
 //drawMatches(img11, matchingKey1, img22, matchingKey2, good_matches, img_matches); 
    drawMatches(img11, keypoints1, img22, keypoints2, good_matches, img_matches); 

 //drawKeypoints(img11, keypoints1, img_matches);
 

    namedWindow("matches", 0); 
    imshow("matches", img_matches); 
    waitKey(0); 
}




refer to this posting
optical flow and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-opticalflow-gpu-feature.html
orb feature and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-orb-gpu-feature-extraction.html
surf feature and matching using gpu
http://feelmare.blogspot.kr/2014/07/opencv-study-surf-gpu-and-matching.html

7/08/2014

STL vector sort, lambda version


//vector of (label, count) pairs
vector< pair<int, int> > vote;

//data input
~~
~~

//sort by count, descending
 sort(vote.begin(), vote.end(), [](const pair<int, int>& A, const pair<int, int>& B){ return A.second > B.second; } );
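
A complete minimal example, assuming the pairs are (label, count):

#include <cstdio>
#include <algorithm>
#include <utility>
#include <vector>
using namespace std;

int main()
{
 vector< pair<int, int> > vote;

 //data input: (label, count)
 vote.push_back(make_pair(0, 3));
 vote.push_back(make_pair(1, 7));
 vote.push_back(make_pair(2, 5));

 //sort by count, descending
 sort(vote.begin(), vote.end(),
  [](const pair<int, int>& A, const pair<int, int>& B){ return A.second > B.second; });

 //prints 1:7, then 2:5, then 0:3
 for (size_t i = 0; i < vote.size(); ++i)
  printf("label %d : %d votes\n", vote[i].first, vote[i].second);
 return 0;
}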

7/03/2014

(Python, openCV study), k-means example source code of python and C++, and processing time comparing

Example source code for the K-means algorithm in OpenCV.
The source code comes in two versions, one in Python and the other in C++.
I compared the processing time under the same conditions (same image, same parameters) and checked that the results are the same.

The winner in processing speed is C++.
Even though the C++ version contains many "for" loops, it is faster than Python.

Check the example source code.

This is input image


C++ version.
...
#include < stdio.h>   
#include < iostream>   
#include < opencv2\opencv.hpp>   


#ifdef _DEBUG           
#pragma comment(lib, "opencv_core247d.lib")   
#pragma comment(lib, "opencv_imgproc247d.lib")   //MAT processing   
#pragma comment(lib, "opencv_highgui247d.lib")   
#else   
#pragma comment(lib, "opencv_core247.lib")   
#pragma comment(lib, "opencv_imgproc247.lib")   
#pragma comment(lib, "opencv_highgui247.lib")   
#endif  

using namespace cv;
using namespace std;

void main()
{

 unsigned long AAtime=0, BBtime=0; //check processing time   
 unsigned long inAtime=0, inBtime=0;
 AAtime = getTickCount(); //check processing time   

 inAtime = getTickCount(); //check processing time  
 Mat src = imread( "mare-08.jpg", 1 );
 Mat samples(src.rows * src.cols, 3, CV_32F);
 for( int y = 0; y < src.rows; y++ )
  for( int x = 0; x < src.cols; x++ )
   for( int z = 0; z < 3; z++)
    samples.at< float>(y + x*src.rows, z) = src.at< Vec3b>(y,x)[z];
 inBtime = getTickCount(); //check processing time    
 printf("in Data preparing %.2lf sec \n",  (inBtime - inAtime)/getTickFrequency() ); //check processing time   

 inAtime = getTickCount(); //check processing time  
 int clusterCount = 5;
 Mat labels;
 int attempts = 10;
 Mat centers;
 kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10, 1.0), attempts, KMEANS_RANDOM_CENTERS, centers );
 inBtime = getTickCount(); //check processing time    
 printf("K mean processing %.2lf sec \n",  (inBtime - inAtime)/getTickFrequency() ); //check processing time   


 inAtime = getTickCount(); //check processing time  
 Mat new_image( src.size(), src.type() );
 for( int y = 0; y < src.rows; y++ )
 {
  for( int x = 0; x < src.cols; x++ )
  { 
   int cluster_idx = labels.at< int>(y + x*src.rows,0);
   new_image.at< Vec3b>(y,x)[0] = centers.at< float>(cluster_idx, 0);
   new_image.at< Vec3b>(y,x)[1] = centers.at< float>(cluster_idx, 1);
   new_image.at< Vec3b>(y,x)[2] = centers.at< float>(cluster_idx, 2);
  }
 }
 inBtime = getTickCount(); //check processing time    
 printf("out Data Preparing processing %.2lf sec \n",  (inBtime - inAtime)/getTickFrequency() ); //check processing time   

 BBtime = getTickCount(); //check processing time    
 printf("Total processing %.2lf sec \n",  (BBtime - AAtime)/getTickFrequency() ); //check processing time   
  
 //imshow( "clustered image", new_image );
 imwrite("clustered_image.jpg", new_image);
 //waitKey( 0 );
 
}
...
result image and processing time of C++ version




Python Version
...
import numpy as np
import cv2
from matplotlib import pyplot as plt




e1 = cv2.getTickCount()

inA = cv2.getTickCount()
img = cv2.imread('mare-08.jpg')
Z = img.reshape((-1,3))
# convert to np.float32
Z = np.float32(Z)
inB = cv2.getTickCount()
print("in data preparing", (inB-inA)/cv2.getTickFrequency())

# define criteria, number of clusters(K) and apply kmeans()
inA = cv2.getTickCount()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 5
ret,label,center = cv2.kmeans(Z,K,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
inB = cv2.getTickCount()
print("K-means ", (inB-inA)/cv2.getTickFrequency())


# Now convert back into uint8, and make original image
inA = cv2.getTickCount()
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))
inB = cv2.getTickCount()
print("out data preparing", (inB-inA)/cv2.getTickFrequency())

#print(center)
#print(label)

e2 = cv2.getTickCount()
time = (e2 - e1)/cv2.getTickFrequency()
print("total time", time, 1/time)

cv2.imshow('res2',res2)
cv2.waitKey(0)
cv2.destroyAllWindows()
...

The result of python

7/02/2014

python study, dateutil install method

The simple way is:
1. download dateutil from https://pypi.python.org/pypi/python-dateutil and unzip it
2. copy the dateutil folder to python\Lib\site-packages
3. restart the IDE
4. test importing dateutil (see the snippet below)
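
A quick check that the install worked (a minimal sketch; the date string is just an example):

import dateutil
from dateutil import parser

print(dateutil.__version__)          # prints the installed version
print(parser.parse("2014-07-02"))    # 2014-07-02 00:00:00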

6/30/2014

Python Study, using a Python class in C source code (Python embedding)

This post is about how to embed a python file in C source code.
Most articles on the internet are about the opposite concept, embedding C source code into python.

"Extending" means embedding a C module into Python;
"Embedding" means embedding a Python module into C.

This article is about embedding.
My final goal is to write opencv code in python and release it to C/C++ users as a dll.

This source code is an example of how to use a python class from C.
You can understand it easily if you read the code carefully.

..
main.cpp
#include < Python.h>

#ifdef _DEBUG           
#pragma comment(lib, "python27_d.lib")  //Now, this option does not run.
#else   
#pragma comment(lib, "python27.lib")   
#endif    



void main()
{
 PyObject *module, *request, *mP;
 float rVal;

 Py_Initialize();
 module = PyImport_ImportModule("emPy"); //.py file name
 if( module == NULL)
 {
  PyErr_Clear();
  printf("Unable to import embed module");
 }

 request = PyObject_CallMethod(module, "myPower", NULL); //class name
 if(request == NULL)
 {
  PyErr_Clear();
  printf("fail to call class");
 }

 mP = PyObject_CallMethod(request, "myPow","f",10.0); //member function name, input value

 if(mP == NULL)
 {
  PyErr_Clear();
  printf("fail to call function");
 }else{
  PyArg_Parse(mP,"f", &rVal);  //get value from class function of .py
  printf("%lf \n", rVal);
 }

 //clear
 if( module != NULL )
  Py_DECREF(module);
 else
  PyErr_Print();

 if( request != NULL )
  Py_DECREF(request);
 else
  PyErr_Print();
 
 if( mP != NULL )
  Py_DECREF(mP);
 else
  PyErr_Print();
 
 Py_Exit(0);
}
..



emPy.py
...
class myPower:
    def myPow(self, inA):
        print inA*inA
        return inA*inA

...

Environment setting
- emPy.py should be located in the same directory as main.cpp
- Path setting -> include -> "C:\Python27\include"
                 lib -> "C:\Python27\libs"


6/27/2014

python + opencv study -> class making, opencv and numpy simple usages,

I made a simple image subtraction class with python + opencv.
In more detail, the class evaluates whether two images are the same or different using 2 thresholds:
the first threshold is the brightness difference per pixel;
the second threshold is the percentage of change, i.e. count(changed pixels) / area(width*height).

This class can be applied to motion detection in continuous images.

And you can study how to run opencv in python.
I am also a beginner at python.

I studied a bit of the relationship between numpy and opencv.

class_ImgSubtraction.py
--
__author__ = 'mare'


import numpy as np
import cv2


class ImgSubtraction:
    #image load
    def __init__(self, r_img, th1, th2):
        self.RImg = r_img
        self.Th1, self.Th2 = th1, th2
        self.rows, self.cols = r_img.shape[:2]  # shape is (rows, cols)
        self.area = self.cols * self.rows

    #image subtraction
    def eval_subtraction(self, c_img):

        #return false if c_img size is different with RImg
        if self.RImg.shape[:2] != c_img.shape[:2]:
            return 0

        ic_img = c_img
        #subtraction
        is_img = np.subtract(self.RImg, np.int_(ic_img))
        #abs
        ia_img = np.abs(is_img)
        #count pixels difference over than th1
        dcount = np.sum(ia_img > self.Th1)
        #image change percent
        dpercent = (dcount/np.float32(self.area)) * 100

        if dpercent >= self.Th2:
            return 1
        else:
            return 0

--

main.py
--
__author__ = 'mare'


import cv2
from class_ImgSubtraction import ImgSubtraction


RImg = cv2.imread('test.png', 0)
CImg = cv2.imread('test2.png', 0)

e1 = cv2.getTickCount()

cImgSub = ImgSubtraction(RImg, 10, 1)


if cImgSub.eval_subtraction(CImg):
    print ('image different')

e2 = cv2.getTickCount()
time = (e2 - e1)/cv2.getTickFrequency()
print(time, 1/time)

cv2.waitKey(0)

--

You can also download the source code from GitHub:
-> https://gist.github.com/mare90/2ea9b9ca7c80c8c259e1