1/19/2016

ICF (Integral Channel Features) + WaldBoost example (OpenCV ICFDetector usage example)

Refer to this page for the official reference of ICF:
http://docs.opencv.org/3.0.0/d6/dc8/classcv_1_1xobjdetect_1_1ICFDetector.html

Other pages for reference:
http://d.hatena.ne.jp/takmin/20151218/1450447204 (in Japanese)
http://answers.opencv.org/question/56263/integral-channel-feature-detector-for-cars-bad-results/
http://answers.opencv.org/question/67810/opencv-3-beta-icf-classifier-performance-is-very-bad/

ICF + WaldBoost was upgraded during Google Summer of Code 2015 (GSoC).
To use it, build OpenCV together with opencv_contrib.
The GitHub address is -> https://github.com/Itseez/opencv_contrib
ICF is located in the xobjdetect module.

For how to build OpenCV including the extra modules (opencv_contrib), refer to this page:
http://study.marearts.com/2015/01/mil-boosting-tracker-test-in-opencv-30.html


ICF (Integral Channel Features) and ACF are features like Haar, and WaldBoost is a learning algorithm like AdaBoost.
WaldBoost and ICF are a pair: to use the ICF feature in OpenCV, the WaldBoost algorithm must be run.


Good results have been introduced on the Internet, but it is difficult to find the parameter settings for the best learning, and the way the learning process is displayed is uncomfortable to follow.

Anyway, let's see how to use it.

0. Data preparation part.
- Make a positive image path list and a negative image path list (a minimal sketch is shown below).
- Don't worry about size and color channel; these are corrected during learning.
- But if a negative image is too small (I don't know the exact correction size), an error occurs in the learning part.
...
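The post does not show the list-building code; below is just one possible minimal sketch for building the two path lists with cv::glob. The folder names, file pattern, and output file names are placeholders, not values from the original.

#include "opencv2/opencv.hpp"
#include <fstream>
#include <iostream>

int main()
{
 // Placeholder folders "./pos" and "./neg"; adapt the patterns to your own data layout.
 std::vector<cv::String> posList, negList;
 cv::glob("./pos/*.png", posList); // positive image paths
 cv::glob("./neg/*.png", negList); // negative (background) image paths

 // Write each list to a text file, one path per line.
 std::ofstream posOut("positive_list.txt"), negOut("negative_list.txt");
 for (size_t i = 0; i < posList.size(); i++) posOut << posList[i].c_str() << std::endl;
 for (size_t i = 0; i < negList.size(); i++) negOut << negList[i].c_str() << std::endl;

 std::cout << "positives: " << posList.size() << ", negatives: " << negList.size() << std::endl;
 return 0;
}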

...


1. Learning part.
...
It is difficult to discover the exact parameter values.


...
2. Detection part.
...
The values are the scores matched to the rect vector (one score per detected rectangle).
I think 1 is the best and 0 is the worst.

...

Here are some results.

1/17/2016

Ceemple OpenCV -> Pre-built and quick start for OpenCV in VS 2013


Ceemple is a good tool for getting started easily with OpenCV in VS 2013.
If your development environment is Windows, VS 2013, and CUDA, this is an easy way to use OpenCV.
You can simply install the Ceemple OpenCV package from within the VS 2013 IDE.

Tools -> Extensions and Updates -> search for "ceemple opencv",
then download -> install.

After installation,
create your project via Visual C++ -> Ceemple OpenCV Project.


When you create this project, you don't need to set up include and lib paths, and you can build the camera drawing example immediately.

But this package currently supports only VS 2013, 64-bit, on Windows.

Thank you.



OpenCV Background Subtraction sample code

Dear Peter

Here is your answer.
I hope this posting and source code help you.
This page also has a good explanation of how to use background subtraction:
http://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html#gsc.tab=0

This is simple example code for background subtraction.

The input is camera video.
Before getting video frames, the source code sets some options:
MOG2, ROI, and morphology.

In the while loop, blur is an optional step to reduce noise,
and morphology is also optional.
Depending on your project purpose and camera environment, you can add more pre-processing.

I have not added labeling code here, but labeling is normally done after background subtraction, for blob selection (valid size or not), blob counting, and blob interpretation; a minimal labeling sketch is shown after the main example below.

More information
Background subtraction code.
http://study.marearts.com/search/label/Background%20subtraction
Morphology
http://study.marearts.com/search/label/Morphology
Labeling (findContours): http://study.marearts.com/search/label/Blob%20labeling

Thank you.


#include "opencv2/opencv.hpp"

using namespace cv;

int main(int, char**)
{
 VideoCapture cap(0); // open the default camera
 if (!cap.isOpened()) // check if we succeeded
  return -1;

 Ptr< BackgroundSubtractorMOG2 > MOG2 = createBackgroundSubtractorMOG2(3000, 64);
 //Options
 //MOG2->setHistory(3000);
 //MOG2->setVarThreshold(128);
 //MOG2->setDetectShadows(1); //shadow detection on/off
 Mat Mog_Mask;
 Mat Mog_Mask_morpho;

 Rect roi(100, 100, 300, 300);
 
 namedWindow("Origin", 1);
 namedWindow("ROI", 1);
 Mat frame;
 Mat RoiFrame;
 Mat BackImg;

 int LearningTime=0; //300;

 Mat element;
 element = getStructuringElement(MORPH_RECT, Size(9, 9), Point(4, 4));

 for (;;)
 {
  
  cap >> frame; // get a new frame from camera
  if (frame.empty())
   break;

  //option 
  blur(frame(roi), RoiFrame, Size(3, 3));
  //RoiFrame = frame(roi);

  //Mog processing
  MOG2->apply(RoiFrame, Mog_Mask);

  if (LearningTime < 300)
  {
   LearningTime++;
   printf("background learning %d \n", LearningTime);
  }
  else
   LearningTime = 301;

  //Get background image
  MOG2->getBackgroundImage(BackImg);  

  //morphology
  morphologyEx(Mog_Mask, Mog_Mask_morpho, MORPH_DILATE, element);
  //Binary
  threshold(Mog_Mask_morpho, Mog_Mask_morpho, 128, 255, THRESH_BINARY);

  imshow("Origin", frame);
  imshow("ROI", RoiFrame);
  imshow("MogMask", Mog_Mask);
  imshow("BackImg", BackImg);
  imshow("Morpho", Mog_Mask_morpho);

  if (waitKey(30) >= 0)
   break;
 }

 
 return 0;
}

..
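As mentioned above, labeling is not included in the example. Below is a minimal sketch of one way to do blob labeling on the morphology mask with findContours; the area threshold (100) and drawing color are arbitrary placeholders. It could be called inside the loop after the threshold step, e.g. labelBlobs(Mog_Mask_morpho, RoiFrame);

#include "opencv2/opencv.hpp"
#include <cstdio>
using namespace cv;

// mask: binary foreground mask (e.g. Mog_Mask_morpho from the example above)
// display: image to draw the blob boxes on (e.g. RoiFrame)
void labelBlobs(const Mat& mask, Mat& display)
{
 Mat work = mask.clone(); // findContours modifies its input, so work on a copy
 std::vector< std::vector<Point> > contours;
 findContours(work, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

 int validBlobs = 0;
 for (size_t i = 0; i < contours.size(); i++)
 {
  if (contourArea(contours[i]) < 100) // blob selection: skip too-small blobs
   continue;
  validBlobs++;
  rectangle(display, boundingRect(contours[i]), Scalar(0, 0, 255), 2); // draw blob box
 }
 printf("valid blobs: %d \n", validBlobs); // blob counting
}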










1/13/2016

Study of line fitting in 3D and example source code (matlab)

This article is about the method of fitting a line to 3D points.
If we have these 3D points, how do we find the best 3D line?



The main concept is to find the principal component of the 3D points.
There are several methods to find the principal component of some data:
RANSAC (Random Sample Consensus), PCA (Principal Component Analysis), SVD (Singular Value Decomposition), probabilistic approaches...

I will approach this problem in 3 ways.
Those are SVD, SVD II (cross product), and PCA.
I will use MATLAB for the example code, because MATLAB is very simple and useful.


1. SVD method

SVD (singular value decomposition) factorizes an input matrix into 3 parts.


If M is the input matrix and is an m x n matrix, SVD factorizes it into U, Sigma, and V (see the formula below).
U is a unitary matrix of size m x m,
Sigma is a rectangular diagonal matrix of size m x n,
and V is a unitary matrix of size n x n.
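Written as a formula, this is just the standard factorization with the dimensions above:

M = U \Sigma V^{T}, \qquad U \in \mathbb{R}^{m \times m},\; \Sigma \in \mathbb{R}^{m \times n},\; V \in \mathbb{R}^{n \times n}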

Here (to get the principal axis of the 3D points), the V matrix is important.
The biggest principal component is the first column of V; equivalently, the column of V matched to the largest value in Sigma is the main component.

See example.

[0.7139 0.0008 0.7003] is matched with 8.9673.
[-0.1302 -0.9824 0.1338] is matched with 5.1319.
[-0.6880 0.1867 0.7012] is matched with 3.3285.

In this case, the biggest principal axis is [0.7139 0.0008 0.7003].
In the line fitting problem, that vector is the line direction. Very simple.
Alternatively, we can get the same direction from the cross product [-0.1302 -0.9824 0.1338] x [-0.6880 0.1867 0.7012].

For more detail, see the example source code in MATLAB.

2. PCA
PCA is the same concept as SVD: getting the main axis means finding the principal component of the matrix.

In MATLAB, we can get the PCA result very simply,
like this:
[coeff, score, roots] = princomp(X);

The first column is the biggest principal vector:
dirVect = coeff(:,1)

Sorry, I cannot explain this well in English...
But if you look at the source code, you can understand it more correctly.

About the line equation in 3D and an introduction, refer to this page ->

%make example data
X = mvnrnd(rand(1,3)*10, [1 .2 .7; .2 1 0; .7 0 1],50);

%data drawing
figure(1);
plot3(X(:,1),X(:,2),X(:,3),'r+');
grid on;
hold on;


%Get mean value in data and subtraction data
x = X(:,1);
y = X(:,2);
z = X(:,3);
P=[mean(x),mean(y),mean(z)]; %mean
mX = [x-P(1),y-P(2),z-P(3)]; %subtraction data, X - mean

%set figure axis - determined by the data range
maxlim = max( X(:,:,:) );
minlim = min( X(:,:,:) );
axis([minlim(1) maxlim(1) minlim(2) maxlim(2) minlim(3) maxlim(3)]);
axis square


%SVD
[U, S, V] = svd(mX,0);
%V(:,1) is line direction or main vector
t=[-1000:1000]; %t is range.
SS=V(:,1)*t;
%line equation
A=P(1)+SS(1,:);
B=P(2)+SS(2,:);
C=P(3)+SS(3,:);
%drawing
plot3(A,B,C, 'k-');
...




%%cross product
%SVD
[U, S, V] = svd(mX,0);
direction=cross(V(:,end),V(:,end-1))
%direction(1)
%direction=direction./direction(1)
A2=P(1)+direction(1)*t;
B2=P(2)+direction(2)*t;
C2=P(3)+direction(3)*t;
plot3(A2,B2,C2, 'b-');
...


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% PCA
[coeff,score,roots] = princomp(X);
dirVect = coeff(:,1)

A3=P(1)+dirVect(1)*t;
B3=P(2)+dirVect(2)*t;
C3=P(3)+dirVect(3)*t;
plot3(A3,B3,C3, 'b-');
...

..

The way to check the error is to sum, over all data points, the (squared) distance between each 3D data point and the foot of the perpendicular from that point onto the fitted line, then average over the points.

See the code; the projection formula it uses is restated just below.
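For a line through P0 = (x0, y0, z0) with direction (a, b, c), the parameter t of the perpendicular foot of a data point (x, y, z) is exactly what the loop below computes:

t = \frac{a(x - x_0) + b(y - y_0) + c(z - z_0)}{a^2 + b^2 + c^2}
  = -\frac{a x_0 - a x + b y_0 - b y + c z_0 - c z}{a^2 + b^2 + c^2},
\qquad (l_x, l_y, l_z) = (x_0 + t\,a,\; y_0 + t\,b,\; z_0 + t\,c)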


[n m] =size(X)
x0 = P(1);
y0 = P(2);
z0 = P(3);
a = V(1,1)
b = V(2,1)
c = V(3,1)

terror=0;
for i=1:n
    x = X(i,1);
    y = X(i,2);
    z = X(i,3);
    
    %get the cross point on the line, i.e. the foot of the perpendicular from the 3d point
    t = -(a*x0 - a*x + b*y0 - b*y + c*z0 - c*z) / (a^2 + b^2 + c^2);    
    lx = x0 + t*a ;
    ly = y0 + t*b ;
    lz = z0 + t*c ;

    plot3(lx,ly,lz,'r+'); %%point on the line - A
    plot3(x,y,z,'k+'); %%point of the data - B
    plot3([x lx],[y ly],[z lz],'k-'); %line A to B
    
    d1 = ( (lx-x)^2+(ly-y)^2+(lz-z)^2 ); %squared Euclidean distance between A and B
    terror= d1 + terror;
    
end

disp('mean squared error =')
disp(terror/n)

1/06/2016

SVM + HOG learning and detection methods using HOGDescriptor

Dear Erol Çıtak,

This posting cleans up the SVM + HOG learning and detection methods to help you.


Step 1. Prepare data.

Prepare positive and negative images,
all the same size and in gray scale.

Make an xml file for more convenient data management.
Refer to this page for this step.



Step 2. Training by SVM

Load positive.xml and negative.xml and train with SVM (a minimal sketch is shown below).
Refer to this page for step 2.
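The linked page is not reproduced here, but roughly, training comes down to computing one HOG descriptor per sample and feeding the descriptors to a linear SVM. Below is only a minimal sketch of that idea using the OpenCV 3 ml module; the folder patterns and the output file name are placeholders, and the 64x64 window simply mirrors the HOGDescriptor settings used later in this post.

#include "opencv2/opencv.hpp"
#include "opencv2/ml.hpp"

using namespace cv;
using namespace cv::ml;

// Compute one HOG descriptor per image file and append it as a training row.
static void addSamples(const std::vector<String>& files, int label,
                       const HOGDescriptor& hog, Mat& trainData, std::vector<int>& labels)
{
 for (size_t i = 0; i < files.size(); i++)
 {
  Mat img = imread(files[i], IMREAD_GRAYSCALE);
  if (img.empty()) continue;
  resize(img, img, hog.winSize);                 // force the training window size
  std::vector<float> desc;
  hog.compute(img, desc);                        // HOG feature vector
  trainData.push_back(Mat(desc).reshape(1, 1));  // append as one row
  labels.push_back(label);
 }
}

int main()
{
 std::vector<String> posFiles, negFiles;
 glob("./positive/*.png", posFiles); // placeholder folder/pattern
 glob("./negative/*.png", negFiles); // placeholder folder/pattern

 // HOG parameters must be identical to the ones used later for detection.
 HOGDescriptor hog(Size(64, 64), Size(16, 16), Size(8, 8), Size(8, 8), 9);

 Mat trainData;            // one HOG descriptor per row
 std::vector<int> labels;  // +1 = positive, -1 = negative
 addSamples(posFiles, +1, hog, trainData, labels);
 addSamples(negFiles, -1, hog, trainData, labels);

 // Linear SVM, so the result can later be converted for setSVMDetector().
 Ptr<SVM> svm = SVM::create();
 svm->setType(SVM::C_SVC);
 svm->setKernel(SVM::LINEAR);
 svm->train(trainData, ROW_SAMPLE, Mat(labels));
 svm->save("trained_svm.xml"); // placeholder output name

 return 0;
}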




Step 3. Test by SVM

After training, test on other images.
Refer to this page for step 3.



Step 4. Using detectMultiScale()

To use hog.detectMultiScale() and the other detection functions,
we have to convert the result of the SVM training.

Refer to this page for Step 4 (the conversion method is at the end of its source code):
http://study.marearts.com/2014/11/opencv-svm-learning-method-and-xml.html

After training,
refer to this source code for how to use it:
..
//Load trained SVM xml data
FileStorage svmDX_Xml("XXXXX.xml", FileStorage::READ);
Mat xMat;
svmDX_Xml["SecondSVMd"] >> xMat;
vector< float> VX;  
//copy mat to vector  
VX.assign((float*)xMat.datastart, (float*)xMat.dataend);

//HogDescriptor
HOGDescriptor d( Size(64,64), Size(16,16), Size(8,8), Size(8,8), 9); //must be same with training setting.
d.setSVMDetector(VX);
...
d.detect(...) or d.detectMultiScale(...)

..



Thank you.

1/05/2016

(lucky tip) linux opencv+cuda cmake setting and build error ->nvcc fatal compute_11

About the nvcc fatal compute_11 error
when building on Linux (Ubuntu) after setting up OpenCV + CUDA with cmake:

Try changing the cmake settings.
If you can't see these options, check the Advanced box.

CUDA_GENERATION=Kepler
Add 3.2 to CUDA_ARCH_BIN:

CUDA_ARCH_BIN = 3.2
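For reference, a minimal sketch of passing these settings from the cmake command line (the trailing ".." is just a placeholder for your OpenCV source directory):

cmake -D CUDA_GENERATION=Kepler -D CUDA_ARCH_BIN=3.2 ..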