Two camera views on one screen using a stitching algorithm (OpenCV, example source code) (mosaic)

This source code is based on ->
The linked page introduces how to make a mosaic image from two adjacent images.
Using that source code, I combined two camera videos into one stitched video.

After running, the program is controlled by three keys:
'q' quits, 'p' runs the processing (stitching), and 'r' resets to normal mode.

#include <stdio.h>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp> //SURF lives in the nonfree module in OpenCV 2.4

using namespace cv;
using namespace std;

Mat TwoInOneOut(Mat Left, Mat Right);

int main()
{
 VideoCapture stream1(0);   //0 is the id of the first video device
 VideoCapture stream2(1);   //1 is the id of the second video device

 if (!stream1.isOpened()) { //check if video device has been initialised
  cout << "cannot open camera 1";
  return -1;
 }
 if (!stream2.isOpened()) { //check if video device has been initialised
  cout << "cannot open camera 2";
  return -1;
 }

 Mat H;
 int mode = 0;
 //unconditional loop
 while (true) {
  Mat cameraFrame1;
  stream1.read(cameraFrame1); //get one frame from video

  Mat cameraFrame2;
  stream2.read(cameraFrame2); //get one frame from video

  if (mode == 0) {
   imshow("Left", cameraFrame1);
   imshow("Right", cameraFrame2);
  }

  Mat Left, Right;
  cvtColor(cameraFrame1, Left, CV_BGR2GRAY);  //camera frames are BGR in OpenCV
  cvtColor(cameraFrame2, Right, CV_BGR2GRAY);

  int key = waitKey(30); //read the keyboard once per frame
  if (key == 'p') {
   printf("Homography Matrix Processing\n");
   H = TwoInOneOut(Left, Right);
   mode = 1; //switch to stitching mode
  }
  if (key == 'r') {
   printf("normal mode\n");
   mode = 0; //back to showing the two raw views
  }

  if (H.cols == 3 && H.rows == 3) {
   //warp the right view into the left view's coordinate frame,
   //then paste the left view into the top-left corner of the mosaic
   Mat WarpImg(Left.rows * 2, Left.cols * 2, CV_8U);
   warpPerspective(Right, WarpImg, H, Size(WarpImg.cols, WarpImg.rows));
   Mat tempWarpImg = WarpImg(Rect(0, 0, Left.cols, Left.rows));
   Left.copyTo(tempWarpImg);
   if (mode == 1)
    imshow("Processing", WarpImg);
  }

  if (key == 'q')
   break;
 }
 return 0;
}


Mat TwoInOneOut(Mat Left, Mat Right)
{
 Mat H;

 if (Left.channels() != 1 || Right.channels() != 1) {
  printf("Channel Error\n");
  return H;
 }

 //Detect the keypoints using SURF Detector
 int minHessian = 300; //1500;
 SurfFeatureDetector detector(minHessian);
 SurfDescriptorExtractor extractor;

 std::vector<KeyPoint> kp_Left;
 detector.detect(Left, kp_Left);
 Mat des_Left;
 extractor.compute(Left, kp_Left, des_Left);

 std::vector<KeyPoint> kp_Right;
 detector.detect(Right, kp_Right);
 Mat des_Right;
 extractor.compute(Right, kp_Right, des_Right);

 std::vector< vector<DMatch> > matches;
 FlannBasedMatcher matcher;
 matcher.knnMatch(des_Left, des_Right, matches, 2);
 //matcher.knnMatch(des_Right, des_Left, matches, 2);
 std::vector<DMatch> good_matches;

 //Lowe's ratio test: keep the best match only when it is
 //clearly better than the second-best match
 for (size_t i = 0; i < matches.size(); ++i) {
  if (matches[i].size() < 2)
   continue;

  const DMatch &m1 = matches[i][0];
  const DMatch &m2 = matches[i][1];

  if (m1.distance <= 0.7 * m2.distance)
   good_matches.push_back(m1);
 }

 //Draw only "good" matches
 Mat img_matches;
 drawMatches(Left, kp_Left, Right, kp_Right, good_matches,
  img_matches, Scalar::all(-1), Scalar::all(-1),
  vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
 imshow("Match", img_matches);

 //Find H
 if (good_matches.size() > 20) {
  std::vector<Point2f> LeftMatchPT;
  std::vector<Point2f> RightMatchPT;
  for (unsigned int i = 0; i < good_matches.size(); i++) {
   //-- Get the keypoints from the good matches
   LeftMatchPT.push_back(kp_Left[good_matches[i].queryIdx].pt);
   RightMatchPT.push_back(kp_Right[good_matches[i].trainIdx].pt);
  }

  H = findHomography(RightMatchPT, LeftMatchPT, CV_RANSAC);
  //H = findHomography(LeftMatchPT, RightMatchPT, CV_RANSAC);
 }

 return H;
}


The source code
-> here


Windows function GetTickCount vs. OpenCV function getTickCount (example source code)

Do not confuse them:
GetTickCount and getTickCount are different functions.
The first is an MS Windows function.
The second is an OpenCV function.
The way each is used is a little bit different.
Here is example source code~!

unsigned long Atime = 0, Btime = 0; //GetTickCount returns milliseconds
int64 AAtime = 0, BBtime = 0;       //cv::getTickCount returns ticks (int64)

Atime = GetTickCount();
AAtime = getTickCount();

someFunctionTakeLongTime(); //Test function

Btime = GetTickCount();
BBtime = getTickCount();

printf("%.2lf \n", (Btime - Atime) / 1000.0);               //elapsed seconds
printf("%.2lf \n", (BBtime - AAtime) / getTickFrequency()); //elapsed seconds

Be careful not to mix them up when you measure time.

Lifetrons, DrumBassIII BT

Lifetrons, DrumBass III BT
Bluetooth speaker
deluxe wireless edition

Small in size, but the sound is very big!!
Satisfying design.
But the bass is weak.
Still, in my opinion the sound quality is a little bit better than the MacBook Air's built-in speakers.


OpenCV 2.4.6 calibration example source code (using the calibrateCamera function)

This is an improved version of "http://feelmare.blogspot.kr/2011/08/camera-calibration-using-pattern-image.html"

When you run the calibration example source code, it will ask you for some information.
The first question asks for the number of corner points across the width.
The second asks for the number of corner points across the height.
The third asks for the number of pattern boards.

Because I use this chess board pattern, the answers are as follows.

Of course, you have to prepare captured images of the chess pattern.

The source code detects the corner points and then performs the calibration.
The 'findChessboardCorners' function is used to detect corners,
and the 'calibrateCamera' function is used to obtain the calibration parameters.

This is the calibration example source code.

//code start

//code end

After calibration, the source code saves ->


These are the resulting images with the corners detected.

This is the matlab source code.
To confirm the result of the calibration, I draw the 2D image coordinate points in 3D space.

m = [R|t]M or m = [R|-Rc]M
m is the coordinate based on the camera origin axis.
M is the coordinate based on the world origin axis.
In -Rc, c is the translation vector based on the world origin axis.

Pattern-axis based:
The equation is like this.
In the equation, m is the camera line coordinate for drawing,
and M is the camera coordinate based on the pattern axis.

Camera-axis based (pattern position in 3D):
The equation is like this:
m = R*M + t or m = [R|t]M
In the equation, M is a pattern coordinate, for example [0 0 0; 10 0 0; 0 10 0; 10 10 0].
R is the 3x3 rotation matrix and t is the 3x1 translation vector.
After calibration, we can get R and t for each pattern board.
m is the pattern's 3D coordinate based on the camera origin axis.

The main m-file among the matlab files is Sapce2D3D.m.

//matlab code start

//matlab code end
You can download the calibration source code and matlab code here.