12/23/2013

TBB example source code ( parallel_invoke function )

You need the TBB library to build and run this source code.

Download TBB, unzip it into a new folder on your computer,
and copy the DLL files into the Windows\System folder.

TBB download URL -> https://www.threadingbuildingblocks.org/download

 
 
Set the path of the TBB include and lib directories in Visual Studio.
 



This is the example source code.
 

/////
#include <stdio.h>
#include <tchar.h>
#include <time.h>
#include <windows.h> //for ::Sleep
#include <tbb/tbb.h>

#ifdef _DEBUG
#pragma comment(lib, "tbb_debug.lib")
#else
#pragma comment(lib, "tbb.lib")
#endif // _DEBUG

using namespace tbb;

int main()
{
 task_scheduler_init init;

 parallel_invoke(
  []()->void
 {
  ::Sleep(1000);
  ::printf("finish:%d\n", 1000);
 },
  []()->void
 {
  ::Sleep(1000);
  ::printf("finish:%d\n", 10000);
 });

 printf("All work is done \n");

 getchar();
 
}
/////

(Arduino Study) Arduino LED on/off by serial communication

Sorry, this one is very easy~!



(Arduino Study) Arduino serial communication and LED on/off on the board (pin 13)

Very easy and fun.
Later, the Serial Monitor window can also be used as a debug window~~

digitalWrite(12, HIGH); -> On a pin set to INPUT, this call enables the internal pull-up resistor.
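The post above has no listing, so here is a minimal sketch of the idea — my own guess, not the author's exact sketch — assuming the usual convention of sending '1'/'0' characters from the Serial Monitor at 9600 baud:

```cpp
// Hypothetical Arduino sketch: toggle the on-board LED (pin 13)
// from characters received over the serial port.
const int LED = 13;

void setup() {
  pinMode(LED, OUTPUT);
  Serial.begin(9600);          // must match the Serial Monitor's baud rate
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c == '1') digitalWrite(LED, HIGH);  // send '1' -> LED on
    if (c == '0') digitalWrite(LED, LOW);   // send '0' -> LED off
  }
}
```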





12/20/2013

Install TBB + CUDA with OpenCV ( how to set up TBB and CUDA with OpenCV, Visual Studio, Windows )

I introduced CUDA + OpenCV on this page -> http://feelmare.blogspot.kr/2013/12/gpicuda-opencv-setting-method-and.html

In this post, I will introduce how to install (set up) TBB + CUDA + OpenCV.
TBB is an abbreviation of Intel Threading Building Blocks.
It speeds up your code using parallel processing on the CPU.

TBB is free, and you can download it on this site -> https://www.threadingbuildingblocks.org/download
The downloaded TBB file doesn't need an installer. Just unzip the file into an appropriate directory.
There are bin, lib and include folders in the unzipped directory. You can start programming directly with these files.


But if you want to use the TBB-enabled functions of OpenCV, you have to build new OpenCV libs and dlls on your computer.
It is the same process as CUDA + OpenCV -> http://feelmare.blogspot.kr/2013/12/gpicuda-opencv-setting-method-and.html.

1. Run CMake.

"C:/opencv_247/source" is the location of the OpenCV source folder.
"C:/opencv_247/source/OpenCV_CUDA_Tbb_247" is the target folder where the build files will be generated.
You can make the target folder anywhere you like.
 
2. Click Configure and select your compiler.
3. Check the options.

4. Click Configure and select the location of TBB's include folder.

5. Click Configure again.

You will see some red lines, but the directory information is probably correct,
so click Configure again.

Do you see "YES" in the "Use TBB :" row?
Check~!!

This is my final option checklist.
It includes the CUDA and TBB options.

Now click Generate, and you will see the newly created files in the target folder.
Open the OpenCV.sln file with your Visual Studio.

And compile~!! Praying..

After compiling both Release and Debug mode,
gather the bin and lib files into the proper folders.
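The GUI steps above can also be driven from the command line. This is only a sketch, not the author's exact setup — the generator name assumes Visual Studio 2012 ("Visual Studio 11"), and the TBB include path (C:/tbb42/include) is a placeholder you must adjust to where you unzipped TBB:

```shell
cd C:/opencv_247/source/OpenCV_CUDA_Tbb_247

cmake -G "Visual Studio 11" ^
      -D WITH_TBB=ON ^
      -D TBB_INCLUDE_DIRS=C:/tbb42/include ^
      -D WITH_CUDA=ON ^
      C:/opencv_247/source
```

After this, open the generated OpenCV.sln as described above.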
 
 
 
 

 
 
----------------------------------------------------------------------------------------------
#Tip 1 : If you fail to compile the opencv source code, try rebuilding.
              In my case, I succeeded on the second pass ( build -> rebuild ).
----------------------------------------------------------------------------------------------
 
To use TBB, you should set the TBB include and lib paths in your Visual Studio.

 
 
And copy the TBB dll files into the Windows\System folder.
 

 
 
This is example source code using TBB.
 
////
#include <stdio.h>
#include <vector>
#include <opencv2\opencv.hpp>
#include <opencv2\stitching\stitcher.hpp>

#ifdef _DEBUG  
#pragma comment(lib, "opencv_core247d.lib")   
#pragma comment(lib, "opencv_imgproc247d.lib")   //MAT processing  
//#pragma comment(lib, "opencv_objdetect246d.lib")   
//#pragma comment(lib, "opencv_gpu247d.lib")  
//#pragma comment(lib, "opencv_features2d246d.lib")  
#pragma comment(lib, "opencv_highgui247d.lib")  
//#pragma comment(lib, "opencv_ml246d.lib")
#pragma comment(lib, "opencv_stitching247d.lib")
#pragma comment(lib, "tbb_debug.lib")

#else  
#pragma comment(lib, "opencv_core247.lib")  
#pragma comment(lib, "opencv_imgproc247.lib")  
//#pragma comment(lib, "opencv_objdetect246.lib")  
//#pragma comment(lib, "opencv_gpu247.lib")  
//#pragma comment(lib, "opencv_features2d246.lib")  
#pragma comment(lib, "opencv_highgui247.lib")  
//#pragma comment(lib, "opencv_ml246.lib") 
#pragma comment(lib, "opencv_stitching247.lib")
#pragma comment(lib, "tbb.lib")
#endif  


using namespace cv;  
using namespace std;


int main()
{
 
 vector<Mat> vImg;
 vector<vector<Rect> > vvRect;
 Mat rImg;

 vImg.push_back( imread("./stitching_img/m1.jpg") );
 //vImg.push_back( imread("./stitching_img/m8.jpg") );
 //vImg.push_back( imread("./stitching_img/m5.jpg") );
 vImg.push_back( imread("./stitching_img/m4.jpg") );
 vImg.push_back( imread("./stitching_img/m2.jpg") );
 //vImg.push_back( imread("./stitching_img/m7.jpg") );
 //vImg.push_back( imread("./stitching_img/m6.jpg") );
 vImg.push_back( imread("./stitching_img/m3.jpg") );
 //vImg.push_back( imread("./stitching_img/m9.jpg") );
 //vImg.push_back( imread("./stitching_img/m10.jpg") );
 //vImg.push_back( imread("./stitching_img/m11.jpg") );
 //vImg.push_back( imread("./stitching_img/m12.jpg") );
 //vImg.push_back( imread("./stitching_img/m13.jpg") );
  
  
 
 
 int c = gpu::getCudaEnabledDeviceCount();
 printf("%d\n", c);
    

 Stitcher stitcher = Stitcher::createDefault(true); //true -> try to use GPU


 unsigned long AAtime=0, BBtime=0;
 AAtime = getTickCount();

 //stitcher.stitch(vImg, vvRect, rImg);
 stitcher.stitch(vImg, rImg);

 BBtime = getTickCount(); 
 printf("%.2lf sec \n",  (BBtime - AAtime)/getTickFrequency() );

 imshow("Stitching Result", rImg);
 
 waitKey(0); 

}  
////

----------------------------------
#Tip 2 : If you don't know how to set up OpenCV + Visual Studio, see this page http://feelmare.blogspot.kr/2013/08/visual-studio-2012-opencv-246-setting.html
#Tip 3 : If you want GPU + OpenCV, see this page
 http://feelmare.blogspot.kr/2013/12/gpicuda-opencv-setting-method-and.html.
#Tip 4 : If you want to check whether your TBB runs successfully or not, see this page
http://feelmare.blogspot.kr/2013/12/tbb-example-source-code-parallelinvoke.html

12/17/2013

Finding the largest subset of adjacent (subsequence) images (OpenCV SurfFeaturesFinder, BestOf2NearestMatcher, leaveBiggestComponent function example source code)

The source code flow is like this...

1.
Find features in each image using the SurfFeaturesFinder function.
The feature values are contained in the ImageFeatures structure.

2.
Match the features.
Matcher(features, pairwise_matches, matching_mask)
In the source code, features is a vector,
so the Matcher function computes the matching value for each pair of images.

3.
Leave the biggest component.
Using conf_threshold, the function keeps the largest set of correlated images.

Input
The input is 6 images.
4 images come from one sequence; the other 2 images come from another sequence.

Output
The source code prints the indices of the largest subset: the 4-image component.



////
#include <stdio.h>
#include <opencv2\opencv.hpp>
#include <opencv2\features2d\features2d.hpp>
#include <opencv2\nonfree\features2d.hpp>
#include <opencv2\stitching\detail\matchers.hpp>
#include <opencv2\stitching\stitcher.hpp>


#ifdef _DEBUG  
#pragma comment(lib, "opencv_core247d.lib")   
//#pragma comment(lib, "opencv_imgproc247d.lib")   //MAT processing  
//#pragma comment(lib, "opencv_objdetect247d.lib")   
//#pragma comment(lib, "opencv_gpu247d.lib")  
#pragma comment(lib, "opencv_features2d247d.lib")  
#pragma comment(lib, "opencv_highgui247d.lib")  
//#pragma comment(lib, "opencv_ml247d.lib")
#pragma comment(lib, "opencv_stitching247d.lib")
#pragma comment(lib, "opencv_nonfree247d.lib")

#else  
#pragma comment(lib, "opencv_core247.lib")  
//#pragma comment(lib, "opencv_imgproc247.lib")  
//#pragma comment(lib, "opencv_objdetect247.lib")  
//#pragma comment(lib, "opencv_gpu247.lib")  
#pragma comment(lib, "opencv_features2d247.lib")  
#pragma comment(lib, "opencv_highgui247.lib")  
//#pragma comment(lib, "opencv_ml247.lib")  
#pragma comment(lib, "opencv_stitching247.lib")
#pragma comment(lib, "opencv_nonfree247.lib")
#endif  

using namespace cv;  
using namespace std;


int main()
{
 vector< Mat > vImg;
 Mat rImg;

 vImg.push_back( imread("./m7.jpg") );
 vImg.push_back( imread("./B1.jpg") );
 vImg.push_back( imread("./m9.jpg") );
 vImg.push_back( imread("./m6.jpg") );
 vImg.push_back( imread("./B2.jpg") );
 vImg.push_back( imread("./m8.jpg") );
 

 //feature extract
 detail::SurfFeaturesFinder FeatureFinder;
 vector< detail::ImageFeatures> features;
 
 for(int i=0; i< vImg.size(); ++i)
 {  
  detail::ImageFeatures F;
  FeatureFinder(vImg[i], F);  
  features.push_back(F);
  features[i].img_idx = i;
  printf("Keypoint of [%d] - %d points \n", i, (int)features[i].keypoints.size() );
 }
 FeatureFinder.collectGarbage();

 //match
 vector<int> indices_;
 double conf_thresh_ = 1.0;
 Mat matching_mask;
 vector<detail::MatchesInfo> pairwise_matches;
 detail::BestOf2NearestMatcher Matcher;
 Matcher(features, pairwise_matches, matching_mask);
 Matcher.collectGarbage();

 printf("\nBiggest subset is ...\n");
 // Leave only images we are sure are from the same panorama 
 indices_ = detail::leaveBiggestComponent(features, pairwise_matches, (float)conf_thresh_);
 Matcher.collectGarbage();

 for (size_t i = 0; i < indices_.size(); ++i)
    {
  printf("%d \n", indices_[i] );
 }
 

}

////









12/16/2013

How to access the iOS provisioning portal site?

My question : How do I access the iOS provisioning portal site?
                       I can never find this menu on the iOS development site..

My answer : I have not paid the developer fee, so I cannot see the menu..


My screen

Paid member's screen

12/13/2013

(Arduino study) LED brightness control by switch connection

The source code is a combination of brightness changing (http://feelmare.blogspot.kr/2013/12/arduino-study-led-brightness-changing.html) and LED turn on/off (http://feelmare.blogspot.kr/2013/12/arduino-led-onoff-using-switch-aduino.html).


The LED brightness changes while the switch is held down.

////
const int LED=9;
const int BUTTON = 7;

int val = 0;

int old_val = 0;
int state = 0;

int brightness = 128;
unsigned long startTime = 0;

void setup(){
  pinMode(LED, OUTPUT);
  pinMode(BUTTON, INPUT);
}

void loop()
{
  val = digitalRead(BUTTON);
  
  if( (val == HIGH) && (old_val == LOW) ){
    state = 1-state;
    startTime = millis();
    delay(10);
  }
  
  
  if( (val == HIGH) && (old_val==HIGH) ){
    
    if(state == 1 && (millis() - startTime) > 500 ){
      
      brightness++;
      delay(10);
      
      if(brightness > 255){
        brightness=0;
      }
    }
  }
  
  
  old_val = val;
  if(state == 1)
  {
    analogWrite(LED, brightness);
  }
  else
  {
    analogWrite(LED, 0); //turn the LED off when the state is toggled off
  }
}
////





(Arduino Study) LED brightness changing using an analog (PWM) pin, (analogWrite function)

'analogWrite' output values range from 0 to 255.
In the source code, 'i' changes from 0 to 255 and then from 255 back to 0,
so the LED brightness fades up and down.


//////
const int LED = 9;
int i=0;

void setup(){
  pinMode(LED, OUTPUT);
}

void loop(){
  
  for(i=0; i< 255; i++){
    analogWrite(LED, i);
    delay(10);
  }
  
  for(i=255; i> 0; --i)
  {
    analogWrite(LED, i);
    delay(10);
  }
}

//////






How to use the mask parameter of the SURF feature detector (OpenCV)

With the mask parameter, we can extract features only at the desired positions (Rects).

This is the mask.

This is the output.

We can see that features are extracted only inside the white rect.
This is the example source code.
 
 
 
 

/////
#include <stdio.h>
#include <opencv2\opencv.hpp>
#include <opencv2\features2d\features2d.hpp>
#include <opencv2\nonfree\features2d.hpp>

#ifdef _DEBUG  
#pragma comment(lib, "opencv_core246d.lib")   
//#pragma comment(lib, "opencv_imgproc246d.lib")   //MAT processing  
//#pragma comment(lib, "opencv_objdetect246d.lib")   
//#pragma comment(lib, "opencv_gpu246d.lib")  
#pragma comment(lib, "opencv_features2d246d.lib")  
#pragma comment(lib, "opencv_highgui246d.lib")  
//#pragma comment(lib, "opencv_ml246d.lib")
//#pragma comment(lib, "opencv_stitching246d.lib")
#pragma comment(lib, "opencv_nonfree246d.lib")

#else  
#pragma comment(lib, "opencv_core246.lib")  
//#pragma comment(lib, "opencv_imgproc246.lib")  
//#pragma comment(lib, "opencv_objdetect246.lib")  
//#pragma comment(lib, "opencv_gpu246.lib")  
#pragma comment(lib, "opencv_features2d246.lib")  
#pragma comment(lib, "opencv_highgui246.lib")  
//#pragma comment(lib, "opencv_ml246.lib")  
//#pragma comment(lib, "opencv_stitching246.lib")
#pragma comment(lib, "opencv_nonfree246.lib")
#endif  

using namespace cv;  
using namespace std;


int main()
{
 
 //SURF feature detector with mask example source code
 Mat inImg,outImg;
 vector< cv::KeyPoint > src_keypoints;
 vector< float > src_descriptors;
 
 //image load
 inImg = imread("ship.png",0);

 //FeatureFinder 
 SurfFeatureDetector FeatureFinder(400);

 //make mask
 Mat mask = Mat::zeros(inImg.size(), CV_8U);  
 Mat roi(mask, Rect(400,400,400,400) );
 roi = Scalar(255, 255, 255);  

 /*
 //The mode to set multiple masks
 vector< Rect > mask_rect;
 mask_rect.push_back(Rect(0,0,200,200) );
 mask_rect.push_back(Rect(200,200,200,200) );
 mask_rect.push_back(Rect(400,400,200,200) );
 mask_rect.push_back(Rect(600,600,200,200) );

 Mat mask = Mat::zeros(inImg.size(), CV_8U);  
 for(int i=0; i< mask_rect.size(); ++i)
 {
  Mat roi(mask, mask_rect[i] ); 
  roi = Scalar(255, 255, 255);  
 }
 */

 //Feature Extraction
 FeatureFinder.detect( inImg, src_keypoints , mask);

 //Features Draw
 drawKeypoints( inImg, src_keypoints, outImg, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS );

 imshow("Show", outImg); 
 imshow("mask", mask); 
 imshow("roi", roi); 
 waitKey(0);

 //save to file
 imwrite("output.jpg", outImg);
 imwrite("mask.jpg", mask);
 imwrite("roi.jpg", roi);

}

/////


If you want to set more than one mask, see the commented-out section.
This source code builds on this page (http://feelmare.blogspot.kr/2013/12/surffeaturedetector-exmaple-source-code.html).

SurfFeatureDetector example source code (OpenCV)

This is 'SurfFeatureDetector' example source code.
On this page (http://feelmare.blogspot.kr/2013/12/surfgpu-example-source-code-feature.html), I introduced SURF_GPU.
In my case, the processing time difference is small:
GPU -> 0.99 sec, CPU -> 1.04 sec.


 



 

////
#include <stdio.h>
#include <opencv2\opencv.hpp>
#include <opencv2\features2d\features2d.hpp>
#include <opencv2\nonfree\features2d.hpp>

#ifdef _DEBUG  
#pragma comment(lib, "opencv_core246d.lib")   
//#pragma comment(lib, "opencv_imgproc246d.lib")   //MAT processing  
//#pragma comment(lib, "opencv_objdetect246d.lib")   
//#pragma comment(lib, "opencv_gpu246d.lib")  
#pragma comment(lib, "opencv_features2d246d.lib")  
#pragma comment(lib, "opencv_highgui246d.lib")  
//#pragma comment(lib, "opencv_ml246d.lib")
//#pragma comment(lib, "opencv_stitching246d.lib")
#pragma comment(lib, "opencv_nonfree246d.lib")

#else  
#pragma comment(lib, "opencv_core246.lib")  
//#pragma comment(lib, "opencv_imgproc246.lib")  
//#pragma comment(lib, "opencv_objdetect246.lib")  
//#pragma comment(lib, "opencv_gpu246.lib")  
#pragma comment(lib, "opencv_features2d246.lib")  
#pragma comment(lib, "opencv_highgui246.lib")  
//#pragma comment(lib, "opencv_ml246.lib")  
//#pragma comment(lib, "opencv_stitching246.lib")
#pragma comment(lib, "opencv_nonfree246.lib")
#endif  

using namespace cv;  
using namespace std;


int main()
{
/////CPU mode


//processing time measurement
unsigned long AAtime=0, BBtime=0;

//SurfFeatureDetector (CPU) example source code
Mat inImg,outImg;
vector<KeyPoint> src_keypoints;
vector<float> src_descriptors;


//image load
inImg = imread("ship.png",0);

//FeatureFinder 
SurfFeatureDetector FeatureFinder(400);

//processing time measure
AAtime = getTickCount();

//Feature Extraction
FeatureFinder.detect( inImg, src_keypoints );

//Processing time measurement
BBtime = getTickCount(); 

//Features Draw
drawKeypoints( inImg, src_keypoints, outImg, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS );

imshow("Show", outImg); 
printf("Processing time = %.2lf(sec) \n",  (BBtime - AAtime)/getTickFrequency() );
printf("Features %d\n", (int)src_keypoints.size() );

waitKey(0);

//save to file
imwrite("output.jpg", outImg);

}
////





SURF_GPU example source code (feature finder using GPU )

This is example source code for SURF_GPU.
The processing takes 0.99 sec on a 1787x1510 image.
My environment is:
Intel(R) Core(TM) i5-3570 CPU @ 3.40GHz (3.80 GHz)
NVIDIA GeForce GTX 650






Example source code



//////
#include <stdio.h>
#include <opencv2\opencv.hpp>
#include <opencv2\nonfree\gpu.hpp>


#ifdef _DEBUG  
#pragma comment(lib, "opencv_core246d.lib")   
//#pragma comment(lib, "opencv_imgproc246d.lib")   //MAT processing  
//#pragma comment(lib, "opencv_objdetect246d.lib")   
//#pragma comment(lib, "opencv_gpu246d.lib")  
#pragma comment(lib, "opencv_features2d246d.lib")  
#pragma comment(lib, "opencv_highgui246d.lib")  
//#pragma comment(lib, "opencv_ml246d.lib")
//#pragma comment(lib, "opencv_stitching246d.lib")
#pragma comment(lib, "opencv_nonfree246d.lib")

#else  
#pragma comment(lib, "opencv_core246.lib")  
//#pragma comment(lib, "opencv_imgproc246.lib")  
//#pragma comment(lib, "opencv_objdetect246.lib")  
//#pragma comment(lib, "opencv_gpu246.lib")  
#pragma comment(lib, "opencv_features2d246.lib")  
#pragma comment(lib, "opencv_highgui246.lib")  
//#pragma comment(lib, "opencv_ml246.lib")  
//#pragma comment(lib, "opencv_stitching246.lib")
#pragma comment(lib, "opencv_nonfree246.lib")
#endif  

using namespace cv;  
using namespace std;


int main()
{
 //processing time measurement
 unsigned long AAtime=0, BBtime=0;

 //SURF_GPU example source code
 Mat inImg;
 vector<KeyPoint> src_keypoints;
 vector<float> src_descriptors;

 gpu::GpuMat inImg_g;
 gpu::GpuMat src_keypoints_gpu, src_descriptors_gpu;

 //image load
 inImg = imread("ship.png",0);

 //FeatureFinder 
 gpu::SURF_GPU FeatureFinder_gpu(400);

 //processing time measure
 AAtime = getTickCount();
 inImg_g.upload(inImg);

 //Feature Extraction
 FeatureFinder_gpu(inImg_g, gpu::GpuMat(), src_keypoints_gpu, src_descriptors_gpu, false);

 //Processing time measurement
 BBtime = getTickCount(); 
 
 //descriptor down
 FeatureFinder_gpu.downloadKeypoints(src_keypoints_gpu, src_keypoints); 
 FeatureFinder_gpu.downloadDescriptors(src_descriptors_gpu, src_descriptors);
 
 //Features Draw
 //drawing keypoints
 drawKeypoints(inImg, src_keypoints, inImg, Scalar::all(-1), DrawMatchesFlags::DRAW_RICH_KEYPOINTS );
 
 imshow("Show", inImg); 

 printf("Processing time = %.2lf(sec) \n",  (BBtime - AAtime)/getTickFrequency() );
 printf("Features %d\n", (int)src_keypoints.size() );

 waitKey(0);

 //save to file
 imwrite("output.jpg", inImg);
}

//////

12/11/2013

Arduino, LED on/off using a switch, (Arduino study)

Today, I studied input signals.
So now my Arduino can turn an LED on/off with a switch.
In the picture, the black wire plays the role of the switch.






///
const int LED = 13;
const int BUTTON = 7;

int val = 0;
int old_val = 0;
int sw_OnOff=0;

void setup(){
  pinMode(LED, OUTPUT);
  pinMode(BUTTON, INPUT);
}


void loop(){
  val = digitalRead(BUTTON);
  
  if((old_val==LOW) && (val==HIGH)){
    sw_OnOff=1-sw_OnOff;
    delay(100);
  }
  
  old_val = val;
  
  
  digitalWrite(LED, sw_OnOff);
  
}
////


But I still don't know why it needs a resistor, and why the wire connects there??
But it is very fun~

12/10/2013

OpenCV, what is InputArray?

'InputArray' is commonly used as an OpenCV function parameter.
I think it was made to pass vector<>, Matx<>, Vec<> and Scalar easily.

This is the documentation on opencv.org.

InputArray and OutputArray

Many OpenCV functions process dense 2-dimensional or multi-dimensional numerical arrays. Usually, such functions take Mat as parameters, but in some cases it's more convenient to use std::vector<> (for a point set, for example) or Matx<> (for 3x3 homography matrix and such). To avoid many duplicates in the API, special "proxy" classes have been introduced. The base "proxy" class is InputArray. It is used for passing read-only arrays on a function input. The derived from InputArray class OutputArray is used to specify an output array for a function. Normally, you should not care of those intermediate types (and you should not declare variables of those types explicitly) - it will all just work automatically. You can assume that instead of InputArray/OutputArray you can always use Mat, std::vector<>, Matx<>, Vec<> or Scalar. When a function has an optional input or output array, and you do not have or do not want one, pass cv::noArray().


So I tested 'InputArray' by writing a small program.
See this source code.

/////
#include <cstdio>
#include <opencv2\opencv.hpp>

using namespace std;

int main()
{

 ///////////////////////////////////////////
 ///Test #1
 //Read
 std::vector< cv::Mat > inImgs;
 inImgs.push_back( cv::imread("S1.jpg") );
 inImgs.push_back( cv::imread("S2.jpg") );
 inImgs.push_back( cv::imread("S3.jpg") );

 //input
 cv::InputArray imgs = inImgs; 

 //property
 printf("Total = %d, Size=(%d,%d)\n", (int)imgs.total(), imgs.size().width, imgs.size().height );

 //copy;
 std::vector< cv::Mat > inFunction;
 imgs.getMatVector( inFunction ); 

 //Test inFunction
 cv::imshow("a",inFunction[0]);
  
 cv::waitKey(0);

}
/////
output
 
 
 

Visual Studio, how to keep the console window from closing

This tip is easy to forget.


Select the Console option under Sub System:

[Project] menu -> [Properties] ->
[Configuration Properties] ->
[Linker] ->
[System] ->
[SubSystem] -> Console (/SUBSYSTEM:CONSOLE)

Thank you.

avrdude stk500_recv() programmer is not responding error ( a simple solution in my case.. ), Arduino beginner

I met this error message when I uploaded source code to the board.


In my case, I had to set the correct board type in the Tools -> Board menu.
On the Mac, you can check your board type in the System Information application.


Change to your correct board type and try again.

In my case, the communication port looks like this picture.




12/05/2013

OpenCV CUDA example source code

////
#include <stdio.h>
#include <opencv2\opencv.hpp>
#include <opencv2\gpu\gpu.hpp>
 
#ifdef _DEBUG  
#pragma comment(lib, "opencv_core246d.lib")   
#pragma comment(lib, "opencv_imgproc246d.lib")   //MAT processing  
//#pragma comment(lib, "opencv_objdetect246d.lib")   
#pragma comment(lib, "opencv_gpu246d.lib")  
//#pragma comment(lib, "opencv_features2d246d.lib")  
#pragma comment(lib, "opencv_highgui246d.lib")   
//#pragma comment(lib, "opencv_ml246d.lib")  
#else  
#pragma comment(lib, "opencv_core246.lib")  
#pragma comment(lib, "opencv_imgproc246.lib")  
//#pragma comment(lib, "opencv_objdetect246.lib")  
#pragma comment(lib, "opencv_gpu246.lib")  
//#pragma comment(lib, "opencv_features2d246.lib")  
#pragma comment(lib, "opencv_highgui246.lib")  
//#pragma comment(lib, "opencv_ml246.lib")  
#endif  

using namespace cv;  


int main()
{  
 unsigned long AAtime=0, BBtime=0;
 
 Mat img;
 Mat outimg, outimg2;
 img = imread("Tulips.jpg",0); 
 gpu::GpuMat d_src, d_dst;
 
 d_src.upload(img);
 AAtime = getTickCount(); 
 gpu::Canny(d_src, d_dst, 35, 200, 3); 
 BBtime = getTickCount();

 printf("cuda %.5lf \n",  (BBtime - AAtime)/getTickFrequency() );
 d_dst.download(outimg);


 AAtime = getTickCount();
 Canny(img, outimg2, 35, 200, 3);
 BBtime = getTickCount();
 printf("cpu %.5lf \n",  (BBtime - AAtime)/getTickFrequency() );


 

 namedWindow("t");  
 imshow("t",outimg2); 
 namedWindow("t2");  
 imshow("t2",outimg);  
 
 waitKey(0);
 

} 


////

The cuda processing takes 0.00578 sec,
and the non-cuda processing takes 0.01707 sec.

In this case, cuda is about 3 times faster than non-cuda, and on heavier workloads cuda should show an even bigger speed-up.