3/28/2016

error C4996: 'inet_addr': Use inet_pton() or InetPton() instead or define _WINSOCK_DEPRECATED_NO_WARNINGS to disable deprecated API warnings

If you encounter this error message:
"error C4996: 'inet_addr': Use inet_pton() or InetPton() instead or define _WINSOCK_DEPRECATED_NO_WARNINGS to disable deprecated API warnings"


add this line at the top of your source file, before any Winsock headers are included:
#define _WINSOCK_DEPRECATED_NO_WARNINGS


This suppresses the deprecation warning.


3/23/2016

OpenCV RTSP receiving test

Receiving and displaying an RTSP video stream is easy with OpenCV's VideoCapture class.
http://study.marearts.com/2015/05/opencv300-rc1-videocapture-example.html
http://study.marearts.com/2013/09/opencv-video-writer-example-source-code.html

But the video sometimes looks broken, or the stream is shifted a bit.
So I used the VLC library for stable receiving and display.
http://study.marearts.com/2015/09/opencv-rtsp-connection-using-vlc-library.html
That method also works well.
But it is cumbersome because it requires the additional VLC library files.

So again, let's solve this problem in OpenCV alone.
The method is to manage the RTSP receiving part and the drawing part separately.

Here is the revised code.

1. Normal method (not stable)
...
#include "opencv2/opencv.hpp"
#include <string>

using namespace std;
using namespace cv;

int main()
{
 string streamUri = "rtsp://192.168.0.21:554/onvif/profile2/media.smp";
 VideoCapture stream(streamUri);
 if (!stream.isOpened()){
  cout << "stream open failed" << endl;
  return 0;
 }

 Mat image;
 while (1){
  stream >> image;
  if (image.empty())   // a grab can fail on a flaky stream
   continue;
  imshow("test", image);
  if (waitKey(30) >= 0)
   break;
 }

 return 0;
}
...

2. Receiving part threaded (stable)
...
#include "opencv2/opencv.hpp"
#include <string>
#include <list>
#include <thread>
#include <mutex>
#include <atomic>
#include <functional>

using namespace std;
using namespace cv;


// RTSP receive buffer list (shared between threads, guarded by frameMutex)
list<Mat> frames;
mutex frameMutex;
cv::VideoCapture stream2;
atomic<bool> isRun;

// thread function: grab frames and stack them into the buffer
void StreamThread(atomic<bool> &isRun)
{
 cv::Mat image;
 while (isRun){
  stream2 >> image;
  if (image.empty())
   continue;
  lock_guard<mutex> lock(frameMutex);
  frames.push_back(image.clone());
  printf("%d mats stacked\n", (int)frames.size());
 }
}


int main()
{
 //rtsp address
 string streamUri = "rtsp://192.168.0.21:554/onvif/profile2/media.smp";
 stream2.open(streamUri);

 //open check
 if (!stream2.isOpened()){
  cerr << "Stream open failed : " << streamUri << endl;
  return EXIT_FAILURE;
 }

 isRun = true;
 // run the receiving thread (std::ref is needed to pass isRun by reference)
 thread receiver(StreamThread, std::ref(isRun));


 //draw Mats only in the main thread
 while (isRun){

  Mat image;
  {
   lock_guard<mutex> lock(frameMutex);
   if (frames.size() > 1){
    image = frames.front();
    frames.pop_front();
   }
  }

  if (!image.empty()){
   imshow("test", image);
   if (waitKey(30) >= 0)
    break;
  }
 }

 isRun = false;
 receiver.join();

 return 0;
}
...


3. Drawing part threaded

...
#include "opencv2/opencv.hpp"
#include <string>
#include <list>
#include <thread>
#include <mutex>
#include <atomic>
#include <functional>

using namespace std;
using namespace cv;


// RTSP receive buffer list (shared between threads, guarded by frameMutex)
list<Mat> frames;
mutex frameMutex;
atomic<bool> isRun;

// thread function for video display
// (note: some highgui backends require GUI calls on the main thread)
void drawFrame(atomic<bool> &isRun)
{
 while (isRun){
  Mat image;
  {
   lock_guard<mutex> lock(frameMutex);
   if (frames.size() > 1){
    image = frames.front();
    frames.pop_front();
   }
  }
  if (!image.empty()){
   imshow("test", image);
   waitKey(1);
  }
 }
}


int main()
{
 //rtsp address
 string streamUri = "rtsp://192.168.0.21:554/onvif/profile2/media.smp";
 VideoCapture stream(streamUri);

 //open check
 if (!stream.isOpened()){
  cerr << "Stream open failed : " << streamUri << endl;
  return EXIT_FAILURE;
 }

 isRun = true;
 // run the drawing thread (std::ref is needed to pass isRun by reference)
 thread drawer(drawFrame, std::ref(isRun));

 cv::Mat image;
 //grab Mats only in the main thread
 while (isRun){

  stream >> image;
  if (image.empty())
   break;
  lock_guard<mutex> lock(frameMutex);
  frames.push_back(image.clone());
  printf("%d mats stacked\n", (int)frames.size());
 }

 isRun = false;
 drawer.join();

 return 0;
}
...




Github
https://github.com/MareArts/OpenCV-RTSP-receiving-Test-in-thread-and-while-processing

Thank you.

3/10/2016

GitHub connection with Visual Studio 2012 (simple tip)


Visual Studio 2012 Update (at least Update 2; Update 4 is recommended):
http://www.microsoft.com/en-us/downlo...

Git for Windows:
http://git-scm.com/downloads

Visual Studio Tools for Git:
(When installing, select the option to run Git from the Windows Command Prompt.)
http://visualstudiogallery.msdn.micro...

git clone [URL of GitHub repo] "[local project directory]"





Example of how to use the OpenCV particle filter (OpenCV ver 2.4.9)


This is an example of the OpenCV particle filter (the legacy ConDensation API).




#include <iostream>
#include <vector>

#include <opencv2/opencv.hpp>
#include <opencv2/legacy/legacy.hpp>

#ifdef _DEBUG           
#pragma comment(lib, "opencv_core249d.lib")   
#pragma comment(lib, "opencv_imgproc249d.lib")   //MAT processing   
#pragma comment(lib, "opencv_objdetect249d.lib") //HOGDescriptor   
//#pragma comment(lib, "opencv_gpu249d.lib")   
//#pragma comment(lib, "opencv_features2d249d.lib")   
#pragma comment(lib, "opencv_highgui249d.lib")   
#pragma comment(lib, "opencv_ml249d.lib")   
//#pragma comment(lib, "opencv_stitching249d.lib");   
//#pragma comment(lib, "opencv_nonfree249d.lib");   
#pragma comment(lib, "opencv_video249d.lib")
#pragma comment(lib, "opencv_legacy249d.lib")
#else   
#pragma comment(lib, "opencv_core249.lib")   
#pragma comment(lib, "opencv_imgproc249.lib")   
#pragma comment(lib, "opencv_objdetect249.lib")   
//#pragma comment(lib, "opencv_gpu249.lib")   
//#pragma comment(lib, "opencv_features2d249.lib")   
#pragma comment(lib, "opencv_highgui249.lib")   
#pragma comment(lib, "opencv_ml249.lib")   
//#pragma comment(lib, "opencv_stitching249.lib");   
//#pragma comment(lib, "opencv_nonfree249.lib");   
#pragma comment(lib, "opencv_video249.lib")
#pragma comment(lib, "opencv_legacy249.lib")
#endif   




using namespace std;


#define drawCross( center, color, d ) line( img, cv::Point( center.x - d, center.y - d ), cv::Point( center.x + d, center.y + d ), color, 2, CV_AA, 0); line( img, cv::Point( center.x + d, center.y - d ), cv::Point( center.x - d, center.y + d ), color, 2, CV_AA, 0 )


struct mouse_info_struct { int x,y; };
struct mouse_info_struct mouse_info = {-1,-1}, last_mouse;

vector<cv::Point> mouseV, particleV;
int counter = -1;

// Define this to proceed one click at a time.
//#define CLICK 1
#define PLOT_PARTICLES 1

void on_mouse(int event, int x, int y, int flags, void* param) {
#ifdef CLICK
 if (event == CV_EVENT_LBUTTONUP) 
#endif
 {
  last_mouse = mouse_info;
  mouse_info.x = x;
  mouse_info.y = y;
  counter = 0;
 }
}

void printMat(CvMat* mat)
{
    
    for(int i=0; i< mat->rows; i++)
    {
        if(i==0)
        {
            for(int j=0; j< mat->cols; j++)  printf("%10d",j+1);
        }

        printf("\n%4d: ",i+1);
        for(int j=0; j< mat->cols; j++)
        {

            printf("%10.2f",cvGet2D(mat,i,j).val[0]);
        }
    }
    
}


int main () {
 cv::Mat img(650, 650, CV_8UC3);
 char code = (char)-1;

 cv::namedWindow("mouse particle");
 cv::setMouseCallback("mouse particle", on_mouse, 0);

 cv::Mat_<float> measurement(2,1); 
 measurement.setTo(cv::Scalar(0));

 int DP = 2;
 int MP = 2;
 int nParticles = 250;
 float xRange = 650.0;
 float yRange = 650.0;

 float minRange[] = { 0, 0, -xRange, -yRange };
 float maxRange[] = { xRange, yRange, xRange, yRange };
 CvMat LB, UB;
 cvInitMatHeader(&LB, DP, 1, CV_32FC1, minRange); 
 cvInitMatHeader(&UB, DP, 1, CV_32FC1, maxRange);

 CvConDensation* condens = cvCreateConDensation(DP, MP, nParticles);

 
 cvConDensInitSampleSet(condens, &LB, &UB); //Lower Bound, Upper Bound

 // The OpenCV documentation doesn't tell you to initialize this
 // matrix, but you have to do it.  For this 2D example, we're just
 // using a 2x2 identity matrix.  I'm sure there's a slicker way to
 // do this, left as an exercise for the reader.
 condens->DynamMatr[0] = 1.0;
 condens->DynamMatr[1] = 0.0;
 condens->DynamMatr[2] = 0.0;
 condens->DynamMatr[3] = 1.0;

 for (int i = 0; i < condens->SamplesNum; i++) {
  cv::Point partPt(condens->flSamples[i][0], condens->flSamples[i][1]);
  drawCross(partPt , cv::Scalar(255,0,(int)(i * 255.0/(float)condens->SamplesNum)), 2);
 }



 for(;;) {

  if (mouse_info.x < 0 || mouse_info.y < 0) {
   imshow("mouse particle", img);
   cv::waitKey(30);
   continue;
  }

  mouseV.clear();
  particleV.clear();

  for(;;) {
   code = (char)cv::waitKey(100);

   if( code > 0 )
    break;

#ifdef CLICK
   if (counter++ > 0) {
    continue;
   } 
#endif

   measurement(0) = mouse_info.x;
   measurement(1) = mouse_info.y;

   cv::Point measPt(measurement(0),measurement(1));
   mouseV.push_back(measPt);

   // Clear screen
   img = cv::Scalar::all(100);

   for (int i = 0; i < condens->SamplesNum; i++) {

    float diffX =  (measurement(0) - condens->flSamples[i][0])/xRange;
    float diffY =  (measurement(1) - condens->flSamples[i][1])/yRange;

    condens->flConfidence[i] = exp(-100.0f * ((diffX * diffX + diffY * diffY)));

    // plot particles
#ifdef PLOT_PARTICLES
    cv::Point partPt(condens->flSamples[i][0], condens->flSamples[i][1]);
    drawCross(partPt , 
     cv::Scalar(255,0,(int)(i * 255.0/(float)condens->SamplesNum)), 
     2);
#endif

   }

   cvConDensUpdateByTime(condens);

   cv::Point statePt(condens->State[0], condens->State[1]);
   particleV.push_back(statePt);

   for (int i = 0; i < mouseV.size() - 1; i++) {
    line(img, mouseV[i], mouseV[i+1], cv::Scalar(255,255,0), 1);
   }
   for (int i = 0; i < particleV.size() - 1; i++) {
    line(img, particleV[i], particleV[i+1], cv::Scalar(0,255,0), 1);
   }

   // plot points
   drawCross( statePt, cv::Scalar(255,0,0), 3 );
   drawCross( measPt, cv::Scalar(0,0,255), 3 );

   imshow( "mouse particle", img );
  }

  if( code == 27 || code == 'q' || code == 'Q' )
   break;
 }

 return 0;
}


3/03/2016

Deep learning study - TensorFlow install on Windows #7

TensorFlow runs on Linux and Mac.
The Windows environment is not yet supported.

But if we use Docker, software that creates a virtual machine similar to VirtualBox, we can run TensorFlow in a Windows environment.
In other words, Docker creates a virtual Linux environment.
So how do we use, on Windows, a TensorFlow model trained on Linux or Mac?

I have not tried it yet.
TensorFlow has a C++ API.
It does not include the training part; the C++ version includes only the evaluation part.
But if I succeed in building the C++ API of TensorFlow in a Windows environment, we can use the training results on Windows after training on Linux.

I will try this plan after testing basic TensorFlow.

Now, let's use TensorFlow with Docker on Windows.

First,
for the installation tutorial on Linux and Mac, refer to the official site:
https://www.tensorflow.org/versions/r0.7/get_started/os_setup.html

---------------
#install docker
0. Docker install guide
First page to install Docker for various environments:
https://docs.docker.com/engine/installation/

1. Download Docker Toolbox
Download page for Windows and Mac users:
https://www.docker.com/products/docker-toolbox


2. Install
Document for installing Docker on Windows:
https://docs.docker.com/engine/installation/windows/




3. Run Docker
Execute the Docker Quickstart Terminal.
After setup (the first run takes about 1~2 minutes),
you can see the whale illustration.

To verify your setup, type this command:
$ docker run hello-world

Then you will see the hello-world message.

---------------
#install TensorFlow
Type this command:
$ docker-machine ip default
Then you can check the notebook server IP.


Type this command:
$ docker run -it b.gcr.io/tensorflow/tensorflow
or
$ docker run -p 8888:8888 -it b.gcr.io/tensorflow/tensorflow

The first time you enter the command, TensorFlow is installed,
and the notebook server runs.
But in my case, the first command did not run the notebook server.
If you have the same problem, type the second command (it publishes port 8888).

---------------
#run notebook
Type this in your browser:
http://192.168.99.100:8888
Then you can see the notebook web page.



---------------
#test TensorFlow

*set up the container
docker run -p 8888:8888 -d --name tfu b.gcr.io/tensorflow-udacity/assignments
*start your container
docker start tfu
*stop your container
docker stop tfu
Details are shown here:
https://discussions.udacity.com/t/jupyter-notebooks-docker-windows-progress-not-being-saved/46116/4


#build a local Docker container
Move to your directory:
"....\tensorflow-master\tensorflow\examples\udacity"
(tensorflow-master is downloaded from GitHub.)

Then write this command (the trailing '.' must be typed):
docker build -t imageName .
About this issue, refer to:
https://discussions.udacity.com/t/difficulty-starting-docker-for-assignments-in-deep-learning-course/45201


----------------
*useful Docker commands
//show images
docker images

//show running containers
docker ps

//remove an image
docker rmi imageID

//remove a container
docker rm containerID





3/02/2016

Deep learning study - minimizing cross entropy #6


The deep learning flow is like this.
Expressed as a formula, it is D(S(Wx+b), L).
So it is multinomial logistic classification.


A smaller value of the cross entropy function D means we are closer to the correct classification.



So the smaller the sum of the D values over all inputs and labels, the better the W, b we have found.

Then how do we get the optimal values of W and b?
That is an optimization problem.

These problems are usually solved with the gradient descent method, via the derivative.



Sorry, my english is not good. ^^



3/01/2016

Deep learning study - cross entropy #5


The whole process of deep learning is as described above.
The final step, cross entropy, compares the classification result with the label values.



S and L are vectors. L is the final label, which is determined by a person.
The D function is a distance value.
A smaller distance value means that the result is more correct.
Thus, as the W, b parameters are adjusted, the D value should get smaller.

This is a MATLAB test.

case 1.
S = [0.7 0.2 0.1];
L = [1.0 0 0];
- sum( L.*log(S) )

=>  0.3567

case 2.
S = [0.7 0.2 0.1];
L = [0 0 1.0];
- sum( L.*log(S) )

=>  2.3026



In case 1, since the softmax result is similar to the label, the distance is smaller than in case 2.