
6/06/2021

Real-time stitching of multiple videos into one screen

* Introduction

- The solution produces a panorama image from multiple input images. The panorama is generated by a real-time stitching algorithm.

- Each camera has a limited field of view, but by merging the views into a panorama the solution can monitor large areas.

- The performance is excellent thanks to the following technical components:
 . Real-time image processing using the GPU
 . Accurate calculation of R, T, K (rotation, translation, camera intrinsics) between each camera with nonlinear optimization
 . Color calibration using exposure blending

- The solution can be applied efficiently and easily in military areas, tourist attractions, intersections, and ports.



* Real-time N to 1 stitching algorithm

- The existing stitching algorithm is modified to separate offline and online processing, which makes real-time operation more efficient (a minimal two-image sketch follows the pipeline list below).

- The offline part is calculated the first time, or again whenever the matching becomes inaccurate.

- The online part is the routine that creates the panoramic image by warping and blending each frame with the values computed offline.



The input images do not need to be ordered:

- Extract features, calculate the homography matrix between each image pair using RANSAC, and set the image positions from the matching rate.
- Obtain correct R, T using bundle adjustment

- Search the overlap region and determine the blending coefficients with respect to the non-overlapping region.

- With the obtained R, T, K, warp and blend the images to finally complete the panorama.
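Below is a rough two-image sketch of this offline/online split, assuming OpenCV 2.4.x (ORB features, brute-force matching, RANSAC homography). The feature type, thresholds, and the simple overwrite blending are illustrative assumptions, not the actual N-to-1 implementation.

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

//Offline: run once (or again when matching becomes inaccurate) to estimate the homography
Mat computeHomographyOffline(const Mat& left, const Mat& right)
{
 Mat grayL, grayR;
 cvtColor(left, grayL, CV_BGR2GRAY);
 cvtColor(right, grayR, CV_BGR2GRAY);

 //feature extraction (ORB here; the real solution may use another detector)
 ORB orb(1000);
 std::vector<KeyPoint> kpL, kpR;
 Mat descL, descR;
 orb(grayL, Mat(), kpL, descL);
 orb(grayR, Mat(), kpR, descR);

 //matching with cross-check
 BFMatcher matcher(NORM_HAMMING, true);
 std::vector<DMatch> matches;
 matcher.match(descL, descR, matches);

 std::vector<Point2f> ptsL, ptsR;
 for (size_t i = 0; i < matches.size(); ++i)
 {
  ptsL.push_back(kpL[matches[i].queryIdx].pt);
  ptsR.push_back(kpR[matches[i].trainIdx].pt);
 }

 //RANSAC rejects wrong matches while estimating the homography
 return findHomography(ptsR, ptsL, CV_RANSAC, 3.0);
}

//Online: per frame, warp the right image into the left image's coordinates and blend
Mat warpAndBlendOnline(const Mat& left, const Mat& right, const Mat& H)
{
 Mat panorama;
 warpPerspective(right, panorama, H, Size(left.cols * 2, left.rows));
 //simple overwrite blending for the sketch; the real solution uses exposure blending
 left.copyTo(panorama(Rect(0, 0, left.cols, left.rows)));
 return panorama;
}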


* Experiment


- Stitching 4 videos in real time runs at about 10~20 fps (Intel® Core™ i5-3570 CPU 3.40GHz, NVIDIA GeForce GTX 650)



See the result on YouTube.



** 2018.06 update **
I have decided to sell the source code ^^
If you are interested, you can buy the source code here.


** 2021.06 update ** 
Real-time stitching SDK: 

1/26/2015

Real-time yard trailer identification by detection of vehicle ID numbers


Project period: 2013.09 ~ 2013.11


*Introduction
• Y/T (Yard Trailer) number identification solution using image processing.
• The camera-based solution is easier to install and maintain than RFID, and it is less constrained by distance.
• Machine learning methods - SVM (Support Vector Machine) and MLP (Multi-Layer Perceptron) - are used to recognize the ID number.
• High-speed image processing through GPU parallel programming.


*Real-time pre-processing for feature extraction
• The preprocessing steps for ID number extraction (a rough sketch follows below):
-In the first step, we apply filters, morphological operations, contour algorithms, and validation rules to retrieve the parts of the image that could contain the target region.
-In particular, we detect the ventilating opening instead of the ID number itself, because its shape varies less than the 3 characters of the ID number.
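As a rough illustration only (not the project's exact code), this candidate-region search could look like the following OpenCV 2.4.x sketch; the thresholds, kernel size, and validation rules are assumed values.

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

//returns bounding boxes of regions that could contain the ventilating opening
std::vector<Rect> findCandidateRegions(const Mat& frameGray)
{
 Mat bin, morphed;

 //filtering + binarization to highlight dark, structured regions
 GaussianBlur(frameGray, bin, Size(5, 5), 0);
 adaptiveThreshold(bin, bin, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY_INV, 21, 10);

 //morphological closing to merge nearby responses into blobs
 Mat kernel = getStructuringElement(MORPH_RECT, Size(9, 3));
 morphologyEx(bin, morphed, MORPH_CLOSE, kernel);

 //contours + simple geometric validation (size and aspect ratio)
 std::vector<std::vector<Point> > contours;
 findContours(morphed, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

 std::vector<Rect> candidates;
 for (size_t i = 0; i < contours.size(); ++i)
 {
  Rect r = boundingRect(contours[i]);
  float aspect = (float)r.width / r.height;
  if (r.area() > 400 && aspect > 1.5f && aspect < 6.0f) //validation rules (assumed)
   candidates.push_back(r);
 }
 return candidates;
}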



*Vehicle Identification
• HOG (Histogram of Oriented Gradients) feature extraction and SVM machine learning are used to detect the ventilating opening (a minimal detection sketch follows).
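A minimal sketch of this step, assuming OpenCV 2.4.x, a 64x64 detection window, and a pre-trained SVM stored in a hypothetical file opening_svm.xml; the candidate patch is assumed to come from the preprocessing stage.

#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
 //HOG over a fixed-size window: 64x64 window, 16x16 blocks, 8x8 stride and cells, 9 bins
 HOGDescriptor hog(Size(64, 64), Size(16, 16), Size(8, 8), Size(8, 8), 9);

 //SVM trained offline on labeled opening / background patches (file name assumed)
 CvSVM svm;
 svm.load("opening_svm.xml");

 //classify one candidate window produced by the preprocessing stage
 Mat candidate = imread("candidate.jpg", 0); //hypothetical candidate patch, grayscale
 resize(candidate, candidate, Size(64, 64));

 std::vector<float> desc;
 hog.compute(candidate, desc);

 Mat feature = Mat(desc).clone().reshape(1, 1); //1 x N feature row
 float response = svm.predict(feature); //+1: opening, -1: background (assumed labels)
 printf("ventilating opening detected: %s\n", response > 0 ? "yes" : "no");
 return 0;
}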


• Features are extracted from each segmented character to train and classify with the MLP algorithm.
• The feature consists of the horizontal and vertical histogram values of a 5x5 low-resolution image (see the sketch below).
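As a rough sketch of this feature (not the project's exact code, and assuming OpenCV 2.4.x), the 10-dimensional vector of 5 row sums and 5 column sums can be computed like this and then fed to the MLP (CvANN_MLP) for training and classification:

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

//build the feature vector (5 row sums + 5 column sums) from one segmented character
std::vector<float> characterFeature(const Mat& characterGray)
{
 Mat small, smallF;
 resize(characterGray, small, Size(5, 5));   //5x5 low-resolution image
 small.convertTo(smallF, CV_32F, 1.0 / 255); //normalize pixel values to [0,1]

 std::vector<float> feature;

 //horizontal histogram: sum of each row
 for (int y = 0; y < 5; ++y)
  feature.push_back((float)sum(smallF.row(y))[0]);

 //vertical histogram: sum of each column
 for (int x = 0; x < 5; ++x)
  feature.push_back((float)sum(smallF.col(x))[0]);

 return feature;
}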


*Experiment
• Recognition rate over 95%
• Detection speed about 0.05 sec/frame (image size: 1280x720, Intel® Core™ i5-3570 CPU 3.40GHz, NVIDIA GeForce GTX 650)
• Trailer entry speed about 20~30 km/h


#include <stdio.h>

#include <opencv2/opencv.hpp>
#include "ShinPortOCR.h"

using namespace cv;

int main()
{
 ShinPortOCR cShinPortOCR;

 printf("How many images do you want to test? (ex:10 -> ./data/1.jpg, ./data/2.jpg ... ./data/10.jpg)\n");
 int num;
 scanf_s("%d", &num);

 char str[100];
 for (int i = 0; i < num; ++i)
 {
  printf("%d/%d\n", i, num);

  sprintf_s(str, "./data/%d.jpg", i + 1);
  Mat inImg = imread(str, 1); //load as a color image
  Mat OutImg;

  //third argument: 1 prints debug output, 0 prints nothing; -111 means recognition failed
  if (cShinPortOCR.GoGoXing(inImg, OutImg, 1) == -111)
  {
   sprintf_s(str, ".\\Log\\fail\\%d.jpg", i + 1);
   imwrite(str, inImg);
  }
  else
  {
   sprintf_s(str, ".\\Log\\success\\%d.jpg", i + 1);
   imwrite(str, inImg);
  }

  //save and display the processing result image
  sprintf_s(str, ".\\Log\\processing\\%d.jpg", i + 1);
  imwrite(str, OutImg);

  imshow("result", OutImg);
  waitKey(10);
 }

 return 0;
}



The source code is here:
https://github.com/MareArts/Container-Yard-Trailer-ID-number-recognition

You can download the OpenCV DLL/lib/header files (OpenCV 2.4.9, 64-bit, CUDA 6.0) here:
https://www.amazon.com/clouddrive/share/7bPR5HgbCbNZJHwG0ldq1gwHtydLXRxtQVYc5JYPlSF?ref_=cd_ph_share_link_copy


How to check whether the camera is moving or stopped using dense optical flow (OpenCV, GPU version)

This post introduces how to check whether the camera (video) is moving or not.


This method can be applied in various fields.
It can also indicate which scenes are important and which are not.

To solve this problem, I used dense optical flow.
I introduce dense optical flow on YouTube:
http://www.youtube.com/watch?v=yAz1qrN6T_o
http://www.youtube.com/watch?v=iRMqH6y6JKU

and on my blog
http://feelmare.blogspot.kr/search/label/dense%20optical%20flow

In the example source code,
the algorithm checks what percentage of the image area is moving.
It also checks the movement distance of each pixel; pixels that move less than a threshold are not counted in the movement percentage.

Note that the example source code uses the GPU version.
#include <stdio.h>

#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <opencv2/nonfree/features2d.hpp>



#ifdef _DEBUG        
#pragma comment(lib, "opencv_core249d.lib")
#pragma comment(lib, "opencv_imgproc249d.lib")   //MAT processing
#pragma comment(lib, "opencv_objdetect249d.lib") //HOGDescriptor
#pragma comment(lib, "opencv_gpu249d.lib")
#pragma comment(lib, "opencv_features2d249d.lib")
#pragma comment(lib, "opencv_highgui249d.lib")
#else
#pragma comment(lib, "opencv_core249.lib")
#pragma comment(lib, "opencv_imgproc249.lib")
#pragma comment(lib, "opencv_objdetect249.lib")
#pragma comment(lib, "opencv_gpu249.lib")
#pragma comment(lib, "opencv_features2d249.lib")
#pragma comment(lib, "opencv_highgui249.lib")
#endif 

using namespace std;
using namespace cv;

#define WIDTH_DENSE (80)
#define HEIGHT_DENSE (60)

#define DENSE_DRAW 0 //dense optical flow arrow drawing or not
#define GLOBAL_MOTION_TH1 1
#define GLOBAL_MOTION_TH2 70


float drawOptFlowMap_gpu (const Mat& flow_x, const Mat& flow_y, Mat& cflowmap, int step, float scaleX, float scaleY, int drawOnOff);


int main()
{
 //stream /////////////////////////////////////////////////
 VideoCapture stream1("C:\\videoSample\\medical\\HUV-03-14.wmv"); 

 //variables /////////////////////////////////////////////
 Mat O_Img; //Mat
 gpu::GpuMat O_Img_gpu; //GPU
 gpu::GpuMat R_Img_gpu_dense; //gpu dense resize
 gpu::GpuMat R_Img_gpu_dense_gray_pre; //gpu dense resize gray
 gpu::GpuMat R_Img_gpu_dense_gray; //gpu dense resize gray
 gpu::GpuMat flow_x_gpu, flow_y_gpu;
 Mat flow_x, flow_y;

 //algorithm *************************************
 //dense optical flow
 gpu::FarnebackOpticalFlow fbOF;
 

 //running once //////////////////////////////////////////
 if(!(stream1.read(O_Img))) //get one frame from video
 {
  printf("Open Fail !!\n");
  return 0; 
 }

  //for scale calculation (use float division to avoid integer truncation)
 float scaleX, scaleY;
 scaleX = (float)O_Img.cols / WIDTH_DENSE;
 scaleY = (float)O_Img.rows / HEIGHT_DENSE;

 O_Img_gpu.upload(O_Img); 
 gpu::resize(O_Img_gpu, R_Img_gpu_dense, Size(WIDTH_DENSE, HEIGHT_DENSE));
 gpu::cvtColor(R_Img_gpu_dense, R_Img_gpu_dense_gray_pre, CV_BGR2GRAY);


 //unconditional loop   ///////////////////////////////////
 while (true) {
  //reading
  if( stream1.read(O_Img) == 0) //get one frame from video
   break;

  // ---------------------------------------------------
  //upload cpu Mat to gpu Mat
  O_Img_gpu.upload(O_Img); 
  //resize
  gpu::resize(O_Img_gpu, R_Img_gpu_dense, Size(WIDTH_DENSE, HEIGHT_DENSE));
  //color to gray
  gpu::cvtColor(R_Img_gpu_dense, R_Img_gpu_dense_gray, CV_BGR2GRAY);
  
  //calculate dense optical flow using GPU version
  fbOF(R_Img_gpu_dense_gray_pre, R_Img_gpu_dense_gray, flow_x_gpu, flow_y_gpu);
  flow_x_gpu.download( flow_x );
  flow_y_gpu.download( flow_y );


  //calculate motion rate in whole image
  float motionRate = drawOptFlowMap_gpu(flow_x, flow_y, O_Img, 1, scaleX, scaleY, DENSE_DRAW);
  //update pre image
  R_Img_gpu_dense_gray_pre = R_Img_gpu_dense_gray.clone();



  //display a "moving" or "stop" message on the image
  if(motionRate > GLOBAL_MOTION_TH2 ) //if more than 70% of the pixels are moving, consider the camera to be moving
  {
   char TestStr[100] = "Moving!!";
   putText(O_Img, TestStr, Point(30,60), CV_FONT_NORMAL, 2, Scalar(0,0,255), 3, 2);
  }else{
   char TestStr[100] = "Stop!!";
   putText(O_Img, TestStr, Point(30,60), CV_FONT_NORMAL, 2, Scalar(255,0,0), 3, 2);
  }


  // show image ----------------------------------------
  imshow("Origin", O_Img);   

  // wait a moment; press a key to stop
  if( cv::waitKey(100) > 30)
   break;
 }
}



//returns the percentage of pixels whose optical-flow magnitude exceeds GLOBAL_MOTION_TH1;
//optionally draws the flow vectors onto cflowmap when drawOnOff is set
float drawOptFlowMap_gpu (const Mat& flow_x, const Mat& flow_y, Mat& cflowmap, int step, float scaleX, float scaleY, int drawOnOff)
{
 double count=0;

 float countOverTh1 = 0;
 int sx,sy;
 for(int y = 0; y < HEIGHT_DENSE; y += step)
 {
  for(int x = 0; x < WIDTH_DENSE; x += step)
  {
   
   if(drawOnOff)
   {
    Point2f fxy;    
    fxy.x = cvRound( flow_x.at< float >(y, x)*scaleX + x*scaleX );   
    fxy.y = cvRound( flow_y.at< float >(y, x)*scaleY + y*scaleY );   
    line(cflowmap, Point(x*scaleX,y*scaleY), Point(fxy.x, fxy.y), CV_RGB(0, 255, 0));   
    circle(cflowmap, Point(fxy.x, fxy.y), 1, CV_RGB(0, 255, 0), -1);   
   }

   float xx = fabs(flow_x.at< float >(y, x) );
   float yy = fabs(flow_y.at< float >(y, x) );

   float xxyy = sqrt(xx*xx + yy*yy);
   if( xxyy > GLOBAL_MOTION_TH1 )
    countOverTh1 = countOverTh1 +1;
   
   count=count+1;
  }
 }
 return (countOverTh1 / count) * 100;

}