Showing posts with label Geometry. Show all posts

12/09/2021

IP to geolocation information in Python

Here are the official pages:

https://pypi.org/project/ip2geotools/

https://github.com/tomas-net/ip2geotools


Here is my attempt:

..

# pip install ip2geotools
from ip2geotools.databases.noncommercial import DbIpCity
response = DbIpCity.get('165.132.116.105', api_key='free')
print(response)

..

The output:

IP address: 165.132.116.105

City: Seodaemun-gu

Region: Seoul

Country: KR

Latitude: 37.5790747

Longitude: 126.9367861


Thank you.

www.marearts.com


11/08/2011

8-point algorithm (Matlab source code) / How to get the Fundamental matrix and the Essential matrix

Created Date : 2011.8
Language : Matlab
Tool : Matlab 2010
Library & Utilized : -
Reference : Multiple View Geometry (Hartley and Zisserman)
etc. : Intrinsic Parameter, 2 adjacent images, matching points




This code implements the 8-point algorithm.
Given at least 8 corresponding points between two images, we can recover the rotation and translation of the camera motion using the 8-point algorithm.
The 8-point algorithm is well known in the computer vision field.
The algorithm is introduced in the Multiple View Geometry book and on many websites.

Have you ever listened to the Fundamental Matrix Song? The song is very cheerful. ^^

You can download an 8-point algorithm implementation from Peter Kovesi's homepage.
My code is very simple, so I believe it will be useful to you.
I will upload a RANSAC version later.
Thank you.

github url : https://github.com/MareArts/8point-algorithm
(I used the 'getCorrectCameraMatrix' function by Isaac Esteban.)

main M code..
%//////////////////////////////////////////////////////////////////////////
%// Made by J.H.KIM, 2011 / feelmare@daum.net, feelmare@gmail.com        //
%// blog : http://feelmare.blogspot.com                                  //
%// Eight-Point Algorithm
%//////////////////////////////////////////////////////////////////////////

clc; clear all; close all;

% Corresponding points between two images
% sample #1 I11.jpg, I22.jpg
%{
load I11.txt; load I22.txt;
m1 = I11; m2 = I22;
%}

%sample #2 I1.jpg, I2.jpg
load I1.txt; load I2.txt;
m1 = I1; m2 = I2;

s = length(m1);
m1=[m1(:,1) m1(:,2) ones(s,1)];
m2=[m2(:,1) m2(:,2) ones(s,1)];
Width = 800; %image width
Height = 600; %image height

% Intrinsic Matrix
load intrinsic_matrix.txt
K = intrinsic_matrix;

% The matrix for normalization(Centroid)
N=[2/Width 0 -1;
    0 2/Height -1;
    0   0   1];

%%
% Data Centroid
x1=N*m1'; x2=N*m2';
x1=[x1(1,:)' x1(2,:)'];  
x2=[x2(1,:)' x2(2,:)']; 

% Af=0 
A=[x1(:,1).*x2(:,1) x1(:,2).*x2(:,1) x2(:,1) x1(:,1).*x2(:,2) x1(:,2).*x2(:,2) x2(:,2) x1(:,1) x1(:,2), ones(s,1)];

% Get F matrix
[U D V] = svd(A);
F=reshape(V(:,9), 3, 3)';
% make rank 2 
[U D V] = svd(F);
F=U*diag([D(1,1) D(2,2) 0])*V';

% Denormalize
F = N'*F*N;
%Verification
%L1=F*m1'; m2(1,:)*L1(:,1); m2(2,:)*L1(:,2); m2(3,:)*L1(:,3);

%%
%Get E
E=K'*F*K;
% Multiple View Geometry 259page
%Get 4 Possible P matrix 
P4 = get4possibleP(E);
%Get Correct P matrix 
inX = [m1(1,:)' m2(1,:)'];
P1 = [eye(3) zeros(3,1)];
P2 = getCorrectCameraMatrix(P4, K, K, inX)

%%
%Get 3D Data using Direct Linear Transform(Linear Triangular method)
Xw = Triangulation(m1',K*P1, m2',K*P2);
xx=Xw(1,:);
yy=Xw(2,:);
zz=Xw(3,:);

figure(1);
plot3(xx, yy, zz, 'r+');


%{
%This code also runs well in place of the Triangulation function.
nm1=inv(K)*m1';
nm2=inv(K)*m2';
% Direct Linear Transform
for i=1:s
    A=[P1(3,:).*nm1(1,i) - P1(1,:);
    P1(3,:).*nm1(2,i) - P1(2,:);
    P2(3,:).*nm2(1,i) - P2(1,:);
    P2(3,:).*nm2(2,i) - P2(2,:)];

    A(1,:) = A(1,:)./norm(A(1,:));
    A(2,:) = A(2,:)./norm(A(2,:));
    A(3,:) = A(3,:)./norm(A(3,:));
    A(4,:) = A(4,:)./norm(A(4,:));

    [U D V] = svd(A);
    X(:,i) = V(:,4)./V(4,4);
end
%}
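For readers outside Matlab, the normalized 8-point estimate above can be sketched in Python with NumPy. This is a sketch, not a port of the code above: it uses Hartley's isotropic normalization instead of the fixed image-size normalization, and the function name is my own.

```python
import numpy as np

def eight_point(m1, m2):
    """Fundamental matrix from >= 8 matches; m1, m2 are (N, 2) pixel arrays."""
    def normalize(pts):
        # Hartley normalization: centroid to origin, mean distance sqrt(2)
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

    x1, T1 = normalize(m1)
    x2, T2 = normalize(m2)
    # One row per correspondence of the constraint x2' F x1 = 0  (Af = 0)
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce rank 2
    U, D, Vt = np.linalg.svd(F)
    F = U @ np.diag([D[0], D[1], 0]) @ Vt
    # Denormalize
    return T2.T @ F @ T1
```

Given the estimated F and the intrinsic matrix K, the essential matrix then follows as E = K.T @ F @ K, as in the Matlab code above.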


11/07/2011

Rotation Matrix Conversion Matlab Source (Euler Angle, Rotation Matrix, Quaternion)

There are many ways to express a rotation.
(Ex. : Euler angles, rotation matrix, quaternion.. )

This code tests converting between them:
Euler -> Matrix -> Quaternion -> Matrix -> Euler
We can check that the first Euler value is the same as the last Euler value.

The source code is below:
--------------------------------------------------------------

% Rotation vector of x,y,z axis.
Rv = [13 20 50];

% 3x3 rotation matrix from the R vector (Rm1 and Rm2 should be similar.)
Rm1 = rodrigues(Rv*pi/180)
Rm2 = mRotMat(Rv)

% Quaternion of the R matrix
Rq1 = matrix2quaternion(Rm1)
Rq2 = matrix2quaternion(Rm2)

% R matrix of the Q vector
Rm1_1 = quaternion2matrix(Rq1)
Rm2_2 = quaternion2matrix(Rq2)

% R vector of the R matrix
Rv_1 = rodrigues(Rm1_1(1:3,1:3)) * 180/pi
Rv_2 = rodrigues(Rm2_2(1:3,1:3)) * 180/pi

-----------------------------------------------------------------------
<Source Code>

The copyright of the 'rodrigues' and quaternion functions belongs to Peter Kovesi.
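For readers without those Matlab helpers, the same round trip can be sketched in self-contained Python with NumPy. The function names mirror the Matlab ones, but the implementations here are my own; the quaternion convention assumed is (w, x, y, z).

```python
import numpy as np

def rodrigues(rv):
    # rotation vector (axis * angle, radians) -> 3x3 rotation matrix
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rv) / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def matrix2quaternion(R):
    # rotation matrix -> quaternion (w, x, y, z); assumes w is not near zero
    w = np.sqrt(max(0.0, 1 + R[0, 0] + R[1, 1] + R[2, 2])) / 2
    x = (R[2, 1] - R[1, 2]) / (4 * w)
    y = (R[0, 2] - R[2, 0]) / (4 * w)
    z = (R[1, 0] - R[0, 1]) / (4 * w)
    return np.array([w, x, y, z])

def quaternion2matrix(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def inv_rodrigues(R):
    # rotation matrix -> rotation vector (valid for angles below pi)
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    if theta < 1e-12:
        return np.zeros(3)
    k = np.array([R[2,1] - R[1,2], R[0,2] - R[2,0], R[1,0] - R[0,1]]) / (2 * np.sin(theta))
    return k * theta

# Round trip: vector -> matrix -> quaternion -> matrix -> vector
Rv = np.array([13, 20, 50]) * np.pi / 180
Rm = rodrigues(Rv)
Rq = matrix2quaternion(Rm)
Rm2 = quaternion2matrix(Rq)
Rv2 = inv_rodrigues(Rm2)
print(np.round(np.degrees(Rv2), 6))  # [13. 20. 50.]
```
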

I hope this source code is useful to you.
Thank you.


9/10/2011

How to calculate R,T between C1 and C2? / matlab source / How do we compute the rotation and translation between two cameras?


 Created Date : 2011.8
Language : matlab
Tool : matlab
Library & Utilized : rodrigues function (Jean-Yves Bouguet)
Reference : -
Etc. : -



There are two cameras, both posed relative to a common origin O.
The rotation and translation of C1 are

R1 =  -0.009064 0.985541 -0.169195
       0.969714 -0.032636 -0.242051
      -0.244074 -0.166265 -0.955397

T1 = 4.684369
    -7.384014
    29.614508

And the rotation and translation of C2 are

R2 =  0.078149 0.984739 -0.155505
      0.971378 -0.040117 0.234128
      0.224317 -0.169351 -0.959688

T2 = -10.151448
     -7.261157
      29.69228


So, what are R and T between C1 and C2?

wR = R2*inv(R1);
wT = T2-T1;

This matlab code shows the process.
We compute wR and wT, then rotate and translate C1 by them; the axes of C1 then land exactly on C2.
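As a quick numeric check, the rotation relation can be verified in Python with NumPy (a sketch using the matrices above):

```python
import numpy as np

R1 = np.array([[-0.009064,  0.985541, -0.169195],
               [ 0.969714, -0.032636, -0.242051],
               [-0.244074, -0.166265, -0.955397]])
T1 = np.array([4.684369, -7.384014, 29.614508])

R2 = np.array([[ 0.078149,  0.984739, -0.155505],
               [ 0.971378, -0.040117,  0.234128],
               [ 0.224317, -0.169351, -0.959688]])
T2 = np.array([-10.151448, -7.261157, 29.69228])

# Relative rotation and translation as used in the post
wR = R2 @ np.linalg.inv(R1)
wT = T2 - T1

# Applying wR to C1's rotation recovers C2's rotation
print(np.allclose(wR @ R1, R2))  # True
```
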


<source code>

--------------------------------------------------------------------------
๋‘ ๊ฐœ์˜ ์นด๋ฉ”๋ผ๊ฐ€ ์›์ ์„ ๊ธฐ์ค€์œผ๋กœ ์žˆ๋‹ค.
C1 ์นด๋ฉ”๋ผ์˜ R,T๊ฐ€ ์žˆ๊ณ  C2์นด๋ฉ”๋ผ์˜ R,T๊ฐ€ ์›์ ์„ ๊ธฐ์ค€์œผ๋กœ ์žˆ์„๋•Œ, C1๊ณผ C2 ์‚ฌ์ด์˜ R, T๋Š” ์–ด๋–ป๊ฒŒ ๋ ๊นŒ? ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ฐ„๋‹จํ•˜๊ฒŒ ๊ณ„์‚ฐ์ด ๊ฐ€๋Šฅํ•˜๋‹ค.


wR = R2*inv(R1);
wT = T2-T1;


์ด ๋งคํŠธ๋žฉ ์†Œ์Šค ์ฝ”๋“œ๋Š” C1, C2 ์‚ฌ์ด์˜ R,T๋ฅผ ๊ตฌํ•˜๊ณ  ๊ทธ R,T๋ฅผ ์ด์šฉํ•ด์„œ C1์„ C2๋กœ ์ •ํ™•ํžˆ ์˜ฎ๊ฒจ์ง์„ ํ™•์ธํ•˜๋Š” ์†Œ์Šค์ž…๋‹ˆ๋‹ค.

<source code>

9/02/2011

Stereo Feature Tracking for visual odometry (document)




Created Date : 2011.2
Reference :
Robust and Efficient Stereo Feature Tracking for Visual Odometry
Stereo Odometry - A Review of Approaches
Multiple View Geometry




How do we get a 3D point when we know the corresponding feature points in the left and right camera images?
How does error propagate when the computed 3D point includes the stereo distance error?
Given two sets of translated 3D points, how do we find the R, T that minimize the error?
This document introduces these problems.
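The first question has a closed form for a rectified stereo pair; the following Python sketch illustrates it (the function name and the numbers are illustrative, not from the document):

```python
import numpy as np

def stereo_to_3d(xl, yl, xr, f, cx, cy, B):
    """3D point from a rectified stereo match.
    (xl, yl): left-image pixel, xr: right-image column,
    f: focal length in pixels, (cx, cy): principal point, B: baseline."""
    d = xl - xr          # disparity
    Z = f * B / d        # depth from similar triangles
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return np.array([X, Y, Z])

# a point at depth 4 m seen by a rig with f = 700 px, B = 0.12 m
print(stereo_to_3d(407.5, 205.0, 386.5, 700.0, 320.0, 240.0, 0.12))  # ~ (0.5, -0.2, 4.0)
```

Because Z depends on 1/d, small Gaussian noise in the disparity d produces distinctly non-Gaussian depth error, which is exactly the noise-propagation issue the document analyzes.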


- contents -

① Stereo image point:
Camera parameters: focal length f, principal point, baseline B.
Homogeneous and non-homogeneous point coordinates.
(From "Stereo Odometry: A Review of Approaches" and "Error Modeling in Stereo Navigation", L. Matthies.)
Noise propagation: the image point is Gaussian with a given mean and covariance; what are the mean and covariance of the 3D point X? Here the 3D point is a nonlinear function f of a random vector with known mean and covariance.

② 3D point covariance:
Forward propagation of nonlinear error (Multiple View Geometry).

③ Estimation of motion parameters:
Given 3D points X (before motion) and Y (after motion) for each interest point i, when is the solution unique? (X and Y are disturbed by the same amount of noise.) When does the mean square error become minimal? Several solutions exist:
- A solution based on singular value decomposition.
- A solution based on the Essential matrix.
- A maximum likelihood solution.

④ Maximum likelihood solution


<Doc> <PDF>


If you have a good idea or suggestion, please leave a reply. Thank you.
(Please excuse my poor English. If you point out my mistakes, I will gladly correct them. Thank you!!)

----------------------------------------------------------------------------

For a stereo camera, when we know the x,y point in the left image and the x,y point in the right image for a particular point, how do we compute the 3D point?
When the 3D point is computed, how does the error contained in the stereo images propagate to the 3D point?
Given two sets of moved 3D points, how can we find the R, T that minimize the error?
This document deals with solutions to these questions.


<Doc> <PDF>


์ข‹์€ ์˜๊ฒฌ์ด๋‚˜ ๋‹ต๋ณ€ ๋‚จ๊ฒจ ์ฃผ์„ธ์š”.

8/21/2011

Two-image mosaic (panorama) based on SIFT / C++ source (OpenCV) / Making a mosaic (panorama) image from two images using a SIFT feature extractor

Created Date : 2011.2
Language : C/C++
Tool : Microsoft Visual C++ 2010
Library & Utilized : OpenCV 2.2
Reference : Internet reference
etc. : 2 adjacent images


Two adjacent images

Feature extraction by SURF (SIFT)

Feature matching

Mosaic (panorama)

This program proceeds as follows.
First, it finds the feature points in each image using SURF.
->cvExtractSURF
Second, the feature points in the two images are matched by similarity.
->FindMatchingPoints
Third, we compute the homography matrix.
->cvFindHomography
Last, we warp one image to attach it to the other as a single image.
->cvWarpPerspective
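At the point level, the third and fourth steps come down to applying a 3x3 homography to homogeneous coordinates. A small NumPy sketch (the function name is mine; a pure-translation homography is used for illustration):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography (the per-pixel mapping that
    perspective warping uses to place one image over the other)."""
    ph = np.column_stack([pts, np.ones(len(pts))])   # to homogeneous
    q = ph @ H.T
    return q[:, :2] / q[:, 2:]                       # back to inhomogeneous

# a homography that is a pure translation by (100, 20)
H = np.array([[1, 0, 100], [0, 1, 20], [0, 0, 1]], float)
corners = np.array([[0, 0], [799, 0], [799, 599], [0, 599]], float)
print(apply_homography(H, corners))
```

Mapping the four corners of the second image like this is also how the output canvas size of the mosaic is usually chosen.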

You can download the source here.
If you have a good idea or suggestion, please leave a reply.
Thank you.

-----------------------------------------------------------------------------

์ด์›ƒ๋œ ๋‘ ์žฅ์˜ ์˜์ƒ์„ ์ž…๋ ฅ ๋ฐ›์•„ ํ•˜๋‚˜์˜ ๋ชจ์ž์ดํฌ ์˜์ƒ(ํŒŒ๋ผ๋…ธ๋งˆ)์œผ๋กœ ๋งŒ๋“ ๋‹ค.
ํŠน์ง• ์ถ”์ถœ ๋ฐ ๋น„๊ต ๋ฐฉ๋ฒ• : suft ->cvExtractSURF
ํŠน์ง• ๋งค์นญ ๋ฐฉ๋ฒ• : FindMatchingPoints
ํ˜ธ๋ชจ๊ทธ๋ผํ”ผ ํ–‰๋ ฌ ๊ตฌํ•˜๊ธฐ : cvFindHomography
์˜์ƒ ๋ชจ์ž์ดํฌ ๋ฐฉ๋ฒ• : warpping

์ „์ฒด ์†Œ์Šค ์ฝ”๋“œ๋Š” ์—ฌ๊ธฐ์„œ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
https://github.com/MareArts/Two-Image-mosaic-paranoma-based-on-SIFT
๊ฐœ์„  ์‚ฌํ•ญ์ด๋‚˜ ์ข‹์€ ์˜๊ฒฌ ์žˆ์œผ์‹œ๋ฉด ๋‹ต๋ณ€ ์ฃผ์„ธ์š”.
๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค.


8/18/2011

Camera calibration using pattern images / C++ source (OpenCV) / Camera calibration source using pattern images




Created Date : 2008.11
Language : C/C++
Tool : Microsoft Visual C++
Library & Utilized : OpenCV 1.0
Reference : Learning OpenCV Book


I made this camera calibration source for my own convenience, using the book "Learning OpenCV".
First of all, prepare several pattern images, named with the same file name and an ordered index.
Ex) pattern1.jpg, pattern2.jpg, pattern3.jpg ...
Then answer a few questions:

the pattern width box count (number of black boxes), the height box count, the number of images,
and the file path and file name. Point extraction is then performed.

After all processing, the files below are saved in the same folder as the images.

Distortion.xml
Distortion.txt
intrinsic.xml
intrinsic_matrix.txt
rotation_matrices.txt
rotation_matrices.xml
translation_matrices.txt
translation_matrices.xml

The points of each pattern image are saved under the file names
pattern1.txt, pattern2.txt, pattern3.txt ...

This source uses the 'cvFindChessboardCorners' and 'cvFindCornerSubPix' functions to detect the pattern points. 'cvDrawChessboardCorners' is used for drawing, and 'cvCalibrateCamera2' for calibration.
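For reference, the camera model those functions estimate can be sketched in Python. This is a simplified sketch with only two radial distortion coefficients (the full OpenCV model also estimates tangential terms); the names are mine:

```python
import numpy as np

def project(K, dist, Xc):
    """Project camera-frame 3D points with a pinhole + radial distortion model."""
    k1, k2 = dist
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]  # normalized image coords
    r2 = x * x + y * y
    s = 1 + k1 * r2 + k2 * r2 * r2                   # radial distortion factor
    return np.column_stack([K[0, 0] * x * s + K[0, 2],
                            K[1, 1] * y * s + K[1, 2]])

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
print(project(K, (0.0, 0.0), np.array([[0.5, -0.2, 4.0]])))  # [[420. 200.]]
```

Calibration is the inverse problem: given the detected pattern points, find the K and distortion coefficients (plus per-image R, T) that make this projection fit best.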

This code was built with Microsoft Visual Studio 6.0 and the OpenCV 1.0 library.
All OpenCV 1.0 DLLs are included in the zip file.

You can download it here - > < entire source >

If you have any suggestions, please leave a reply.
Thank you.



------------------------------------------------------------------------------




Camera calibration with OpenCV (using captured images)

A chessboard calibration pattern is used.
After running the program:
Enter the number of pattern points horizontally and vertically
Enter the pattern path (e.g., ./p1/image)
Enter the number of patterns (entering 10 means ./p1/image1.jpg ./p1/image2.jpg ... ./p1/image10.jpg)

Calibration points are found one image at a time.
Images in which the calibration points cannot be found properly are treated as failures and are not used.
After all of the input images are processed,
Zhang's calibration is performed.
The intrinsic matrix and the distortion vector are printed to the console window,
and each is also saved as an XML file.
The rotation and translation matrices for each pattern are also printed to the console and saved as XML files.

Applications can be implemented using the saved intrinsic parameters and distortion coefficients.

The calibration points on the chessboard are found with the
cvFindChessboardCorners and cvFindCornerSubPix functions;
the found points are drawn with cvDrawChessboardCorners;
calibration uses the cvCalibrateCamera2 function.


This source was made with microsoft visual studio 6.0 and opencv 1.0.

You can download it here - > < entire source >

8/17/2011

World 3D point reconstruction using the Direct Linear Transformation (DLT) / Matlab source / World coordinate reconstruction using linear triangulation



Created Date : 2011.07
Language : Matlab
Tool : Matlab
Library & Utilized: rodrigues function(obtain from internet)
Reference : Multiple View Geometry Book


Calculate 3D world coordinates using the Direct Linear Transformation (DLT).
First, I prepared the 2D coordinates of the left and right images, and the rotation, translation, and camera calibration matrices.
Make the projection matrices: P1 is the left camera projection matrix, P2 the right.
P1 is the reference camera, so P1 is just P1 = C[I|0].
Make the world coordinates using DLT; the method is introduced in Multiple View Geometry, page 312.
Then, for confirmation, calculate the image coordinates again from W, P1, P2.
We can confirm that the recalculated coordinates are the same as the input image coordinates.

You can download entire matlab source code < here >



--------------------------------------------------------------


3D world coordinate computation using linear triangulation.
The source below computes world 3D points from arbitrarily chosen matched coordinate pairs in the left and right images,
an arbitrarily chosen rotation and translation matrix, and the camera parameters.
For confirmation, the reconstructed 3D coordinates are converted back to 2D image coordinates.
We can confirm that the converted 2D image coordinates are the same as the originally input coordinates.

Download the entire matlab code here < here >
----------------------------------------------------------------------------------------------
%%
clc;
clear all;
close all;
%% image coordinate 
m1 = [ -88.6          1019.21739130435          1046.63913043478         -9643.52173913043          1488.59518796992;
      -216         -14137.7391304348          -629.04347826087          18775.3043478261         -464.421052631579];
m2 = [ 644.916058797683          8264.25802702142          2735.93716970091         -3264.00791696585          4601.62993487106;
    237.341114262516          -16276.926674065         -591.505245744724          4076.20064528107         -313.261770012357];
% rotation matrix
RealR = rodrigues( [-10 30 20]*pi/180 );
% translation matrix
RealT = [20 30 200]';
RealA=rodrigues(RealR)*180/pi; % rotation matrix angle Test
K = [1000 1 512;0 1000 384;0 0 1;]; %calibration matrix
%% Make P1, P2
P1 = K*[eye(3) zeros(3,1)]; % left camera Projection matrix 
P2 = K*[RealR RealT]; % right camera projection matrix
%% Make 3D points
% 3D point reconstruction using P1, P2
% World 3D coordination reconstruction by Direct Linear Transformation 
% Multiple view geometry P.312
W=[];
for i=1:5
    A=[ m1(1,i)*P1(3,:) - P1(1,:); 
        m1(2,i)*P1(3,:) - P1(2,:);
        m2(1,i)*P2(3,:) - P2(1,:);
        m2(2,i)*P2(3,:) - P2(2,:)];
    A(1,:) = A(1,:)/norm(A(1,:));
    A(2,:) = A(2,:)/norm(A(2,:));
    A(3,:) = A(3,:)/norm(A(3,:));
    A(4,:) = A(4,:)/norm(A(4,:));
    
    [u d v] = svd(A);
    W=[W v(:,4)/v(4,4)];    
end
%% Make pixel points again from the 3D points
% again calculate 2D image point from World coordinate, projection matrix
reip1 = P1*W;
reip1 = [reip1(1,:)./reip1(3,:); reip1(2,:)./reip1(3,:)]
m1(:,1:5)
reip2 = P2*W;
reip2 = [reip2(1,:)./reip2(3,:); reip2(2,:)./reip2(3,:)]
m2(:,1:5)
% m1 and reip1 are the same; likewise, m2 and reip2 are the same.
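The same linear triangulation can be sketched in Python with NumPy, for readers who want to verify the method outside Matlab (a sketch; the projective setup mirrors the code above, the synthetic point is mine):

```python
import numpy as np

def dlt_triangulate(P1, P2, u1, u2):
    # Linear triangulation (Multiple View Geometry, p. 312): two rows per
    # view from u x (P X) = 0, then the smallest singular vector of A.
    A = np.array([u1[0] * P1[2] - P1[0],
                  u1[1] * P1[2] - P1[1],
                  u2[0] * P2[2] - P2[0],
                  u2[1] * P2[2] - P2[1]])
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    X = np.linalg.svd(A)[2][-1]
    return X / X[3]           # homogeneous world point

# Synthetic check with the same K as above: project a known point, recover it.
K = np.array([[1000, 1, 512], [0, 1000, 384], [0, 0, 1]], float)
th = 0.3
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
t = np.array([[20.0], [30.0], [200.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

Xw = np.array([1.0, 2.0, 10.0, 1.0])        # homogeneous world point
p1, p2 = P1 @ Xw, P2 @ Xw
u1, u2 = p1[:2] / p1[2], p2[:2] / p2[2]
print(dlt_triangulate(P1, P2, u1, u2))      # ~ [1, 2, 10, 1]
```
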







Test of the Direct Linear Transformation on a real image (matlab source)


I tested the Direct Linear Transformation,
but the method does not run well on a real image.


I tested the process below.
1. Left and right image coordinates (matched points) in a real image.
2. Get R, T between the left and right cameras, and the calibration matrix, from files.
3. Make the projection matrices (P1, P2).
4. Get the world 3D coordinates using DLT.
5. Again, get the image coordinates from the world 3D coordinates.


The problem:
the recovered image points and the input image points do not match.

Below is the matlab code.
You can download it here -> < Entire Source Code >

--------------------------------------------------------------------------
%%
clc;
clear all;
close all;

%% Data Loading
load rotation_matrices.txt
load translation_vectors.txt
load intrinsic_matrix.txt
load distortion_coeffs.txt

R = rotation_matrices;

%R1 is the Rotation Matrix of the Left Camera from the pattern board. The pattern
%board has the origin coordinate (0,0,0).
R1 = reshape(R(19,:),3,3);

%R2 is the Right Camera Rotation Matrix.
R2 = reshape(R(20,:),3,3);
%T1 is the Left Camera Translation Vector
T1 = translation_vectors(19,:)';
%T2 is the Right Camera Translation Vector
T2 = translation_vectors(20,:)';
K = intrinsic_matrix;
%Load Matched Coordinate
load pattern19.txt;
load pattern20.txt;
m1 = pattern19';
m2 = pattern20';




%% Make the Real R,T that relate the Left and Right Cameras
% R,T is made from R1,R2 and T1,T2
RealT = T2-T1; %T is obtained by simple subtraction
RealR = R2*inv(R1);
RealA=rodrigues(RealR)*180/pi; %This is Angle




%% Make the Projection matrices
P1 = K*[eye(3) zeros(3,1)]; %P1 is the reference camera, so P1 = K[I|0]
P2 = K*[RealR RealT];




%% Make 3D points
% 3D point reconstruction using P1, P2
W=[];




%Make the 3D coordinates using the Direct Linear Transformation
%W is the world coordinate.
for i=1:5
    A=[ m1(1,i)*P1(3,:) - P1(1,:);
        m1(2,i)*P1(3,:) - P1(2,:);
        m2(1,i)*P2(3,:) - P2(1,:);
        m2(2,i)*P2(3,:) - P2(2,:)];
    A(1,:) = A(1,:)/norm(A(1,:));
    A(2,:) = A(2,:)/norm(A(2,:));
    A(3,:) = A(3,:)/norm(A(3,:));
    A(4,:) = A(4,:)/norm(A(4,:));

    [u d v] = svd(A);
    W=[W v(:,4)/v(4,4)];
end




%% Make pixel points again from the 3D points
% Now, make the image coordinates using P1, P2 from the W matrix.
reip1 = P1*W;
% reip1 is the recovered image coordinate
reip1 = [reip1(1,:)./reip1(3,:); reip1(2,:)./reip1(3,:)] %image coordinates recovered from 3D
% m1 is the original image coordinate
m1(:,1:5) %original image coordinates
reip2 = P2*W;
% reip2 is the recovered image coordinate
reip2 = [reip2(1,:)./reip2(3,:); reip2(2,:)./reip2(3,:)] %image coordinates recovered from 3D
% m2 is the original image coordinate
m2(:,1:5) %original image coordinates
-------------------------------------------------------------------------



But why is reip1 != m1 and reip2 != m2?
I can't figure out the reason.
Please discuss the reason with me~ ^^
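One possible cause, offered as my own note rather than as part of the original post: RealT = T2 - T1 is only valid when RealR*T1 equals T1. If the extrinsics map a world point as X_c1 = R1*X + T1 and X_c2 = R2*X + T2, then X_c2 = RealR*X_c1 + (T2 - RealR*T1), so the relative translation should be T2 - RealR*T1. A NumPy sketch with synthetic (hypothetical) extrinsics:

```python
import numpy as np

# synthetic extrinsics: X_c1 = R1 @ X + T1, X_c2 = R2 @ X + T2
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

R1, T1 = rot_z(0.2), np.array([1.0, 2.0, 3.0])
R2, T2 = rot_z(0.5), np.array([-1.0, 0.5, 2.0])

wR = R2 @ np.linalg.inv(R1)
X = np.array([0.3, -0.7, 5.0])          # an arbitrary world point
Xc1 = R1 @ X + T1
Xc2 = R2 @ X + T2

# candidate relative translations
print(np.allclose(wR @ Xc1 + (T2 - wR @ T1), Xc2))  # True
print(np.allclose(wR @ Xc1 + (T2 - T1), Xc2))       # False
```

If this is indeed the cause, replacing the line RealT = T2-T1 with RealT = T2 - RealR*T1 in the Matlab code above would be the corresponding fix.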

