Ocean
This class implements epipolar geometry functions. More...
Static Public Member Functions
static bool | fundamentalMatrix (const ImagePoint *leftPoints, const ImagePoint *rightPoints, const size_t correspondences, SquareMatrix3 &fundamental) |
Calculates the fundamental matrix by two sets of at least eight corresponding image points. More... | |
static SquareMatrix3 | inverseFundamentalMatrix (const SquareMatrix3 &fundamental) |
Returns the inverse fundamental matrix. More... | |
static SquareMatrix3 | essentialMatrix (const HomogenousMatrix4 extrinsic) |
Calculates the essential matrix by the rotation and translation between two cameras. More... | |
static SquareMatrix3 | essential2fundamental (const SquareMatrix3 &essential, const SquareMatrix3 &leftIntrinsic, const SquareMatrix3 &rightIntrinsic) |
Calculates the fundamental matrix by the given essential matrix and the two intrinsic camera matrices. More... | |
static SquareMatrix3 | essential2fundamental (const SquareMatrix3 &essential, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera) |
Calculates the fundamental matrix by the given essential matrix and the two cameras. More... | |
static SquareMatrix3 | fundamental2essential (const SquareMatrix3 &fundamental, const SquareMatrix3 &leftIntrinsic, const SquareMatrix3 &rightIntrinsic) |
Calculates the essential matrix by the given fundamental matrix and the two intrinsic camera matrices. More... | |
static SquareMatrix3 | fundamental2essential (const SquareMatrix3 &fundamental, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera) |
Calculates the essential matrix by the given fundamental matrix and the two cameras. More... | |
static bool | epipoles (const SquareMatrix3 &fundamental, Vector2 &leftEpipole, Vector2 &rightEpipole) |
Determines the two epipoles corresponding to a fundamental matrix. More... | |
static bool | epipoles (const HomogenousMatrix4 &extrinsic, const SquareMatrix3 &leftIntrinsic, const SquareMatrix3 &rightIntrinsic, Vector2 &leftEpipole, Vector2 &rightEpipole) |
Determines the two epipoles corresponding to two cameras separated by an extrinsic camera matrix. More... | |
static bool | epipolesFast (const SquareMatrix3 &fundamental, Vector2 &leftEpipole, Vector2 &rightEpipole) |
Finds the two epipoles corresponding to a fundamental matrix. More... | |
static Line2 | leftEpipolarLine (const SquareMatrix3 &fundamental, const Vector2 &rightPoint) |
Returns the epipolar line in the left image corresponding to a given point in the right image. More... | |
static Line2 | rightEpipolarLine (const SquareMatrix3 &fundamental, const Vector2 &leftPoint) |
Returns the epipolar line in the right image corresponding to a given point in the left image. More... | |
static bool | factorizeEssential (const SquareMatrix3 &essential, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera, const ImagePoint &leftPoint, const ImagePoint &rightPoint, HomogenousMatrix4 &transformation) |
Factorizes the essential matrix into rotation and translation. More... | |
static unsigned int | factorizeEssential (const SquareMatrix3 &essential, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera, const ImagePoints &leftPoints, const ImagePoints &rightPoints, HomogenousMatrix4 &transformation) |
Factorizes an essential matrix into a camera pose composed of rotation and translation. More... | |
static bool | rectificationHomography (const HomogenousMatrix4 &transformation, const PinholeCamera &pinholeCamera, SquareMatrix3 &leftHomography, SquareMatrix3 &rightHomography, Quaternion &appliedRotation, PinholeCamera *newCamera) |
Determines the homographies for two (stereo) frames, rectifying both images using the transformation between the left and the right camera. More... | |
static Vectors3 | triangulateImagePoints (const HomogenousMatrix4 &world_T_cameraA, const HomogenousMatrix4 &world_T_cameraB, const AnyCamera &anyCameraA, const AnyCamera &anyCameraB, const Vector2 *imagePointsA, const Vector2 *imagePointsB, const size_t numberPoints, const bool onlyFrontObjectPoints=true, const Vector3 &invalidObjectPoint=Vector3(Numeric::minValue(), Numeric::minValue(), Numeric::minValue()), Indices32 *invalidIndices=nullptr) |
Calculates the 3D positions for a pair of image point correspondences with corresponding extrinsic camera transformations. More... | |
static ObjectPoints | triangulateImagePointsIF (const PinholeCamera &camera1, const HomogenousMatrix4 &iFlippedPose1, const PinholeCamera &camera2, const HomogenousMatrix4 &iFlippedPose2, const ImagePoint *points1, const ImagePoint *points2, const size_t correspondences, const Vector3 &invalidObjectPoint=Vector3(Numeric::minValue(), Numeric::minValue(), Numeric::minValue()), Indices32 *invalidIndices=nullptr) |
Calculates the 3D positions for a set of image point correspondences with corresponding poses (Rt) in inverted flipped camera system. More... | |
static ObjectPoints | triangulateImagePointsIF (const ConstIndexedAccessor< HomogenousMatrix4 > &posesIF, const ConstIndexedAccessor< ImagePoints > &imagePointsPerPose, const PinholeCamera *pinholeCamera=nullptr, const Vector3 &invalidObjectPoint=Vector3(Numeric::minValue(), Numeric::minValue(), Numeric::minValue()), Indices32 *invalidIndices=nullptr) |
Calculates the 3D positions for a set of image point correspondences in multiple views with corresponding camera projection matrices (K * Rt) or poses (Rt) in inverted flipped camera system. More... | |
Static Protected Member Functions
static unsigned int | solveAmbiguousTransformations (const HomogenousMatrix4 &transformation0, const HomogenousMatrix4 &transformation1, const HomogenousMatrix4 &transformation2, const HomogenousMatrix4 &transformation3, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera, const ImagePoints &leftPoints, const ImagePoints &rightPoints, HomogenousMatrix4 &transformation) |
Determines the one transformation out of a set of four transformations for which most of the given image point correspondences provide 3D object points in front of the two cameras. More... | |
static unsigned int | validateTransformation (const HomogenousMatrix4 &transformation, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera, const Vector2 *leftPoints, const Vector2 *rightPoints, const size_t correspondences) |
Returns the number of 3D object points lying in front of two cameras for a given transformation between the two cameras. More... | |
static Line2 | epipolarLine2Line (const Vector3 &line) |
Converts an epipolar line to a line object. More... | |
This class implements epipolar geometry functions.
|
inline static protected
Converts an epipolar line to a line object.
line | The epipolar line to be converted |
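An epipolar line is handled as a homogeneous 3-vector l = (a, b, c) with the incidence relation a*x + b*y + c = 0 for every image point (x, y) on the line. A minimal sketch of that relation, assuming the usual x()/y()/z() accessors on Vector2/Vector3, Numeric::abs() and Scalar as the framework's floating point type (the Line2 construction itself stays internal to this helper):

// Checks whether a 2D image point lies approximately on an epipolar line
// given by its homogeneous coefficients l = (a, b, c), i.e. a*x + b*y + c == 0.
bool isOnEpipolarLine(const Vector3& line, const Vector2& point, const Scalar epsilon = Scalar(0.01))
{
	const Scalar residual = line.x() * point.x() + line.y() * point.y() + line.z();

	// scale the threshold by the length of the line normal (a, b) so that the comparison is a geometric distance
	return Numeric::abs(residual) <= epsilon * Vector2(line.x(), line.y()).length();
}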
|
static |
Determines the two epipoles corresponding to two cameras separated by an extrinsic camera matrix.
The epipoles are determined from the extrinsic camera matrix of the right camera relative to the left camera
and the two intrinsic camera matrices of both cameras.
extrinsic | The extrinsic camera matrix of the right camera relative to the left camera (rightTleft) |
leftIntrinsic | Intrinsic camera matrix of the left camera |
rightIntrinsic | Intrinsic camera matrix of the right camera |
leftEpipole | Resulting left epipole |
rightEpipole | Resulting right epipole |
|
static |
Determines the two epipoles corresponding to a fundamental matrix.
This method uses a singular value decomposition for the calculation.
fundamental | The fundamental matrix to extract the epipoles from |
leftEpipole | Resulting left epipole |
rightEpipole | Resulting right epipole |
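With the convention xr^T * F * xl = 0 used by this class, the left epipole is the right null vector of F (F * el = 0) and the right epipole is the left null vector (F^T * er = 0), which is why a singular value decomposition is used here. A minimal verification sketch, assuming the class is Ocean::Geometry::EpipolarGeometry and that SquareMatrix3 provides transposed() and a matrix-vector product (assumptions not confirmed by this page):

Vector2 leftEpipole, rightEpipole;

if (Geometry::EpipolarGeometry::epipoles(fundamental, leftEpipole, rightEpipole))
{
	// lifting the de-homogenized epipoles back to homogeneous coordinates,
	// both residuals should be close to the zero vector
	const Vector3 residualLeft = fundamental * Vector3(leftEpipole.x(), leftEpipole.y(), Scalar(1));
	const Vector3 residualRight = fundamental.transposed() * Vector3(rightEpipole.x(), rightEpipole.y(), Scalar(1));
}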
|
static |
Finds the two epipoles corresponding to a fundamental matrix.
This method calculates the intersection of two epipolar lines. If no intersection can be found, the SVD-based calculation is used.
fundamental | The fundamental matrix to extract the epipoles from |
leftEpipole | Resulting left epipole |
rightEpipole | Resulting right epipole |
|
static |
Calculates the fundamental matrix by the given essential matrix and the two cameras.
essential | The essential matrix to convert into fundamental matrix |
leftCamera | Left camera |
rightCamera | Right camera |
|
static |
Calculates the fundamental matrix by the given essential matrix and the two intrinsic camera matrices.
essential | The essential matrix to convert into fundamental matrix |
leftIntrinsic | Left intrinsic camera matrix |
rightIntrinsic | Right intrinsic camera matrix |
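With the documented constraint xr^T * F * xl = 0 and pixel coordinates given by xr = Kr * yr, xl = Kl * yl (K the intrinsic matrices, y normalized image coordinates), the standard conversion is F = Kr^-T * E * Kl^-1 (up to scale). A minimal sketch of that relation, assuming SquareMatrix3 offers inverted() and transposed() (not confirmed by this page); the member function presumably encapsulates the same conversion:

// explicit conversion: F = Kr^-T * E * Kl^-1
const SquareMatrix3 fundamentalExplicit = rightIntrinsic.inverted().transposed() * essential * leftIntrinsic.inverted();

// conversion via the documented API
const SquareMatrix3 fundamental = Geometry::EpipolarGeometry::essential2fundamental(essential, leftIntrinsic, rightIntrinsic);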
|
static |
Calculates the essential matrix by the rotation and translation between two cameras.
The matrix will be calculated by the extrinsic camera matrix of the right camera relative to the left camera.
The camera is pointing into the negative z-direction with positive y-direction as up-vector.
The right extrinsic camera matrix transforms points defined in the right camera coordinate system in the left camera coordinate system.
However, as the essential matrix needs the inverted extrinsic matrix of the right camera, the given extrinsic matrix will be inverted before creating the essential matrix.
The essential matrix then is defined by the product of the skew-symmetric matrix of the translation and the rotation matrix of the (now inverted) extrinsic (right) camera matrix:
Further, the essential matrix is defined for cameras pointing into the positive z-direction.
Thus, the given extrinsic camera matrix will be flipped around the x-axis (by 180 deg) before computing the essential matrix.
Thus E is defined by: E = skew[T.translation()] * T.rotationMatrix(), T = extrinsicFlipped.inverted(), extrinsicFlipped = flipMatrix * extrinsic * flipMatrix
extrinsic | The extrinsic camera matrix of the right camera relative to the left camera (rightTleft) |
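A minimal usage sketch summarizing the formula above, assuming the class is Ocean::Geometry::EpipolarGeometry; rightTleft stands for the documented extrinsic camera matrix of the right camera relative to the left camera:

// rightTleft: extrinsic camera matrix of the right camera relative to the left camera
const SquareMatrix3 essential = Geometry::EpipolarGeometry::essentialMatrix(rightTleft);

// as documented above, this corresponds to:
//   extrinsicFlipped = flipMatrix * rightTleft * flipMatrix   (flip around the x-axis by 180 deg)
//   T = extrinsicFlipped.inverted()
//   E = skew[T.translation()] * T.rotationMatrix()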
|
static |
Factorizes the essential matrix into rotation and translation.
Beware: The translation can be determined up to a scale factor only.
The resulting factorization provides the extrinsic camera matrix for the right camera while the left camera has the identity extrinsic camera matrix.
Thus, the resulting transformation transforms points defined inside the right camera coordinate system into points defined inside the left camera coordinate system.
essential | The essential matrix to be factorized |
leftCamera | Left camera corresponding to the left image point, must be valid |
rightCamera | Right camera corresponding to the right image point, must be valid |
leftPoint | One image point inside the left image, this point must correspond to the given right image point |
rightPoint | One image point inside the right image, this point must correspond to the given left image point |
transformation | Resulting transformation between the left and the right camera |
|
static |
Factorizes an essential matrix into a camera pose composed of rotation and translation.
Beware: The translation can be determined up to a scale factor only.
The factorization provides the extrinsic camera matrix (camera pose) for the right camera while the left camera is expected to have the identity as extrinsic camera matrix.
The resulting transformation transforms points defined inside the right camera coordinate system into points defined inside the left camera coordinate system: pointLeft = transformation * pointRight.
essential | The essential matrix to be factorized |
leftCamera | Left camera corresponding to the left image point, must be valid |
rightCamera | Right camera corresponding to the right image point, must be valid |
leftPoints | All image points inside the left image to be checked whether they produce 3D object points lying in front of the cameras |
rightPoints | All image points inside the right image, one for each left point |
transformation | Resulting transformation between the left and the right camera |
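Factorizing an essential matrix yields four candidate poses (two rotations combined with two translation signs); the given correspondences are used to keep the candidate that places the most triangulated 3D points in front of both cameras. A minimal usage sketch, assuming the class is Ocean::Geometry::EpipolarGeometry and that leftPoints/rightPoints are already filled with matching image points:

HomogenousMatrix4 transformation; // will receive the pose of the right camera w.r.t. the (identity) left camera

const unsigned int validPoints = Geometry::EpipolarGeometry::factorizeEssential(essential, leftCamera, rightCamera, leftPoints, rightPoints, transformation);

if (validPoints > 0u)
{
	// transformation maps points from the right camera coordinate system into the left one:
	// pointLeft = transformation * pointRight (translation is known up to scale only)
}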
|
static |
Calculates the essential matrix by the given fundamental matrix and the two cameras.
fundamental | The fundamental matrix to convert into essential matrix |
leftCamera | Left camera |
rightCamera | Right camera |
|
static |
Calculates the essential matrix by the given fundamental matrix and the two intrinsic camera matrices.
fundamental | The fundamental matrix to convert into essential matrix |
leftIntrinsic | Left intrinsic camera matrix |
rightIntrinsic | Right intrinsic camera matrix |
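This is the inverse of the essential-to-fundamental conversion: from F = Kr^-T * E * Kl^-1 it follows that E = Kr^T * F * Kl (up to scale). A one-line sketch under the same assumptions about SquareMatrix3 as above:

// E = Kr^T * F * Kl (up to a scale factor)
const SquareMatrix3 essential = rightIntrinsic.transposed() * fundamental * leftIntrinsic;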
|
static |
Calculates the fundamental matrix by two sets of at least eight corresponding image points.
leftPoints | Left image points |
rightPoints | Right image points |
correspondences | Number of point correspondences (at least 8) |
fundamental | Resulting fundamental matrix |
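A minimal usage sketch for the estimation from at least eight correspondences, assuming the class is Ocean::Geometry::EpipolarGeometry and ImagePoints being a std::vector-like container of 2D points; the correspondences themselves are assumed to come from elsewhere (e.g., feature matching):

ImagePoints leftPoints, rightPoints; // same index = same correspondence
// ... fill both sets from feature matching ...

SquareMatrix3 fundamental;

if (leftPoints.size() >= 8 && Geometry::EpipolarGeometry::fundamentalMatrix(leftPoints.data(), rightPoints.data(), leftPoints.size(), fundamental))
{
	// fundamental now (approximately) satisfies xr^T * F * xl = 0 for every correspondence
}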
|
static |
Returns the inverse fundamental matrix.
In fact, the matrix is only transposed: with the property xr^T * F * xl = 0, swapping the roles of the left and the right image gives xl^T * F^T * xr = 0, so the transposed matrix relates the correspondences in the reversed direction.
fundamental | The fundamental matrix F with property xr^T * F * xl = 0 |
|
inline static
Returns the epipolar line in the left image corresponding to a given point in the right image.
fundamental | The fundamental matrix |
rightPoint | Point in the right image |
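With xr^T * F * xl = 0, the epipolar line in the left image corresponding to a right point xr has the coefficient vector F^T * xr (and, symmetrically, the line in the right image for a left point xl is F * xl, see rightEpipolarLine()). A minimal sketch of this relation, where the explicit homogeneous arithmetic is an assumption about the internals:

// coefficients (a, b, c) of the left epipolar line, a*x + b*y + c = 0
const Vector3 lineCoefficients = fundamental.transposed() * Vector3(rightPoint.x(), rightPoint.y(), Scalar(1));

// or directly as a Line2 object via the documented API
const Line2 leftLine = Geometry::EpipolarGeometry::leftEpipolarLine(fundamental, rightPoint);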
|
static |
Determines the homographies for two (stereo) frames, rectifying both images using the transformation between the left and the right camera.
As the resulting homographies may not cover the entire input images when using the same camera profile, a new (perfect) camera profile can be calculated instead.
Thus, the resulting rectified images will have a larger field of view but will cover the entire input frame data.
The projection center of the left camera is expected to be at the origin of the world coordinate system.
The viewing directions of both cameras are towards the negative z-axis in their particular coordinate systems.
The given transformation is equal to the extrinsic camera matrix of the right camera
and thus transforms points defined inside the right camera coordinate system to points defined inside the left camera coordinate system.
The resulting homography transformations transform 3D rectified image points (homogeneous 2D coordinates) into 3D unrectified image points for their particular coordinate system.
The coordinate system of the 3D image points has the origin in the top left corner, while the x-axis points to the right, the y-axis points to the bottom and the z-axis to the back of the image.
transformation | Extrinsic camera matrix of the right camera with negative z-axis as viewing direction |
pinholeCamera | The pinhole camera profile used for both images |
leftHomography | Resulting left homography |
rightHomography | Resulting right homography |
appliedRotation | Resulting rotation applied to both cameras |
newCamera | Optional resulting new camera profile allowing the entire input image data to be covered by the output frames, otherwise nullptr |
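A minimal usage sketch, assuming the class is Ocean::Geometry::EpipolarGeometry and that a valid pinhole camera profile and the right-to-left transformation are available, e.g., from a stereo calibration:

SquareMatrix3 leftHomography, rightHomography;
Quaternion appliedRotation;
PinholeCamera rectifiedCamera; // optional new profile covering the entire input frames

if (Geometry::EpipolarGeometry::rectificationHomography(transformation, pinholeCamera, leftHomography, rightHomography, appliedRotation, &rectifiedCamera))
{
	// warp both input frames with their respective homographies;
	// corresponding image points then share the same row in the rectified images
}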
|
inline static
Returns the epipolar line in the right image corresponding to a given point in the left image.
fundamental | The fundamental matrix |
leftPoint | Point in the left image |
|
static protected
Determines the one transformation out of a set of four transformations for which most of the given image point correspondences provide 3D object points in front of the two cameras.
The transformation for the first camera is defined as the unit transformation; the four transformation candidates between the second and the first camera are provided instead.
transformation0 | First transformation candidate |
transformation1 | Second transformation candidate |
transformation2 | Third transformation candidate |
transformation3 | Fourth transformation candidate |
leftCamera | Left camera profile |
rightCamera | Right camera profile |
leftPoints | Left image points |
rightPoints | Right image points, each point corresponds to one from the left set |
transformation | Resulting transformation with most 3D object points in front of both cameras |
|
static |
Calculates the 3D positions for a pair of image point correspondences with corresponding extrinsic camera transformations.
world_T_cameraA | The extrinsic camera transformations of the first camera, with the camera pointing towards the negative z-space, y-axis pointing up, and the x-axis pointing to the right, must be valid |
world_T_cameraB | The extrinsic camera transformations of the second camera, with the camera pointing towards the negative z-space, y-axis pointing up, and the x-axis pointing to the right, must be valid |
anyCameraA | The first camera profile, must be valid |
anyCameraB | The second camera profile, must be valid |
imagePointsA | The set of 2D image points in the first image, each point must correspond to the point with the same index from the second image |
imagePointsB | The set of 2D image points in the second image, each point must correspond to the point with the same index from the first image |
numberPoints | The number of point correspondences, with range [0, infinity) |
onlyFrontObjectPoints | If true, only points that are located in front of the camera will be used for the optimization, otherwise all points will be used. |
invalidObjectPoint | Optional, the location of an invalid object point which will be used as value for all object points which cannot be determined e.g., because of parallel projection rays |
invalidIndices | Optional, the resulting indices of the resulting object points with invalid location |
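A minimal usage sketch, assuming the class is Ocean::Geometry::EpipolarGeometry and that the camera profiles, the camera poses and the matched image points (stored in std::vector-like containers) already exist; the invalid-point sentinel keeps its documented default:

Indices32 invalidIndices;

const Vectors3 objectPoints = Geometry::EpipolarGeometry::triangulateImagePoints(world_T_cameraA, world_T_cameraB, anyCameraA, anyCameraB, imagePointsA.data(), imagePointsB.data(), imagePointsA.size(), /* onlyFrontObjectPoints */ true, Vector3(Numeric::minValue(), Numeric::minValue(), Numeric::minValue()), &invalidIndices);

// objectPoints[i] corresponds to imagePointsA[i] / imagePointsB[i];
// entries listed in invalidIndices hold the invalid sentinel (e.g., due to near-parallel projection rays)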
|
static |
Calculates the 3D positions for a set of image point correspondences in multiple views with corresponding camera projection matrices (K * Rt) or poses (Rt) in inverted flipped camera system.
This linear triangulation uses singular value decomposition.
posesIF | Given poses or projection matrices per view |
imagePointsPerPose | Set of 2D image points per view, each point must correspond to the one in the other views |
pinholeCamera | The pinhole camera profile, one for all views. If no camera profile is given, posesIF act as projection matrices |
invalidObjectPoint | Optional, the location of an invalid object point which will be used as value for all object points which cannot be determined e.g., because of parallel projection rays |
invalidIndices | Optional resulting indices of the resulting object points with invalid location |
|
static |
Calculates the 3D positions for a set of image point correspondences with corresponding poses (Rt) in inverted flipped camera system.
This linear triangulation uses singular value decomposition.
If an object point cannot be determined, the resulting object point will have the value (0, 0, 0).
camera1 | The camera profile used for the first image |
iFlippedPose1 | Given projection matrix for the first camera |
camera2 | The camera profile used for the second image |
iFlippedPose2 | Given projection matrix for the second camera |
points1 | Set of 2D image points in the first image, each point must correspond to the one in the second image |
points2 | Set of 2D image points in the second image |
correspondences | Number of point correspondences, with range [1, infinity) |
invalidObjectPoint | Optional, the location of an invalid object point which will be used as value for all object points which cannot be determined e.g., because of parallel projection rays |
invalidIndices | Optional resulting indices of the resulting object points with invalid location |
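The 'IF' suffix indicates that the poses are expected in the inverted flipped convention (camera looking towards the positive z-direction), i.e. the inverse of a flipped standard pose. A minimal sketch, where the conversion helper standard2InvertedFlipped() is an assumption not documented on this page:

// convert standard poses (camera to world, viewing direction towards the negative z-axis)
// into the inverted flipped convention expected by the *IF functions
const HomogenousMatrix4 iFlippedPose1 = PinholeCamera::standard2InvertedFlipped(world_T_camera1); // assumed helper
const HomogenousMatrix4 iFlippedPose2 = PinholeCamera::standard2InvertedFlipped(world_T_camera2); // assumed helper

const ObjectPoints objectPoints = Geometry::EpipolarGeometry::triangulateImagePointsIF(camera1, iFlippedPose1, camera2, iFlippedPose2, points1.data(), points2.data(), points1.size());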
|
static protected
Returns the number of 3D object points lying in front of two cameras for a given transformation between the two cameras.
The transformation for the first camera is defined as the unit transformation; the transformation between the second and the first camera is provided instead.
transformation | The transformation between the right and left camera, transforming points defined in the right coordinate system to points defined in the left (identity) coordinate system, must be valid |
leftCamera | Left camera profile, must be valid |
rightCamera | Right camera profile, must be valid |
leftPoints | Left image points, can be nullptr if 'correspondences' is 0 |
rightPoints | Right image points, one for each left point, can be nullptr if 'correspondences' is 0 |
correspondences | The number of provided point correspondences, with range [0, infinity) |
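The counter implements the usual cheirality test: each correspondence is triangulated with the candidate transformation and counted only if the resulting 3D object point lies in front of both cameras, which for the viewing direction used here means a negative z-coordinate in each camera coordinate system. A conceptual sketch, with triangulate() as a hypothetical placeholder for the internal triangulation:

unsigned int frontPoints = 0u;

for (size_t n = 0; n < correspondences; ++n)
{
	// hypothetical helper returning the 3D point in the left camera coordinate system
	const Vector3 objectPointLeft = triangulate(leftCamera, rightCamera, transformation, leftPoints[n], rightPoints[n]);

	// the same point expressed in the right camera coordinate system
	const Vector3 objectPointRight = transformation.inverted() * objectPointLeft;

	// both cameras look towards their negative z-axis, so 'in front' means z < 0
	if (objectPointLeft.z() < Scalar(0) && objectPointRight.z() < Scalar(0))
	{
		++frontPoints;
	}
}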