Ocean
This class implements epipolar geometry functions.
#include <EpipolarGeometry.h>
Static Public Member Functions
static bool fundamentalMatrix(const Vector2 *leftPoints, const Vector2 *rightPoints, const size_t correspondences, SquareMatrix3 &right_F_left)
    Calculates the fundamental matrix based on corresponding image points in a 'left' and a 'right' camera image.
static SquareMatrix3 reverseFundamentalMatrix(const SquareMatrix3 &right_F_left)
    Returns the reversed fundamental matrix.
static bool essentialMatrix(const Vector3 *leftImageRays, const Vector3 *rightImageRays, const size_t correspondences, SquareMatrix3 &normalizedRight_E_normalizedLeft)
    Calculates the essential matrix based on corresponding viewing rays from the 'left' and 'right' camera.
static bool essentialMatrixF(const Vector3 *flippedLeftImageRays, const Vector3 *flippedRightImageRays, const size_t correspondences, SquareMatrix3 &normalizedRight_E_normalizedLeft)
    Calculates the essential matrix based on corresponding viewing rays from the 'left' and 'right' camera.
static SquareMatrix3 essentialMatrix(const HomogenousMatrix4 &rightCamera_T_leftCamera)
    Calculates the essential matrix based on the 6-DOF camera pose between two cameras.
static SquareMatrix3 essential2fundamental(const SquareMatrix3 &normalizedRight_E_normalizedLeft, const SquareMatrix3 &leftIntrinsic, const SquareMatrix3 &rightIntrinsic)
    Calculates the fundamental matrix from a given essential matrix and the two intrinsic camera matrices.
static SquareMatrix3 essential2fundamental(const SquareMatrix3 &normalizedRight_E_normalizedLeft, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera)
    Calculates the fundamental matrix from a given essential matrix and the two camera profiles.
static SquareMatrix3 fundamental2essential(const SquareMatrix3 &right_F_left, const SquareMatrix3 &leftIntrinsic, const SquareMatrix3 &rightIntrinsic)
    Calculates the essential matrix from a given fundamental matrix and the two intrinsic camera matrices.
static SquareMatrix3 fundamental2essential(const SquareMatrix3 &right_F_left, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera)
    Calculates the essential matrix from the given fundamental matrix and the two cameras.
static bool epipoles(const SquareMatrix3 &right_F_left, Vector2 &leftEpipole, Vector2 &rightEpipole)
    Determines the two epipoles corresponding to a fundamental matrix.
static bool epipoles(const HomogenousMatrix4 &extrinsic, const SquareMatrix3 &leftIntrinsic, const SquareMatrix3 &rightIntrinsic, Vector2 &leftEpipole, Vector2 &rightEpipole)
    Determines the two epipoles corresponding to two cameras separated by an extrinsic camera matrix.
static bool epipolesFast(const SquareMatrix3 &fundamental, Vector2 &leftEpipole, Vector2 &rightEpipole)
    Finds the two epipoles corresponding to a fundamental matrix.
static Line2 leftEpipolarLine(const SquareMatrix3 &fundamental, const Vector2 &rightPoint)
    Returns the epipolar line in the left image corresponding to a given point in the right image.
static Line2 rightEpipolarLine(const SquareMatrix3 &fundamental, const Vector2 &leftPoint)
    Returns the epipolar line in the right image corresponding to a given point in the left image.
static size_t factorizeEssential(const SquareMatrix3 &normalizedRight_E_normalizedLeft, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera, const Vector2 *leftPoints, const Vector2 *rightPoints, const size_t correspondences, HomogenousMatrix4 &left_T_right)
    Factorizes an essential matrix into a 6-DOF camera pose composed of rotation and translation.
static bool rectificationHomography(const HomogenousMatrix4 &transformation, const PinholeCamera &pinholeCamera, SquareMatrix3 &leftHomography, SquareMatrix3 &rightHomography, Quaternion &appliedRotation, PinholeCamera *newCamera)
    Determines the homographies for two (stereo) frames, rectifying both images using the transformation between the left and the right camera.
static Vectors3 triangulateImagePoints(const HomogenousMatrix4 &world_T_cameraA, const HomogenousMatrix4 &world_T_cameraB, const AnyCamera &anyCameraA, const AnyCamera &anyCameraB, const Vector2 *imagePointsA, const Vector2 *imagePointsB, const size_t numberPoints, const bool onlyFrontObjectPoints=true, const Vector3 &invalidObjectPoint=Vector3(Numeric::minValue(), Numeric::minValue(), Numeric::minValue()), Indices32 *invalidIndices=nullptr)
    Calculates the 3D positions for a pair of image point correspondences with corresponding extrinsic camera transformations.
static ObjectPoints triangulateImagePointsIF(const PinholeCamera &camera1, const HomogenousMatrix4 &iFlippedPose1, const PinholeCamera &camera2, const HomogenousMatrix4 &iFlippedPose2, const Vector2 *points1, const Vector2 *points2, const size_t correspondences, const Vector3 &invalidObjectPoint=Vector3(Numeric::minValue(), Numeric::minValue(), Numeric::minValue()), Indices32 *invalidIndices=nullptr)
    Calculates the 3D positions for a set of image point correspondences with corresponding poses (Rt) in the inverted flipped camera system.
static ObjectPoints triangulateImagePointsIF(const ConstIndexedAccessor<HomogenousMatrix4> &posesIF, const ConstIndexedAccessor<Vectors2> &imagePointsPerPose, const PinholeCamera *pinholeCamera=nullptr, const Vector3 &invalidObjectPoint=Vector3(Numeric::minValue(), Numeric::minValue(), Numeric::minValue()), Indices32 *invalidIndices=nullptr)
    Calculates the 3D positions for a set of image point correspondences in multiple views with corresponding camera projection matrices (K * Rt) or poses (Rt) in the inverted flipped camera system.
Static Protected Member Functions
template<bool tRaysAreFlipped>
static bool essentialMatrix(const Vector3 *leftImageRays, const Vector3 *rightImageRays, const size_t correspondences, SquareMatrix3 &normalizedRight_E_normalizedLeft)
    Calculates the essential matrix based on corresponding viewing rays from the 'left' and 'right' camera.
static size_t validateCameraPose(const HomogenousMatrix4 &leftCamera_T_rightCamera, const PinholeCamera &leftCamera, const PinholeCamera &rightCamera, const Vector2 *leftPoints, const Vector2 *rightPoints, const size_t correspondences)
    Returns the number of 3D object points lying in front of two cameras for a given transformation between the two cameras.
static Line2 epipolarLine2Line(const Vector3 &line)
    Converts an epipolar line to a line object.
epipolarLine2Line() [inline, static, protected]

Converts an epipolar line to a line object.
line: The epipolar line to be converted
epipoles() [static]

Determines the two epipoles corresponding to two cameras separated by an extrinsic camera matrix.
The epipoles are calculated from the extrinsic camera matrix of the right camera relative to the left camera and the two intrinsic camera matrices of both cameras.
extrinsic: The extrinsic camera matrix of the right camera relative to the left camera (right_T_left)
leftIntrinsic: Intrinsic camera matrix of the left camera
rightIntrinsic: Intrinsic camera matrix of the right camera
leftEpipole: Resulting left epipole
rightEpipole: Resulting right epipole
epipoles() [static]

Determines the two epipoles corresponding to a fundamental matrix.
This method uses singular value decomposition for the calculation.
right_F_left: The fundamental matrix from which the epipoles are extracted, must be valid
leftEpipole: Resulting left epipole
rightEpipole: Resulting right epipole
epipolesFast() [static]

Finds the two epipoles corresponding to a fundamental matrix.
This method calculates the intersection of two epipolar lines. If no intersection can be found, the SVD-based calculation is used instead.
fundamental: The fundamental matrix to extract the epipoles from
leftEpipole: Resulting left epipole
rightEpipole: Resulting right epipole
essential2fundamental() [static]

Calculates the fundamental matrix from a given essential matrix and the two camera profiles.
normalizedRight_E_normalizedLeft: The essential matrix to convert, must be valid
leftCamera: The left camera profile defining the projection, must be a pure pinhole model without any distortion parameters
rightCamera: The right camera profile defining the projection, must be a pure pinhole model without any distortion parameters
essential2fundamental() [static]

Calculates the fundamental matrix from a given essential matrix and the two intrinsic camera matrices.
normalizedRight_E_normalizedLeft: The essential matrix to convert, must be valid
leftIntrinsic: The left intrinsic camera matrix, must be valid
rightIntrinsic: The right intrinsic camera matrix, must be valid
Returns: The resulting fundamental matrix 'right_F_left'
essentialMatrix() [static]

Calculates the essential matrix based on the 6-DOF camera pose between two cameras.
The resulting essential matrix is defined by the following equation:
normalizedRightPoint^T * normalizedRight_E_normalizedLeft * normalizedLeftPoint = 0, with normalizedRightPoint = [flippedObjectPoint.x() / flippedObjectPoint.z(), flippedObjectPoint.y() / flippedObjectPoint.z(), 1]^T
The flipped object points and the normalized image points are defined in the flipped camera coordinate system, with the default flipped camera pointing towards the positive z-space with the y-axis downwards.
rightCamera_T_leftCamera: The transformation transforming the left camera to the right camera, with the default camera pointing towards the negative z-space with the y-axis upwards, must be valid
essentialMatrix() [static]

Calculates the essential matrix based on corresponding viewing rays from the 'left' and 'right' camera.
This function implements the 8-point algorithm based on corresponding viewing rays. The resulting essential matrix is defined by the following equation:
rightViewingRay^T * normalizedRight_E_normalizedLeft * leftViewingRay = 0
The normalized image points are defined in the flipped camera coordinate system, with the default flipped camera pointing towards the positive z-space with the y-axis downwards.
leftImageRays: The 3D rays with unit length, defined in the coordinate system of the left camera, starting at the camera's center of projection (equal to the origin) and hitting the image plane at known image points
rightImageRays: The 3D rays with unit length, defined in the coordinate system of the right camera, starting at the camera's center of projection (equal to the origin) and hitting the image plane at known image points, one for each left image ray
correspondences: The number of bearing-vector correspondences, with range [8, infinity)
normalizedRight_E_normalizedLeft: The resulting essential matrix
essentialMatrix<tRaysAreFlipped>() [static, protected]

Calculates the essential matrix based on corresponding viewing rays from the 'left' and 'right' camera.
This function implements the 8-point algorithm based on corresponding viewing rays. The resulting essential matrix is defined by the following equation:
rightViewingRay^T * normalizedRight_E_normalizedLeft * leftViewingRay = 0
The normalized image points are defined in the flipped camera coordinate system, with the default flipped camera pointing towards the positive z-space with the y-axis downwards.
leftImageRays: The 3D rays with unit length, defined in the coordinate system of the left camera, starting at the camera's center of projection (equal to the origin) and hitting the image plane at known image points
rightImageRays: The 3D rays with unit length, defined in the coordinate system of the right camera, starting at the camera's center of projection (equal to the origin) and hitting the image plane at known image points, one for each left image ray
correspondences: The number of bearing-vector correspondences, with range [8, infinity)
normalizedRight_E_normalizedLeft: The resulting essential matrix
tRaysAreFlipped: True to interpret the rays as flipped (pointing towards the positive z-space with the y-axis downwards); False to interpret the rays as normal (pointing towards the negative z-space with the y-axis upwards)
essentialMatrixF() [static]

Calculates the essential matrix based on corresponding viewing rays from the 'left' and 'right' camera.
This function implements the 8-point algorithm based on corresponding viewing rays. The resulting essential matrix is defined by the following equation:
rightViewingRay^T * normalizedRight_E_normalizedLeft * leftViewingRay = 0
The normalized image points are defined in the flipped camera coordinate system, with the default flipped camera pointing towards the positive z-space with the y-axis downwards.
flippedLeftImageRays: The flipped 3D rays with unit length, defined in the flipped coordinate system of the left camera, starting at the camera's center of projection (equal to the origin) and hitting the image plane at known image points
flippedRightImageRays: The flipped 3D rays with unit length, defined in the flipped coordinate system of the right camera, starting at the camera's center of projection (equal to the origin) and hitting the image plane at known image points, one for each left image ray
correspondences: The number of bearing-vector correspondences, with range [8, infinity)
normalizedRight_E_normalizedLeft: The resulting essential matrix
factorizeEssential() [static]

Factorizes an essential matrix into a 6-DOF camera pose composed of rotation and translation.
Beware: The translation can be determined up to a scale factor only.
The factorization provides the camera pose of the right camera, while the left camera is located at the origin with identity pose.
The resulting transformation transforms points defined in the right camera coordinate system into points defined in the left camera coordinate system: pointLeft = left_T_right * pointRight.
normalizedRight_E_normalizedLeft: The essential matrix to be factorized, must be valid
leftCamera: The left camera profile defining the projection, must be valid
rightCamera: The right camera profile defining the projection, must be valid
leftPoints: All image points in the left image to be checked whether they produce 3D object points lying in front of the camera, must be valid
rightPoints: All image points in the right image, one for each left point, must be valid
correspondences: The number of point correspondences, with range [1, infinity)
left_T_right: The resulting transformation between the left and the right camera, transforming points from right to left
fundamental2essential() [static]

Calculates the essential matrix from the given fundamental matrix and the two cameras.
right_F_left: The fundamental matrix to convert into an essential matrix, must be valid
leftCamera: The left camera profile defining the projection, must be a pure pinhole model without any distortion parameters
rightCamera: The right camera profile defining the projection, must be a pure pinhole model without any distortion parameters
fundamental2essential() [static]

Calculates the essential matrix from a given fundamental matrix and the two intrinsic camera matrices.
right_F_left: The fundamental matrix to convert into an essential matrix, must be valid
leftIntrinsic: The left intrinsic camera matrix
rightIntrinsic: The right intrinsic camera matrix
fundamentalMatrix() [static]

Calculates the fundamental matrix based on corresponding image points in a 'left' and a 'right' camera image.
The resulting fundamental matrix is defined by the following equation:
rightPoint^T * right_F_left * leftPoint = 0
leftPoints: The left image points, must be valid
rightPoints: The right image points, one for each left point, must be valid
correspondences: The number of point correspondences, with range [8, infinity)
right_F_left: The resulting fundamental matrix
leftEpipolarLine() [inline, static]

Returns the epipolar line in the left image corresponding to a given point in the right image.
fundamental: The fundamental matrix
rightPoint: The point in the right image
rectificationHomography() [static]

Determines the homographies for two (stereo) frames, rectifying both images using the transformation between the left and the right camera.
As the resulting homography may not cover the entire input images when using the same camera profile, a new (perfect) camera profile can be calculated instead.
Thus, the resulting rectified images will have a larger field of view but will cover the entire input frame data.
The projection center of the left camera is expected to be at the origin of the world coordinate system.
The viewing direction of both cameras is towards the negative z-axis in their particular coordinate systems.
The given transformation is equal to the extrinsic camera matrix of the right camera and thus transforms points defined in the right camera coordinate system to points defined in the left camera coordinate system.
The resulting homography transformations transform 3D rectified image points (homogeneous 2D coordinates) into 3D unrectified image points in their particular coordinate system.
The coordinate system of the 3D image points has its origin in the top-left corner, with the x-axis pointing to the right, the y-axis pointing to the bottom, and the z-axis pointing to the back of the image.
transformation: Extrinsic camera matrix of the right camera with the negative z-axis as viewing direction
pinholeCamera: The pinhole camera profile used for both images
leftHomography: Resulting left homography
rightHomography: Resulting right homography
appliedRotation: Resulting rotation applied to both cameras
newCamera: Optional resulting new camera profile covering the entire input image data in the output frames, otherwise nullptr
reverseFundamentalMatrix() [inline, static]

Returns the reversed fundamental matrix.
The matrix only needs to be transposed: the constraint rightPoint^T * right_F_left * leftPoint = 0 is equivalent to leftPoint^T * right_F_left^T * rightPoint = 0, so left_F_right = right_F_left^T.
right_F_left: The fundamental matrix satisfying the equation rightPoint^T * right_F_left * leftPoint = 0
rightEpipolarLine() [inline, static]

Returns the epipolar line in the right image corresponding to a given point in the left image.
fundamental: The fundamental matrix
leftPoint: The point in the left image
triangulateImagePoints() [static]

Calculates the 3D positions for a pair of image point correspondences with corresponding extrinsic camera transformations.
world_T_cameraA: The extrinsic camera transformation of the first camera, with the camera pointing towards the negative z-space, the y-axis pointing up, and the x-axis pointing to the right, must be valid
world_T_cameraB: The extrinsic camera transformation of the second camera, with the camera pointing towards the negative z-space, the y-axis pointing up, and the x-axis pointing to the right, must be valid
anyCameraA: The first camera profile, must be valid
anyCameraB: The second camera profile, must be valid
imagePointsA: The 2D image points in the first image; each point must correspond to the point with the same index from the second image
imagePointsB: The 2D image points in the second image; each point must correspond to the point with the same index from the first image
numberPoints: The number of point correspondences, with range [0, infinity)
onlyFrontObjectPoints: True to accept only object points lying in front of both cameras; False to accept all triangulated points
invalidObjectPoint: Optional location of an invalid object point, used as the value for all object points which cannot be determined, e.g., because of parallel projection rays
invalidIndices: Optional resulting indices of the object points with an invalid location
triangulateImagePointsIF() [static]

Calculates the 3D positions for a set of image point correspondences in multiple views with corresponding camera projection matrices (K * Rt) or poses (Rt) in the inverted flipped camera system.
This linear triangulation uses singular value decomposition.
posesIF: The given poses or projection matrices, one per view
imagePointsPerPose: The set of 2D image points per view; each point must correspond to the points with the same index in the other views
pinholeCamera: The pinhole camera profile, one for all views; if no camera profile is given, the posesIF act as projection matrices
invalidObjectPoint: Optional location of an invalid object point, used as the value for all object points which cannot be determined, e.g., because of parallel projection rays
invalidIndices: Optional resulting indices of the object points with an invalid location
triangulateImagePointsIF() [static]

Calculates the 3D positions for a set of image point correspondences with corresponding poses (Rt) in the inverted flipped camera system.
This linear triangulation uses singular value decomposition.
If an object point cannot be determined, then the resulting object point is set to invalidObjectPoint.
camera1: The camera profile used for the first image
iFlippedPose1: The given inverted flipped pose of the first camera
camera2: The camera profile used for the second image
iFlippedPose2: The given inverted flipped pose of the second camera
points1: The 2D image points in the first image; each point must correspond to the point with the same index in the second image
points2: The 2D image points in the second image
correspondences: The number of point correspondences, with range [1, infinity)
invalidObjectPoint: Optional location of an invalid object point, used as the value for all object points which cannot be determined, e.g., because of parallel projection rays
invalidIndices: Optional resulting indices of the object points with an invalid location
validateCameraPose() [static, protected]

Returns the number of 3D object points lying in front of two cameras for a given transformation between the two cameras.
The pose of the first camera is the identity transformation (located at the origin), while the pose of the second camera is defined by the given transformation.
leftCamera_T_rightCamera: The transformation between the right and the left camera, must be valid
leftCamera: The left camera profile defining the projection, must be valid
rightCamera: The right camera profile defining the projection, must be valid
leftPoints: The left image points, must be valid if correspondences != 0
rightPoints: The right image points, one for each left point, must be valid if correspondences != 0
correspondences: The number of provided point correspondences, with range [0, infinity)