Ocean
This class implements a Structure From Motion solver for unconstrained 3D object points and unconstrained 6-DOF camera poses. More...
Data Structures | |
class | ObjectPointToPoseImagePointCorrespondenceAccessor |
This class implements an accessor for groups of pairs of pose indices (not pose ids) and image points. More... | |
class | ObjectPointToPoseIndexImagePointCorrespondenceAccessor |
This class implements an accessor providing access to observation pairs (the observations of projected object points in individual camera poses/frames) for a set of object points. More... | |
class | PoseToObjectPointIdImagePointCorrespondenceAccessor |
This class implements an accessor for groups of pairs of object point ids and image points. More... | |
class | RelativeThreshold |
Definition of a class that defines a relative threshold with a lower and an upper boundary for individual reference values. More... | |
Public Types | |
enum | CameraMotion { CM_INVALID = 0 , CM_STATIC = (1 << 0) , CM_ROTATIONAL = (1 << 1) , CM_TRANSLATIONAL = (1 << 2) , CM_ROTATIONAL_TINY = CM_ROTATIONAL | (1 << 3) , CM_ROTATIONAL_MODERATE = CM_ROTATIONAL | (1 << 4) , CM_ROTATIONAL_SIGNIFICANT = CM_ROTATIONAL | (1 << 5) , CM_TRANSLATIONAL_TINY = CM_TRANSLATIONAL | (1 << 6) , CM_TRANSLATIONAL_MODERATE = CM_TRANSLATIONAL | (1 << 7) , CM_TRANSLATIONAL_SIGNIFICANT = CM_TRANSLATIONAL | (1 << 8) , CM_UNKNOWN = CM_ROTATIONAL | CM_TRANSLATIONAL | (1 << 9) } |
Definition of individual camera motion types. More... | |
enum | AccuracyMethod { AM_INVALID , AM_MEAN_DIRECTION_MIN_COSINE , AM_MEAN_DIRECTION_MEAN_COSINE , AM_MEAN_DIRECTION_MEDIAN_COSINE } |
Definition of individual methods to determine the accuracy of object points. More... | |
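The CameraMotion values above are bit fields: each magnitude-specific value (tiny, moderate, significant) keeps the generic CM_ROTATIONAL or CM_TRANSLATIONAL bit set, so a returned motion can be tested with bitwise operations. A minimal C++ sketch; the enclosing class name Solver3 is an assumption, as this page does not state it.

// The magnitude-specific motion values still contain the generic motion bits,
// so bitwise tests against CM_ROTATIONAL / CM_TRANSLATIONAL are sufficient.
// 'Solver3' as the enclosing class name is an assumption of this sketch.
static_assert((Solver3::CM_TRANSLATIONAL_SIGNIFICANT & Solver3::CM_TRANSLATIONAL) != 0, "significant translation keeps the translational bit");
static_assert((Solver3::CM_ROTATIONAL_TINY & Solver3::CM_ROTATIONAL) != 0, "tiny rotation keeps the rotational bit");
static_assert((Solver3::CM_UNKNOWN & (Solver3::CM_ROTATIONAL | Solver3::CM_TRANSLATIONAL)) == (Solver3::CM_ROTATIONAL | Solver3::CM_TRANSLATIONAL), "unknown motion covers both motion types");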
Static Public Member Functions | |
static bool | determineInitialObjectPointsFromSparseKeyFrames (const Database &database, const PinholeCamera &pinholeCamera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const Scalar maximalStaticImagePointFilterRatio, Vectors3 &initialObjectPoints, Indices32 &initialObjectPointIds, const RelativeThreshold &pointsThreshold=RelativeThreshold(20u, Scalar(0.5), 100u), const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *usedPoseIds=nullptr, Scalar *finalSqrError=nullptr, Scalar *finalImagePointDistance=nullptr, bool *abort=nullptr) |
Determines the initial positions of 3D object points in a database if no camera poses or structure information is known. More... | |
static bool | determineInitialObjectPointsFromDenseFrames (const Database &database, const PinholeCamera &pinholeCamera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const CV::SubRegion ®ionOfInterest, const Scalar maximalStaticImagePointFilterRatio, Vectors3 &initialObjectPoints, Indices32 &initialObjectPointIds, const RelativeThreshold &pointsThreshold=RelativeThreshold(20u, Scalar(0.5), 100u), const Scalar minimalTrackedFramesRatio=Scalar(0.1), const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *usedPoseIds=nullptr, Scalar *finalSqrError=nullptr, bool *abort=nullptr) |
Determines the initial positions of 3D object points in a database if no camera poses or structure information is known. More... | |
static bool | determineInitialObjectPointsFromSparseKeyFramesBySteps (const Database &database, const unsigned int steps, const PinholeCamera &pinholeCamera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const Scalar maximalStaticImagePointFilterRatio, Vectors3 &initialObjectPoints, Indices32 &initialObjectPointIds, const RelativeThreshold &pointsThreshold=RelativeThreshold(20u, Scalar(0.5), 100u), const unsigned int minimalKeyFrames=2u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *usedPoseIds=nullptr, Worker *worker=nullptr, bool *abort=nullptr) |
Determines the initial positions of 3D object points in a database if no camera poses or structure information is known. More... | |
static bool | determineInitialObjectPointsFromSparseKeyFramesRANSAC (const PinholeCamera &pinholeCamera, const Database::ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, HomogenousMatrices4 &validPoses, Indices32 &validPoseIndices, Vectors3 &objectPoints, Indices32 &validObjectPointIndices, const unsigned int iterations=20u, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(10u, Scalar(0.3), 20u), const Scalar maximalSqrError=Scalar(3.5 *3.5), const Database *database=nullptr, const Indices32 *keyFrameIds=nullptr, const Indices32 *objectPointIds=nullptr, bool *abort=nullptr) |
Determines the initial object point positions for a set of key frames (image point groups) observing the unique object points in individual camera poses. More... | |
static bool | determineInitialObjectPointsFromDenseFramesRANSAC (const PinholeCamera &pinholeCamera, const ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, HomogenousMatrices4 &validPoses, Indices32 &validPoseIds, Vectors3 &objectPoints, Indices32 &validObjectPointIndices, const unsigned int iterations=20u, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(10u, Scalar(0.3), 20u), const Scalar maximalSqrError=Scalar(3.5 *3.5), Worker *worker=nullptr, bool *abort=nullptr) |
Determines the initial object point positions for a set of frames (image point groups) observing the unique object points in individual camera poses. More... | |
static bool | determineInitialObjectPointsFromSparseKeyFrames (const PinholeCamera &pinholeCamera, const Database::ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, const unsigned int firstGroupIndex, const unsigned int secondGroupIndex, HomogenousMatrices4 &poses, Indices32 &validPoseIndices, Vectors3 &objectPoints, Indices32 &validObjectPointIndices, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(10u, Scalar(0.3), 20u), const Scalar maximalSqrError=Scalar(3.5 *3.5)) |
Determines the initial object point positions for a set of key frames (image point groups) observing unique object points. More... | |
static bool | determineInitialObjectPointsFromDenseFrames (const PinholeCamera &pinholeCamera, const ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, const unsigned int firstGroupIndex, const unsigned int secondGroupIndex, HomogenousMatrices4 &validPoses, Indices32 &validPoseIds, Scalar &totalSqrError, Vectors3 &objectPoints, Indices32 &validObjectPointIndices, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(10u, Scalar(0.3), 20u), const Scalar maximalSqrError=Scalar(3.5 *3.5)) |
Determines the initial object point positions for a set of image point groups (covering a range of image frames) observing the unique object points in individual frames. More... | |
static bool | optimizeInitialObjectPoints (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const Vectors3 &initialObjectPoints, const Indices32 &initialObjectPointIds, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, const unsigned int minimalObjectPoints=5u, const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *usedPoseIds=nullptr, Scalar *initialSqrError=nullptr, Scalar *finalSqrError=nullptr, bool *abort=nullptr) |
Optimizes the positions of already known initial 3D object points when a given database holds neither valid 3D positions nor valid 6-DOF poses. More... | |
static bool | determineUnknownObjectPoints (const Database &database, const AnyCamera &camera, const unsigned int lowerFrame, const unsigned int upperFrame, Vectors3 &newObjectPoints, Indices32 &newObjectPointIds, const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Worker *worker=nullptr, bool *abort=nullptr) |
Determines the positions of new object points from a database within a specified frame range. More... | |
static bool | determineUnknownObjectPoints (const Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, const Indices32 &unknownObjectPointIds, Vectors3 &newObjectPoints, Indices32 &newObjectPointIds, RandomGenerator &randomGenerator, Indices32 *newObjectPointObservations=nullptr, const unsigned int minimalObservations=2u, const bool useAllObservations=true, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar averageRobustError=Scalar(3.5 *3.5), const Scalar maximalSqrError=Numeric::maxValue(), Worker *worker=nullptr, bool *abort=nullptr) |
Determines the positions of a set of (currently unknown) object points. More... | |
static bool | determineUnknownObjectPoints (const Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, Vectors3 &newObjectPoints, Indices32 &newObjectPointIds, RandomGenerator &randomGenerator, Indices32 *newObjectPointObservations=nullptr, const Scalar minimalObjectPointPriority=Scalar(-1), const unsigned int minimalObservations=10u, const bool useAllObservations=true, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar averageRobustError=Scalar(3.5 *3.5), const Scalar maximalSqrError=Numeric::maxValue(), Worker *worker=nullptr, bool *abort=nullptr) |
Determines the positions of all (currently unknown) object points. More... | |
template<bool tVisibleInAllPoses> | |
static bool | determineUnknownObjectPoints (const Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, const Index32 lowerPoseId, const Index32 upperPoseId, Vectors3 &newObjectPoints, Indices32 &newObjectPointIds, RandomGenerator &randomGenerator, Indices32 *newObjectPointObservations=nullptr, const Scalar minimalObjectPointPriority=Scalar(-1), const unsigned int minimalObservations=10u, const bool useAllObservations=true, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar averageRobustError=Scalar(3.5 *3.5), const Scalar maximalSqrError=Numeric::maxValue(), Worker *worker=nullptr, bool *abort=nullptr) |
Determines the positions of (currently unknown) object points which are visible in specified poses (the poses are specified by a lower and upper frame range). More... | |
static bool | optimizeObjectPointsWithFixedPoses (const Database &database, const PinholeCamera &pinholeCamera, const CameraMotion cameraMotion, const Indices32 &objectPointIds, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar maximalRobustError=Scalar(3.5 *3.5), Worker *worker=nullptr, bool *abort=nullptr) |
Optimizes a set of 3D object points (having a quite good accuracy already) without optimizing the camera poses concurrently. More... | |
static bool | optimizeObjectPointsWithVariablePoses (const Database &database, const PinholeCamera &pinholeCamera, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, Indices32 *optimizedKeyFramePoseIds=nullptr, const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=20u, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr) |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently. More... | |
static bool | optimizeObjectPointsWithVariablePoses (const Database &database, const PinholeCamera &pinholeCamera, const Indices32 &keyFrameIds, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr) |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently. More... | |
static bool | optimizeObjectPointsWithVariablePoses (const Database &database, const PinholeCamera &pinholeCamera, const Indices32 &keyFrameIds, const Indices32 &objectPointIds, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr) |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently. More... | |
static bool | optimizeObjectPointsWithVariablePoses (const Database &database, const PinholeCamera &pinholeCamera, const Index32 lowerPoseId, const Index32 upperPoseId, const Indices32 &objectPointIds, Indices32 &usedKeyFrameIds, Vectors3 &optimizedObjectPoints, const unsigned int minimalObservations=10u, const unsigned int minimalKeyFrames=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr) |
static bool | supposeRotationalCameraMotion (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int lowerFrame, const unsigned int upperFrame, const bool findInitialFieldOfView, const PinholeCamera::OptimizationStrategy optimizationStrategy, PinholeCamera &optimizedCamera, Database &optimizedDatabase, const unsigned int minimalObservations=0u, const unsigned int minimalKeyframes=3u, const unsigned int maximalKeyframes=20u, const Scalar lowerFovX=Numeric::deg2rad(20), const Scalar upperFovX=Numeric::deg2rad(140), const Scalar maximalSqrError=(1.5 *1.5), Worker *worker=nullptr, bool *abort=nullptr, Scalar *finalMeanSqrError=nullptr) |
Supposes pure rotational camera motion for a given database with stable camera poses determined by initial but stable object points. More... | |
static bool | optimizeCamera (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int lowerFrame, const unsigned int upperFrame, const bool findInitialFieldOfView, const PinholeCamera::OptimizationStrategy optimizationStrategy, PinholeCamera &optimizedCamera, Database &optimizedDatabase, const unsigned int minimalObservationsInKeyframes=2u, const unsigned int minimalKeyframes=3u, const unsigned int maximalKeyframes=20u, const Scalar lowerFovX=Numeric::deg2rad(20), const Scalar upperFovX=Numeric::deg2rad(140), Worker *worker=nullptr, bool *abort=nullptr, Scalar *finalMeanSqrError=nullptr) |
Optimizes the camera profile for a given database with stable camera poses determined by initial but stable object points. More... | |
static bool | optimizeCameraWithVariableObjectPointsAndPoses (const Database &database, const PinholeCamera &pinholeCamera, const PinholeCamera::OptimizationStrategy optimizationStrategy, PinholeCamera &optimizedCamera, Vectors3 *optimizedObjectPoints=nullptr, Indices32 *optimizedObjectPointIds=nullptr, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, Indices32 *optimizedKeyFramePoseIds=nullptr, const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=20u, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr) |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently. More... | |
static bool | optimizeCameraWithVariableObjectPointsAndPoses (const Database &database, const PinholeCamera &pinholeCamera, const PinholeCamera::OptimizationStrategy optimizationStrategy, const Indices32 &keyFrameIds, PinholeCamera &optimizedCamera, Vectors3 *optimizedObjectPoints=nullptr, Indices32 *optimizedObjectPointIds=nullptr, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr) |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently. More... | |
static bool | optimizeCameraWithVariableObjectPointsAndPoses (const Database &database, const PinholeCamera &pinholeCamera, const PinholeCamera::OptimizationStrategy optimizationStrategy, const Indices32 &keyFrameIds, const Indices32 &objectPointIds, PinholeCamera &optimizedCamera, Vectors3 *optimizedObjectPoints=nullptr, Indices32 *optimizedObjectPointIds=nullptr, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr) |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently. More... | |
static bool | updatePoses (Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar maximalRobustError=Scalar(3.5 *3.5), Scalar *finalAverageError=nullptr, size_t *validPoses=nullptr, bool *abort=nullptr) |
Updates the camera poses of the database depending on valid 2D/3D point correspondences within a range of camera frames. More... | |
static bool | updatePoses (Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar maximalRobustError=Scalar(3.5 *3.5), Scalar *finalAverageError=nullptr, size_t *validPoses=nullptr, Worker *worker=nullptr, bool *abort=nullptr) |
Updates the camera poses of the database depending on valid 2D/3D point correspondences within a range of camera frames. More... | |
static bool | determinePoses (const Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, const IndexSet32 &priorityObjectPointIds, const bool solePriorityPoints, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, ShiftVector< HomogenousMatrix4 > &poses, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar maximalRobustError=Scalar(3.5 *3.5), Scalar *finalAverageError=nullptr, Worker *worker=nullptr, bool *abort=nullptr) |
Determines the camera poses depending on valid 2D/3D point correspondences within a range of camera frames. More... | |
static bool | trackObjectPoints (const Database &database, const Indices32 &objectPointIds, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const unsigned int minimalTrackedObjectPoints, const unsigned int minimalTrackedFrames, const unsigned int maximalTrackedObjectPoints, Indices32 &trackedObjectPointIds, ImagePointGroups &trackedImagePointGroups, Indices32 *trackedValidIndices=nullptr, bool *abort=nullptr) |
This function tracks image points (defined by their object points) from one frame to the sibling frames as long as the number of tracked points does not fall below a specified number or as long as a minimal number of sibling frames has been processed. More... | |
static bool | trackObjectPoints (const Database &database, const Indices32 &priorityObjectPointIds, const Indices32 &remainingObjectPointIds, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const unsigned int minimalTrackedPriorityObjectPoints, const Scalar minimalRemainingFramesRatio, const unsigned int maximalTrackedPriorityObjectPoints, const unsigned int maximalTrackedRemainingObjectPoints, Indices32 &trackedObjectPointIds, ImagePointGroups &trackedImagePointGroups, Indices32 *trackedValidPriorityIndices=nullptr, Indices32 *trackedValidRemainingIndices=nullptr, bool *abort=nullptr) |
This function tracks two individual (disjoint) groups of image points (defined by their object points) from one frame to the sibling frames as long as the number of tracked points does not fall below a specified number. More... | |
static Indices32 | trackObjectPointsToNeighborFrames (const Database &database, const Indices32 &objectPointIds, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame) |
This function tracks a group of object points from one frame to both (if available) neighbor frames and counts the minimal number of tracked points. More... | |
static Indices32 | determineRepresentativePoses (const Database &database, const unsigned int lowerFrame, const unsigned int upperFrame, const size_t numberRepresentative) |
Determines a set of representative camera poses from a given database within a specified frame range. More... | |
static Indices32 | determineRepresentativePoses (const Database &database, const Indices32 &poseIds, const size_t numberRepresentative) |
Determines a set of representative camera poses from a given database from a set of given camera poses. More... | |
static HomogenousMatrix4 | determinePose (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const unsigned int minimalCorrespondences=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, unsigned int *correspondences=nullptr) |
Determines the camera 6-DOF pose for a specific camera frame. More... | |
static HomogenousMatrix4 | determinePose (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const IndexSet32 &priorityObjectPointIds, const bool solePriorityPoints, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const unsigned int minimalCorrespondences=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, unsigned int *correspondences=nullptr) |
Determines the camera 6-DOF pose for a specific camera frame. More... | |
static HomogenousMatrix4 | determinePose (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< Index32 > &objectPointIds, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr) |
Determines the camera 6-DOF pose for a specific camera frame. More... | |
static HomogenousMatrix4 | determinePose (const AnyCamera &camera, RandomGenerator &randomGenerator, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< ImagePoint > &imagePoints, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, Indices32 *validIndices=nullptr) |
Determines the camera 6-DOF pose for a set of object point and image point correspondences. More... | |
static HomogenousMatrix4 | determinePose (const AnyCamera &camera, RandomGenerator &randomGenerator, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< ImagePoint > &imagePoints, const size_t priorityCorrespondences, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr) |
Determines the camera 6-DOF pose for a set of object point and image point correspondences. More... | |
static SquareMatrix3 | determineOrientation (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const unsigned int minimalCorrespondences=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, unsigned int *correspondences=nullptr) |
Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame. More... | |
static SquareMatrix3 | determineOrientation (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const IndexSet32 &priorityObjectPointIds, const bool solePriorityPoints, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const unsigned int minimalCorrespondences=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, unsigned int *correspondences=nullptr) |
Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame. More... | |
static SquareMatrix3 | determineOrientation (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const ObjectPoint *objectPoints, const Index32 *objectPointIds, const size_t numberObjectPoints, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr) |
Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame. More... | |
static SquareMatrix3 | determineOrientation (const AnyCamera &camera, RandomGenerator &randomGenerator, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< ImagePoint > &imagePoints, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, Indices32 *validIndices=nullptr) |
Determines the camera 3-DOF orientation for a set of object point and image point correspondences. More... | |
static SquareMatrix3 | determineOrientation (const AnyCamera &camera, RandomGenerator &randomGenerator, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< ImagePoint > &imagePoints, const size_t priorityCorrespondences, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr) |
Determines the camera 3-DOF orientation for a set of object point and image point correspondences. More... | |
static size_t | determineValidPoses (const AnyCamera &camera, const Vectors3 &objectPoints, const ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, const CameraMotion cameraMotion, const unsigned int firstValidPoseIndex, const HomogenousMatrix4 &firstValidPose, const unsigned int secondValidPoseIndex, const HomogenousMatrix4 &secondValidPose, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *validObjectPointIndices=nullptr, HomogenousMatrices4 *poses=nullptr, Indices32 *poseIds=nullptr, Scalar *totalSqrError=nullptr) |
Determines valid poses for a range of camera frames while for each frame a group of image points is given which correspond to the given object points. More... | |
static CameraMotion | determineCameraMotion (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int lowerFrame, const unsigned int upperFrame, const bool onlyVisibleObjectPoints=true, Worker *worker=nullptr, const Scalar minimalTinyTranslationObservationAngle=Numeric::deg2rad(Scalar(0.15)), const Scalar minimalModerateTranslationObservationAngle=Numeric::deg2rad(1), const Scalar minimalSignificantTranslationObservationAngle=Numeric::deg2rad(5), const Scalar minimalTinyRotationAngle=Numeric::deg2rad(Scalar(0.25)), const Scalar minimalModerateRotationAngle=Numeric::deg2rad(5), const Scalar minimalSignificantRotationAngle=Numeric::deg2rad(10)) |
Determines the camera motion from the camera poses within a specified frame range covering only valid poses. More... | |
static Scalar | determineObjectPointAccuracy (const PinholeCamera &pinholeCamera, const HomogenousMatrix4 *poses, const Vector2 *imagePoints, const size_t observations, const AccuracyMethod accuracyMethod) |
Measures the accuracy of a 3D object point in combination with a set of camera poses and image points (the projections of the object point). More... | |
static Scalars | determineObjectPointsAccuracy (const Database &database, const PinholeCamera &pinholeCamera, const Indices32 &objectPointIds, const AccuracyMethod accuracyMethhod, const unsigned int lowerFrame=(unsigned int)(-1), const unsigned int upperFrame=(unsigned int)(-1), Worker *worker=nullptr) |
Measures the accuracy of several 3D object points. More... | |
static void | determineProjectionErrors (const AnyCamera &camera, const Vector3 &objectPoint, const ConstIndexedAccessor< HomogenousMatrix4 > &world_T_cameras, const ConstIndexedAccessor< Vector2 > &imagePoints, Scalar *minimalSqrError=nullptr, Scalar *averageSqrError=nullptr, Scalar *maximalSqrError=nullptr) |
Determines the projection errors of a 3D object point in combination with a set of camera poses and image points (the projections of the object point). More... | |
static bool | determineProjectionError (const Database &database, const PinholeCamera &pinholeCamera, const Index32 poseId, const bool useDistortionParameters, unsigned int *validCorrespondences=nullptr, Scalar *minimalSqrError=nullptr, Scalar *averageSqrError=nullptr, Scalar *maximalSqrError=nullptr) |
Determines the accuracy of a camera pose for all valid object points visible in the frame by measuring the projection error between the projected object points and their corresponding image points. More... | |
static bool | determineProjectionErrors (const Database &database, const PinholeCamera &pinholeCamera, const Indices32 &objectPointIds, const bool useDistortionParameters, const unsigned int lowerFrame=(unsigned int)(-1), const unsigned int upperFrame=(unsigned int)(-1), Scalar *minimalSqrErrors=nullptr, Scalar *averagedSqrErrors=nullptr, Scalar *maximalSqrErrors=nullptr, unsigned int *observations=nullptr, Worker *worker=nullptr) |
Determines the averaged and maximal squared pixel errors between the projections of individual 3D object points and their corresponding image points in individual camera frames. More... | |
static void | determinePosesOrientation (const Database &database, const unsigned int lowerFrame, const unsigned int upperFrame, Scalar *xOrientations, Scalar *yOrientations, Scalar *zOrientations) |
Determines the individual cosine values between the mean coordinate axis of a range of poses and the coordinate axis of the individual poses. More... | |
static bool | determineNumberCorrespondences (const Database &database, const bool needValidPose, const unsigned int lowerFrame, const unsigned int upperFrame, unsigned int *minimalCorrespondences=nullptr, Scalar *averageCorrespondences=nullptr, unsigned int *medianCorrespondences=nullptr, unsigned int *maximalCorrespondences=nullptr, Worker *worker=nullptr) |
Determines the number of valid correspondences between image points and object points for each frame within a specified frame range. More... | |
static bool | determinePlane (const ConstIndexedAccessor< Vector3 > &objectPoints, RandomGenerator &randomGenerator, Plane3 &plane, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(3u, Scalar(0.5), 20u), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_HUBER, Scalar *finalError=nullptr, Indices32 *validIndices=nullptr) |
Determines a 3D plane best fitting to a set of given 3D object points. More... | |
static bool | determinePlane (const Database &database, const Indices32 &objectPointIds, RandomGenerator &randomGenerator, Plane3 &plane, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(3u, Scalar(0.5), 20u), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_HUBER, Scalar *finalError=nullptr, Indices32 *validIndices=nullptr) |
Determines a 3D plane best fitting to a set of given 3D object point ids. More... | |
static bool | determinePlane (const Database &database, const Index32 frameIndex, const CV::SubRegion &subRegion, RandomGenerator &randomGenerator, Plane3 &plane, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(3u, Scalar(0.5), 20u), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_HUBER, Scalar *finalError=nullptr, Indices32 *usedObjectPointIds=nullptr) |
Determines a 3D plane best fitting to a set of given 3D object point ids which are specified by a given sub-region in the camera frame. More... | |
static bool | determinePlane (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int lowerFrameIndex, const unsigned int subRegionFrameIndex, const unsigned int upperFrameIndex, const CV::SubRegion &subRegion, RandomGenerator &randomGenerator, Plane3 &plane, const bool useDistortionParameters, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(3u, Scalar(0.5), 20u), const Scalar medianDistanceFactor=Scalar(6), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_HUBER, Scalar *finalError=nullptr, Indices32 *usedObjectPointIds=nullptr) |
Determines a 3D plane best fitting to image points in a specified sub-region in a specified frame and best fitting to this area visible in a specified frame range. More... | |
static bool | determinePerpendicularPlane (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int frameIndex, const Vector2 &imagePoint, const Scalar distance, Plane3 &plane, const bool useDistortionParameters, Vector3 *pointOnPlane=nullptr) |
Determines a 3D plane perpendicular to the camera with specified distance to the camera. More... | |
static bool | determinePerpendicularPlane (const PinholeCamera &pinholeCamera, const HomogenousMatrix4 &pose, const Vector2 &imagePoint, const Scalar distance, Plane3 &plane, const bool useDistortionParameters, Vector3 *pointOnPlane=nullptr) |
Determines a 3D plane perpendicular to the camera with specified distance to the camera. More... | |
static bool | removeSparseObjectPoints (Database &database, const Scalar minimalBoundingBoxDiagonal=Scalar(1e+7), const Scalar medianFactor=Scalar(100), const Scalar maximalSparseObjectPointRatio=Scalar(0.05)) |
Removes very far object points from the database if the amount of object points does not exceed a specified ratio (compared to the remaining object points). More... | |
static size_t | removeObjectPointsNotInFrontOfCamera (Database &database, Indices32 *removedObjectPointIds=nullptr) |
Removes all valid 3D object points (and their corresponding 2D image points) from the database which, in at least one frame in which they have a 2D image point as observation, are not located in front of the camera. More... | |
static size_t | removeObjectPointsWithoutEnoughObservations (Database &database, const size_t minimalNumberObservations, Indices32 *removedObjectPointIds=nullptr) |
Removes any 3D object point (and its corresponding 2D image points) from the database with fewer than a specified number of observations. More... | |
static size_t | removeObjectPointsWithSmallBaseline (Database &database, const Scalar minimalBoxDiagonal, Indices32 *removedObjectPointIds=nullptr) |
Removes any 3D object point (and its corresponding 2D image points) from the database if all of its corresponding camera poses are located within a bounding box that is too small. More... | |
static std::string | translateCameraMotion (const CameraMotion cameraMotion) |
Translates a camera motion value into a readable string describing the motion in detail. More... | |
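Taken together, these functions form a typical incremental Structure From Motion pipeline: bootstrap initial 3D object points from key frames, refine them, determine the 6-DOF camera poses from the resulting 2D/3D correspondences, and then add further, currently unknown object points. The following sketch only illustrates that calling sequence; the class name Solver3, the Ocean::Tracking namespace, and a Database already filled with tracked 2D image points are assumptions that this page does not state.

// Sketch of a typical calling sequence; not a verified, complete pipeline.
// Assumptions: enclosing class Solver3 in Ocean::Tracking, a Database already
// filled with tracked 2D image points, and a calibrated PinholeCamera profile.
using namespace Ocean;
using Tracking::Solver3;

bool reconstructSketch(Tracking::Database& database, const PinholeCamera& pinholeCamera, const AnyCamera& camera,
	const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame)
{
	RandomGenerator randomGenerator;

	// 1) bootstrap initial 3D object points from a few sparse key frames
	Vectors3 initialObjectPoints;
	Indices32 initialObjectPointIds;
	if (!Solver3::determineInitialObjectPointsFromSparseKeyFrames(database, pinholeCamera, randomGenerator,
			lowerFrame, startFrame, upperFrame, Scalar(0.1), initialObjectPoints, initialObjectPointIds))
	{
		return false;
	}

	// 2) refine the initial object points while the database still has no valid poses
	Vectors3 optimizedObjectPoints;
	Indices32 optimizedObjectPointIds;
	if (!Solver3::optimizeInitialObjectPoints(database, camera, randomGenerator, lowerFrame, startFrame, upperFrame,
			initialObjectPoints, initialObjectPointIds, optimizedObjectPoints, optimizedObjectPointIds))
	{
		return false;
	}

	// ... store the refined 3D locations in the database (the database API is not part of this page)

	// 3) determine/refresh the 6-DOF camera poses from the now known 2D/3D correspondences
	size_t validPoses = 0;
	if (!Solver3::updatePoses(database, camera, Solver3::CM_UNKNOWN, randomGenerator,
			lowerFrame, startFrame, upperFrame, 5u /*minimalCorrespondences*/,
			Geometry::Estimator::ET_SQUARE, Scalar(1), Scalar(3.5 * 3.5), Scalar(3.5 * 3.5), nullptr, &validPoses))
	{
		return false;
	}

	// 4) add 3D locations for object points that are still unknown
	Vectors3 newObjectPoints;
	Indices32 newObjectPointIds;
	Solver3::determineUnknownObjectPoints(database, camera, Solver3::CM_UNKNOWN, newObjectPoints, newObjectPointIds, randomGenerator);

	return validPoses != 0;
}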
Protected Types | |
typedef std::map< unsigned int, unsigned int > | IndexMap32 |
Definition of a map mapping 32 bit indices to 32 bit indices. More... | |
typedef ShiftVector< Vectors2 > | ImagePointGroups |
Definition of a shift vector holding groups of image points. More... | |
typedef std::pair< Index32, Scalar > | PoseErrorPair |
Definition of a pair combining a pose id and an error parameter. More... | |
typedef std::vector< PoseErrorPair > | PoseErrorPairs |
Definition of a vector holding pose error pairs. More... | |
Static Protected Member Functions | |
static size_t | filterStaticImagePoints (ImagePointGroups &imagePointGroups, Indices32 &objectPointIds, const Scalar maximalStaticImagePointFilterRatio) |
Determines a subset of perfectly static image points which may be image points located (visible) at static logos in the frames. More... | |
static void | determineInitialObjectPointsFromSparseKeyFramesByStepsSubset (const Database *database, const PinholeCamera *pinholeCamera, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const Indices32 *startFrames, const unsigned int upperFrame, const Scalar maximalStaticImagePointFilterRatio, Vectors3 *initialObjectPoints, Indices32 *initialObjectPointIds, Indices32 *initialPoseIds, Scalar *initialPointDistance, const RelativeThreshold *pointsThreshold, const unsigned int minimalKeyFrames, const unsigned int maximalKeyFrames, const Scalar maximalSqrError, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne) |
Determines the initial positions of 3D object points in a database if no camera poses or structure information is known. More... | |
static void | determineInitialObjectPointsFromDenseFramesRANSACSubset (const PinholeCamera *pinholeCamera, const ImagePointGroups *imagePointGroups, RandomGenerator *randomGenerator, HomogenousMatrices4 *validPoses, Indices32 *validPoseIds, Vectors3 *objectPoints, Indices32 *validObjectPointIndices, Scalar *totalError, const RelativeThreshold *minimalValidObjectPoints, const Scalar maximalSqrError, unsigned int *remainingIterations, Lock *lock, bool *abort, unsigned int firstIteration, unsigned int numberIterations) |
Determines the initial object point positions for a set of frames (image point groups) observing the unique object points in individual camera poses by a RANSAC algorithm. More... | |
static void | updatePosesSubset (Database *database, const AnyCamera *camera, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, const Geometry::Estimator::EstimatorType estimator, const Scalar minimalValidCorrespondenceRatio, const Scalar ransacMaximalSqrError, const Scalar maximalRobustError, Scalar *totalError, size_t *validPoses, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne) |
Updates a subset of the camera poses depending on valid 2D/3D point correspondences within a range of camera frames. More... | |
static void | updateOrientationsSubset (Database *database, const AnyCamera *camera, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, const Geometry::Estimator::EstimatorType estimator, const Scalar minimalValidCorrespondenceRatio, const Scalar ransacMaximalSqrError, const Scalar maximalRobustError, Scalar *totalError, size_t *validPoses, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne) |
Updates a subset of the camera orientations (as the camera has rotational motion only) depending on valid 2D/3D point correspondences within a range of camera frames. More... | |
static void | determinePosesSubset (const Database *database, const AnyCamera *camera, const IndexSet32 *priorityObjectPointIds, const bool solePriorityPoints, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, ShiftVector< HomogenousMatrix4 > *poses, const Geometry::Estimator::EstimatorType estimator, const Scalar minimalValidCorrespondenceRatio, const Scalar ransacMaximalSqrError, const Scalar maximalRobustError, Scalar *totalError, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne) |
Determines a subset of the camera poses depending on valid 2D/3D point correspondences within a range of camera frames. More... | |
static void | determineOrientationsSubset (const Database *database, const AnyCamera *camera, const IndexSet32 *priorityObjectPointIds, const bool solePriorityPoints, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, ShiftVector< HomogenousMatrix4 > *poses, const Geometry::Estimator::EstimatorType estimator, const Scalar minimalValidCorrespondenceRatio, const Scalar ransacMaximalSqrError, const Scalar maximalRobustError, Scalar *totalError, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne) |
Determines a subset of the camera orientations (as the camera has rotational motion only) depending on valid 2D/3D point correspondences within a range of camera frames. More... | |
static bool | updateDatabaseToRotationalMotion (Database &database, const PinholeCamera &pinholeCamera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalObservations, IndexSet32 *relocatedObjectPointIds) |
Determines the semi-precise location of 3D object points and the camera poses for a sole rotational camera motion. More... | |
static void | determineUnknownObjectPointsSubset (const AnyCamera *camera, const Database *database, const Database::PoseImagePointTopologyGroups *objectPointsData, RandomGenerator *randomGenerator, const Scalar maximalSqrError, bool *abort, Lock *lock, Vectors3 *newObjectPoints, Indices32 *newObjectPointIds, unsigned int firstObjectPoint, unsigned int numberObjectPoints) |
Determines the positions of new object points from a database within a specified frame range. More... | |
static void | determineUnknownObjectPointsSubset (const Database *database, const AnyCamera *camera, const CameraMotion cameraMotion, const Index32 *objectPointIds, Vectors3 *newObjectPoints, Indices32 *newObjectPointIds, Indices32 *newObjectPointObservations, RandomGenerator *randomGenerator, const unsigned int minimalObservations, const bool useAllObservations, const Geometry::Estimator::EstimatorType estimator, const Scalar ransacMaximalSqrError, const Scalar averageRobustError, const Scalar maximalSqrError, Lock *look, bool *abort, const unsigned int firstObjectPoint, const unsigned int numberObjectPoints) |
Determines the positions of a subset of (currently unknown) object points. More... | |
static void | optimizeObjectPointsWithFixedPosesSubset (const Database *database, const PinholeCamera *pinholeCamera, const CameraMotion cameraMotion, const Index32 *objectPointIds, Vectors3 *optimizedObjectPoints, Indices32 *optimizedObjectPointIds, const unsigned int minimalObservations, const Geometry::Estimator::EstimatorType estimator, const Scalar maximalRobustError, Lock *look, bool *abort, const unsigned int firstObjectPoint, const unsigned int numberObjectPoints) |
Optimizes a subset of a set of 3D object points which have a quite good accuracy already without optimizing the camera poses concurrently. More... | |
static void | determineObjectPointsAccuracySubset (const Database *database, const PinholeCamera *pinholeCamera, const Index32 *objectPointIds, const AccuracyMethod accuracyMethhod, const unsigned int lowerFrame, const unsigned int upperFrame, Scalar *values, const unsigned int firstObjectPoint, const unsigned int numberObjectPoints) |
Measures the accuracy of a subset of several 3D object points. More... | |
static void | determineProjectionErrorsSubset (const Database *database, const PinholeCamera *pinholeCamera, const Index32 *objectPointIds, const HomogenousMatrix4 *posesIF, const Index32 lowerPoseId, const unsigned int upperPoseId, const bool useDistortionParameters, Scalar *minimalSqrErrors, Scalar *averagedSqrErrors, Scalar *maximalSqrErrors, unsigned int *observations, const unsigned int firstObjectPoint, const unsigned int numberObjectPoints) |
Determines the maximal squared pixel errors between the projections of a subset of individual 3D object points and their corresponding image points in individual camera frames. More... | |
static Scalar | averagePointDistance (const Vector2 *points, const size_t size) |
Determines the average distance between the center of a set of given points and each of the points. More... | |
Detailed Description
This class implements a Structure From Motion solver for unconstrained 3D object points and unconstrained 6-DOF camera poses.
ImagePointGroups
protected |
Definition of a shift vector holding groups of image points.
IndexMap32
protected |
Definition of a map mapping 32 bit indices to 32 bit indices.
PoseErrorPair
protected |
Definition of a pair combining a pose id and an error parameter.
PoseErrorPairs
protected |
Definition of a vector holding pose error pairs.
AccuracyMethod
Definition of individual methods to determine the accuracy of object points.
CameraMotion
Definition of individual camera motion types.
averagePointDistance()
static protected
Determines the average distance between the center of a set of given points and each of the points.
points | The set of points for which the average distance will be determined |
size | The number of points in the set, with range [1, infinity) |
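The description suggests the following computation: first the center (centroid) of the point set, then the mean distance of each point to that center. A minimal sketch of that idea, assuming Vector2 provides the usual arithmetic and distance() helpers; it is not the library's actual implementation.

// Illustrative re-implementation of the documented behavior; not the library code.
Scalar averagePointDistanceSketch(const Vector2* points, const size_t size)
{
	ocean_assert(points != nullptr && size >= 1);

	// center (centroid) of the point set
	Vector2 sum(0, 0);
	for (size_t n = 0; n < size; ++n)
	{
		sum += points[n];
	}
	const Vector2 center = sum / Scalar(size);

	// average distance between the center and each individual point
	Scalar sumDistances = 0;
	for (size_t n = 0; n < size; ++n)
	{
		sumDistances += points[n].distance(center);
	}

	return sumDistances / Scalar(size);
}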
determineCameraMotion()
static |
Determines the camera motion from the camera poses within a specified frame range covering only valid poses.
database | The database from which the camera poses are taken |
pinholeCamera | The pinhole camera profile which is applied |
lowerFrame | The index of the frame defining the lower border of the camera frames which will be investigated |
upperFrame | The index of the frame defining the upper border of the camera frames which will be investigated, with range [lowerFrame, infinity) |
onlyVisibleObjectPoints | True, to use only object points which are visible within the defined frame range; False, to use all object points |
worker | Optional worker object to distribute the computation |
minimalTinyTranslationObservationAngle | The minimal angle of observation rays for 3D object points so that the motion contains a tiny translational motion, with range (0, PI/2) |
minimalModerateTranslationObservationAngle | The minimal angle of observation rays for 3D object points so that the motion contains a moderate translational motion, with range (minimalTinyTranslationObservationAngle, PI/2) |
minimalSignificantTranslationObservationAngle | The minimal angle of observation rays for 3D object points so that the motion contains a significant translational motion, with range (minimalModerateTranslationObservationAngle, PI/2) |
minimalTinyRotationAngle | The minimal angle between the viewing directions so that the motion contains a tiny rotational motion, with range (0, PI/2) |
minimalModerateRotationAngle | The minimal angle between the viewing directions so that the motion contains a moderate rotational motion, with range (minimalTinyRotationAngle, PI/2) |
minimalSignificantRotationAngle | The minimal angle between the viewing directions so that the motion contains a significant rotational motion, with range (minimalModerateRotationAngle, PI/2) |
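A minimal usage sketch, assuming the enclosing class is named Solver3 and the database type is Tracking::Database (neither is stated on this page): classify the motion over a frame range and pick the matching pose handling.

// Classify the dominant camera motion in [lowerFrame, upperFrame] and react to it.
// 'Solver3' and 'Tracking::Database' are assumptions of this sketch.
void classifyMotionSketch(const Tracking::Database& database, const PinholeCamera& pinholeCamera,
	const unsigned int lowerFrame, const unsigned int upperFrame)
{
	const Solver3::CameraMotion cameraMotion = Solver3::determineCameraMotion(database, pinholeCamera, lowerFrame, upperFrame);

	if (cameraMotion & Solver3::CM_TRANSLATIONAL)
	{
		// enough parallax for 3D structure: the 6-DOF pose functions (e.g. determinePose()) apply
	}
	else if (cameraMotion & Solver3::CM_ROTATIONAL)
	{
		// (almost) no translation: the 3-DOF orientation functions (e.g. determineOrientation()) are the better fit
	}
	else
	{
		// CM_INVALID or CM_STATIC: no reliable motion information available
	}
}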
determineInitialObjectPointsFromDenseFrames()
static |
Determines the initial positions of 3D object points in a database if no camera poses or structure information is known.
Feature points are tracked from frame to frame within a defined camera frame range as long as the number of tracked points does not fall below a defined threshold.
The entire range of frames with tracked points is used to determine the locations of the 3D object points.
This function can be configured so that (perfectly) static image points, located in all frames at the same position, are identified and not used for the calculations.
Static image points can be located (visible) at static logos (bands) in the frames so that these image points must not be used.
database | The database defining the topology of 3D object points and corresponding 2D image points |
pinholeCamera | The pinhole camera profile which will be applied |
randomGenerator | a random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
startFrame | The index of the frame from which the algorithm will start, with range [lowerFrame, upperFrame] |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range (lowerFrame, infinity) |
regionOfInterest | Optional region of interest defining a specific area in the start frame so that the object points lying in the region are handled with higher priority, an invalid region to avoid any special region of interest handling |
maximalStaticImagePointFilterRatio | The maximal ratio between (perfectly) static image points and the overall number of image points so that these static image points will be filtered and not used, with ratio [0, 1), 0 to avoid any filtering |
initialObjectPoints | The resulting initial 3D positions of object points that could be extracted |
initialObjectPointIds | The resulting ids of the resulting object points, one id for each resulting object point |
pointsThreshold | The threshold of image points which must be visible in each camera frame |
minimalTrackedFramesRatio | The minimal number of frames that finally have been tracked (the entire range of frames in which the object points are visible) defined as ratio of the entire frame range, with range (0, 1], does not have any meaning if no start frame or region of interest is defined |
minimalKeyFrames | The minimal number of keyframes that will be extracted |
maximalKeyFrames | The maximal number of keyframes that will be extracted |
maximalSqrError | The maximal square distance between an image point and a projected object point |
usedPoseIds | Optional resulting ids of all camera poses which have been used to determine the initial object points |
finalSqrError | Optional resulting final average error |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
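A minimal call sketch for this overload, assuming the enclosing class is named Solver3 and that a default-constructed CV::SubRegion counts as an invalid region (as the regionOfInterest description above permits).

// Bootstrapping initial structure from densely tracked frames around 'startFrame'.
// 'Solver3' as class name and the default-constructed (invalid) CV::SubRegion are assumptions.
bool bootstrapDenseSketch(const Tracking::Database& database, const PinholeCamera& pinholeCamera,
	const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame)
{
	RandomGenerator randomGenerator;

	Vectors3 initialObjectPoints;
	Indices32 initialObjectPointIds;
	Indices32 usedPoseIds;

	const CV::SubRegion regionOfInterest; // invalid region: no special region-of-interest handling

	return Solver3::determineInitialObjectPointsFromDenseFrames(database, pinholeCamera, randomGenerator,
		lowerFrame, startFrame, upperFrame, regionOfInterest,
		Scalar(0.1) /*maximalStaticImagePointFilterRatio*/,
		initialObjectPoints, initialObjectPointIds,
		Solver3::RelativeThreshold(20u, Scalar(0.5), 100u),
		Scalar(0.1) /*minimalTrackedFramesRatio*/, 3u /*minimalKeyFrames*/, 10u /*maximalKeyFrames*/,
		Scalar(3.5 * 3.5) /*maximalSqrError*/, &usedPoseIds);
}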
determineInitialObjectPointsFromDenseFrames()
static |
Determines the initial object point positions for a set of image point groups (covering a range of image frames) observing the unique object points in individual frames.
This function starts with two explicit frames (image point groups) and then tries to find more matching frames (image point groups).
The set of given image points should not contain image points located (visible) at a static logo in the frame, as such points can mislead the pose determination algorithms.
All frames (image point groups) within the frame range provide the following topology:
For n unique object points visible in m individual frames we have n object points (op) and n * m overall image points (ip):
op_1, op_2, op_3, op_4, ..., op_n
...
dense_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
dense_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
pinholeCamera | The pinhole camera profile to be applied |
imagePointGroups | Frames (groups) of image points, all points in one frame (group) are located in the same camera frame and the individual points correspond to the same unique object points |
randomGenerator | A random generator object |
firstGroupIndex | The index of the first frame (image point group) which is applied as the first stereo frame, with range [imagePointGroups.firstIndex(), imagePointGroups.lastIndex()] |
secondGroupIndex | The index of the second frame (image point group) which is applied as the second stereo frame, with range [imagePointGroups.firstIndex(), imagePointGroups.lastIndex()], with firstGroupIndex != secondGroupIndex |
validPoses | The resulting poses that could be determined |
validPoseIds | The ids of resulting valid poses, one id for each valid resulting pose (the order of the ids is arbitrary) |
totalSqrError | The resulting sum of square pixel errors for all valid poses |
objectPoints | The resulting object points that could be determined |
validObjectPointIndices | The indices of resulting valid object points in relation to the given image point groups, with range [5, infinity) |
minimalValidObjectPoints | The minimal number of valid object points which must be reached |
maximalSqrError | The maximal square distance between an image point and a projected object point |
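To make the topology above concrete: ImagePointGroups is a ShiftVector< Vectors2 > (see Protected Types), and each group stores the observations of the same n object points, in identical order, for one frame. A minimal sketch of assembling such groups; the ShiftVector(firstIndex, size) constructor used here is an assumption.

// Assembling image point groups with the topology described above:
// group[frame][i] is the observation of object point i in that frame.
// The ShiftVector(firstIndex, size) constructor is an assumption of this sketch.
ShiftVector<Vectors2> buildImagePointGroups(const unsigned int firstFrameIndex, const unsigned int numberFrames, const size_t numberObjectPoints)
{
	ShiftVector<Vectors2> imagePointGroups(firstFrameIndex, numberFrames);

	for (unsigned int frame = firstFrameIndex; frame < firstFrameIndex + numberFrames; ++frame)
	{
		Vectors2 imagePoints(numberObjectPoints);

		// fill imagePoints so that imagePoints[i] is the observation of object point i
		// in this frame (e.g. from a point tracker), keeping the same order in every frame

		imagePointGroups[frame] = imagePoints;
	}

	return imagePointGroups;
}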
determineInitialObjectPointsFromDenseFramesRANSAC()
static |
Determines the initial object point positions for a set of frames (image point groups) observing the unique object points in individual camera poses.
This function applies a RANSAC mechanism randomly selecting individual start key frames (pairs of image points).
The key frames (image point groups) provide the following topology:
For n unique object points visible in m individual frames we have n object points (op) and n * m overall image points (ip):
op_1, op_2, op_3, op_4, ..., op_n
...
dense_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
dense_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
pinholeCamera | The pinhole camera profile to be applied |
imagePointGroups | Frames of image points, all points in one group are located in the same camera frame and the individual points correspond to the same unique object points |
randomGenerator | A random generator object |
validPoses | The resulting poses that could be determined |
validPoseIds | The ids of resulting valid poses, one id for each resulting valid pose (the order of the ids is arbitrary) |
objectPoints | The resulting object points that could be determined |
validObjectPointIndices | The indices of resulting valid object points in relation to the given image point groups |
iterations | The number of RANSAC iterations trying to find a better result than already determined |
minimalValidObjectPoints | The threshold of object points that must be valid |
maximalSqrError | The maximal square distance between an image point and a projected object point |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
static protected
Determines the initial object point positions for a set of frames (image point groups) observing the unique object points in individual camera poses by a RANSAC algorithm.
This function applies a RANSAC mechanism randomly selecting individual start key frames (pairs of image points).
The key frames (image point groups) provide the following topology:
For n unique object points visible in m individual frames we have n object points (op) and n * m overall image points (ip):
op_1, op_2, op_3, op_4, ..., op_n
...
dense_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
dense_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
pinholeCamera | The pinhole camera profile to be applied |
imagePointGroups | Frames of image points, all points in one group are located in the same camera frame and the individual points correspond to the same unique object points |
randomGenerator | A random generator object |
validPoses | The resulting poses that could be determined |
validPoseIds | The ids of resulting valid poses, one id for each resulting valid pose (the order of the ids is arbitrary) |
objectPoints | The resulting object points that could be determined |
validObjectPointIndices | The indices of resulting valid object points in relation to the given image point groups |
totalError | The resulting total error of the best RANSAC iteration |
minimalValidObjectPoints | The threshold of object points that must be valid |
maximalSqrError | The maximal square distance between an image point and a projected object point |
remainingIterations | The number of RANSAC iterations that still need to be applied |
lock | The lock object which must be defined if this function is executed in parallel on several threads, otherwise nullptr |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
firstIteration | The first RANSAC iteration to apply, has no meaning as 'remainingIterations' is used instead |
numberIterations | The number of RANSAC iterations to apply, has no meaning as 'remainingIterations' is used instead |
|
static |
Determines the initial positions of 3D object points in a database if no camera poses or structure information is known.
Feature points are tracked from frame to frame within a defined camera frame range as long as the number of tracked points does not fall below a defined threshold.
Key frames are selected from this range of (tracked) camera frames with representative geometry information.
This function can be configured so that (perfectly) static image points, located at the same position in all frames, are identified and not used for the calculations.
Static image points can be located (visible) at static logos (bands) in the frames, so these image points must not be used.
database | The database defining the topology of 3D object points and corresponding 2D image points |
pinholeCamera | The pinhole camera profile which will be applied |
randomGenerator | A random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
startFrame | The index of the frame from which the algorithm will start, with range [lowerFrame, upperFrame] |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range (lowerFrame, infinity) |
maximalStaticImagePointFilterRatio | The maximal ratio between (perfectly) static image points and the overall number of image points so that these static image points will be filtered and not used, with ratio [0, 1), 0 to avoid any filtering |
initialObjectPoints | The resulting initial 3D positions of object points that could be extracted |
initialObjectPointIds | The ids of the resulting object points, one id for each resulting object point |
pointsThreshold | The threshold of image points which must be visible in each camera frame |
minimalKeyFrames | The minimal number of keyframes that will be extracted |
maximalKeyFrames | The maximal number of keyframes that will be extracted |
maximalSqrError | The maximal square distance between an image point and a projected object point |
usedPoseIds | Optional resulting ids of all camera poses which have been used to determine the initial object points |
finalSqrError | Optional resulting final average error |
finalImagePointDistance | Optional resulting final average distance between the individual image points and the center of these image points |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
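As a usage sketch (not part of the original documentation): the call below follows the parameter list above and the overload for sparse key frames; the class name Ocean::Tracking::Solver3, the include path and the chosen threshold values are assumptions/examples only:

#include "ocean/tracking/Solver3.h" // assumed include path

using namespace Ocean;
using namespace Ocean::Tracking;

// Determines the initial sparse structure for the frame range [lowerFrame, upperFrame],
// starting the point tracking at 'startFrame'.
bool computeInitialStructure(const Database& database, const PinholeCamera& pinholeCamera, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame)
{
	RandomGenerator randomGenerator;

	Vectors3 initialObjectPoints;    // resulting initial 3D object point locations
	Indices32 initialObjectPointIds; // ids of the resulting object points
	Indices32 usedPoseIds;           // ids of the key frames actually used
	Scalar finalSqrError = Scalar(0);

	return Solver3::determineInitialObjectPointsFromSparseKeyFrames(database, pinholeCamera, randomGenerator, lowerFrame, startFrame, upperFrame,
			Scalar(0.1), // filter up to 10% perfectly static image points
			initialObjectPoints, initialObjectPointIds,
			Solver3::RelativeThreshold(20u, Scalar(0.5), 100u), // pointsThreshold
			3u, 10u, // minimal/maximal number of key frames
			Scalar(3.5 * 3.5), // maximal squared projection error
			&usedPoseIds, &finalSqrError);
}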
static |
Determines the initial object point positions for a set of key frames (image point groups) observing unique object points.
This function starts with two explicit key frames (image point groups) and then tries to find more matching key frames (image point groups).
The set of given image points should not contain image points located (visible) at a static logo in the frame, as such points can corrupt the results of the pose determination algorithms.
The key frames (image point groups) provide the following topology:
For n unique object points visible in m individual key frames we have n object points (op) and n * m overall image points (ip):
op_1, op_2, op_3, op_4, ..., op_n
sparse_pose_0 -> ip_1_1, ip_1_2, ip_1_3, ip_1_4, ..., ip_1_n
sparse_pose_1 -> ip_2_1, ip_2_2, ip_2_3, ip_2_4, ..., ip_2_n
sparse_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
sparse_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
sparse_pose_m -> ip_m_1, ip_m_2, ip_m_3, ip_m_4, ..., ip_m_n
pinholeCamera | The pinhole camera profile to be applied |
imagePointGroups | Key frames (groups) of image points, all points in one key frame (group) are located in the same camera key frame and the individual points correspond to the same unique object points |
randomGenerator | A random generator object |
firstGroupIndex | The index of the first key frame (image point group) which is applied as the first stereo frame, with range [0, imagePointGroups.size()) |
secondGroupIndex | The index of the second key frame (image point group) which is applied as the second stereo frame, with range [0, imagePointGroups.size()), with firstGroupIndex != secondGroupIndex |
poses | The resulting poses that could be determined |
validPoseIndices | The indices of resulting valid poses in relation to the given image point groups |
objectPoints | The resulting object points that could be determined |
validObjectPointIndices | The indices of resulting valid object points in relation to the given image point groups |
minimalValidObjectPoints | The minimal number of valid object points which must be reached |
maximalSqrError | The maximal square distance between an image point and a projected object point |
|
static |
Determines the initial positions of 3D object points in a database if no camera poses or structure information is known.
Feature points are tracked from frame to frame within a defined camera frame range as long as the number of tracked points does not fall below a defined threshold.
Key frames are selected from this range of (tracked) camera frames with representative geometry information.
This function internally applies several individual iterations beginning from individual start frames so that the best result within the entire frame range is returned.
database | The database defining the topology of 3D object points and corresponding 2D image points |
steps | The number of steps that are applied within the defined frame range, with range [1, upperFrame - lowerFrame + 1] |
pinholeCamera | The pinhole camera profile which will be applied |
randomGenerator | A random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range (lowerFrame, infinity) |
maximalStaticImagePointFilterRatio | The maximal ratio between (perfectly) static image points and the overall number of image points so that these static image points will be filtered and not used, with ratio [0, 1), 0 to avoid any filtering |
initialObjectPoints | The resulting initial 3D positions of object points that could be extracted |
initialObjectPointIds | The ids of the resulting object points, one id for each resulting object point |
pointsThreshold | The threshold of image points which must be visible in each camera frame |
minimalKeyFrames | The minimal number of keyframes that will be extracted |
maximalKeyFrames | The maximal number of keyframes that will be extracted |
maximalSqrError | The maximal square distance between an image point and a projected object point |
usedPoseIds | Optional resulting ids of all camera poses which have been used to determine the initial object points |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
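A corresponding sketch for the step-wise variant above, distributing the individual start frames over the CPU cores and allowing another thread to abort the computation (same assumed class name, namespaces and include as in the previous sketch):

bool computeInitialStructureBySteps(const Database& database, const PinholeCamera& pinholeCamera, const unsigned int lowerFrame, const unsigned int upperFrame, bool* abort)
{
	RandomGenerator randomGenerator;
	Worker worker; // distributes the individual start frames over the CPU cores

	Vectors3 initialObjectPoints;
	Indices32 initialObjectPointIds;
	Indices32 usedPoseIds;

	return Solver3::determineInitialObjectPointsFromSparseKeyFramesBySteps(database, 20u /* steps */, pinholeCamera, randomGenerator, lowerFrame, upperFrame,
			Scalar(0.1), // maximalStaticImagePointFilterRatio
			initialObjectPoints, initialObjectPointIds,
			Solver3::RelativeThreshold(20u, Scalar(0.5), 100u), // pointsThreshold
			2u, 10u, // minimal/maximal number of key frames
			Scalar(3.5 * 3.5), // maximal squared projection error
			&usedPoseIds, &worker, abort);
}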
static protected
Determines the initial positions of 3D object points in a database if no camera poses or structure information is known.
This function processes a subset of pre-defined start frames from which the point tracking starts.
database | The database defining the topology of 3D object points and corresponding 2D image points |
pinholeCamera | The pinhole camera profile which will be applied |
randomGenerator | A random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
startFrames | The entire set of start frames from which a subset will be processed |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated |
maximalStaticImagePointFilterRatio | The maximal ratio between (perfectly) static image points and the overall number of image points so that these static image points will be filtered and not used, with ratio [0, 1), 0 to avoid any filtering |
initialObjectPoints | The resulting initial 3D positions of object points that could be extracted |
initialObjectPointIds | The ids of the resulting object points, one id for each resulting object point |
initialPoseIds | The resulting ids of all camera poses which have been used to determine the resulting initial object points |
initialPointDistance | The resulting distance between the image points which have been used to determine the initial object points, which is a measure for the reliability of the resulting 3D object points |
pointsThreshold | The threshold of image points which must be visible in each camera frame |
minimalKeyFrames | The minimal number of keyframes that will be extracted |
maximalKeyFrames | The maximal number of keyframes that will be extracted |
maximalSqrError | The maximal square distance between an image point and a projected object point |
lock | The lock object which must be defined if this function is executed in parallel on several threads, otherwise nullptr |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
numberThreads | The number of threads on which this function is executed in parallel, with range [1, infinity) |
threadIndex | The index of the thread on which this function is executed |
numberThreadsOne | Must be 1 |
|
static |
Determines the initial object point positions for a set of key frames (image point groups) observing the unique object points in individual camera poses.
This function applies a RANSAC mechanism randomly selecting individual start key frames (pairs of image points).
The key frames (image point groups) provide the following topology:
For n unique object points visible in m individual frames we have n object points (op) and n * m overall image points (ip):
op_1, op_2, op_3, op_4, ..., op_n
sparse_pose_0 -> ip_1_1, ip_1_2, ip_1_3, ip_1_4, ..., ip_1_n
sparse_pose_1 -> ip_2_1, ip_2_2, ip_2_3, ip_2_4, ..., ip_2_n
sparse_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
sparse_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
sparse_pose_m -> ip_m_1, ip_m_2, ip_m_3, ip_m_4, ..., ip_m_n
pinholeCamera | The pinhole camera profile to be applied |
imagePointGroups | Key frames of image points, all points in one group are located in the same camera frame and the individual points correspond to the same unique object points |
randomGenerator | A random generator object |
validPoses | The resulting poses that could be determined |
validPoseIndices | The indices of resulting valid poses in relation to the given image point groups |
objectPoints | The resulting object points that could be determined |
validObjectPointIndices | The indices of resulting valid object points in relation to the given image point groups |
iterations | The number of RANSAC iterations trying to find a better result than already determined |
minimalValidObjectPoints | The threshold of object points that must be valid |
maximalSqrError | The maximal square distance between an image point and a projected object point |
database | Optional database holding the image points from the imagePointGroups to validate the resulting 3D object positions even for camera poses not corresponding to the provided groups of image points; if defined also 'keyFrameIds' and 'objectPointIds' must be defined |
keyFrameIds | Optional ids of the individual keyframes to which the set of image point groups from 'imagePointGroups' belong, each key frame id corresponds with one group of image points, if defined also 'database' and 'objectPointIds' must be defined |
objectPointIds | Optional ids of the individual object points whose projections are provided as groups of image points in 'imagePointGroups', if defined also 'database' and 'keyFrameIds' must be defined |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
static |
Determines the number of valid correspondences between image points and object points for each frame within a specified frame range.
database | The database providing the 3D object points, the 2D image points and the topology between image and object points |
needValidPose | True, if the pose must be valid so that the number of valid correspondences will be determined, otherwise the number of correspondences will be zero |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalCorrespondences | Optional resulting minimal number of correspondences for all frames within the defined frame range |
averageCorrespondences | Optional resulting averaged number of correspondences for all frames within the defined frame range |
medianCorrespondences | Optional resulting median of all correspondences for all frames within the defined frame range |
maximalCorrespondences | Optional resulting maximal number of correspondences for all frames within the defined frame range |
worker | Optional worker object to distribute the computation |
|
static |
Measures the accuracy of a 3D object point in combination with a set of camera poses and image points (the projections of the object point).
The accuracy of the point can be determined by individual methods; the basic idea is to use the angles between the individual observation rays of the object point.
pinholeCamera | The pinhole camera profile which is applied |
poses | The camera poses in which the object point is visible |
imagePoints | The individual image points in the individual camera frames |
observations | The number of observations (pairs of camera poses and image points) |
accuracyMethod | The method which is applied to determine the accuracy, must be valid |
|
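For illustration, a plain C++ sketch of one plausible reading of such a ray-angle based accuracy measure (here: the minimal cosine between each observation ray and the mean ray direction); the function and type names below are placeholders, not the documented API:

#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

static Vec3 normalized(const Vec3& v)
{
	const double length = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
	return {v[0] / length, v[1] / length, v[2] / length};
}

static double dot(const Vec3& a, const Vec3& b)
{
	return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Returns the minimal cosine between the mean observation direction and the individual
// observation rays of the object point; values close to 1 indicate almost parallel rays,
// i.e., a weakly constrained (inaccurate) 3D location. Requires at least two observations.
double meanDirectionMinCosine(const Vec3& objectPoint, const std::vector<Vec3>& cameraCenters)
{
	std::vector<Vec3> rays;
	Vec3 meanDirection = {0.0, 0.0, 0.0};

	for (const Vec3& center : cameraCenters)
	{
		const Vec3 ray = normalized({objectPoint[0] - center[0], objectPoint[1] - center[1], objectPoint[2] - center[2]});
		rays.push_back(ray);

		meanDirection[0] += ray[0];
		meanDirection[1] += ray[1];
		meanDirection[2] += ray[2];
	}

	meanDirection = normalized(meanDirection);

	double minCosine = 1.0;
	for (const Vec3& ray : rays)
	{
		minCosine = std::min(minCosine, dot(ray, meanDirection));
	}

	return minCosine;
}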
static |
Measures the accuracy of several 3D object points.
This method extracts the 3D object point locations from the given database.
The accuracy of the points can be determined by individual methods; the basic idea is to use the angles between the individual observation rays of the object points.
database | The database providing the location of the 3D object points, the camera poses and the image point positions. |
pinholeCamera | The pinhole camera profile which is applied |
objectPointIds | The ids of the object points for which the accuracies will be determined, each object point must be valid |
accuracyMethhod | The method which is applied to determine the accuracy, must be valid |
lowerFrame | Optional index of the frame defining the lower border of camera poses which will be investigated, -1 if no lower and no upper border is defined |
upperFrame | Optional index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity), -1 if also 'lowerFrame' is -1 |
worker | Optional worker object to distribute the computation |
|
static protected
Measures the accuracy of a subset of several 3D object points.
database | The database providing the location of the 3D object points, the camera poses and the image point positions. |
pinholeCamera | The pinhole camera profile which is applied |
objectPointIds | The ids of the object points for which the accuracies will be determined, each object point must be valid |
accuracyMethhod | The method which is applied to determine the accuracy |
lowerFrame | Optional index of the frame defining the lower border of camera poses which will be investigated, -1 if no lower and no upper border is defined |
upperFrame | Optional index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity), -1 if also 'lowerFrame' is -1 |
values | The resulting accuracy parameters depending on the specified method, one parameter of each object point |
firstObjectPoint | First object point to be handled |
numberObjectPoints | The number of object points to be handled |
|
inline static
Determines the camera 3-DOF orientation for a set of object point and image point correspondences.
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
objectPoints | The object points which are visible in a frame, first all priority object points followed by the remaining object points |
imagePoints | The image points which are projections of the given object points, one image point corresponds to one object point |
priorityCorrespondences | The number of priority point correspondences |
roughOrientation | Optional rough camera orientation to speed up the computation and improve the accuracy |
estimator | The robust estimator which is applied for the non-linear orientation optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal robust squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
|
inline static
Determines the camera 3-DOF orientation for a set of object point and image point correspondences.
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
objectPoints | The object points which are visible in a frame |
imagePoints | The image points which are projections of the given object points, one image point corresponds to one object point |
roughOrientation | Optional rough camera orientation to speed up the computation and improve the accuracy |
estimator | The robust estimator which is applied for the non-linear orientation optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal robust squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
validIndices | Optional resulting indices of the valid point correspondences |
|
inline static
Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame.
database | The database from which the object point and image point correspondences are extracted |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
frameId | The id of the frame for which the camera orientation will be determined |
priorityObjectPointIds | Ids of object points for which the poses will be optimized |
solePriorityPoints | True, to apply only the priority object points for pose determination |
roughOrientation | Optional rough camera orientation to speed up the computation and improve the accuracy |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera orientation, with range [5, infinity) |
estimator | The robust estimator which is applied for the non-linear orientation optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
correspondences | Optional resulting number of 2D/3D point correspondences which were available |
|
inline static
Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame.
database | The database from which the image points are extracted |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
frameId | The id of the frame for which the camera orientation will be determined |
objectPoints | The object points which are all visible in the specified frame |
objectPointIds | The ids of the object points, one id for each object point |
numberObjectPoints | The number of given object points, with range [5, infinity) |
roughOrientation | Optional rough camera orientation to speed up the computation and improve the accuracy |
estimator | The robust estimator which is applied for the non-linear orientation optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
|
inline static
Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame.
database | The database from which the object point and image point correspondences are extracted |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
frameId | The id of the frame for which the camera orientation will be determined |
roughOrientation | Optional rough camera orientation to speed up the computation and improve the accuracy |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera orientation, with range [5, infinity) |
estimator | The robust estimator which is applied for the non-linear orientation optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
correspondences | Optional resulting number of 2D/3D point correspondences which were available |
|
static protected
Determines a subset of the camera orientations (as the camera has rotational motion only) depending on valid 2D/3D point correspondences within a range of camera frames.
The camera orientations (their poses respectively) will be set to invalid if no valid orientation can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
database | The database from which the point correspondences are extracted and which receives the determined camera orientations (the 6-DOF poses with zero translation) |
camera | The camera profile defining the projection, must be valid |
priorityObjectPointIds | Optional ids of the object points for which the poses will be optimized, may be zero so that all object points are investigated with the same priority |
solePriorityPoints | True, to apply only the priority object points for pose determination |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera orientation, with range [5, infinity) |
poses | The resulting determined poses starting with the lower frame and ending with the upper frame |
estimator | The robust estimator which is applied for the non-linear orientation optimization |
ransacMaximalSqrError | The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity) |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalRobustError | The maximal average robust pixel error between image point and projected object points so that an orientation counts as valid, with range (0, infinity) |
totalError | The resulting accumulated total error for all poses (orientations) |
lock | The lock object which must be defined if this function is executed in parallel on several individual threads |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
numberThreads | The overall number of threads which are used in parallel |
threadIndex | The index of the thread executing this function, with range [0, numberThreads) |
numberThreadsOne | Must be 1 |
|
static |
Determines a 3D plane perpendicular to the camera with specified distance to the camera.
This function may be used, e.g., for rotational camera motion as an initial guess.
database | The database holding the 3D object point locations |
pinholeCamera | The pinhole camera profile defining the projection, must be valid |
frameIndex | The index of the frame in which the given image point is visible |
imagePoint | The image point to which (to the viewing ray respectively) the resulting plane will be perpendicular, must lie inside the camera frame |
distance | The distance of the plane to the camera, with range (0, infinity) |
plane | The resulting 3D plane best fitting for the given data |
useDistortionParameters | True, to use the distortion parameters of the camera |
pointOnPlane | Optional resulting 3D intersection point of resulting plane and the viewing ray of the provided image point |
|
static |
Determines a 3D plane perpendicular to the camera with specified distance to the camera.
This function may be used, e.g., for rotational camera motion as an initial guess.
pinholeCamera | The pinhole camera profile defining the projection, must be valid |
pose | The pose of the camera, must be valid |
imagePoint | The image point to which (to the viewing ray respectively) the resulting plane will be perpendicular, must lie inside the camera frame |
distance | The distance of the plane to the camera, with range (0, infinity) |
plane | The resulting 3D plane best fitting for the given data |
useDistortionParameters | True, to use the distortion parameters of the camera |
pointOnPlane | Optional resulting 3D intersection point of resulting plane and the viewing ray of the provided image point |
|
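For illustration, a plain C++ sketch of the geometry behind the two functions above: the viewing ray of the given image point (obtained by back-projecting it with the camera profile and pose) defines the plane's normal, and the plane passes through the point on that ray at the requested distance; all names below are placeholders:

#include <array>

using Vec3 = std::array<double, 3>;

struct Plane
{
	Vec3 normal;       // unit normal of the plane
	Vec3 pointOnPlane; // intersection of the plane with the viewing ray
};

// cameraCenter: the camera's center of projection (world coordinates)
// viewingDirection: unit direction of the viewing ray through the image point (world coordinates)
// distance: requested distance between camera and plane, with range (0, infinity)
Plane perpendicularPlane(const Vec3& cameraCenter, const Vec3& viewingDirection, const double distance)
{
	Plane plane;

	for (unsigned int i = 0u; i < 3u; ++i)
	{
		plane.pointOnPlane[i] = cameraCenter[i] + viewingDirection[i] * distance;
		plane.normal[i] = -viewingDirection[i]; // the normal points back towards the camera
	}

	return plane;
}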
inline static
Determines a 3D plane best fitting to a set of given 3D object points.
objectPoints | The object points for which the best matching plane will be determined, at least 3 |
randomGenerator | Random number generator |
plane | The resulting 3D plane |
minimalValidObjectPoints | The minimal number of valid object points so that a valid plane will be determined |
estimator | The robust estimator which will be applied to determine the 3D plane |
finalError | Optional resulting final error |
validIndices | Optional resulting indices of all valid object points |
|
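For illustration, the non-robust core of such a plane fit in plain C++, here for the minimal case of exactly three object points (the documented function additionally applies the given robust estimator to larger point sets); the names below are placeholders:

#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b)
{
	return {a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]};
}

// Determines the plane through three (non-collinear) points in Hessian normal form:
// normal * x == distance, with 'normal' having unit length.
void planeThroughThreePoints(const Vec3& point0, const Vec3& point1, const Vec3& point2, Vec3& normal, double& distance)
{
	const Vec3 edge01 = {point1[0] - point0[0], point1[1] - point0[1], point1[2] - point0[2]};
	const Vec3 edge02 = {point2[0] - point0[0], point2[1] - point0[1], point2[2] - point0[2]};

	normal = cross(edge01, edge02);

	const double length = std::sqrt(normal[0] * normal[0] + normal[1] * normal[1] + normal[2] * normal[2]);
	normal = {normal[0] / length, normal[1] / length, normal[2] / length};

	distance = normal[0] * point0[0] + normal[1] * point0[1] + normal[2] * point0[2];
}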
static |
Determines a 3D plane best fitting to a set of given 3D object point ids which are specified by a given sub-region in the camera frame.
database | The database holding the 3D object point locations |
frameIndex | The index of the frame in which the plane is visible and for which the given sub-region defines the area of image points whose corresponding object points define the 3D plane; the pose of this frame must be valid |
subRegion | The sub-region which defines the plane area in the camera frame |
randomGenerator | Random number generator |
plane | The resulting 3D plane |
minimalValidObjectPoints | The minimal number of valid object points so that a valid plane will be determined |
estimator | The robust estimator which will be applied to determine the 3D plane |
finalError | Optional resulting final error |
usedObjectPointIds | Optional resulting ids of the used object points |
|
inline static
Determines a 3D plane best fitting to a set of given 3D object point ids.
database | The database holding the 3D object point locations |
objectPointIds | The ids of the object points for which the best matching plane will be determined, at least 3, must have valid locations in the database |
randomGenerator | Random number generator |
plane | The resulting 3D plane |
minimalValidObjectPoints | The minimal number of valid object points so that a valid plane will be determined |
estimator | The robust estimator which will be applied to determine the 3D plane |
finalError | Optional resulting final error |
validIndices | Optional resulting indices of all valid object points |
|
static |
Determines a 3D plane best fitting to image points in a specified sub-region in a specified frame and best fitting to this area visible in a specified frame range.
database | The database holding the 3D object point locations |
pinholeCamera | The pinhole camera profile defining the projection, must be valid |
lowerFrameIndex | The index of the frame defining the lower border of camera poses which will be investigated |
subRegionFrameIndex | The index of the frame for which the sub-region is specified |
upperFrameIndex | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
subRegion | The sub-region defining the area in the image frame for which the 3D plane will be determined |
randomGenerator | The random number generator object |
plane | The resulting 3D plane best fitting for the given data |
useDistortionParameters | True, to use the distortion parameters of the camera |
minimalValidObjectPoints | The minimal number of valid 3D points in relation to the 3D object points which are projected into the sub-region in the sub-region frame |
medianDistanceFactor | The factor with which the median distance between the initial 3D plane and the initial 3D object points is multiplied to determine the maximal distance between the final plane and any 3D object point which defines the plane, with range (0, infinity) |
estimator | The robust estimator used to determine the initial plane for the sub-region frame |
finalError | Optional resulting final square error |
usedObjectPointIds | Optional resulting ids of all 3D object points which have been used to determine the 3D plane |
|
inline static
Determines the camera 6-DOF pose for a set of object point and image point correspondences.
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
objectPoints | The object points which are visible in a frame |
imagePoints | The image points which are projections of the given object points, one image point corresponds to one object point |
roughPose | Optional rough camera pose to speed up the computation and improve the accuracy |
estimator | The robust estimator which is applied for the non-linear pose optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal robust squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
validIndices | Optional resulting indices of the valid point correspondences |
|
inline static
Determines the camera 6-DOF pose for a set of object point and image point correspondences.
The point correspondences are separated into a set of priority correspondences and remaining correspondences, ensuring that the pose mainly matches the priority point correspondences.
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
objectPoints | The object points which are visible in a frame, first all priority object points followed by the remaining object points |
imagePoints | The image points which are projections of the given object points, one image point corresponds to one object point |
priorityCorrespondences | The number of priority point correspondences |
roughPose | Optional rough camera pose to speed up the computation and improve the accuracy |
estimator | The robust estimator which is applied for the non-linear pose optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal robust squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
|
inline static
Determines the camera 6-DOF pose for a specific camera frame.
database | The database from which the image points are extracted |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
frameId | The id of the frame for which the camera pose will be determined |
objectPoints | The object points which are all visible in the specified frame |
objectPointIds | The ids of the object points, one id for each object point |
roughPose | Optional rough camera pose to speed up the computation and improve the accuracy |
estimator | The robust estimator which is applied for the non-linear pose optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
|
inline static
Determines the camera 6-DOF pose for a specific camera frame.
database | The database from which the object point and image point correspondences are extracted |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
frameId | The id of the frame for which the camera pose will be determined |
roughPose | Optional rough camera pose to speed up the computation and improve the accuracy |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity) |
estimator | The robust estimator which is applied for the non-linear pose optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
correspondences | Optional resulting number of 2D/3D point correspondences which were available |
|
inline static
Determines the camera 6-DOF pose for a specific camera frame.
database | The database from which the object point and image point correspondences are extracted |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
frameId | The id of the frame for which the camera pose will be determined |
roughPose | Optional rough camera pose to speed up the computation and improve the accuracy |
priorityObjectPointIds | Ids of object points for which the poses will be optimized |
solePriorityPoints | True, to apply only the priority object points for pose determination |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity) |
estimator | The robust estimator which is applied for the non-linear pose optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalSqrError | The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity) |
finalRobustError | Optional resulting final average robust error, in relation to the defined estimator |
correspondences | Optional resulting number of 2D/3D point correspondences which were available |
|
static |
Determines the camera poses depending on valid 2D/3D point correspondences within a range of camera frames.
The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
The resulting poses will have either a sole rotational motion or a rotational and translational motion, this depends on the defined camera motion.
database | The database from which the point correspondences are extracted |
camera | The camera profile defining the projection, must be valid |
cameraMotion | The motion of the camera, use CM_UNKNOWN if the motion is unknown so that 6-DOF poses will be determined |
priorityObjectPointIds | Optional ids of the object points for which the poses will be optimized with higher priority, may be zero so that all object points are investigated with the same priority |
solePriorityPoints | True, to apply only the priority object points for pose determination, has no meaning if no priority points are provided |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity) |
poses | The resulting determined poses starting with the lower frame and ending with the upper frame |
estimator | The robust estimator which is applied for the non-linear pose optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
ransacMaximalSqrError | The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity) |
maximalRobustError | The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity) |
finalAverageError | Optional resulting average final error for all valid poses, the error depends on the selected robust estimator |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
static |
Determines the individual cosine values between the mean coordinate axes of a range of poses and the coordinate axes of the individual poses.
The specified range of camera poses must cover a range with valid poses.
database | The database providing the camera poses |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
xOrientations | The resulting cosine values for the poses' xAxis, one for each camera pose |
yOrientations | The resulting cosine values for the poses' yAxis, one for each camera pose |
zOrientations | The resulting cosine values for the poses' zAxis, one for each camera pose |
|
static protected
Determines a subset of the camera poses depending on valid 2D/3D point correspondences within a range of camera frames.
The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
database | The database from which the point correspondences are extracted and which receives the determined camera poses |
camera | The camera profile defining the projection, must be valid |
priorityObjectPointIds | Optional ids of the object points for which the poses will be optimized, may be zero so that all object points are investigated with the same priority |
solePriorityPoints | True, to apply only the priority object points for pose determination |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity) |
poses | The resulting determined poses starting with the lower frame and ending with the upper frame |
estimator | The robust estimator which is applied for the non-linear pose optimization |
ransacMaximalSqrError | The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity) |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalRobustError | The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity) |
totalError | The resulting accumulated total error for all poses |
lock | The lock object which must be defined if this function is executed in parallel on several individual threads |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
numberThreads | The overall number of threads which are used in parallel |
threadIndex | The index of the thread executing this function, with range [0, numberThreads) |
numberThreadsOne | Must be 1 |
|
static |
Determines the accuracy of a camera pose for all valid object points visible in the frame by measuring the projection error between the projected object points and their corresponding image points.
database | The database providing the locations of the 3D object points, the camera poses and the image points |
pinholeCamera | The pinhole camera profile which is applied |
poseId | The id of the camera frame for which the accuracy of the pose will be determined |
useDistortionParameters | True, to apply the distortion parameter of the camera |
validCorrespondences | Optional resulting number of valid pose correspondences |
minimalSqrError | Optional resulting minimal (best) projection error for the pose |
averageSqrError | Optional resulting averaged projection error for the pose |
maximalSqrError | Optional resulting maximal (worst) projection error for the pose |
|
static |
Determines the projection errors of a 3D object point in combination with a set of camera poses and image points (the projections of the object point).
camera | The camera profile defining the projection, must be valid |
objectPoint | The 3D object point for which the quality will be measured |
world_T_cameras | The camera poses in which the object point is visible |
imagePoints | The individual image points in the individual camera frames |
minimalSqrError | Optional resulting minimal (best) projection error for the object point |
averageSqrError | Optional resulting averaged projection error for the object point |
maximalSqrError | Optional resulting maximal (worst) projection error for the object point |
|
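For illustration, a plain C++ sketch of how such minimal/average/maximal squared projection errors can be computed for one object point, using a simplified pinhole model (focal lengths fx/fy, principal point cx/cy, no distortion); the structs and names below are placeholders, not the documented API:

#include <algorithm>
#include <array>
#include <cstddef>
#include <limits>
#include <vector>

using Vec2 = std::array<double, 2>;
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>; // row-major 3x3 rotation matrix

struct Pose
{
	Mat3 world_R_camera; // rotation from camera to world coordinates
	Vec3 world_t_camera; // camera center in world coordinates
};

struct SqrErrors
{
	double minimal = std::numeric_limits<double>::max();
	double average = 0.0;
	double maximal = 0.0;
};

// Computes the minimal/average/maximal squared pixel error between the projections of the
// object point and the given image points, one image point per camera pose.
SqrErrors projectionErrors(const Vec3& objectPoint, const std::vector<Pose>& world_T_cameras, const std::vector<Vec2>& imagePoints, const double fx, const double fy, const double cx, const double cy)
{
	SqrErrors errors;
	double sum = 0.0;

	for (std::size_t n = 0; n < world_T_cameras.size(); ++n)
	{
		const Pose& pose = world_T_cameras[n];

		// transform the object point into the camera coordinate system: X_camera = R^T * (X_world - t)
		const Vec3 offset = {objectPoint[0] - pose.world_t_camera[0], objectPoint[1] - pose.world_t_camera[1], objectPoint[2] - pose.world_t_camera[2]};

		Vec3 cameraPoint;
		for (unsigned int r = 0u; r < 3u; ++r)
		{
			cameraPoint[r] = pose.world_R_camera[0][r] * offset[0] + pose.world_R_camera[1][r] * offset[1] + pose.world_R_camera[2][r] * offset[2];
		}

		// project with the simplified pinhole model; the point is assumed to lie in front of the camera
		const double u = fx * cameraPoint[0] / cameraPoint[2] + cx;
		const double v = fy * cameraPoint[1] / cameraPoint[2] + cy;

		const double du = u - imagePoints[n][0];
		const double dv = v - imagePoints[n][1];
		const double sqrError = du * du + dv * dv;

		errors.minimal = std::min(errors.minimal, sqrError);
		errors.maximal = std::max(errors.maximal, sqrError);
		sum += sqrError;
	}

	if (!world_T_cameras.empty())
	{
		errors.average = sum / double(world_T_cameras.size());
	}

	return errors;
}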
static |
Determines the averaged and maximal squared pixel errors between the projections of individual 3D object points and their corresponding image points in individual camera frames.
database | The database from which the camera poses, the object points and the image points are extracted |
pinholeCamera | The pinhole camera profile which is applied |
objectPointIds | The ids of all object points for which the maximal squared pixel errors are determined |
useDistortionParameters | True, to use the distortion parameters of the camera to distort the projected object points |
lowerFrame | Optional index of the frame defining the lower border of camera poses which will be investigated, -1 if no lower and no upper border is defined |
upperFrame | Optional index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity), -1 if also 'lowerFrame' is -1 |
minimalSqrErrors | Optional resulting minimal squared pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error |
averagedSqrErrors | Optional resulting averaged pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error |
maximalSqrErrors | Optional resulting maximal squared pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error |
observations | Optional resulting observations for each object point, one number of observations for each given object point id |
worker | Optional worker object to distribute the computation |
|
static protected
Determines the maximal squared pixel errors between the projections of a subset of individual 3D object points and their corresponding image points in individual camera frames.
database | The database from which the camera poses, the object points and the image points are extracted |
pinholeCamera | The pinhole camera profile which is applied |
objectPointIds | The ids of all object points for which the maximal squared pixel errors are determined |
posesIF | The inverted and flipped poses of all camera frames which will be investigated, the poses can be valid or invalid, the first pose is the camera pose for the frame with id 'lowerPoseId' |
lowerPoseId | The id of the first provided pose |
upperPoseId | The id of the last provided pose, thus posesIF must store (upperPoseId - lowerPoseId + 1) poses |
useDistortionParameters | True, to use the distortion parameters of the camera to distort the projected object points |
minimalSqrErrors | Optional resulting minimal squared pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error |
averagedSqrErrors | Optional resulting averaged pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error |
maximalSqrErrors | Optional resulting maximal squared pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error |
observations | Optional resulting observations for each object point, one number of observations for each given object point id |
firstObjectPoint | The first object point to handle |
numberObjectPoints | The number of object points to handle |
|
static |
Determines a set of representative camera poses from a given database from a set of given camera poses.
database | The database from which the representative camera poses are extracted |
poseIds | The camera pose ids from which the representative camera poses are extracted, all poses must be valid |
numberRepresentative | The number of representative poses that will be determined |
|
static |
Determines a set of representative camera poses from a given database within a specified frame range.
Only valid camera poses from the database will be investigated.
database | The database from which the representative camera poses are extracted |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
numberRepresentative | The number of representative poses that will be determined |
|
inline static
Determines the positions of (currently unknown) object points which are visible in specified poses (the poses are specified by a lower and upper frame range).
Only camera frames with a valid camera pose are used to determine the locations of the new object points.
All unknown object points with more or equal observations (in valid poses) than specified will be handled.
However, the number of resulting object points with a valid 3D position may be smaller than the maximal possible number due to, e.g., the defined maximal error parameters.
database | The database from which the object point, image point and pose information is extracted |
camera | The camera profile defining the projection, must be valid |
cameraMotion | The motion of the camera, can be CM_ROTATIONAL or CM_TRANSLATIONAL |
lowerPoseId | The lower id of the camera pose in which the unknown object points can/must be visible |
upperPoseId | The upper id of the camera pose in which the unknown object points can/must be visible, with range [lowerPoseId, infinity) |
newObjectPoints | The resulting 3D location of the new object points |
newObjectPointIds | The ids of the resulting new object points, one id for each resulting new object point |
randomGenerator | Random generator object to be used for creating random numbers, must be defined |
newObjectPointObservations | Optional resulting number of observations (with valid camera poses) for each determined 3D object point, one number for each resulting 3D object point location |
minimalObjectPointPriority | The minimal priority value of the resulting unknown object points |
minimalObservations | The minimal number of observations (with valid camera poses) for each new object point which are necessary to determine the 3D location |
useAllObservations | True, to use all observations (with valid camera pose) to determine the 3D locations; False, to apply a RANSAC mechanism taking a subset of all observations to determine the 3D locations |
estimator | The robust estimator which is applied during optimization of each individual new 3D location, must be defined |
ransacMaximalSqrError | The maximal squared projection error between a new 3D object point and the corresponding image points for the RANSAC mechanism |
averageRobustError | The (average) robust error for a new 3D object point after optimization of the 3D location |
maximalSqrError | The maximal error for a new valid 3D object point after optimization of the 3D location |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
tVisibleInAllPoses | True, if the object points must be visible in all poses (frames) of the specified pose range; False, if the object point can be visible in any poses (frames) within the specified pose range |
|
static |
Determines the positions of a set of (currently unknown) object points.
Only camera frames with a valid camera pose are used to determine the new object points.
database | The database from which the object point, image point and pose information is extracted |
camera | The camera profile defining the projection, must be valid |
cameraMotion | The motion of the camera, can be CM_ROTATIONAL or CM_TRANSLATIONAL |
unknownObjectPointIds | The ids of all (currently unknown) object points for which a 3D position will be determined, must all be valid |
newObjectPoints | The resulting 3D location of the new object points |
newObjectPointIds | The ids of the resulting new object points, one id for each resulting new object point |
randomGenerator | Random generator object to be used for creating random numbers, must be defined |
newObjectPointObservations | Optional resulting number of observations for each determined 3D object point, one number for each resulting 3D object point location |
minimalObservations | The minimal number of observations for each new object point which are necessary to determine the 3D location |
useAllObservations | True, to use all observations (with valid camera pose) to determine the 3D locations; False, to apply a RANSAC mechanism taking a subset of all observations to determine the 3D locations |
estimator | The robust estimator which is applied during optimization of each individual new 3D location, must be defined |
ransacMaximalSqrError | The maximal squared projection error between a new 3D object point and the corresponding image points for the RANSAC mechanism |
averageRobustError | The (average) robust error for a new 3D object point after optimization of the 3D location |
maximalSqrError | The maximal error for a new valid 3D object point after optimization of the 3D location |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
inline static
Determines the positions of all (currently unknown) object points.
Only camera frames with a valid camera pose are used to determine the locations of the new object points.
All unknown object points with more or equal observations (in valid poses) than specified will be handled.
However, the number of resulting object points with valid 3D position may be smaller than the maximal possible number due to e.g., the defined maximal error parameters.
database | The database from which the object point, image point and pose information is extracted |
camera | The camera profile defining the projection, must be valid |
cameraMotion | The motion of the camera, can be CM_ROTATIONAL or CM_TRANSLATIONAL |
newObjectPoints | The resulting 3D location of the new object points |
newObjectPointIds | The ids of the resulting new object points, one id for each resulting new object point |
randomGenerator | Random generator object to be used for creating random numbers, must be defined |
newObjectPointObservations | Optional resulting number of observations (with valid camera poses) for each determined 3D object point, one number for each resulting 3D object point location |
minimalObjectPointPriority | The minimal priority value of the resulting unknown object points |
minimalObservations | The minimal number of observations (with valid camera poses) necessary for each new object point to determine its 3D location, with range [2, infinity) |
useAllObservations | True, to use all observations (with valid camera pose) to determine the 3D locations; False, to apply a RANSAC mechanism taking a subset of all observations to determine the 3D locations |
estimator | The robust estimator which is applied during optimization of each individual new 3D location, must be defined |
ransacMaximalSqrError | The maximal squared projection error between a new 3D object point and the corresponding image points for the RANSAC mechanism |
averageRobustError | The (average) robust error for a new 3D object point after optimization of the 3D location |
maximalSqrError | The maximal error for a new valid 3D object point after optimization of the 3D location |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
static |
Determines the positions of new object points from a database within a specified frame range.
Only camera frames with valid camera poses are used to determine the new object points.
This function extracts a subset of representative camera poses and triangulates image points from individual camera poses to determine new 3D object points.
Object points in the database with valid 3D positions are not investigated.
database | The database defining the topology of 3D object points, corresponding 2D image points and corresponding camera poses |
camera | The camera profile defining the projection, must be valid |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
newObjectPoints | The resulting positions of new object points |
newObjectPointIds | The resulting ids of the new object points, each id corresponds with a position in 'newObjectPoints' |
minimalKeyFrames | The minimal number of key frames which must be valid for a 3D object point, with range [minimalKeyFrames, upperFrame - lowerFrame + 1] |
maximalKeyFrames | The maximal number of key frames which will be used to determine the 3D object point positions, with range [minimalKeyFrames, upperFrame - lowerFrame + 1] |
maximalSqrError | The maximal squared error between a projected 3D object point and an image point so that the combination of object point and image point count as valid |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
staticprotected |
Determines the positions of new object points from a database within a specified frame range.
camera | The camera profile defining the projection, must be valid |
database | The database from which the object point and image point correspondences are extracted |
objectPointsData | The data holding groups of pose ids and image point ids for each individual object point |
randomGenerator | Random generator object to be used for creating random numbers, must be defined |
maximalSqrError | The maximal squared error between a projected 3D object point and an image point so that the combination of object point and image point count as valid |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
lock | The lock object which must be defined if this function is invoked in parallel |
newObjectPoints | The resulting positions of new object points |
newObjectPointIds | The resulting ids of the new object points, each id corresponds with a position in 'newObjectPoints' |
firstObjectPoint | The first object point to be handled, with range [0, numberObjectPoints) |
numberObjectPoints | The number of object points to be handled, with range [0, objectPointData->size()] |
|
staticprotected |
Determines the positions of a subset of (currently unknown) object points.
database | The database from which the object point, image point and pose information is extracted |
camera | The camera profile defining the projection, must be valid |
cameraMotion | The motion of the camera, can be CM_ROTATIONAL or CM_TRANSLATIONAL |
objectPointIds | The ids of all (currently unknown) object points for which a 3D position will be determined, must all be valid |
newObjectPoints | The resulting 3D location of the new object points |
newObjectPointIds | The ids of the resulting new object points, one id for each resulting new object point |
newObjectPointObservations | Optional resulting number of observations for each resulting new object point, one number for each resulting new object point |
randomGenerator | Random generator object to be used for creating random numbers, must be defined |
minimalObservations | The minimal number of observations necessary for each new object point to determine its 3D location |
useAllObservations | True, to use all observations (with valid camera pose) to determine the 3D locations; False, to apply a RANSAC mechanism taking a subset of all observations to determine the 3D locations |
estimator | The robust estimator which is applied during optimization of each individual new 3D location, must be defined |
ransacMaximalSqrError | The maximal squared projection error between a new 3D object point and the corresponding image points for the RANSAC mechanism |
averageRobustError | The (average) robust error for a new 3D object point after optimization of the 3D location |
maximalSqrError | The maximal error for a new valid 3D object point after optimization of the 3D location |
lock | Lock object which must be defined if this function is executed in parallel on individual threads |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
firstObjectPoint | First object point to be handled |
numberObjectPoints | Number of object points to be handled |
|
static |
Determines valid poses for a range of camera frames for which, in each frame, a group of image points is given that corresponds to the given object points.
Two individual camera poses must be known within the range of camera frames.
camera | The camera profile defining the projection, must be valid |
objectPoints | The object points with known locations, each object point has a corresponding image point in the groups of image points |
imagePointGroups | The groups of image points, each set of image points corresponds to the object points, each group of image points represents one camera pose (the observed object points respectively) |
randomGenerator | Random number generator |
cameraMotion | The motion of the camera, use CM_UNKNOWN if the motion is unknown so that 6-DOF poses will be determined |
firstValidPoseIndex | The index of the frame for which the first pose is known, with range [imagePointGroups.firstIndex(), imagePointGroups.lastIndex()] |
firstValidPose | The first known pose, must be valid |
secondValidPoseIndex | The index of the frame for which the second pose is known, with range [imagePointGroups.firstIndex(), imagePointGroups.lastIndex()] with firstValidPoseIndex != secondValidPoseIndex |
secondValidPose | The second known pose, must be valid |
minimalValidCorrespondenceRatio | The minimal ratio of valid correspondences (w.r.t. the given object points), if the number of valid correspondences is too low the pose is not valid, with range (0, 1] |
maximalSqrError | The maximal squared pixel error between a projected object point and the corresponding image point so that the correspondence counts as valid |
validObjectPointIndices | Optional resulting indices of the object points which are all valid in all determined valid poses |
poses | Optional resulting valid poses (corresponding to poseIds) |
poseIds | Optional resulting ids of all valid poses, each id has a corresponding resulting pose (however the ids themselves have no order) |
totalSqrError | Optional resulting sum of square pixel errors for all valid poses |
|
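A possible call is sketched below; the member name (assumed to be Solver3::determineValidPoses), the exact signature, and the return type are not shown in this excerpt and are assumptions derived from the parameter table above.

```cpp
// Hypothetical sketch; names and signature are assumed from the parameter table above.
// 'camera', 'objectPoints', 'imagePointGroups' (one group of image points per frame),
// and the two known poses / pose indices are assumed to be given already.
RandomGenerator randomGenerator;

Indices32 validObjectPointIndices;
HomogenousMatrices4 poses;
Indices32 poseIds;
Scalar totalSqrError = Scalar(0);

// the return value is assumed to report the number of valid poses
const size_t numberValidPoses = Solver3::determineValidPoses(
	camera, objectPoints, imagePointGroups, randomGenerator, Solver3::CM_UNKNOWN,
	firstValidPoseIndex, firstValidPose, secondValidPoseIndex, secondValidPose,
	Scalar(0.9) /* minimalValidCorrespondenceRatio */,
	Scalar(3.5 * 3.5) /* maximalSqrError */,
	&validObjectPointIndices, &poses, &poseIds, &totalSqrError);
```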
staticprotected |
Determines a subset of perfectly static image points which may be image points located (visible) at static logos in the frames.
imagePointGroups | Groups of image points where each group holds the projection of the same 3D object points |
objectPointIds | The ids of the object points which have the corresponding projected image points in the groups of image points |
maximalStaticImagePointFilterRatio | The maximal ratio of static image points in relation to the entire number of image points in each group, with range [0, 1] |
|
static |
Optimizes the camera profile for a given database with stable camera poses determined by initial but stable object points.
This function selects a representative subset of the valid poses within the specified frame range and considers all object points visible in the subset of camera frames.
The resulting optimized database (with optimized object point locations) invalidates all object point locations of object points not visible in the selected subset of camera frames.
Therefore, this function should be invoked after the initial set of stable object points are determined but before the database stores too many object points (which would get lost).
Further, this function supposes a translational (and optional rotational) camera motion.
database | The database providing a set of initial 3D object points visible in several valid camera poses |
pinholeCamera | The pinhole camera profile which has been used to determine the camera poses and 3D object point locations in the given database |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
findInitialFieldOfView | True, to apply a determination of the initial field of view of the camera (should be done if the camera's field of view is not known) |
optimizationStrategy | The optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters |
optimizedCamera | The resulting optimized camera profile with adjusted field of view and distortion parameters |
optimizedDatabase | The resulting database with optimized camera poses and 3D object point locations |
minimalObservationsInKeyframes | The minimal number of observations an object point must have under all selected keyframes so that it will be used to optimize the camera profile and so that this object point will be optimized |
minimalKeyframes | The minimal number of key frames (with valid poses) which are necessary for the determination/optimization, with range [2, maximalKeyframes] |
maximalKeyframes | The maximal number of key frames (with valid poses) which will be used for the optimization, with range [minimalKeyFrames, upperFrame - lowerFrame + 1] |
lowerFovX | The lower threshold border for the final (ideal) horizontal field of view of the camera profile, with range (0, upperFovX] |
upperFovX | The upper threshold border for the final (ideal) horizontal field of view of the camera profile, with range [lowerFoVX, PI) |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
finalMeanSqrError | Optional resulting final mean squared pose error (averaged) |
|
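A possible invocation is sketched below; the member name (assumed to be Solver3::optimizeCamera) and the exact signature are assumptions derived from the parameter table above.

```cpp
// Hypothetical sketch; names and signature are assumed from the parameter table above.
// 'database', 'pinholeCamera', 'lowerFrame' and 'upperFrame' are assumed to exist already.
PinholeCamera optimizedCamera;
Database optimizedDatabase;
Scalar finalMeanSqrError = Numeric::maxValue();

const bool success = Solver3::optimizeCamera(
	database, pinholeCamera, lowerFrame, upperFrame,
	true /* findInitialFieldOfView */,
	PinholeCamera::OS_INVALID /* optimizationStrategy; OS_INVALID keeps the intrinsics fixed */,
	optimizedCamera, optimizedDatabase,
	10u /* minimalObservationsInKeyframes */, 3u /* minimalKeyframes */, 20u /* maximalKeyframes */,
	Numeric::deg2rad(25) /* lowerFovX */, Numeric::deg2rad(85) /* upperFovX */,
	nullptr /* worker */, nullptr /* abort */, &finalMeanSqrError);
```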
static |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently.
The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses must be provided, together with a set of object point ids to be used for the optimization; the object points visible in the key frames will be optimized as long as they can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!
database | The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted |
pinholeCamera | The pinhole camera profile to be applied |
optimizationStrategy | The optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters |
keyFrameIds | The ids of all poses defining representative key frames for the optimization, at least two |
objectPointIds | The ids of the object points which will be optimized (may be a subset only), at least one |
optimizedCamera | The resulting optimized camera profile |
optimizedObjectPoints | The resulting positions of the optimized object points |
optimizedObjectPointIds | Optional resulting ids of the optimized object points, one id for each position in 'optimizedObjectPoints', nullptr if not of interest |
optimizedKeyFramePoses | Optional resulting optimized camera poses, one for each key frame id |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity) |
estimator | The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames |
iterations | The number of optimization iterations which will be applied, with range [1, infinity) |
initialRobustError | Optional resulting initial average robust error before optimization |
finalRobustError | Optional resulting final average robust error after optimization |
|
static |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently.
The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses must be provided and all object points visible in these key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!
database | The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted |
pinholeCamera | The pinhole camera profile to be applied |
optimizationStrategy | The optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters |
keyFrameIds | The ids of all poses defining representative key frames for the optimization, at least two |
optimizedCamera | The resulting optimized camera profile with adjusted field of view and distortion parameters |
optimizedObjectPoints | The resulting positions of the optimized object points, at least one |
optimizedObjectPointIds | The ids of the optimized object points, one id for each position in 'optimizedObjectPoints' |
optimizedKeyFramePoses | Optional resulting optimized camera poses, one for each key frame id |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity) |
estimator | The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames |
iterations | The number of optimization iterations which will be applied, with range [1, infinity) |
initialRobustError | Optional resulting initial average robust error before optimization |
finalRobustError | Optional resulting final average robust error after optimization |
|
static |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently.
The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses are selected and all object points visible in these key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!
database | The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted |
pinholeCamera | The pinhole camera profile to be applied |
optimizationStrategy | The optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters |
optimizedCamera | The resulting optimized camera profile with adjusted field of view and distortion parameters |
optimizedObjectPoints | The resulting positions of the optimized object points |
optimizedObjectPointIds | The ids of the optimized object points, one id for each position in 'optimizedObjectPoints' |
optimizedKeyFramePoses | Optional resulting camera poses, one for each keyframe which has been used during optimization, nullptr if not of interest |
optimizedKeyFramePoseIds | Optional resulting ids of the camera poses which have been used as key frame during optimization, one for each 'optimizedKeyFramePoses', nullptr if not of interest |
minimalKeyFrames | The minimal number of key frames (with valid poses) which are necessary for the optimization, with range [2, maximalKeyFrames] |
maximalKeyFrames | The maximal number of key frames (with valid poses) which will be used for the optimization, with range [minimalKeyFrames, infinity) |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity) |
estimator | The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames |
iterations | The number of optimization iterations which will be applied, with range [1, infinity) |
initialRobustError | Optional resulting initial average robust error before optimization |
finalRobustError | Optional resulting final average robust error after optimization |
|
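A possible call to this keyframe-selecting variant is sketched below; the member name (assumed to be Solver3::optimizeCameraWithVariableObjectPointsAndPoses) and the exact signature are assumptions derived from the parameter table above.

```cpp
// Hypothetical sketch; names and signature are assumed from the parameter table above.
PinholeCamera optimizedCamera;
Vectors3 optimizedObjectPoints;
Indices32 optimizedObjectPointIds;
HomogenousMatrices4 optimizedKeyFramePoses;
Indices32 optimizedKeyFramePoseIds;
Scalar initialRobustError = Numeric::maxValue();
Scalar finalRobustError = Numeric::maxValue();

const bool success = Solver3::optimizeCameraWithVariableObjectPointsAndPoses(
	database, pinholeCamera,
	PinholeCamera::OS_INVALID /* optimizationStrategy; OS_INVALID keeps the intrinsics fixed */,
	optimizedCamera, optimizedObjectPoints, optimizedObjectPointIds,
	&optimizedKeyFramePoses, &optimizedKeyFramePoseIds,
	3u /* minimalKeyFrames */, 20u /* maximalKeyFrames */, 10u /* minimalObservations */,
	Geometry::Estimator::ET_SQUARE, 50u /* iterations */,
	&initialRobustError, &finalRobustError);
```

As stated above, the database itself is not modified; the optimized object point locations (and poses) would need to be written back to the database explicitly.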
static |
Optimizes the positions of already known initial 3D object points when a given database holds neither valid 3D positions nor valid 6-DOF poses.
The optimization is done by a bundle adjustment between the camera poses of distinct keyframes and the given 3D object points, however the optimized camera poses are not provided.
This function can optimize a subset of the given initial object points to allow more camera poses (camera frames) to be involved.
database | The database defining the topology of 3D object points and corresponding 2D image points |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
startFrame | The index of the frame from which the algorithm will start, in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame] |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
initialObjectPoints | The already known initial 3D positions of object points |
initialObjectPointIds | The ids of the already known object points, one id for each given initial object point |
optimizedObjectPoints | The resulting optimized 3D positions of the given initial object points |
optimizedObjectPointIds | The resulting ids of the optimized object points, one id for each optimized object point |
minimalObjectPoints | The minimal number of object points that will be optimized (the higher the number, the fewer camera poses may be used, as some object points may not be visible in all camera frames), with range [5, initialObjectPoints.size()); however, due to pose inaccuracies the algorithm finally may use fewer object points |
minimalKeyFrames | The minimal number of keyframes that will be used, with range [2, maximalKeyFrames] |
maximalKeyFrames | The maximal number of keyframes that will be used, with range [minimalKeyFrames, upperFrame - lowerFrame + 1]; however, due to pose inaccuracies the algorithm finally may use more keyframes |
maximalSqrError | The maximal squared projection error for a 3D object point, points with larger error are excluded after a first optimization iteration |
usedPoseIds | Optional resulting ids of all camera poses which have been used to optimize the object points |
initialSqrError | Optional resulting initial average squared error |
finalSqrError | Optional resulting final average squared error |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
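A possible invocation is sketched below; the member name (assumed to be Solver3::optimizeInitialObjectPoints) and the exact signature are assumptions derived from the parameter table above.

```cpp
// Hypothetical sketch; names and signature are assumed from the parameter table above.
// 'initialObjectPoints' and 'initialObjectPointIds' could come from one of the
// determineInitialObjectPoints*() functions; 'database', 'camera' and the frame
// indices are assumed to exist already.
RandomGenerator randomGenerator;

Vectors3 optimizedObjectPoints;
Indices32 optimizedObjectPointIds;
Indices32 usedPoseIds;
Scalar initialSqrError = Numeric::maxValue();
Scalar finalSqrError = Numeric::maxValue();

const bool success = Solver3::optimizeInitialObjectPoints(
	database, camera, randomGenerator, lowerFrame, startFrame, upperFrame,
	initialObjectPoints, initialObjectPointIds,
	optimizedObjectPoints, optimizedObjectPointIds,
	20u /* minimalObjectPoints */, 3u /* minimalKeyFrames */, 10u /* maximalKeyFrames */,
	Scalar(3.5 * 3.5) /* maximalSqrError */,
	&usedPoseIds, &initialSqrError, &finalSqrError, nullptr /* abort */);
```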
static |
Optimizes a set of 3D object points (having a quite good accuracy already) without optimizing the camera poses concurrently.
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
database | The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted |
pinholeCamera | The pinhole camera profile to be applied |
cameraMotion | The motion of the camera, CM_ROTATIONAL if the camera poses do not have a translational part, CM_TRANSLATIONAL otherwise |
objectPointIds | The ids of the object points for which the positions will be optimized (all points must have already initial 3D positions) |
optimizedObjectPoints | The resulting positions of the optimized object points |
optimizedObjectPointIds | The ids of the optimized object points, one id for each position in 'optimizedObjectPoints' |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized |
estimator | The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames |
maximalRobustError | The maximal error between a projected object point and the individual image points; beware the error must be defined w.r.t. the selected estimator |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
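A possible invocation is sketched below; the member name (assumed to be Solver3::optimizeObjectPointsWithFixedPoses) and the exact signature are assumptions derived from the parameter table above.

```cpp
// Hypothetical sketch; names and signature are assumed from the parameter table above.
// 'objectPointIds' is assumed to contain ids of object points with valid initial 3D positions.
Vectors3 optimizedObjectPoints;
Indices32 optimizedObjectPointIds;

const bool success = Solver3::optimizeObjectPointsWithFixedPoses(
	database, pinholeCamera, Solver3::CM_TRANSLATIONAL, objectPointIds,
	optimizedObjectPoints, optimizedObjectPointIds,
	10u /* minimalObservations */,
	Geometry::Estimator::ET_SQUARE,
	Scalar(3.5 * 3.5) /* maximalRobustError, defined w.r.t. the chosen estimator */,
	nullptr /* worker */, nullptr /* abort */);
```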
staticprotected |
Optimizes a subset of a set of 3D object points which have a quite good accuracy already without optimizing the camera poses concurrently.
The database must hold the valid initial 3D object positions and must hold valid camera poses.
database | The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted |
pinholeCamera | The pinhole camera profile to be applied |
cameraMotion | The motion of the camera, CM_ROTATIONAL if the camera poses do not have a translational part, CM_TRANSLATIONAL otherwise |
objectPointIds | The ids of the object points for which the positions will be optimized (all points must have already initial 3D positions) |
optimizedObjectPoints | The resulting positions of the optimized object points |
optimizedObjectPointIds | The ids of the optimized object points, one id for each position in 'optimizedObjectPoints' |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized |
estimator | The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames |
maximalRobustError | The maximal error between a projected object point and the individual image points; beware the error must be defined w.r.t. the selected estimator |
lock | Optional lock object ensuring a safe distribution of the computation, must be defined if this function is executed in parallel |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
firstObjectPoint | First object point to be handled |
numberObjectPoints | The number of object points to be handled |
|
static |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently.
The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses must be provided, together with a set of object point ids to be used for the optimization; the object points visible in the key frames will be optimized as long as they can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!
database | The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted |
pinholeCamera | The pinhole camera profile to be applied |
keyFrameIds | The ids of all poses defining representative key frames for the optimization, at least two |
objectPointIds | The ids of the object points which will be optimized (may be a subset only), at least one |
optimizedObjectPoints | The resulting positions of the optimized object points |
optimizedObjectPointIds | The ids of the optimized object points, one id for each position in 'optimizedObjectPoints' |
optimizedKeyFramePoses | Optional resulting optimized camera poses, one for each key frame id |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity) |
estimator | The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames |
iterations | The number of optimization iterations which will be applied, with range [1, infinity) |
initialRobustError | Optional resulting initial average robust error before optimization |
finalRobustError | Optional resulting final average robust error after optimization |
|
static |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently.
The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses must be provided and all object points visible in these key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!
database | The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted |
pinholeCamera | The pinhole camera profile to be applied |
keyFrameIds | The ids of all poses defining representative key frames for the optimization, at least two |
optimizedObjectPoints | The resulting positions of the optimized object points, at least one |
optimizedObjectPointIds | The ids of the optimized object points, one id for each position in 'optimizedObjectPoints' |
optimizedKeyFramePoses | Optional resulting optimized camera poses, one for each key frame id |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity) |
estimator | The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames |
iterations | The number of optimization iterations which will be applied, with range [1, infinity) |
initialRobustError | Optional resulting initial average robust error before optimization |
finalRobustError | Optional resulting final average robust error after optimization |
|
static |
Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently.
The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses are selected and all object points visible in these key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!
database | The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted |
pinholeCamera | The pinhole camera profile to be applied |
optimizedObjectPoints | The resulting positions of the optimized object points |
optimizedObjectPointIds | The ids of the optimized object points, one id for each position in 'optimizedObjectPoints' |
optimizedKeyFramePoses | Optional resulting camera poses, one for each keyframe which has been used during optimization, nullptr if not of interest |
optimizedKeyFramePoseIds | Optional resulting ids of the camera poses which have been used as key frame during optimization, one for each 'optimizedKeyFramePoses', nullptr if not of interest |
minimalKeyFrames | The minimal number of key frames (with valid poses) which are necessary for the optimization, with range [2, maximalKeyFrames] |
maximalKeyFrames | The maximal number of key frames (with valid poses) which will be used for the optimization, with range [minimalKeyFrames, infinity) |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity) |
estimator | The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames |
iterations | The number of optimization iterations which will be applied, with range [1, infinity) |
initialRobustError | Optional resulting initial average robust error before optimization |
finalRobustError | Optional resulting final average robust error after optimization |
|
static |
Removes all valid 3D object points (and their corresponding 2D image points) from the database which, in at least one frame in which they are observed by a 2D image point, are not located in front of the camera.
database | The database from which the 3D object points will be removed |
removedObjectPointIds | Optional resulting ids of all object points which have been removed, nullptr if not of interest |
|
static |
Removes any 3D object point (and its corresponding 2D image points) from the database with fewer than a specified number of observations.
database | The database from which the 3D object points will be removed |
minimalNumberObservations | The minimal number of observations a 3D object point must have to stay in the database, with range [1, infinity) |
removedObjectPointIds | Optional resulting ids of all object points which have been removed, nullptr if not of interest |
|
static |
Removes any 3D object point (and its corresponding 2D image points) from the database if all its corresponding camera poses are located within a too small bounding box.
The bounding box is determined based on the translational parts of the camera poses.
database | The database from which the 3D object points will be removed |
minimalBoxDiagonal | The minimal diagonal of the bounding box of all camera poses observing an object point so that the object point stays in the database |
removedObjectPointIds | Optional resulting ids of all object points which have been removed, nullptr if not of interest |
|
static |
Removes very far object points from the database if the number of these object points does not exceed a specified ratio (compared to the remaining object points).
Optimization functions for camera poses or bundle adjustment functions may fail if the database holds a large set of dense object points and a small number of very sparse object points.
Thus, this function can be used to improve the 'quality' of a database.
database | The database from which the very sparse object points will be removed |
minimalBoundingBoxDiagonal | the minimal size of the diagonal of the bounding box of the object points so that the database can be modified, with range (0, infinity) |
medianFactor | The factor which is multiplied with the median distance between the median object point and the object points of the database to identify very sparse (very far) object points |
maximalSparseObjectPointRatio | The maximal ratio between the very sparse object points and the entire number of object points so that the database will be modified |
|
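A possible invocation is sketched below; the member name (assumed to be Solver3::removeSparseObjectPoints), the exact signature, and the concrete threshold values are assumptions, derived from the parameter table above.

```cpp
// Hypothetical sketch; names, signature and threshold values are assumed from the table above.
// Removes outlier-like, very distant object points only if they make up a small fraction of the database.
Solver3::removeSparseObjectPoints(
	database,
	Scalar(1e-3) /* minimalBoundingBoxDiagonal */,
	Scalar(100) /* medianFactor: points farther than 100x the median distance count as very sparse */,
	Scalar(0.05) /* maximalSparseObjectPointRatio: modify the database only if at most 5% of the points are that sparse */);
```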
static |
Supposes pure rotational camera motion for a given database with stable camera poses determined by initial but stable object points.
If the camera profile is not well approximated during determination of the camera poses and the initial 3D object points the camera motion may contain translational motion although in reality the motion is only rotational.
Especially, if the camera comes with a significant distortion the motion determination may go wrong.
Therefore, this function supposes sole rotational camera motion, determines the new 3D object point locations, selects a set of suitable keyframes best representing the entire set of valid camera poses, and optimizes the camera's field of view and distortion parameters.
If the projection error between 3D object points and 2D image points falls below a defined threshold (which should be strict), then the camera motion can be expected to provide only rotational parts.
Beware: Valid object points (with valid location) not visible within the specified frame range will not be investigated.
database | The database providing a set of initial 3D object points visible in several valid camera poses |
pinholeCamera | The pinhole camera profile which has been used to determine the camera poses and 3D object point locations in the given database |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
findInitialFieldOfView | True, to apply a determination of the initial field of view of the camera (should be done if the camera's field of view is not known) |
optimizationStrategy | The optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters |
optimizedCamera | The resulting optimized camera profile with adjusted field of view and distortion parameters |
optimizedDatabase | The resulting database with optimized camera poses and 3D object point locations |
minimalObservations | The minimal number of observations an object point must have so that it will be investigated to measure whether the camera motion is purely rotational |
minimalKeyframes | The minimal number of key frames (with valid poses) which are necessary for the determination/optimization, with range [2, maximalKeyframes] |
maximalKeyframes | The maximal number of key frames (with valid poses) which will be used for the optimization, with range [minimalKeyFrames, upperFrame - lowerFrame + 1] |
lowerFovX | The lower threshold border for the final (ideal) horizontal field of view of the camera profile, with range (0, upperFovX] |
upperFovX | The upper threshold border for the final (ideal) horizontal field of view of the camera profile, with range [lowerFoVX, PI) |
maximalSqrError | The maximal average projection error between the 3D object points and the 2D image points so that a correspondence counts as valid |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
finalMeanSqrError | Optional resulting final mean squared pose error (averaged) |
|
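A possible invocation is sketched below; the member name (assumed to be Solver3::supposeRotationalCameraMotion), the exact signature, and the interpretation of the return value are assumptions derived from the parameter table above.

```cpp
// Hypothetical sketch; names and signature are assumed from the parameter table above.
PinholeCamera optimizedCamera;
Database optimizedDatabase;

// the return value is assumed to state whether the motion can be explained as purely rotational
const bool rotationalMotion = Solver3::supposeRotationalCameraMotion(
	database, pinholeCamera, lowerFrame, upperFrame,
	true /* findInitialFieldOfView */,
	PinholeCamera::OS_INVALID /* optimizationStrategy; OS_INVALID keeps the intrinsics fixed */,
	optimizedCamera, optimizedDatabase,
	10u /* minimalObservations */, 3u /* minimalKeyframes */, 20u /* maximalKeyframes */,
	Numeric::deg2rad(25) /* lowerFovX */, Numeric::deg2rad(85) /* upperFovX */,
	Scalar(1.5 * 1.5) /* maximalSqrError, a strict threshold */,
	nullptr /* worker */, nullptr /* abort */, nullptr /* finalMeanSqrError */);
```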
static |
This function tracks image points (defined by their object points) from one frame to the sibling frames as long as the number of tracked points does not fall below a specified number, or at least until a minimal number of sibling frames has been processed.
Thus, this function supports two individual termination conditions: either the specification of a minimal number of tracked points or the specification of the minimal number of used sibling frames (with at least one tracked point).
If the number of tracked object points exceeds 'maximalTrackedObjectPoints', the most 'interesting' object points (those widely spread over the start frame) are selected and the remaining ones are removed.
The tracking is applied forward and backward starting at a specific frame.
database | The database defining the topology of 3D object points and corresponding 2D image points, object point positions and camera poses may be invalid as this information is not used |
objectPointIds | The ids of the initial object points defining the image points which will be tracked, each object point should have a corresponding image point |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
startFrame | The index of the frame from which the algorithm will start, in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame] |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalTrackedObjectPoints | One of two termination conditions: The minimal number of tracked points, with range [1, objectPointIds.size()], must be 0 if minimalTrackedFrames is not 0 |
minimalTrackedFrames | One of two termination conditions: The minimal number of tracked frames, with range [1, upperFrame - lowerFrame + 1u], must be 0 if minimalTrackedObjectPoints is not 0 |
maximalTrackedObjectPoints | The maximal number of tracked points, with range [minimalTrackedObjectPoints, objectPointIds.size()] |
trackedObjectPointIds | The resulting ids of the tracked object points, one id for each tracked object point |
trackedImagePointGroups | The resulting groups of tracked image points, one group for each camera frame, one image point for each object point |
trackedValidIndices | Optional resulting indices of the given object point ids that could be tracked |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
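A possible invocation is sketched below; the member name (assumed to be Solver3::trackObjectPoints), the exact signature, and the type of the image point groups are assumptions derived from the parameter table above.

```cpp
// Hypothetical sketch; names and signature are assumed from the parameter table above.
// 'objectPointIds' is assumed to contain the ids of the object points visible in 'startFrame'.
Indices32 trackedObjectPointIds;
Database::ImagePointGroups trackedImagePointGroups; // assumed typedef: one group of image points per frame
Indices32 trackedValidIndices;

const bool success = Solver3::trackObjectPoints(
	database, objectPointIds, lowerFrame, startFrame, upperFrame,
	20u /* minimalTrackedObjectPoints */,
	0u /* minimalTrackedFrames, 0 as the point-based termination condition is used */,
	100u /* maximalTrackedObjectPoints */,
	trackedObjectPointIds, trackedImagePointGroups,
	&trackedValidIndices, nullptr /* abort */);
```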
static |
This function tracks two individual (disjoint) groups of image points (defined by their object points) from one frame to the sibling frames as long as the number of tracked points does not fall below a specified number.
The tracking is applied forward and backward starting at a specific frame.
First, the priority points are tracked as long as possible, which defines the tracking range for the remaining points.
Afterwards, the remaining points will be tracked as long as possible but not outside the frame range which results from the tracking of the priority points.
Last, the results of both groups will be joined to one large set of tracked object points, first the priority object points, followed by the remaining object points.
database | The database defining the topology of 3D object points and corresponding 2D image points, object point positions and camera poses may be invalid as this information is not used |
priorityObjectPointIds | The ids of the initial priority object points defining the first group of image points which will be tracked, each object point should have a corresponding image point |
remainingObjectPointIds | The ids of the initial remaining object points defining the second group of image points which will be tracked, each object point should have a corresponding image point |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
startFrame | The index of the frame from which the algorithm will start, in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame] |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalTrackedPriorityObjectPoints | The minimal number of tracked priority points, with range [1, priorityObjectPointIds.size()) |
minimalRemainingFramesRatio | The minimal number of frames in which remaining points must be tracked (must be visible) defined as a ratio of the number of frames in which the priority points are visible, with range (0, 1] |
maximalTrackedPriorityObjectPoints | The maximal number of tracked priority points, with range [minimalTrackedPriorityObjectPoints, priorityObjectPointIds.size()] |
maximalTrackedRemainingObjectPoints | The maximal number of tracked remaining points, with range [minimalTrackedRemainingObjectPoints, remainingObjectPointIds.size()] |
trackedObjectPointIds | The resulting ids of the tracked object points, one id for each tracked object point |
trackedImagePointGroups | The resulting groups of tracked image points, one group for each camera frame, one image point for each object point |
trackedValidPriorityIndices | Optional resulting indices of the given priority object point ids that could be tracked |
trackedValidRemainingIndices | Optional resulting indices of the given remaining object point ids that could be tracked |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
static |
This function tracks a group of object points from one frame to both (if available) neighbor frames and counts the minimal number of tracked points.
Use this function to measure the scene complexity at a specific frame.
The fewer object points can be tracked, the more complex the scene.
database | The database defining the topology of 3D object points and corresponding 2D image points, object point positions and camera poses may be invalid as this information is not used |
objectPointIds | The ids of the object points which will be tracked, each object point should have a corresponding image point |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
startFrame | The index of the frame from which the algorithm will start, in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame] |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
|
static |
Translates a camera motion value to a string providing the detailed motion as a readable string.
cameraMotion | The camera motion for which a readable string is requested |
|
staticprotected |
Determines the semi-precise location of 3D object points and the camera poses for a sole rotational camera motion.
Beforehand, the locations and the camera poses may not match a pure rotational camera motion.
Only object points with an already valid location will receive a precise location matching the rotational motion.
Only valid camera poses will receive a precise pose matching the rotational motion.
database | The database providing already known locations of 3D object points (may not match with a sole rotational camera motion), already known valid camera poses (may also not match with a sole rotational camera motion) |
pinholeCamera | The pinhole camera profile defining the projection |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalObservations | The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [0, infinity) |
relocatedObjectPointIds | Optional resulting ids of all object points which have been relocated |
|
staticprotected |
Updates a subset of the camera orientations (as the camera has rotational motion only) depending on valid 2D/3D point correspondences within a range of camera frames.
The camera orientations (their poses respectively) will be set to invalid if no valid orientation can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
database | The database from which the point correspondences are extracted and which receives the determined camera orientations (the 6-DOF poses with zero translation) |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera orientation, with range [5, infinity) |
estimator | The robust estimator which is applied for the non-linear orientation optimization |
ransacMaximalSqrError | The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity) |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalRobustError | The maximal average robust pixel error between image point and projected object points so that an orientation counts as valid, with range (0, infinity) |
totalError | The resulting accumulated total error for all poses (orientations) |
validPoses | The resulting number of valid poses (orientations) |
lock | The lock object which must be defined if this function is executed in parallel on several individual threads |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
numberThreads | The overall number of threads which are used in parallel |
threadIndex | The index of the thread executing this function, with range [0, numberThreads) |
numberThreadsOne | Must be 1 |
|
static |
Updates the camera poses of the database depending on valid 2D/3D point correspondences within a range of camera frames.
The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
Pose determination starts at a specified frame and moves to higher and lower frame indices afterwards.
Poses from successive frames are applied as initial guess for a new frame.
The resulting poses will have either a sole rotational motion or a rotational and translational motion, this depends on the defined camera motion.
database | The database from which the point correspondences are extracted and which receives the determined camera poses |
camera | The camera profile defining the projection, must be valid |
cameraMotion | The motion of the camera, use CM_UNKNOWN if the motion is unknown so that 6-DOF poses will be determined |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
startFrame | The index of the frame from which the algorithm will start, in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame] |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity) |
estimator | The robust estimator which is applied for the non-linear pose optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
ransacMaximalSqrError | The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity) |
maximalRobustError | The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity) |
finalAverageError | Optional resulting average final error for all valid poses, the error depends on the selected robust estimator |
validPoses | Optional resulting number of valid poses |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
static |
Updates the camera poses of the database depending on valid 2D/3D point correspondences within a range of camera frames.
The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
If a worker is provided every pose is determined independently.
The resulting poses will have either a sole rotational motion or a rotational and translational motion, this depends on the defined camera motion.
database | The database from which the point correspondences are extracted and which receives the determined camera poses |
camera | The camera profile defining the projection, must be valid |
cameraMotion | The motion of the camera, use CM_UNKNOWN if the motion is unknown so that 6-DOF poses will be determined |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity) |
estimator | The robust estimator which is applied for the non-linear pose optimization |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
ransacMaximalSqrError | The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity) |
maximalRobustError | The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity) |
finalAverageError | Optional resulting average final error for all valid poses, the error depends on the selected robust estimator |
validPoses | Optional resulting number of valid poses |
worker | Optional worker object to distribute the computation |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
|
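A possible invocation of this worker-based variant is sketched below; the member name (assumed to be Solver3::updatePoses), the exact signature, and the type of the optional 'validPoses' result are assumptions derived from the parameter table above.

```cpp
// Hypothetical sketch; names and signature are assumed from the parameter table above.
RandomGenerator randomGenerator;
Worker worker;

Scalar finalAverageError = Numeric::maxValue();
size_t validPoses = 0; // the type of the optional 'validPoses' result is an assumption

const bool success = Solver3::updatePoses(
	database, camera, Solver3::CM_UNKNOWN, randomGenerator,
	lowerFrame, upperFrame,
	5u /* minimalCorrespondences */,
	Geometry::Estimator::ET_SQUARE,
	Scalar(1) /* minimalValidCorrespondenceRatio */,
	Scalar(3.5 * 3.5) /* ransacMaximalSqrError */,
	Scalar(3.5 * 3.5) /* maximalRobustError */,
	&finalAverageError, &validPoses,
	&worker, nullptr /* abort */);
```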
staticprotected |
Updates a subset of the camera poses depending on valid 2D/3D point correspondences within a range of camera frames.
The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
database | The database from which the point correspondences are extracted and which receives the determined camera poses |
camera | The camera profile defining the projection, must be valid |
randomGenerator | Random generator object |
lowerFrame | The index of the frame defining the lower border of camera poses which will be investigated |
upperFrame | The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity) |
minimalCorrespondences | The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity) |
estimator | The robust estimator which is applied for the non-linear pose optimization |
ransacMaximalSqrError | The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity) |
minimalValidCorrespondenceRatio | The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1] |
maximalRobustError | The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity) |
totalError | The resulting accumulated total error for all poses |
validPoses | The resulting number of valid poses |
lock | The lock object which must be defined if this function is executed in parallel on several individual threads |
abort | Optional abort statement allowing to stop the execution; True, if the execution has to stop |
numberThreads | The overall number of threads which are used in parallel |
threadIndex | The index of the thread executing this function, with range [0, numberThreads) |
numberThreadsOne | Must be 1 |