Ocean
Ocean::Tracking::Solver3 Class Reference

This class implements a Structure From Motion solver for unconstrained 3D object points and unconstrained 6-DOF camera poses. More...

Data Structures

class  ObjectPointToPoseImagePointCorrespondenceAccessor
 This class implements an accessor for groups of pairs of pose indices (not pose ids) and image points. More...
 
class  ObjectPointToPoseIndexImagePointCorrespondenceAccessor
 This class implements an accessor providing access to observation pairs (the observations of a projected object point in individual camera poses/frames) for a set of object points. More...
 
class  PoseToObjectPointIdImagePointCorrespondenceAccessor
 This class implements an accessor for groups of pairs of object point ids and image points. More...
 
class  RelativeThreshold
 Definition of a class defining a relative threshold with lower and upper boundaries for individual reference values. More...
 

Public Types

enum  CameraMotion {
  CM_INVALID = 0 , CM_STATIC = (1 << 0) , CM_ROTATIONAL = (1 << 1) , CM_TRANSLATIONAL = (1 << 2) ,
  CM_ROTATIONAL_TINY = CM_ROTATIONAL | (1 << 3) , CM_ROTATIONAL_MODERATE = CM_ROTATIONAL | (1 << 4) , CM_ROTATIONAL_SIGNIFICANT = CM_ROTATIONAL | (1 << 5) , CM_TRANSLATIONAL_TINY = CM_TRANSLATIONAL | (1 << 6) ,
  CM_TRANSLATIONAL_MODERATE = CM_TRANSLATIONAL | (1 << 7) , CM_TRANSLATIONAL_SIGNIFICANT = CM_TRANSLATIONAL | (1 << 8) , CM_UNKNOWN = CM_ROTATIONAL | CM_TRANSLATIONAL | (1 << 9)
}
 Definition of individual camera motion types. More...
 
enum  AccuracyMethod { AM_INVALID , AM_MEAN_DIRECTION_MIN_COSINE , AM_MEAN_DIRECTION_MEAN_COSINE , AM_MEAN_DIRECTION_MEDIAN_COSINE }
 Definition of individual methods to determine the accuracy of object points. More...
 

Static Public Member Functions

static bool determineInitialObjectPointsFromSparseKeyFrames (const Database &database, const PinholeCamera &pinholeCamera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const Scalar maximalStaticImagePointFilterRatio, Vectors3 &initialObjectPoints, Indices32 &initialObjectPointIds, const RelativeThreshold &pointsThreshold=RelativeThreshold(20u, Scalar(0.5), 100u), const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *usedPoseIds=nullptr, Scalar *finalSqrError=nullptr, Scalar *finalImagePointDistance=nullptr, bool *abort=nullptr)
 Determines the initial positions of 3D object points in a database if no camera poses or structure information is known. More...
 
static bool determineInitialObjectPointsFromDenseFrames (const Database &database, const PinholeCamera &pinholeCamera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const CV::SubRegion &regionOfInterest, const Scalar maximalStaticImagePointFilterRatio, Vectors3 &initialObjectPoints, Indices32 &initialObjectPointIds, const RelativeThreshold &pointsThreshold=RelativeThreshold(20u, Scalar(0.5), 100u), const Scalar minimalTrackedFramesRatio=Scalar(0.1), const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *usedPoseIds=nullptr, Scalar *finalSqrError=nullptr, bool *abort=nullptr)
 Determines the initial positions of 3D object points in a database if no camera poses or structure information is known. More...
 
static bool determineInitialObjectPointsFromSparseKeyFramesBySteps (const Database &database, const unsigned int steps, const PinholeCamera &pinholeCamera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const Scalar maximalStaticImagePointFilterRatio, Vectors3 &initialObjectPoints, Indices32 &initialObjectPointIds, const RelativeThreshold &pointsThreshold=RelativeThreshold(20u, Scalar(0.5), 100u), const unsigned int minimalKeyFrames=2u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *usedPoseIds=nullptr, Worker *worker=nullptr, bool *abort=nullptr)
 Determines the initial positions of 3D object points in a database if no camera poses or structure information is known. More...
 
static bool determineInitialObjectPointsFromSparseKeyFramesRANSAC (const PinholeCamera &pinholeCamera, const Database::ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, HomogenousMatrices4 &validPoses, Indices32 &validPoseIndices, Vectors3 &objectPoints, Indices32 &validObjectPointIndices, const unsigned int iterations=20u, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(10u, Scalar(0.3), 20u), const Scalar maximalSqrError=Scalar(3.5 *3.5), const Database *database=nullptr, const Indices32 *keyFrameIds=nullptr, const Indices32 *objectPointIds=nullptr, bool *abort=nullptr)
 Determines the initial object point positions for a set of key frames (image point groups) observing the unique object points in individual camera poses. More...
 
static bool determineInitialObjectPointsFromDenseFramesRANSAC (const PinholeCamera &pinholeCamera, const ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, HomogenousMatrices4 &validPoses, Indices32 &validPoseIds, Vectors3 &objectPoints, Indices32 &validObjectPointIndices, const unsigned int iterations=20u, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(10u, Scalar(0.3), 20u), const Scalar maximalSqrError=Scalar(3.5 *3.5), Worker *worker=nullptr, bool *abort=nullptr)
 Determines the initial object point positions for a set of frames (image point groups) observing the unique object points in individual camera poses. More...
 
static bool determineInitialObjectPointsFromSparseKeyFrames (const PinholeCamera &pinholeCamera, const Database::ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, const unsigned int firstGroupIndex, const unsigned int secondGroupIndex, HomogenousMatrices4 &poses, Indices32 &validPoseIndices, Vectors3 &objectPoints, Indices32 &validObjectPointIndices, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(10u, Scalar(0.3), 20u), const Scalar maximalSqrError=Scalar(3.5 *3.5))
 Determines the initial object point positions for a set of key frames (image point groups) observing unique object points. More...
 
static bool determineInitialObjectPointsFromDenseFrames (const PinholeCamera &pinholeCamera, const ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, const unsigned int firstGroupIndex, const unsigned int secondGroupIndex, HomogenousMatrices4 &validPoses, Indices32 &validPoseIds, Scalar &totalSqrError, Vectors3 &objectPoints, Indices32 &validObjectPointIndices, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(10u, Scalar(0.3), 20u), const Scalar maximalSqrError=Scalar(3.5 *3.5))
 Determines the initial object point positions for a set of image point groups (covering a range of image frames) observing the unique object points in individual frames. More...
 
static bool optimizeInitialObjectPoints (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const Vectors3 &initialObjectPoints, const Indices32 &initialObjectPointIds, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, const unsigned int minimalObjectPoints=5u, const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *usedPoseIds=nullptr, Scalar *initialSqrError=nullptr, Scalar *finalSqrError=nullptr, bool *abort=nullptr)
 Optimizes the positions of already known initial 3D object points when the given database holds neither valid 3D positions nor valid 6-DOF poses. More...
 
static bool determineUnknownObjectPoints (const Database &database, const AnyCamera &camera, const unsigned int lowerFrame, const unsigned int upperFrame, Vectors3 &newObjectPoints, Indices32 &newObjectPointIds, const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=10u, const Scalar maximalSqrError=Scalar(3.5 *3.5), Worker *worker=nullptr, bool *abort=nullptr)
 Determines the positions of new object points from a database within a specified frame range. More...
 
static bool determineUnknownObjectPoints (const Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, const Indices32 &unknownObjectPointIds, Vectors3 &newObjectPoints, Indices32 &newObjectPointIds, RandomGenerator &randomGenerator, Indices32 *newObjectPointObservations=nullptr, const unsigned int minimalObservations=2u, const bool useAllObservations=true, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar averageRobustError=Scalar(3.5 *3.5), const Scalar maximalSqrError=Numeric::maxValue(), Worker *worker=nullptr, bool *abort=nullptr)
 Determines the positions of a set of (currently unknown) object points. More...
 
static bool determineUnknownObjectPoints (const Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, Vectors3 &newObjectPoints, Indices32 &newObjectPointIds, RandomGenerator &randomGenerator, Indices32 *newObjectPointObservations=nullptr, const Scalar minimalObjectPointPriority=Scalar(-1), const unsigned int minimalObservations=10u, const bool useAllObservations=true, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar averageRobustError=Scalar(3.5 *3.5), const Scalar maximalSqrError=Numeric::maxValue(), Worker *worker=nullptr, bool *abort=nullptr)
 Determines the positions of all (currently unknown) object points. More...
 
template<bool tVisibleInAllPoses>
static bool determineUnknownObjectPoints (const Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, const Index32 lowerPoseId, const Index32 upperPoseId, Vectors3 &newObjectPoints, Indices32 &newObjectPointIds, RandomGenerator &randomGenerator, Indices32 *newObjectPointObservations=nullptr, const Scalar minimalObjectPointPriority=Scalar(-1), const unsigned int minimalObservations=10u, const bool useAllObservations=true, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar averageRobustError=Scalar(3.5 *3.5), const Scalar maximalSqrError=Numeric::maxValue(), Worker *worker=nullptr, bool *abort=nullptr)
 Determines the positions of (currently unknown) object points which are visible in specified poses (the poses are specified by a lower and upper frame range). More...
 
static bool optimizeObjectPointsWithFixedPoses (const Database &database, const PinholeCamera &pinholeCamera, const CameraMotion cameraMotion, const Indices32 &objectPointIds, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar maximalRobustError=Scalar(3.5 *3.5), Worker *worker=nullptr, bool *abort=nullptr)
 Optimizes a set of 3D object points (which are already reasonably accurate) without concurrently optimizing the camera poses. More...
 
static bool optimizeObjectPointsWithVariablePoses (const Database &database, const PinholeCamera &pinholeCamera, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, Indices32 *optimizedKeyFramePoseIds=nullptr, const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=20u, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr)
 Optimizes 3D object points (which are already reasonably accurate) and concurrently optimizes the camera poses. More...
 
static bool optimizeObjectPointsWithVariablePoses (const Database &database, const PinholeCamera &pinholeCamera, const Indices32 &keyFrameIds, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr)
 Optimizes 3D object points (which are already reasonably accurate) and concurrently optimizes the camera poses. More...
 
static bool optimizeObjectPointsWithVariablePoses (const Database &database, const PinholeCamera &pinholeCamera, const Indices32 &keyFrameIds, const Indices32 &objectPointIds, Vectors3 &optimizedObjectPoints, Indices32 &optimizedObjectPointIds, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr)
 Optimizes 3D object points (which are already reasonably accurate) and concurrently optimizes the camera poses. More...
 
static bool optimizeObjectPointsWithVariablePoses (const Database &database, const PinholeCamera &pinholeCamera, const Index32 lowerPoseId, const Index32 upperPoseId, const Indices32 &objectPointIds, Indices32 &usedKeyFrameIds, Vectors3 &optimizedObjectPoints, const unsigned int minimalObservations=10u, const unsigned int minimalKeyFrames=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr)
 
static bool supposeRotationalCameraMotion (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int lowerFrame, const unsigned int upperFrame, const bool findInitialFieldOfView, const PinholeCamera::OptimizationStrategy optimizationStrategy, PinholeCamera &optimizedCamera, Database &optimizedDatabase, const unsigned int minimalObservations=0u, const unsigned int minimalKeyframes=3u, const unsigned int maximalKeyframes=20u, const Scalar lowerFovX=Numeric::deg2rad(20), const Scalar upperFovX=Numeric::deg2rad(140), const Scalar maximalSqrError=(1.5 *1.5), Worker *worker=nullptr, bool *abort=nullptr, Scalar *finalMeanSqrError=nullptr)
 Supposes pure rotational camera motion for a given database with stable camera poses determined by initial but stable object points. More...
 
static bool optimizeCamera (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int lowerFrame, const unsigned int upperFrame, const bool findInitialFieldOfView, const PinholeCamera::OptimizationStrategy optimizationStrategy, PinholeCamera &optimizedCamera, Database &optimizedDatabase, const unsigned int minimalObservationsInKeyframes=2u, const unsigned int minimalKeyframes=3u, const unsigned int maximalKeyframes=20u, const Scalar lowerFovX=Numeric::deg2rad(20), const Scalar upperFovX=Numeric::deg2rad(140), Worker *worker=nullptr, bool *abort=nullptr, Scalar *finalMeanSqrError=nullptr)
 Optimizes the camera profile for a given database with stable camera poses determined by initial but stable object points. More...
 
static bool optimizeCameraWithVariableObjectPointsAndPoses (const Database &database, const PinholeCamera &pinholeCamera, const PinholeCamera::OptimizationStrategy optimizationStrategy, PinholeCamera &optimizedCamera, Vectors3 *optimizedObjectPoints=nullptr, Indices32 *optimizedObjectPointIds=nullptr, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, Indices32 *optimizedKeyFramePoseIds=nullptr, const unsigned int minimalKeyFrames=3u, const unsigned int maximalKeyFrames=20u, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr)
 Optimizes 3D object points (which are already reasonably accurate) and concurrently optimizes the camera poses and the camera profile. More...
 
static bool optimizeCameraWithVariableObjectPointsAndPoses (const Database &database, const PinholeCamera &pinholeCamera, const PinholeCamera::OptimizationStrategy optimizationStrategy, const Indices32 &keyFrameIds, PinholeCamera &optimizedCamera, Vectors3 *optimizedObjectPoints=nullptr, Indices32 *optimizedObjectPointIds=nullptr, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr)
 Optimizes 3D object points (which are already reasonably accurate) and concurrently optimizes the camera poses and the camera profile. More...
 
static bool optimizeCameraWithVariableObjectPointsAndPoses (const Database &database, const PinholeCamera &pinholeCamera, const PinholeCamera::OptimizationStrategy optimizationStrategy, const Indices32 &keyFrameIds, const Indices32 &objectPointIds, PinholeCamera &optimizedCamera, Vectors3 *optimizedObjectPoints=nullptr, Indices32 *optimizedObjectPointIds=nullptr, HomogenousMatrices4 *optimizedKeyFramePoses=nullptr, const unsigned int minimalObservations=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const unsigned int iterations=50u, Scalar *initialRobustError=nullptr, Scalar *finalRobustError=nullptr)
 Optimizes 3D object points (which are already reasonably accurate) and concurrently optimizes the camera poses and the camera profile. More...
 
static bool updatePoses (Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar maximalRobustError=Scalar(3.5 *3.5), Scalar *finalAverageError=nullptr, size_t *validPoses=nullptr, bool *abort=nullptr)
 Updates the camera poses of the database based on valid 2D/3D point correspondences within a range of camera frames. More...
 
static bool updatePoses (Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar maximalRobustError=Scalar(3.5 *3.5), Scalar *finalAverageError=nullptr, size_t *validPoses=nullptr, Worker *worker=nullptr, bool *abort=nullptr)
 Updates the camera poses of the database based on valid 2D/3D point correspondences within a range of camera frames. More...
 
static bool determinePoses (const Database &database, const AnyCamera &camera, const CameraMotion cameraMotion, const IndexSet32 &priorityObjectPointIds, const bool solePriorityPoints, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, ShiftVector< HomogenousMatrix4 > &poses, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar ransacMaximalSqrError=Scalar(3.5 *3.5), const Scalar maximalRobustError=Scalar(3.5 *3.5), Scalar *finalAverageError=nullptr, Worker *worker=nullptr, bool *abort=nullptr)
 Determines the camera poses based on valid 2D/3D point correspondences within a range of camera frames. More...
 
static bool trackObjectPoints (const Database &database, const Indices32 &objectPointIds, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const unsigned int minimalTrackedObjectPoints, const unsigned int minimalTrackedFrames, const unsigned int maximalTrackedObjectPoints, Indices32 &trackedObjectPointIds, ImagePointGroups &trackedImagePointGroups, Indices32 *trackedValidIndices=nullptr, bool *abort=nullptr)
 This function tracks image points (defined by their object points) from one frame to the sibling frames until the number of tracked points falls below a specified minimum, or at least until a minimal number of sibling frames has been processed. More...
 
static bool trackObjectPoints (const Database &database, const Indices32 &priorityObjectPointIds, const Indices32 &remainingObjectPointIds, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame, const unsigned int minimalTrackedPriorityObjectPoints, const Scalar minimalRemainingFramesRatio, const unsigned int maximalTrackedPriorityObjectPoints, const unsigned int maximalTrackedRemainingObjectPoints, Indices32 &trackedObjectPointIds, ImagePointGroups &trackedImagePointGroups, Indices32 *trackedValidPriorityIndices=nullptr, Indices32 *trackedValidRemainingIndices=nullptr, bool *abort=nullptr)
 This function tracks two disjoint groups of image points (defined by their object points) from one frame to the sibling frames until the number of tracked points falls below a specified minimum. More...
 
static Indices32 trackObjectPointsToNeighborFrames (const Database &database, const Indices32 &objectPointIds, const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame)
 This function tracks a group of object points from one frame to both neighbor frames (if available) and counts the minimal number of tracked points. More...
 
static Indices32 determineRepresentativePoses (const Database &database, const unsigned int lowerFrame, const unsigned int upperFrame, const size_t numberRepresentative)
 Determines a set of representative camera poses from a given database within a specified frame range. More...
 
static Indices32 determineRepresentativePoses (const Database &database, const Indices32 &poseIds, const size_t numberRepresentative)
 Determines a set of representative camera poses from a given database from a set of given camera poses. More...
 
static HomogenousMatrix4 determinePose (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const unsigned int minimalCorrespondences=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, unsigned int *correspondences=nullptr)
 Determines the camera 6-DOF pose for a specific camera frame. More...
 
static HomogenousMatrix4 determinePose (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const IndexSet32 &priorityObjectPointIds, const bool solePriorityPoints, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const unsigned int minimalCorrespondences=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, unsigned int *correspondences=nullptr)
 Determines the camera 6-DOF pose for a specific camera frame. More...
 
static HomogenousMatrix4 determinePose (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< Index32 > &objectPointIds, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr)
 Determines the camera 6-DOF pose for a specific camera frame. More...
 
static HomogenousMatrix4 determinePose (const AnyCamera &camera, RandomGenerator &randomGenerator, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< ImagePoint > &imagePoints, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, Indices32 *validIndices=nullptr)
 Determines the camera 6-DOF pose for a set of object point and image point correspondences. More...
 
static HomogenousMatrix4 determinePose (const AnyCamera &camera, RandomGenerator &randomGenerator, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< ImagePoint > &imagePoints, const size_t priorityCorrespondences, const HomogenousMatrix4 &roughPose=HomogenousMatrix4(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr)
 Determines the camera 6-DOF pose for a set of object point and image point correspondences. More...
 
static SquareMatrix3 determineOrientation (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const unsigned int minimalCorrespondences=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, unsigned int *correspondences=nullptr)
 Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame. More...
 
static SquareMatrix3 determineOrientation (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const IndexSet32 &priorityObjectPointIds, const bool solePriorityPoints, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const unsigned int minimalCorrespondences=10u, const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, unsigned int *correspondences=nullptr)
 Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame. More...
 
static SquareMatrix3 determineOrientation (const Database &database, const AnyCamera &camera, RandomGenerator &randomGenerator, const unsigned int frameId, const ObjectPoint *objectPoints, const Index32 *objectPointIds, const size_t numberObjectPoints, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr)
 Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame. More...
 
static SquareMatrix3 determineOrientation (const AnyCamera &camera, RandomGenerator &randomGenerator, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< ImagePoint > &imagePoints, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr, Indices32 *validIndices=nullptr)
 Determines the camera 3-DOF orientation for a set of object point and image point correspondences. More...
 
static SquareMatrix3 determineOrientation (const AnyCamera &camera, RandomGenerator &randomGenerator, const ConstIndexedAccessor< ObjectPoint > &objectPoints, const ConstIndexedAccessor< ImagePoint > &imagePoints, const size_t priorityCorrespondences, const SquareMatrix3 &roughOrientation=SquareMatrix3(false), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_SQUARE, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Scalar *finalRobustError=nullptr)
 Determines the camera 3-DOF orientation for a set of object point and image point correspondences. More...
 
static size_t determineValidPoses (const AnyCamera &camera, const Vectors3 &objectPoints, const ImagePointGroups &imagePointGroups, RandomGenerator &randomGenerator, const CameraMotion cameraMotion, const unsigned int firstValidPoseIndex, const HomogenousMatrix4 &firstValidPose, const unsigned int secondValidPoseIndex, const HomogenousMatrix4 &secondValidPose, const Scalar minimalValidCorrespondenceRatio=Scalar(1), const Scalar maximalSqrError=Scalar(3.5 *3.5), Indices32 *validObjectPointIndices=nullptr, HomogenousMatrices4 *poses=nullptr, Indices32 *poseIds=nullptr, Scalar *totalSqrError=nullptr)
 Determines valid poses for a range of camera frames, where for each frame a group of image points corresponding to the given object points is provided. More...
 
static CameraMotion determineCameraMotion (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int lowerFrame, const unsigned int upperFrame, const bool onlyVisibleObjectPoints=true, Worker *worker=nullptr, const Scalar minimalTinyTranslationObservationAngle=Numeric::deg2rad(Scalar(0.15)), const Scalar minimalModerateTranslationObservationAngle=Numeric::deg2rad(1), const Scalar minimalSignificantTranslationObservationAngle=Numeric::deg2rad(5), const Scalar minimalTinyRotationAngle=Numeric::deg2rad(Scalar(0.25)), const Scalar minimalModerateRotationAngle=Numeric::deg2rad(5), const Scalar minimalSignificantRotationAngle=Numeric::deg2rad(10))
 Determines the camera motion from the camera poses within a specified frame range covering only valid poses. More...
 
static Scalar determineObjectPointAccuracy (const PinholeCamera &pinholeCamera, const HomogenousMatrix4 *poses, const Vector2 *imagePoints, const size_t observations, const AccuracyMethod accuracyMethod)
 Measures the accuracy of a 3D object point in combination with a set of camera poses and image points (the projections of the object point). More...
 
static Scalars determineObjectPointsAccuracy (const Database &database, const PinholeCamera &pinholeCamera, const Indices32 &objectPointIds, const AccuracyMethod accuracyMethhod, const unsigned int lowerFrame=(unsigned int)(-1), const unsigned int upperFrame=(unsigned int)(-1), Worker *worker=nullptr)
 Measures the accuracy of several 3D object points. More...
 
static void determineProjectionErrors (const AnyCamera &camera, const Vector3 &objectPoint, const ConstIndexedAccessor< HomogenousMatrix4 > &world_T_cameras, const ConstIndexedAccessor< Vector2 > &imagePoints, Scalar *minimalSqrError=nullptr, Scalar *averageSqrError=nullptr, Scalar *maximalSqrError=nullptr)
 Determines the projection errors of a 3D object point in combination with a set of camera poses and image points (the projections of the object point). More...
 
static bool determineProjectionError (const Database &database, const PinholeCamera &pinholeCamera, const Index32 poseId, const bool useDistortionParameters, unsigned int *validCorrespondences=nullptr, Scalar *minimalSqrError=nullptr, Scalar *averageSqrError=nullptr, Scalar *maximalSqrError=nullptr)
 Determines the accuracy of a camera pose for all valid object points visible in the frame by measuring the projection error between the projected object points and their corresponding image points. More...
 
static bool determineProjectionErrors (const Database &database, const PinholeCamera &pinholeCamera, const Indices32 &objectPointIds, const bool useDistortionParameters, const unsigned int lowerFrame=(unsigned int)(-1), const unsigned int upperFrame=(unsigned int)(-1), Scalar *minimalSqrErrors=nullptr, Scalar *averagedSqrErrors=nullptr, Scalar *maximalSqrErrors=nullptr, unsigned int *observations=nullptr, Worker *worker=nullptr)
 Determines the averaged and maximal squared pixel errors between the projections of individual 3D object points and their corresponding image points in individual camera frames. More...
 
static void determinePosesOrientation (const Database &database, const unsigned int lowerFrame, const unsigned int upperFrame, Scalar *xOrientations, Scalar *yOrientations, Scalar *zOrientations)
 Determines the individual cosine values between the mean coordinate axis of a range of poses and the coordinate axis of the individual poses. More...
 
static bool determineNumberCorrespondences (const Database &database, const bool needValidPose, const unsigned int lowerFrame, const unsigned int upperFrame, unsigned int *minimalCorrespondences=nullptr, Scalar *averageCorrespondences=nullptr, unsigned int *medianCorrespondences=nullptr, unsigned int *maximalCorrespondences=nullptr, Worker *worker=nullptr)
 Determines the number of valid correspondences between image points and object points for each frame within a specified frame range. More...
 
static bool determinePlane (const ConstIndexedAccessor< Vector3 > &objectPoints, RandomGenerator &randomGenerator, Plane3 &plane, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(3u, Scalar(0.5), 20u), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_HUBER, Scalar *finalError=nullptr, Indices32 *validIndices=nullptr)
 Determines a 3D plane best fitting to a set of given 3D object points. More...
 
static bool determinePlane (const Database &database, const Indices32 &objectPointIds, RandomGenerator &randomGenerator, Plane3 &plane, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(3u, Scalar(0.5), 20u), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_HUBER, Scalar *finalError=nullptr, Indices32 *validIndices=nullptr)
 Determines a 3D plane best fitting to a set of given 3D object point ids. More...
 
static bool determinePlane (const Database &database, const Index32 frameIndex, const CV::SubRegion &subRegion, RandomGenerator &randomGenerator, Plane3 &plane, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(3u, Scalar(0.5), 20u), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_HUBER, Scalar *finalError=nullptr, Indices32 *usedObjectPointIds=nullptr)
 Determines a 3D plane best fitting to a set of given 3D object point ids which are specified by a given sub-region in the camera frame. More...
 
static bool determinePlane (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int lowerFrameIndex, const unsigned int subRegionFrameIndex, const unsigned int upperFrameIndex, const CV::SubRegion &subRegion, RandomGenerator &randomGenerator, Plane3 &plane, const bool useDistortionParameters, const RelativeThreshold &minimalValidObjectPoints=RelativeThreshold(3u, Scalar(0.5), 20u), const Scalar medianDistanceFactor=Scalar(6), const Geometry::Estimator::EstimatorType estimator=Geometry::Estimator::ET_HUBER, Scalar *finalError=nullptr, Indices32 *usedObjectPointIds=nullptr)
 Determines a 3D plane best fitting to image points in a specified sub-region in a specified frame and best fitting to this area visible in a specified frame range. More...
 
static bool determinePerpendicularPlane (const Database &database, const PinholeCamera &pinholeCamera, const unsigned int frameIndex, const Vector2 &imagePoint, const Scalar distance, Plane3 &plane, const bool useDistortionParameters, Vector3 *pointOnPlane=nullptr)
 Determines a 3D plane perpendicular to the camera with specified distance to the camera. More...
 
static bool determinePerpendicularPlane (const PinholeCamera &pinholeCamera, const HomogenousMatrix4 &pose, const Vector2 &imagePoint, const Scalar distance, Plane3 &plane, const bool useDistortionParameters, Vector3 *pointOnPlane=nullptr)
 Determines a 3D plane perpendicular to the camera with specified distance to the camera. More...
 
static bool removeSparseObjectPoints (Database &database, const Scalar minimalBoundingBoxDiagonal=Scalar(1e+7), const Scalar medianFactor=Scalar(100), const Scalar maximalSparseObjectPointRatio=Scalar(0.05))
 Removes very far object points from the database, provided their number does not exceed a specified ratio (compared to the remaining object points). More...
 
static size_t removeObjectPointsNotInFrontOfCamera (Database &database, Indices32 *removedObjectPointIds=nullptr)
 Removes all valid 3D object points (and their corresponding 2D image points) from the database which, in at least one frame, are not in front of the camera although an observing 2D image point exists. More...
 
static size_t removeObjectPointsWithoutEnoughObservations (Database &database, const size_t minimalNumberObservations, Indices32 *removedObjectPointIds=nullptr)
 Removes any 3D object point (and its corresponding 2D image points) from the database with fewer than a specified number of observations. More...
 
static size_t removeObjectPointsWithSmallBaseline (Database &database, const Scalar minimalBoxDiagonal, Indices32 *removedObjectPointIds=nullptr)
 Removes any 3D object point (and its corresponding 2D image points) from the database if all of its corresponding camera poses are located within a too small bounding box. More...
 
static std::string translateCameraMotion (const CameraMotion cameraMotion)
 Translates a camera motion value to a string providing the detailed motion as readable string. More...
 

Protected Types

typedef std::map< unsigned int, unsigned int > IndexMap32
 Definition of a map mapping 32 bit indices to 32 bit indices. More...
 
typedef ShiftVector< Vectors2 > ImagePointGroups
 Definition of a shift vector holding groups of image points. More...
 
typedef std::pair< Index32, Scalar > PoseErrorPair
 Definition of a pair combining a pose id and an error parameter. More...
 
typedef std::vector< PoseErrorPair > PoseErrorPairs
 Definition of a vector holding pose error pairs. More...
 

Static Protected Member Functions

static size_t filterStaticImagePoints (ImagePointGroups &imagePointGroups, Indices32 &objectPointIds, const Scalar maximalStaticImagePointFilterRatio)
 Determines a subset of perfectly static image points which may be image points located (visible) at static logos in the frames. More...
 
static void determineInitialObjectPointsFromSparseKeyFramesByStepsSubset (const Database *database, const PinholeCamera *pinholeCamera, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const Indices32 *startFrames, const unsigned int upperFrame, const Scalar maximalStaticImagePointFilterRatio, Vectors3 *initialObjectPoints, Indices32 *initialObjectPointIds, Indices32 *initialPoseIds, Scalar *initialPointDistance, const RelativeThreshold *pointsThreshold, const unsigned int minimalKeyFrames, const unsigned int maximalKeyFrames, const Scalar maximalSqrError, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne)
 Determines the initial positions of 3D object points in a database if no camera poses or structure information is known. More...
 
static void determineInitialObjectPointsFromDenseFramesRANSACSubset (const PinholeCamera *pinholeCamera, const ImagePointGroups *imagePointGroups, RandomGenerator *randomGenerator, HomogenousMatrices4 *validPoses, Indices32 *validPoseIds, Vectors3 *objectPoints, Indices32 *validObjectPointIndices, Scalar *totalError, const RelativeThreshold *minimalValidObjectPoints, const Scalar maximalSqrError, unsigned int *remainingIterations, Lock *lock, bool *abort, unsigned int firstIteration, unsigned int numberIterations)
 Determines the initial object point positions for a set of frames (image point groups) observing the unique object points in individual camera poses by a RANSAC algorithm. More...
 
static void updatePosesSubset (Database *database, const AnyCamera *camera, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, const Geometry::Estimator::EstimatorType estimator, const Scalar minimalValidCorrespondenceRatio, const Scalar ransacMaximalSqrError, const Scalar maximalRobustError, Scalar *totalError, size_t *validPoses, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne)
 Updates a subset of the camera poses depending on valid 2D/3D points correspondences within a range of camera frames. More...
 
static void updateOrientationsSubset (Database *database, const AnyCamera *camera, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, const Geometry::Estimator::EstimatorType estimator, const Scalar minimalValidCorrespondenceRatio, const Scalar ransacMaximalSqrError, const Scalar maximalRobustError, Scalar *totalError, size_t *validPoses, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne)
 Updates a subset of the camera orientations (as the camera has rotational motion only) depending on valid 2D/3D points correspondences within a range of camera frames. More...
 
static void determinePosesSubset (const Database *database, const AnyCamera *camera, const IndexSet32 *priorityObjectPointIds, const bool solePriorityPoints, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, ShiftVector< HomogenousMatrix4 > *poses, const Geometry::Estimator::EstimatorType estimator, const Scalar minimalValidCorrespondenceRatio, const Scalar ransacMaximalSqrError, const Scalar maximalRobustError, Scalar *totalError, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne)
 Determines a subset of the camera poses depending on valid 2D/3D points correspondences within a range of camera frames. More...
 
static void determineOrientationsSubset (const Database *database, const AnyCamera *camera, const IndexSet32 *priorityObjectPointIds, const bool solePriorityPoints, RandomGenerator *randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalCorrespondences, ShiftVector< HomogenousMatrix4 > *poses, const Geometry::Estimator::EstimatorType estimator, const Scalar minimalValidCorrespondenceRatio, const Scalar ransacMaximalSqrError, const Scalar maximalRobustError, Scalar *totalError, Lock *lock, bool *abort, const unsigned int numberThreads, const unsigned int threadIndex, const unsigned int numberThreadsOne)
 Determines a subset of the camera orientations (as the camera has rotational motion only) depending on valid 2D/3D points correspondences within a range of camera frames. More...
 
static bool updateDatabaseToRotationalMotion (Database &database, const PinholeCamera &pinholeCamera, RandomGenerator &randomGenerator, const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int minimalObservations, IndexSet32 *relocatedObjectPointIds)
 Determines the semi-precise location of 3D object points and the camera poses for a sole rotational camera motion. More...
 
static void determineUnknownObjectPointsSubset (const AnyCamera *camera, const Database *database, const Database::PoseImagePointTopologyGroups *objectPointsData, RandomGenerator *randomGenerator, const Scalar maximalSqrError, bool *abort, Lock *lock, Vectors3 *newObjectPoints, Indices32 *newObjectPointIds, unsigned int firstObjectPoint, unsigned int numberObjectPoints)
 Determines the positions of new object points from a database within a specified frame range. More...
 
static void determineUnknownObjectPointsSubset (const Database *database, const AnyCamera *camera, const CameraMotion cameraMotion, const Index32 *objectPointIds, Vectors3 *newObjectPoints, Indices32 *newObjectPointIds, Indices32 *newObjectPointObservations, RandomGenerator *randomGenerator, const unsigned int minimalObservations, const bool useAllObservations, const Geometry::Estimator::EstimatorType estimator, const Scalar ransacMaximalSqrError, const Scalar averageRobustError, const Scalar maximalSqrError, Lock *look, bool *abort, const unsigned int firstObjectPoint, const unsigned int numberObjectPoints)
 Determines the positions of a subset of (currently unknown) object points. More...
 
static void optimizeObjectPointsWithFixedPosesSubset (const Database *database, const PinholeCamera *pinholeCamera, const CameraMotion cameraMotion, const Index32 *objectPointIds, Vectors3 *optimizedObjectPoints, Indices32 *optimizedObjectPointIds, const unsigned int minimalObservations, const Geometry::Estimator::EstimatorType estimator, const Scalar maximalRobustError, Lock *look, bool *abort, const unsigned int firstObjectPoint, const unsigned int numberObjectPoints)
 Optimizes a subset of a set of 3D object points which have a quite good accuracy already without optimizing the camera poses concurrently. More...
 
static void determineObjectPointsAccuracySubset (const Database *database, const PinholeCamera *pinholeCamera, const Index32 *objectPointIds, const AccuracyMethod accuracyMethhod, const unsigned int lowerFrame, const unsigned int upperFrame, Scalar *values, const unsigned int firstObjectPoint, const unsigned int numberObjectPoints)
 Measures the accuracy of a subset of several 3D object points. More...
 
static void determineProjectionErrorsSubset (const Database *database, const PinholeCamera *pinholeCamera, const Index32 *objectPointIds, const HomogenousMatrix4 *posesIF, const Index32 lowerPoseId, const unsigned int upperPoseId, const bool useDistortionParameters, Scalar *minimalSqrErrors, Scalar *averagedSqrErrors, Scalar *maximalSqrErrors, unsigned int *observations, const unsigned int firstObjectPoint, const unsigned int numberObjectPoints)
 Determines the maximal squared pixel errors between the projections of a subset of individual 3D object points and their corresponding image points in individual camera frames. More...
 
static Scalar averagePointDistance (const Vector2 *points, const size_t size)
 Determines the average distance between the center of a set of given points and each of the points. More...
 

Detailed Description

This class implements a Structure From Motion solver for unconstrained 3D object points and unconstrained 6-DOF camera poses.

Member Typedef Documentation

◆ ImagePointGroups

typedef ShiftVector< Vectors2 > Ocean::Tracking::Solver3::ImagePointGroups
protected

Definition of a shift vector holding groups of image points.

◆ IndexMap32

typedef std::map<unsigned int, unsigned int> Ocean::Tracking::Solver3::IndexMap32
protected

Definition of a map mapping 32 bit indices to 32 bit indices.

◆ PoseErrorPair

typedef std::pair< Index32, Scalar > Ocean::Tracking::Solver3::PoseErrorPair
protected

Definition of a pair combining a pose id and an error parameter.

◆ PoseErrorPairs

typedef std::vector< PoseErrorPair > Ocean::Tracking::Solver3::PoseErrorPairs
protected

Definition of a vector holding pose error pairs.

Member Enumeration Documentation

◆ AccuracyMethod

Definition of individual methods to determine the accuracy of object points.

Enumerator
AM_INVALID 

Invalid method.

AM_MEAN_DIRECTION_MIN_COSINE 

Determination of the minimal absolute cosine values between the mean observation direction and each observation direction.

AM_MEAN_DIRECTION_MEAN_COSINE 

Determination of the mean absolute cosine value between the mean observation direction and each observation direction.

AM_MEAN_DIRECTION_MEDIAN_COSINE 

Determination of the median absolute cosine value between the mean observation direction and each observation direction.
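The three mean-direction measures can be illustrated with a standalone sketch (a simplified `Vec3` stand-in for Ocean's `Vector3`; the actual implementation may differ in details such as degenerate-case handling):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 normalize(const Vec3& v)
{
    const double length = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3{v.x / length, v.y / length, v.z / length};
}

static double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns {min, mean, median} of the absolute cosine values between the mean
// observation direction and each individual observation direction, matching
// the three AM_MEAN_DIRECTION_* variants.
static std::array<double, 3> meanDirectionCosines(const std::vector<Vec3>& directions)
{
    Vec3 sum{0.0, 0.0, 0.0};
    for (const Vec3& direction : directions)
    {
        sum.x += direction.x;
        sum.y += direction.y;
        sum.z += direction.z;
    }
    const Vec3 meanDirection = normalize(sum);

    std::vector<double> cosines;
    for (const Vec3& direction : directions)
        cosines.push_back(std::fabs(dot(meanDirection, normalize(direction))));

    std::sort(cosines.begin(), cosines.end());

    double total = 0.0;
    for (const double value : cosines)
        total += value;

    return {cosines.front(), total / double(cosines.size()), cosines[cosines.size() / 2u]};
}
```

A larger cosine means the observation rays are closer to the mean direction, i.e. a smaller observation baseline and therefore a less accurate triangulation.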

◆ CameraMotion

Definition of individual camera motion types.

Enumerator
CM_INVALID 

Invalid camera motion.

CM_STATIC 

Static camera.

CM_ROTATIONAL 

Rotational camera motion (panning or tilting).

CM_TRANSLATIONAL 

Translational camera motion.

CM_ROTATIONAL_TINY 

Tiny rotational camera motion.

CM_ROTATIONAL_MODERATE 

Moderate rotational camera motion.

CM_ROTATIONAL_SIGNIFICANT 

Significant rotational camera motion.

CM_TRANSLATIONAL_TINY 

Tiny translational camera motion.

CM_TRANSLATIONAL_MODERATE 

Moderate translational camera motion.

CM_TRANSLATIONAL_SIGNIFICANT 

Significant translational camera motion.

CM_UNKNOWN 

An unknown (arbitrary) camera motion with rotational and translational motion.
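The motion types are bit flags, so each refined category still carries its base flag. A minimal standalone sketch (enumerator values copied from the definition above; the helper functions are illustrative, not part of the Ocean API):

```cpp
#include <cassert>

// Enumerator values copied from the CameraMotion definition above.
enum CameraMotion
{
    CM_INVALID = 0,
    CM_STATIC = (1 << 0),
    CM_ROTATIONAL = (1 << 1),
    CM_TRANSLATIONAL = (1 << 2),
    CM_ROTATIONAL_TINY = CM_ROTATIONAL | (1 << 3),
    CM_ROTATIONAL_MODERATE = CM_ROTATIONAL | (1 << 4),
    CM_ROTATIONAL_SIGNIFICANT = CM_ROTATIONAL | (1 << 5),
    CM_TRANSLATIONAL_TINY = CM_TRANSLATIONAL | (1 << 6),
    CM_TRANSLATIONAL_MODERATE = CM_TRANSLATIONAL | (1 << 7),
    CM_TRANSLATIONAL_SIGNIFICANT = CM_TRANSLATIONAL | (1 << 8),
    CM_UNKNOWN = CM_ROTATIONAL | CM_TRANSLATIONAL | (1 << 9)
};

// A refined motion type keeps its base flag, so "any rotational motion"
// is a single bitwise test.
static bool isRotational(const CameraMotion motion)
{
    return (motion & CM_ROTATIONAL) != 0;
}

static bool isTranslational(const CameraMotion motion)
{
    return (motion & CM_TRANSLATIONAL) != 0;
}
```

Note that CM_UNKNOWN sets both base flags, so it tests positive for rotational and translational motion.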

Member Function Documentation

◆ averagePointDistance()

static Scalar Ocean::Tracking::Solver3::averagePointDistance ( const Vector2 * points,
const size_t  size 
)
static, protected

Determines the average distance between the center of a set of given points and each of the points.

Parameters
points: The set of points for which the average distance will be determined
size: The number of points in the set, with range [1, infinity)
Returns
The average distance
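A standalone sketch of the computation described above (a simplified `Vec2` stand-in for Ocean's `Vector2`; not the actual Ocean implementation):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

struct Vec2 { double x, y; };

// Computes the average Euclidean distance between the centroid of the
// points and each individual point.
static double averagePointDistance(const Vec2* points, const size_t size)
{
    Vec2 center{0.0, 0.0};
    for (size_t n = 0; n < size; ++n)
    {
        center.x += points[n].x;
        center.y += points[n].y;
    }
    center.x /= double(size);
    center.y /= double(size);

    double distanceSum = 0.0;
    for (size_t n = 0; n < size; ++n)
        distanceSum += std::hypot(points[n].x - center.x, points[n].y - center.y);

    return distanceSum / double(size);
}
```

This measure is useful as a rough spread estimate for a set of tracked image points.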

◆ determineCameraMotion()

static CameraMotion Ocean::Tracking::Solver3::determineCameraMotion ( const Database & database,
const PinholeCamera & pinholeCamera,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const bool  onlyVisibleObjectPoints = true,
Worker * worker = nullptr,
const Scalar  minimalTinyTranslationObservationAngle = Numeric::deg2rad(Scalar(0.15)),
const Scalar  minimalModerateTranslationObservationAngle = Numeric::deg2rad(1),
const Scalar  minimalSignificantTranslationObservationAngle = Numeric::deg2rad(5),
const Scalar  minimalTinyRotationAngle = Numeric::deg2rad(Scalar(0.25)),
const Scalar  minimalModerateRotationAngle = Numeric::deg2rad(5),
const Scalar  minimalSignificantRotationAngle = Numeric::deg2rad(10) 
)
static

Determines the camera motion from the camera poses within a specified frame range covering only valid poses.

Parameters
database: The database from which the camera poses are taken
pinholeCamera: The pinhole camera profile which is applied
lowerFrame: The index of the frame defining the lower border of the camera frames which will be investigated
upperFrame: The index of the frame defining the upper border of the camera frames which will be investigated, with range [lowerFrame, infinity)
onlyVisibleObjectPoints: True, to use only object points which are visible within the defined frame range; False, to use all object points
worker: Optional worker object to distribute the computation
minimalTinyTranslationObservationAngle: The minimal angle of observation rays for 3D object points so that the motion contains a tiny translational motion, with range (0, PI/2)
minimalModerateTranslationObservationAngle: The minimal angle of observation rays for 3D object points so that the motion contains a moderate translational motion, with range (minimalTinyTranslationObservationAngle, PI/2)
minimalSignificantTranslationObservationAngle: The minimal angle of observation rays for 3D object points so that the motion contains a significant translational motion, with range (minimalModerateTranslationObservationAngle, PI/2)
minimalTinyRotationAngle: The minimal angle between the viewing directions so that the motion contains a tiny rotational motion, with range (0, PI/2)
minimalModerateRotationAngle: The minimal angle between the viewing directions so that the motion contains a moderate rotational motion, with range (minimalTinyRotationAngle, PI/2)
minimalSignificantRotationAngle: The minimal angle between the viewing directions so that the motion contains a significant rotational motion, with range (minimalModerateRotationAngle, PI/2)
Returns
The resulting motion of the camera
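The default angle thresholds partition a measured angle into tiny/moderate/significant bands. A hypothetical standalone helper illustrating the rotation thresholds (the enum and function names are illustrative and not part of the Ocean API):

```cpp
#include <cassert>

// Illustrative only: these names are not part of the Ocean API.
enum RotationCategory
{
    ROTATION_NONE,
    ROTATION_TINY,
    ROTATION_MODERATE,
    ROTATION_SIGNIFICANT
};

static const double kPi = 3.141592653589793;

static double deg2rad(const double degree)
{
    return degree * kPi / 180.0;
}

// Classifies the maximal angle between viewing directions using the default
// thresholds documented above (0.25, 5 and 10 degrees).
static RotationCategory classifyRotation(const double angle)
{
    if (angle >= deg2rad(10.0))
        return ROTATION_SIGNIFICANT; // minimalSignificantRotationAngle
    if (angle >= deg2rad(5.0))
        return ROTATION_MODERATE; // minimalModerateRotationAngle
    if (angle >= deg2rad(0.25))
        return ROTATION_TINY; // minimalTinyRotationAngle
    return ROTATION_NONE;
}
```

The translational observation angles are handled analogously, with their own default thresholds of 0.15, 1 and 5 degrees.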

◆ determineInitialObjectPointsFromDenseFrames() [1/2]

static bool Ocean::Tracking::Solver3::determineInitialObjectPointsFromDenseFrames ( const Database & database,
const PinholeCamera & pinholeCamera,
RandomGenerator & randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  startFrame,
const unsigned int  upperFrame,
const CV::SubRegion & regionOfInterest,
const Scalar  maximalStaticImagePointFilterRatio,
Vectors3 & initialObjectPoints,
Indices32 & initialObjectPointIds,
const RelativeThreshold & pointsThreshold = RelativeThreshold(20u, Scalar(0.5), 100u),
const Scalar  minimalTrackedFramesRatio = Scalar(0.1),
const unsigned int  minimalKeyFrames = 3u,
const unsigned int  maximalKeyFrames = 10u,
const Scalar  maximalSqrError = Scalar(3.5 *3.5),
Indices32 * usedPoseIds = nullptr,
Scalar * finalSqrError = nullptr,
bool *  abort = nullptr 
)
static

Determines the initial positions of 3D object points in a database if no camera poses or structure information is known.

Feature points are tracked from frame to frame within a defined camera frame range until the number of tracked points falls below a defined threshold.
The entire range of frames with tracked points is used to determine the locations of the 3D object points.
This function can be configured so that (perfectly) static image points, located in all frames at the same position, are identified and not used for the calculations.
Such static image points can belong to static logos (or bands) overlaid on the frames, so these image points must not be used.

Parameters
database: The database defining the topology of 3D object points and corresponding 2D image points
pinholeCamera: The pinhole camera profile which will be applied
randomGenerator: A random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
startFrame: The index of the frame from which the algorithm will start, with range [lowerFrame, upperFrame]
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range (lowerFrame, infinity)
regionOfInterest: Optional region of interest defining a specific area in the start frame so that the object points lying in the region are handled with higher priority; an invalid region avoids any special region-of-interest handling
maximalStaticImagePointFilterRatio: The maximal ratio between (perfectly) static image points and the overall number of image points so that these static image points will be filtered out and not used, with range [0, 1); 0 to avoid any filtering
initialObjectPoints: The resulting initial 3D positions of object points that could be extracted
initialObjectPointIds: The resulting ids of the resulting object points, one id for each resulting object point
pointsThreshold: The threshold of image points which must be visible in each camera frame
minimalTrackedFramesRatio: The minimal number of finally tracked frames (the entire range of frames in which the object points are visible), defined as a ratio of the entire frame range, with range (0, 1]; has no meaning if no start frame or region of interest is defined
minimalKeyFrames: The minimal number of keyframes that will be extracted
maximalKeyFrames: The maximal number of keyframes that will be extracted
maximalSqrError: The maximal square distance between an image point and a projected object point
usedPoseIds: Optional resulting ids of all camera poses which have been used to determine the initial object points
finalSqrError: Optional resulting final average error
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded

◆ determineInitialObjectPointsFromDenseFrames() [2/2]

static bool Ocean::Tracking::Solver3::determineInitialObjectPointsFromDenseFrames ( const PinholeCamera & pinholeCamera,
const ImagePointGroups & imagePointGroups,
RandomGenerator & randomGenerator,
const unsigned int  firstGroupIndex,
const unsigned int  secondGroupIndex,
HomogenousMatrices4 & validPoses,
Indices32 & validPoseIds,
Scalar & totalSqrError,
Vectors3 & objectPoints,
Indices32 & validObjectPointIndices,
const RelativeThreshold & minimalValidObjectPoints = RelativeThreshold(10u, Scalar(0.3), 20u),
const Scalar  maximalSqrError = Scalar(3.5 *3.5) 
)
static

Determines the initial object point positions for a set of image point groups (covering a range of image frames) observing the unique object points in individual frames.

This function starts with two explicit frames (image point groups) and then tries to find more matching frames (image point groups).
The set of given image points should not contain image points located (visible) at a static logo in the frame, as such points can mislead the pose determination algorithms.
All frames (image point groups) within the frame range provide the following topology:
For n unique object points visible in m individual frames we have n object points (op) and n * m overall image points (ip):

                op_1,   op_2,   op_3,   op_4,   ..., op_n
...
dense_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
dense_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
Parameters
pinholeCamera: The pinhole camera profile to be applied
imagePointGroups: Frames (groups) of image points; all points in one frame (group) are located in the same camera frame and the individual points correspond to the same unique object points
randomGenerator: A random generator object
firstGroupIndex: The index of the first frame (image point group) which is applied as the first stereo frame, with range [imagePointGroups.firstIndex(), imagePointGroups.lastIndex()]
secondGroupIndex: The index of the second frame (image point group) which is applied as the second stereo frame, with range [imagePointGroups.firstIndex(), imagePointGroups.lastIndex()], with firstGroupIndex != secondGroupIndex
validPoses: The resulting poses that could be determined
validPoseIds: The ids of the resulting valid poses, one id for each valid resulting pose (the order of the ids is arbitrary)
totalSqrError: The resulting sum of square pixel errors for all valid poses
objectPoints: The resulting object points that could be determined
validObjectPointIndices: The indices of the resulting valid object points in relation to the given image point groups, with range [5, infinity)
minimalValidObjectPoints: The minimal number of valid object points which must be reached
maximalSqrError: The maximal square distance between an image point and a projected object point
Returns
True, if succeeded
See also
determineInitialObjectPointsFromKeyFrames().
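The op/ip topology above can be pictured as a dense matrix with one row per frame and one column per unique object point. A simplified standalone sketch of that layout (plain `std::vector` instead of Ocean's `ShiftVector`-based `ImagePointGroups`, and a simplified `Vec2` type):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

// Simplified stand-ins: Ocean's ImagePointGroups is a ShiftVector of Vectors2.
using ImagePointGroup = std::vector<Vec2>;             // one frame: the i-th entry observes object point op_i
using ImagePointGroups = std::vector<ImagePointGroup>; // one entry per frame (dense_pose_*)

// The topology requires every frame to observe all n unique object points,
// giving n * m image points for m frames.
static bool isValidTopology(const ImagePointGroups& imagePointGroups, const size_t numberObjectPoints)
{
    for (const ImagePointGroup& group : imagePointGroups)
        if (group.size() != numberObjectPoints)
            return false;

    return true;
}
```

Column i across all rows is the track of object point op_i, which is what makes the dense-frame triangulation possible.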

◆ determineInitialObjectPointsFromDenseFramesRANSAC()

static bool Ocean::Tracking::Solver3::determineInitialObjectPointsFromDenseFramesRANSAC ( const PinholeCamera & pinholeCamera,
const ImagePointGroups & imagePointGroups,
RandomGenerator & randomGenerator,
HomogenousMatrices4 & validPoses,
Indices32 & validPoseIds,
Vectors3 & objectPoints,
Indices32 & validObjectPointIndices,
const unsigned int  iterations = 20u,
const RelativeThreshold & minimalValidObjectPoints = RelativeThreshold(10u, Scalar(0.3), 20u),
const Scalar  maximalSqrError = Scalar(3.5 *3.5),
Worker * worker = nullptr,
bool *  abort = nullptr 
)
static

Determines the initial object point positions for a set of frames (image point groups) observing the unique object points in individual camera poses.

This function applies a RANSAC mechanism randomly selecting individual start key frames (pairs of image points).
The key frames (image point groups) provide the following topology:
For n unique object points visible in m individual frames we have n object points (op) and n * m overall image points (ip):

                op_1,   op_2,   op_3,   op_4,   ..., op_n
...
dense_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
dense_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
Parameters
pinholeCamera: The pinhole camera profile to be applied
imagePointGroups: Frames of image points; all points in one group are located in the same camera frame and the individual points correspond to the same unique object points
randomGenerator: A random generator object
validPoses: The resulting poses that could be determined
validPoseIds: The ids of the resulting valid poses, one id for each resulting valid pose (the order of the ids is arbitrary)
objectPoints: The resulting object points that could be determined
validObjectPointIndices: The indices of the resulting valid object points in relation to the given image point groups
iterations: The number of RANSAC iterations trying to find a better result than already determined
minimalValidObjectPoints: The threshold of object points that must be valid
maximalSqrError: The maximal square distance between an image point and a projected object point
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded
See also
determineInitialObjectPointsFromSparseKeyFramesRANSAC().
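The consensus idea behind this RANSAC variant can be sketched on a toy problem: generate candidate models from minimal samples and keep the one with the largest consensus set under a maximal error. This sketch fits a 2D line instead of poses/object points and samples all pairs deterministically instead of randomly; it illustrates the mechanism only and is not Ocean's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Scores every two-point candidate line y = a * x + b against all points and
// keeps the model with the largest consensus set (inliers within maximalError).
static void bestLine(const std::vector<Point>& points, const double maximalError,
                     double& a, double& b, size_t& inlierCount)
{
    a = 0.0;
    b = 0.0;
    inlierCount = 0;

    for (size_t i = 0; i < points.size(); ++i)
        for (size_t j = i + 1; j < points.size(); ++j)
        {
            const double dx = points[j].x - points[i].x;
            if (dx == 0.0)
                continue; // vertical candidate, not representable as y = a * x + b

            const double candidateA = (points[j].y - points[i].y) / dx;
            const double candidateB = points[i].y - candidateA * points[i].x;

            size_t count = 0;
            for (const Point& point : points)
                if (std::fabs(candidateA * point.x + candidateB - point.y) <= maximalError)
                    ++count;

            if (count > inlierCount)
            {
                inlierCount = count;
                a = candidateA;
                b = candidateB;
            }
        }
}
```

In the actual solver the minimal sample is a pair of start keyframes, the model is the set of poses and object points, and the score is the number of object points within 'maximalSqrError'.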

◆ determineInitialObjectPointsFromDenseFramesRANSACSubset()

static void Ocean::Tracking::Solver3::determineInitialObjectPointsFromDenseFramesRANSACSubset ( const PinholeCamera * pinholeCamera,
const ImagePointGroups * imagePointGroups,
RandomGenerator * randomGenerator,
HomogenousMatrices4 * validPoses,
Indices32 * validPoseIds,
Vectors3 * objectPoints,
Indices32 * validObjectPointIndices,
Scalar * totalError,
const RelativeThreshold * minimalValidObjectPoints,
const Scalar  maximalSqrError,
unsigned int *  remainingIterations,
Lock * lock,
bool *  abort,
unsigned int  firstIteration,
unsigned int  numberIterations 
)
static, protected

Determines the initial object point positions for a set of frames (image point groups) observing the unique object points in individual camera poses by a RANSAC algorithm.

This function applies a RANSAC mechanism randomly selecting individual start key frames (pairs of image points).
The key frames (image point groups) provide the following topology:
For n unique object points visible in m individual frames we have n object points (op) and n * m overall image points (ip):

                op_1,   op_2,   op_3,   op_4,   ..., op_n
...
dense_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
dense_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
Parameters
pinholeCamera: The pinhole camera profile to be applied
imagePointGroups: Frames of image points; all points in one group are located in the same camera frame and the individual points correspond to the same unique object points
randomGenerator: A random generator object
validPoses: The resulting poses that could be determined
validPoseIds: The ids of the resulting valid poses, one id for each resulting valid pose (the order of the ids is arbitrary)
objectPoints: The resulting object points that could be determined
validObjectPointIndices: The indices of the resulting valid object points in relation to the given image point groups
totalError: The resulting total error of the best RANSAC iteration
minimalValidObjectPoints: The threshold of object points that must be valid
maximalSqrError: The maximal square distance between an image point and a projected object point
remainingIterations: The number of RANSAC iterations that still need to be applied
lock: The lock object which must be defined if this function is executed in parallel on several threads; otherwise nullptr
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
firstIteration: The first RANSAC iteration to apply; has no meaning as 'remainingIterations' is used instead
numberIterations: The number of RANSAC iterations to apply; has no meaning as 'remainingIterations' is used instead
See also
determineInitialObjectPointsFromDenseFramesRANSAC().

◆ determineInitialObjectPointsFromSparseKeyFrames() [1/2]

static bool Ocean::Tracking::Solver3::determineInitialObjectPointsFromSparseKeyFrames ( const Database & database,
const PinholeCamera & pinholeCamera,
RandomGenerator & randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  startFrame,
const unsigned int  upperFrame,
const Scalar  maximalStaticImagePointFilterRatio,
Vectors3 & initialObjectPoints,
Indices32 & initialObjectPointIds,
const RelativeThreshold & pointsThreshold = RelativeThreshold(20u, Scalar(0.5), 100u),
const unsigned int  minimalKeyFrames = 3u,
const unsigned int  maximalKeyFrames = 10u,
const Scalar  maximalSqrError = Scalar(3.5 *3.5),
Indices32 * usedPoseIds = nullptr,
Scalar * finalSqrError = nullptr,
Scalar * finalImagePointDistance = nullptr,
bool *  abort = nullptr 
)
static

Determines the initial positions of 3D object points in a database if no camera poses or structure information is known.

Feature points are tracked from frame to frame within a defined camera frame range as long as the number of tracked points stays above a defined threshold.
Key frames with representative geometry information are selected from this range of (tracked) camera frames.
This function can be configured so that (perfectly) static image points, which are located at the same position in all frames, are identified and excluded from the calculations.
Such static image points can occur at static logos (bands) visible in the frames and therefore must not be used.

Parameters
database: The database defining the topology of 3D object points and corresponding 2D image points
pinholeCamera: The pinhole camera profile which will be applied
randomGenerator: A random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
startFrame: The index of the frame from which the algorithm will start, with range [lowerFrame, upperFrame]
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range (lowerFrame, infinity)
maximalStaticImagePointFilterRatio: The maximal ratio between (perfectly) static image points and the overall number of image points so that these static image points will be filtered and not used, with range [0, 1), 0 to avoid any filtering
initialObjectPoints: The resulting initial 3D positions of object points that could be extracted
initialObjectPointIds: The resulting ids of the resulting object points, one id for each resulting object point
pointsThreshold: The threshold of image points which must be visible in each camera frame
minimalKeyFrames: The minimal number of keyframes that will be extracted
maximalKeyFrames: The maximal number of keyframes that will be extracted
maximalSqrError: The maximal square distance between an image point and a projected object point
usedPoseIds: Optional resulting ids of all camera poses which have been used to determine the initial object points
finalSqrError: Optional resulting final average error
finalImagePointDistance: Optional resulting final average distance between the individual image points and the center of these image points
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded

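The default pointsThreshold, RelativeThreshold(20u, Scalar(0.5), 100u), combines a relative factor with absolute boundaries. The following sketch illustrates the presumed semantics only; the struct name and the clamping rule are assumptions for illustration, not Ocean's actual implementation:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical re-implementation of the presumed RelativeThreshold semantics:
// the effective threshold is the reference value scaled by 'factor' and then
// clamped to the interval [lowerBoundary, upperBoundary].
struct RelativeThresholdSketch
{
	unsigned int lowerBoundary;
	double factor;
	unsigned int upperBoundary;

	unsigned int threshold(const unsigned int reference) const
	{
		const unsigned int scaled = (unsigned int)(double(reference) * factor);
		return std::min(std::max(scaled, lowerBoundary), upperBoundary);
	}
};
```

With the default values, a frame observing 80 trackable points would require 40 visible points, while very small or very large reference counts are held to the absolute bounds 20 and 100.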
◆ determineInitialObjectPointsFromSparseKeyFrames() [2/2]

static bool Ocean::Tracking::Solver3::determineInitialObjectPointsFromSparseKeyFrames ( const PinholeCamera &  pinholeCamera,
const Database::ImagePointGroups &  imagePointGroups,
RandomGenerator &  randomGenerator,
const unsigned int  firstGroupIndex,
const unsigned int  secondGroupIndex,
HomogenousMatrices4 &  poses,
Indices32 &  validPoseIndices,
Vectors3 &  objectPoints,
Indices32 &  validObjectPointIndices,
const RelativeThreshold &  minimalValidObjectPoints = RelativeThreshold(10u, Scalar(0.3), 20u),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5) 
)
static

Determines the initial object point positions for a set of key frames (image point groups) observing unique object points.

This function starts with two explicit key frames (image point groups) and then tries to find more matching key frames (image point groups).
The set of given image points should not contain image points located (visible) at a static logo in the frame, as such points can distort the pose determination.
The key frames (image point groups) provide the following topology:
For n unique object points visible in m individual key frames we have n object points (op) and n * m overall image points (ip):

                 op_1,   op_2,   op_3,   op_4,   ..., op_n
sparse_pose_0 -> ip_1_1, ip_1_2, ip_1_3, ip_1_4, ..., ip_1_n
sparse_pose_1 -> ip_2_1, ip_2_2, ip_2_3, ip_2_4, ..., ip_2_n
sparse_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
sparse_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
sparse_pose_m -> ip_m_1, ip_m_2, ip_m_3, ip_m_4, ..., ip_m_n
Parameters
pinholeCamera: The pinhole camera profile to be applied
imagePointGroups: Key frames (groups) of image points; all points in one key frame (group) are located in the same camera key frame and the individual points correspond to the same unique object points
randomGenerator: A random generator object
firstGroupIndex: The index of the first key frame (image point group) which is applied as the first stereo frame, with range [0, imagePointGroups.size())
secondGroupIndex: The index of the second key frame (image point group) which is applied as the second stereo frame, with range [0, imagePointGroups.size()), with firstGroupIndex != secondGroupIndex
poses: The resulting poses that could be determined
validPoseIndices: The indices of the resulting valid poses in relation to the given image point groups
objectPoints: The resulting object points that could be determined
validObjectPointIndices: The indices of the resulting valid object points in relation to the given image point groups
minimalValidObjectPoints: The minimal number of valid object points which must be reached
maximalSqrError: The maximal square distance between an image point and a projected object point
Returns
True, if succeeded
See also
determineInitialObjectPointsFromFrameRange().

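The n x m topology described above can be modeled with plain standard containers. The type names below are illustrative stand-ins, not Ocean's actual types:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

using ImagePoint = std::array<double, 2>;              // stand-in for a 2D image point
using ImagePointGroup = std::vector<ImagePoint>;       // all observations in one key frame
using ImagePointGroups = std::vector<ImagePointGroup>; // one group per key frame

// Checks the topology described above: every key frame (group) must provide
// exactly one image point for each of the n unique object points, so that
// groups[m][j] is the observation of object point j in key frame m.
bool hasValidTopology(const ImagePointGroups& groups)
{
	if (groups.empty())
		return false;

	const std::size_t numberObjectPoints = groups.front().size();

	for (const ImagePointGroup& group : groups)
	{
		if (group.size() != numberObjectPoints)
			return false;
	}

	return numberObjectPoints != 0;
}
```

The invariant that every group has the same size is what allows index j to identify the same unique object point in every key frame.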
◆ determineInitialObjectPointsFromSparseKeyFramesBySteps()

static bool Ocean::Tracking::Solver3::determineInitialObjectPointsFromSparseKeyFramesBySteps ( const Database &  database,
const unsigned int  steps,
const PinholeCamera &  pinholeCamera,
RandomGenerator &  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const Scalar  maximalStaticImagePointFilterRatio,
Vectors3 &  initialObjectPoints,
Indices32 &  initialObjectPointIds,
const RelativeThreshold &  pointsThreshold = RelativeThreshold(20u, Scalar(0.5), 100u),
const unsigned int  minimalKeyFrames = 2u,
const unsigned int  maximalKeyFrames = 10u,
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Indices32 *  usedPoseIds = nullptr,
Worker *  worker = nullptr,
bool *  abort = nullptr 
)
static

Determines the initial positions of 3D object points in a database if no camera poses or structure information is known.

Feature points are tracked from frame to frame within a defined camera frame range as long as the number of tracked points stays above a defined threshold.
Key frames with representative geometry information are selected from this range of (tracked) camera frames.
This function internally applies several individual iterations beginning from individual start frames so that the best result within the entire frame range is returned.

Parameters
database: The database defining the topology of 3D object points and corresponding 2D image points
steps: The number of steps that are applied within the defined frame range, with range [1, upperFrame - lowerFrame + 1]
pinholeCamera: The pinhole camera profile which will be applied
randomGenerator: A random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range (lowerFrame, infinity)
maximalStaticImagePointFilterRatio: The maximal ratio between (perfectly) static image points and the overall number of image points so that these static image points will be filtered and not used, with range [0, 1), 0 to avoid any filtering
initialObjectPoints: The resulting initial 3D positions of object points that could be extracted
initialObjectPointIds: The resulting ids of the resulting object points, one id for each resulting object point
pointsThreshold: The threshold of image points which must be visible in each camera frame
minimalKeyFrames: The minimal number of keyframes that will be extracted
maximalKeyFrames: The maximal number of keyframes that will be extracted
maximalSqrError: The maximal square distance between an image point and a projected object point
usedPoseIds: Optional resulting ids of all camera poses which have been used to determine the initial object points
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded

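One plausible way to derive the individual start frames from the 'steps' parameter is to spread them evenly over [lowerFrame, upperFrame]. The helper below is an assumption for illustration; the library's actual selection strategy may differ:

```cpp
#include <cassert>
#include <vector>

// Hypothetical helper sketching how 'steps' start frames could be distributed
// evenly over the frame range [lowerFrame, upperFrame].
std::vector<unsigned int> startFramesForSteps(const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int steps)
{
	const unsigned int frames = upperFrame - lowerFrame + 1u; // 'steps' must be in [1, frames]

	std::vector<unsigned int> startFrames;
	startFrames.reserve(steps);

	for (unsigned int n = 0u; n < steps; ++n)
	{
		// spread the start frames as evenly as possible across the range
		const unsigned int denominator = steps > 1u ? steps - 1u : 1u;
		startFrames.push_back(lowerFrame + n * (frames - 1u) / denominator);
	}

	return startFrames;
}
```

Each start frame then seeds one independent tracking iteration, and the best result over all iterations is kept.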
◆ determineInitialObjectPointsFromSparseKeyFramesByStepsSubset()

static void Ocean::Tracking::Solver3::determineInitialObjectPointsFromSparseKeyFramesByStepsSubset ( const Database *  database,
const PinholeCamera *  pinholeCamera,
RandomGenerator *  randomGenerator,
const unsigned int  lowerFrame,
const Indices32 *  startFrames,
const unsigned int  upperFrame,
const Scalar  maximalStaticImagePointFilterRatio,
Vectors3 *  initialObjectPoints,
Indices32 *  initialObjectPointIds,
Indices32 *  initialPoseIds,
Scalar *  initialPointDistance,
const RelativeThreshold *  pointsThreshold,
const unsigned int  minimalKeyFrames,
const unsigned int  maximalKeyFrames,
const Scalar  maximalSqrError,
Lock *  lock,
bool *  abort,
const unsigned int  numberThreads,
const unsigned int  threadIndex,
const unsigned int  numberThreadsOne 
)
static protected

Determines the initial positions of 3D object points in a database if no camera poses or structure information is known.

This function processes a subset of pre-defined start frames from which the point tracking starts.

Parameters
database: The database defining the topology of 3D object points and corresponding 2D image points
pinholeCamera: The pinhole camera profile which will be applied
randomGenerator: A random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
startFrames: The entire set of start frames from which a subset will be processed
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated
maximalStaticImagePointFilterRatio: The maximal ratio between (perfectly) static image points and the overall number of image points so that these static image points will be filtered and not used, with range [0, 1), 0 to avoid any filtering
initialObjectPoints: The resulting initial 3D positions of object points that could be extracted
initialObjectPointIds: The resulting ids of the resulting object points, one id for each resulting object point
initialPoseIds: The resulting ids of all camera poses which have been used to determine the resulting initial object points
initialPointDistance: The resulting distance between the image points which have been used to determine the initial object points, a measure for the reliability of the resulting 3D object points
pointsThreshold: The threshold of image points which must be visible in each camera frame
minimalKeyFrames: The minimal number of keyframes that will be extracted
maximalKeyFrames: The maximal number of keyframes that will be extracted
maximalSqrError: The maximal square distance between an image point and a projected object point
lock: The lock object which must be defined if this function is executed in parallel on several threads, otherwise nullptr
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
numberThreads: The number of threads on which this function is executed in parallel, with range [1, infinity)
threadIndex: The index of the thread on which this function is executed
numberThreadsOne: Must be 1

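The numberThreads/threadIndex pair suggests an interleaved partitioning of the start frames across worker threads. The helper below sketches that presumed scheme; the function name and the exact partitioning are assumptions:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the interleaved subset a worker thread would process:
// thread 'threadIndex' out of 'numberThreads' handles every numberThreads-th
// start frame, so all threads receive a similar amount of work.
std::vector<unsigned int> subsetForThread(const std::vector<unsigned int>& startFrames, const unsigned int numberThreads, const unsigned int threadIndex)
{
	std::vector<unsigned int> subset;

	for (std::size_t n = threadIndex; n < startFrames.size(); n += numberThreads)
		subset.push_back(startFrames[n]);

	return subset;
}
```

Each thread then runs the tracking from its own start frames and the best overall result is merged under the shared lock.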
◆ determineInitialObjectPointsFromSparseKeyFramesRANSAC()

static bool Ocean::Tracking::Solver3::determineInitialObjectPointsFromSparseKeyFramesRANSAC ( const PinholeCamera &  pinholeCamera,
const Database::ImagePointGroups &  imagePointGroups,
RandomGenerator &  randomGenerator,
HomogenousMatrices4 &  validPoses,
Indices32 &  validPoseIndices,
Vectors3 &  objectPoints,
Indices32 &  validObjectPointIndices,
const unsigned int  iterations = 20u,
const RelativeThreshold &  minimalValidObjectPoints = RelativeThreshold(10u, Scalar(0.3), 20u),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
const Database *  database = nullptr,
const Indices32 *  keyFrameIds = nullptr,
const Indices32 *  objectPointIds = nullptr,
bool *  abort = nullptr 
)
static

Determines the initial object point positions for a set of key frames (image point groups) observing the unique object points in individual camera poses.

This function applies a RANSAC mechanism randomly selecting individual pairs of key frames (image point groups) as start frames.
The key frames (image point groups) provide the following topology:
For n unique object points visible in m individual frames we have n object points (op) and n * m overall image points (ip):

                 op_1,   op_2,   op_3,   op_4,   ..., op_n
sparse_pose_0 -> ip_1_1, ip_1_2, ip_1_3, ip_1_4, ..., ip_1_n
sparse_pose_1 -> ip_2_1, ip_2_2, ip_2_3, ip_2_4, ..., ip_2_n
sparse_pose_2 -> ip_3_1, ip_3_2, ip_3_3, ip_3_4, ..., ip_3_n
sparse_pose_3 -> ip_4_1, ip_4_2, ip_4_3, ip_4_4, ..., ip_4_n
...
sparse_pose_m -> ip_m_1, ip_m_2, ip_m_3, ip_m_4, ..., ip_m_n
Parameters
pinholeCamera: The pinhole camera profile to be applied
imagePointGroups: Key frames of image points; all points in one group are located in the same camera frame and the individual points correspond to the same unique object points
randomGenerator: A random generator object
validPoses: The resulting poses that could be determined
validPoseIndices: The indices of the resulting valid poses in relation to the given image point groups
objectPoints: The resulting object points that could be determined
validObjectPointIndices: The indices of the resulting valid object points in relation to the given image point groups
iterations: The number of RANSAC iterations trying to find a better result than already determined
minimalValidObjectPoints: The threshold of object points that must be valid
maximalSqrError: The maximal square distance between an image point and a projected object point
database: Optional database holding the image points from 'imagePointGroups' to validate the resulting 3D object positions even for camera poses not corresponding to the provided groups of image points; if defined, 'keyFrameIds' and 'objectPointIds' must also be defined
keyFrameIds: Optional ids of the individual keyframes to which the image point groups from 'imagePointGroups' belong, each key frame id corresponds with one group of image points; if defined, 'database' and 'objectPointIds' must also be defined
objectPointIds: Optional ids of the individual object points whose projections are provided as groups of image points in 'imagePointGroups'; if defined, 'database' and 'keyFrameIds' must also be defined
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded
See also
determineInitialObjectPointsFromDenseFramesRANSAC().

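The keep-best RANSAC loop over randomly selected key-frame pairs can be sketched generically. The scoring callback below is a placeholder for the stereo initialization and the validation against the remaining key frames; it is not Ocean's API:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Generic keep-best RANSAC skeleton: each iteration randomly selects two
// distinct key frames, lets a caller-provided scoring function rate the
// resulting reconstruction, and keeps the best-scoring pair.
std::pair<unsigned int, unsigned int> bestKeyFramePair(const unsigned int numberKeyFrames, const unsigned int iterations, unsigned long long seed, const std::function<double(unsigned int, unsigned int)>& score)
{
	double bestScore = -1.0;
	std::pair<unsigned int, unsigned int> bestPair(0u, 1u);

	for (unsigned int n = 0u; n < iterations; ++n)
	{
		// tiny linear congruential generator keeping the sketch self-contained
		seed = seed * 6364136223846793005ull + 1442695040888963407ull;
		const unsigned int first = (unsigned int)(seed >> 33u) % numberKeyFrames;

		seed = seed * 6364136223846793005ull + 1442695040888963407ull;
		const unsigned int second = (unsigned int)(seed >> 33u) % numberKeyFrames;

		if (first == second)
			continue; // the two stereo frames must be distinct

		const double candidateScore = score(first, second);

		if (candidateScore > bestScore)
		{
			bestScore = candidateScore;
			bestPair = std::make_pair(first, second);
		}
	}

	return bestPair;
}
```

In the real function, the score would reflect how many object points stay below 'maximalSqrError' across all key frames, so that a pair with a wide baseline and many inliers wins.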
◆ determineNumberCorrespondences()

static bool Ocean::Tracking::Solver3::determineNumberCorrespondences ( const Database &  database,
const bool  needValidPose,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
unsigned int *  minimalCorrespondences = nullptr,
Scalar *  averageCorrespondences = nullptr,
unsigned int *  medianCorrespondences = nullptr,
unsigned int *  maximalCorrespondences = nullptr,
Worker *  worker = nullptr 
)
static

Determines the number of valid correspondences between image points and object points for each frame within a specified frame range.

Parameters
database: The database providing the 3D object points, the 2D image points and the topology between image and object points
needValidPose: True, if a frame must have a valid pose for its correspondences to be counted; frames without a valid pose then report zero correspondences
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalCorrespondences: Optional resulting minimal number of correspondences for all frames within the defined frame range
averageCorrespondences: Optional resulting average number of correspondences for all frames within the defined frame range
medianCorrespondences: Optional resulting median of all correspondences for all frames within the defined frame range
maximalCorrespondences: Optional resulting maximal number of correspondences for all frames within the defined frame range
worker: Optional worker object to distribute the computation
Returns
True, if succeeded

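The four optional outputs are standard order statistics over the per-frame correspondence counts. A self-contained sketch, independent of the Database type (struct and function names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Statistics this function reports, computed over per-frame correspondence
// counts (zero for frames without a valid pose when 'needValidPose' is set).
struct CorrespondenceStatistics
{
	unsigned int minimal;
	double average;
	unsigned int median;
	unsigned int maximal;
};

CorrespondenceStatistics determineStatistics(std::vector<unsigned int> counts)
{
	CorrespondenceStatistics statistics{0u, 0.0, 0u, 0u};

	if (counts.empty())
		return statistics;

	statistics.minimal = *std::min_element(counts.begin(), counts.end());
	statistics.maximal = *std::max_element(counts.begin(), counts.end());

	unsigned long long sum = 0ull;
	for (const unsigned int count : counts)
		sum += count;
	statistics.average = double(sum) / double(counts.size());

	// partial sort is enough to extract the median element
	std::nth_element(counts.begin(), counts.begin() + counts.size() / 2, counts.end());
	statistics.median = counts[counts.size() / 2];

	return statistics;
}
```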
◆ determineObjectPointAccuracy()

static Scalar Ocean::Tracking::Solver3::determineObjectPointAccuracy ( const PinholeCamera &  pinholeCamera,
const HomogenousMatrix4 *  poses,
const Vector2 *  imagePoints,
const size_t  observations,
const AccuracyMethod  accuracyMethod 
)
static

Measures the accuracy of a 3D object point in combination with a set of camera poses and image points (the projections of the object point).

The accuracy of the point can be determined by individual methods; the basic idea is to use the angles between the individual observation rays of the object point.

Parameters
pinholeCamera: The pinhole camera profile which is applied
poses: The camera poses in which the object point is visible
imagePoints: The individual image points in the individual camera frames
observations: The number of observations (pairs of camera poses and image points)
accuracyMethod: The method which is applied to determine the accuracy, must be valid
Returns
The resulting accuracy parameter depending on the specified method

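The ray-angle idea behind AM_MEAN_DIRECTION_MIN_COSINE can be illustrated without Ocean's types: build the normalized observation rays from the camera centers towards the object point, average them, and report the smallest cosine between any ray and the mean direction. This is an illustrative reimplementation under that interpretation, not Ocean's code:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>; // stand-in for a 3D vector type

Vec3 normalized(const Vec3& v)
{
	const double length = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
	return Vec3{v[0] / length, v[1] / length, v[2] / length};
}

double dot(const Vec3& a, const Vec3& b)
{
	return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Smallest cosine between any observation ray and the mean observation
// direction: values close to 1 indicate a narrow observation cone (poorly
// constrained depth), smaller values indicate a wider triangulation angle.
double minCosineAccuracy(const Vec3& objectPoint, const std::vector<Vec3>& cameraCenters)
{
	std::vector<Vec3> rays;
	Vec3 meanDirection{0.0, 0.0, 0.0};

	for (const Vec3& center : cameraCenters)
	{
		const Vec3 ray = normalized(Vec3{objectPoint[0] - center[0], objectPoint[1] - center[1], objectPoint[2] - center[2]});
		rays.push_back(ray);

		meanDirection[0] += ray[0];
		meanDirection[1] += ray[1];
		meanDirection[2] += ray[2];
	}

	meanDirection = normalized(meanDirection);

	double minimalCosine = 1.0;
	for (const Vec3& ray : rays)
		minimalCosine = std::min(minimalCosine, dot(ray, meanDirection));

	return minimalCosine;
}
```

Two cameras one unit left and right of a point at unit depth observe it under a 90 degree cone, giving a minimal cosine of about 0.707; a single observation yields 1, the degenerate case.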
◆ determineObjectPointsAccuracy()

static Scalars Ocean::Tracking::Solver3::determineObjectPointsAccuracy ( const Database &  database,
const PinholeCamera &  pinholeCamera,
const Indices32 &  objectPointIds,
const AccuracyMethod  accuracyMethhod,
const unsigned int  lowerFrame = (unsigned int)(-1),
const unsigned int  upperFrame = (unsigned int)(-1),
Worker *  worker = nullptr 
)
static

Measures the accuracy of several 3D object points.

This method extracts the 3D object point locations from the given database.
The accuracy of the points can be determined by individual methods; the basic idea is to use the angles between the individual observation rays of the object points.

Parameters
database: The database providing the locations of the 3D object points, the camera poses and the image point positions
pinholeCamera: The pinhole camera profile which is applied
objectPointIds: The ids of the object points for which the accuracies will be determined, each object point must be valid
accuracyMethhod: The method which is applied to determine the accuracy, must be valid
lowerFrame: Optional index of the frame defining the lower border of camera poses which will be investigated, -1 if no lower and no upper border is defined
upperFrame: Optional index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity), -1 if 'lowerFrame' is also -1
worker: Optional worker object to distribute the computation
Returns
The resulting accuracy parameters depending on the specified method, one parameter for each object point; an accuracy of -1 indicates an invalid point (e.g., due to too few measurements)

◆ determineObjectPointsAccuracySubset()

static void Ocean::Tracking::Solver3::determineObjectPointsAccuracySubset ( const Database *  database,
const PinholeCamera *  pinholeCamera,
const Index32 *  objectPointIds,
const AccuracyMethod  accuracyMethhod,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
Scalar *  values,
const unsigned int  firstObjectPoint,
const unsigned int  numberObjectPoints 
)
static protected

Measures the accuracy of a subset of several 3D object points.

Parameters
databaseThe database providing the location of the 3D object points, the camera poses and the image point positions.
pinholeCameraThe pinhole camera profile which is applied
objectPointIdsThe ids of the object points for which the accuracies will be determined, each object point must be valid
accuracyMethhodThe method which is applied to determine the accuracy
lowerFrameOptional index of the frame defining the lower border of camera poses which will be investigated, -1 if no lower and no upper border is defined
upperFrameOptional index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity), -1 if also 'lowerFrame' is -1
valuesThe resulting accuracy parameters depending on the specified method, one parameter of each object point
firstObjectPointFirst object point to be handled
numberObjectPointsThe number of object points to be handled
See also
measureObjectPointsAccuracy().

◆ determineOrientation() [1/5]

SquareMatrix3 Ocean::Tracking::Solver3::determineOrientation ( const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const ConstIndexedAccessor< ObjectPoint > &  objectPoints,
const ConstIndexedAccessor< ImagePoint > &  imagePoints,
const size_t  priorityCorrespondences,
const SquareMatrix3 &  roughOrientation = SquareMatrix3(false),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr 
)
inline static

Determines the camera 3-DOF orientation for a set of object point and image point correspondences.

Parameters
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
objectPoints: The object points which are visible in a frame, first all priority object points followed by the remaining object points
imagePoints: The image points which are the projections of the given object points, one image point corresponds to one object point
priorityCorrespondences: The number of priority point correspondences
roughOrientation: Optional rough camera orientation to speed up the computation and improve accuracy
estimator: The robust estimator which is applied for the non-linear orientation optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal robust squared pixel error between an image point and a projected object point for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
Returns
The resulting camera orientation, an invalid orientation if no orientation can be determined

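A minimal sketch of how the 'minimalValidCorrespondenceRatio' acceptance test presumably works; the helper name is hypothetical:

```cpp
#include <cassert>
#include <cstddef>

// A RANSAC result is only accepted if the number of inlier correspondences
// reaches the given ratio of all provided correspondences; a ratio of 1
// demands that every correspondence is an inlier.
bool hasEnoughValidCorrespondences(const std::size_t validCorrespondences, const std::size_t overallCorrespondences, const double minimalValidCorrespondenceRatio)
{
	return double(validCorrespondences) >= minimalValidCorrespondenceRatio * double(overallCorrespondences);
}
```

The default ratio of Scalar(1) therefore makes the orientation determination reject any result with even a single outlier; lowering the ratio tolerates outlier correspondences.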
◆ determineOrientation() [2/5]

SquareMatrix3 Ocean::Tracking::Solver3::determineOrientation ( const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const ConstIndexedAccessor< ObjectPoint > &  objectPoints,
const ConstIndexedAccessor< ImagePoint > &  imagePoints,
const SquareMatrix3 &  roughOrientation = SquareMatrix3(false),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr,
Indices32 *  validIndices = nullptr 
)
inline static

Determines the camera 3-DOF orientation for a set of object point and image point correspondences.

Parameters
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
objectPoints: The object points which are visible in a frame
imagePoints: The image points which are the projections of the given object points, one image point corresponds to one object point
roughOrientation: Optional rough camera orientation to speed up the computation and improve accuracy
estimator: The robust estimator which is applied for the non-linear orientation optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal robust squared pixel error between an image point and a projected object point for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
validIndices: Optional resulting indices of the valid point correspondences
Returns
The resulting camera orientation, an invalid orientation if no orientation can be determined

◆ determineOrientation() [3/5]

SquareMatrix3 Ocean::Tracking::Solver3::determineOrientation ( const Database &  database,
const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const unsigned int  frameId,
const IndexSet32 &  priorityObjectPointIds,
const bool  solePriorityPoints,
const SquareMatrix3 &  roughOrientation = SquareMatrix3(false),
const unsigned int  minimalCorrespondences = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr,
unsigned int *  correspondences = nullptr 
)
inline static

Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame.

Parameters
database: The database from which the object point and image point correspondences are extracted
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
frameId: The id of the frame for which the camera orientation will be determined
priorityObjectPointIds: Ids of the object points for which the poses will be optimized
solePriorityPoints: True, to apply only the priority object points for the pose determination
roughOrientation: Optional rough camera orientation to speed up the computation and improve accuracy
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera orientation, with range [5, infinity)
estimator: The robust estimator which is applied for the non-linear orientation optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal squared pixel error between an image point and a projected object point for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
correspondences: Optional resulting number of 2D/3D point correspondences which were available
Returns
The resulting camera orientation, an invalid orientation if no orientation can be determined

◆ determineOrientation() [4/5]

SquareMatrix3 Ocean::Tracking::Solver3::determineOrientation ( const Database &  database,
const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const unsigned int  frameId,
const ObjectPoint *  objectPoints,
const Index32 *  objectPointIds,
const size_t  numberObjectPoints,
const SquareMatrix3 &  roughOrientation = SquareMatrix3(false),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr 
)
inline static

Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame.

Parameters
database: The database from which the image points are extracted
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
frameId: The id of the frame for which the camera orientation will be determined
objectPoints: The object points which are all visible in the specified frame
objectPointIds: The ids of the object points, one id for each object point
numberObjectPoints: The number of given object points, with range [5, infinity)
roughOrientation: Optional rough camera orientation to speed up the computation and improve accuracy
estimator: The robust estimator which is applied for the non-linear orientation optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal squared pixel error between an image point and a projected object point for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
Returns
The resulting camera orientation, an invalid orientation if no orientation can be determined

◆ determineOrientation() [5/5]

SquareMatrix3 Ocean::Tracking::Solver3::determineOrientation ( const Database &  database,
const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const unsigned int  frameId,
const SquareMatrix3 &  roughOrientation = SquareMatrix3(false),
const unsigned int  minimalCorrespondences = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr,
unsigned int *  correspondences = nullptr 
)
inline static

Determines the camera 3-DOF orientation (as the camera has rotational motion only) for a specific camera frame.

Parameters
database: The database from which the object point and image point correspondences are extracted
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
frameId: The id of the frame for which the camera orientation will be determined
roughOrientation: Optional rough camera orientation to speed up the computation and improve accuracy
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera orientation, with range [5, infinity)
estimator: The robust estimator which is applied for the non-linear orientation optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal squared pixel error between an image point and a projected object point for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
correspondences: Optional resulting number of 2D/3D point correspondences which were available
Returns
The resulting camera orientation, an invalid orientation if no orientation can be determined

◆ determineOrientationsSubset()

static void Ocean::Tracking::Solver3::determineOrientationsSubset ( const Database *  database,
const AnyCamera *  camera,
const IndexSet32 *  priorityObjectPointIds,
const bool  solePriorityPoints,
RandomGenerator *  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const unsigned int  minimalCorrespondences,
ShiftVector< HomogenousMatrix4 > *  poses,
const Geometry::Estimator::EstimatorType  estimator,
const Scalar  minimalValidCorrespondenceRatio,
const Scalar  ransacMaximalSqrError,
const Scalar  maximalRobustError,
Scalar *  totalError,
Lock *  lock,
bool *  abort,
const unsigned int  numberThreads,
const unsigned int  threadIndex,
const unsigned int  numberThreadsOne 
)
static protected

Determines a subset of the camera orientations (as the camera has rotational motion only) depending on valid 2D/3D points correspondences within a range of camera frames.

The camera orientations (their poses respectively) will be set to invalid if no valid orientation can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).

Parameters
database: The database from which the point correspondences are extracted and which receives the determined camera orientations (the 6-DOF poses with zero translation)
camera: The camera profile defining the projection, must be valid
priorityObjectPointIds: Optional ids of the object points for which the poses will be optimized, may be empty so that all object points are investigated with the same priority
solePriorityPoints: True, to apply only the priority object points for the pose determination
randomGenerator: Random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera orientation, with range [5, infinity)
poses: The resulting determined poses starting with the lower frame and ending with the upper frame
estimator: The robust estimator which is applied for the non-linear orientation optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
ransacMaximalSqrError: The maximal squared pixel error between an image point and a projected object point for RANSAC iterations, with range (0, infinity)
maximalRobustError: The maximal average robust pixel error between an image point and a projected object point so that an orientation counts as valid, with range (0, infinity)
totalError: The resulting accumulated total error for all poses (orientations)
lock: The lock object which must be defined if this function is executed in parallel on several individual threads
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
numberThreads: The overall number of threads which are used in parallel
threadIndex: The index of the thread executing this function, with range [0, numberThreads)
numberThreadsOne: Must be 1

◆ determinePerpendicularPlane() [1/2]

static bool Ocean::Tracking::Solver3::determinePerpendicularPlane ( const Database &  database,
const PinholeCamera &  pinholeCamera,
const unsigned int  frameIndex,
const Vector2 &  imagePoint,
const Scalar  distance,
Plane3 &  plane,
const bool  useDistortionParameters,
Vector3 *  pointOnPlane = nullptr 
)
static

Determines a 3D plane perpendicular to the camera with specified distance to the camera.

This function may be used, e.g., for rotational camera motion to provide an initial guess.

Parameters
database: The database holding the 3D object point locations
pinholeCamera: The pinhole camera profile defining the projection, must be valid
frameIndex: The index of the frame in which the given image point is visible
imagePoint: The image point to which (to the viewing ray respectively) the resulting plane will be perpendicular, must lie inside the camera frame
distance: The distance of the plane to the camera, with range (0, infinity)
plane: The resulting 3D plane best fitting for the given data
useDistortionParameters: True, to use the distortion parameters of the camera
pointOnPlane: Optional resulting 3D intersection point of resulting plane and the viewing ray of the provided image point
Returns
True, if succeeded

◆ determinePerpendicularPlane() [2/2]

static bool Ocean::Tracking::Solver3::determinePerpendicularPlane ( const PinholeCamera &  pinholeCamera,
const HomogenousMatrix4 &  pose,
const Vector2 &  imagePoint,
const Scalar  distance,
Plane3 &  plane,
const bool  useDistortionParameters,
Vector3 *  pointOnPlane = nullptr 
)
static

Determines a 3D plane perpendicular to the camera with specified distance to the camera.

This function may be used, e.g., for rotational camera motion to provide an initial guess.

Parameters
pinholeCamera: The pinhole camera profile defining the projection, must be valid
pose: The pose of the camera, must be valid
imagePoint: The image point to which (to the viewing ray respectively) the resulting plane will be perpendicular, must lie inside the camera frame
distance: The distance of the plane to the camera, with range (0, infinity)
plane: The resulting 3D plane best fitting for the given data
useDistortionParameters: True, to use the distortion parameters of the camera
pointOnPlane: Optional resulting 3D intersection point of resulting plane and the viewing ray of the provided image point
Returns
True, if succeeded
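The geometry behind both overloads is simple: the plane's normal is the normalized viewing ray of the given image point, and the plane passes through the point at the given distance along that ray. A minimal self-contained sketch of that construction, using hypothetical stand-ins for Ocean's Vector3/Plane3 types (not the library's actual API):

```cpp
#include <cmath>

// Hypothetical stand-ins for Ocean's Vector3/Plane3 types, for illustration only.
struct Vec3
{
	double x, y, z;
};

static double dot(const Vec3& a, const Vec3& b)
{
	return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 scale(const Vec3& v, const double s)
{
	return {v.x * s, v.y * s, v.z * s};
}

// A plane in Hessian normal form: dot(normal, p) == d for every point p on the plane.
struct Plane
{
	Vec3 normal;
	double d;
};

// Creates a plane perpendicular to the viewing ray of an image point, at the given
// distance from the camera center (camera assumed at the origin, ray must not be zero).
Plane perpendicularPlane(const Vec3& viewingRay, const double distance, Vec3* pointOnPlane = nullptr)
{
	const double length = std::sqrt(dot(viewingRay, viewingRay));
	const Vec3 normal = scale(viewingRay, 1.0 / length); // unit-length plane normal
	const Vec3 point = scale(normal, distance); // intersection of viewing ray and plane

	if (pointOnPlane != nullptr)
	{
		*pointOnPlane = point;
	}

	return {normal, dot(normal, point)}; // d equals 'distance' as the normal is unit length
}
```

With the camera at the origin, the plane's offset `d` equals the requested distance, which matches the documented range (0, infinity).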

◆ determinePlane() [1/4]

bool Ocean::Tracking::Solver3::determinePlane ( const ConstIndexedAccessor< Vector3 > &  objectPoints,
RandomGenerator &  randomGenerator,
Plane3 &  plane,
const RelativeThreshold &  minimalValidObjectPoints = RelativeThreshold(3u, Scalar(0.5), 20u),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_HUBER,
Scalar *  finalError = nullptr,
Indices32 *  validIndices = nullptr 
)
inlinestatic

Determines a 3D plane best fitting to a set of given 3D object points.

Parameters
objectPoints: The object points for which the best matching plane will be determined, at least 3
randomGenerator: Random number generator
plane: The resulting 3D plane
minimalValidObjectPoints: The minimal number of valid object points so that a valid plane will be determined
estimator: The robust estimator which will be applied to determine the 3D plane
finalError: Optional resulting final error
validIndices: Optional resulting indices of all valid object points
Returns
True, if succeeded
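Robust plane fitting of this kind typically scores candidate planes from minimal point samples by how many object points lie close to them, which is what the `minimalValidObjectPoints` threshold is compared against. A self-contained sketch of the inlier test of one such RANSAC-style iteration, with a hypothetical `Vec3` stand-in (not Ocean's API):

```cpp
#include <cmath>
#include <vector>

// Hypothetical 3D vector stand-in, for illustration only.
struct Vec3
{
	double x, y, z;
};

static Vec3 sub(const Vec3& a, const Vec3& b)
{
	return {a.x - b.x, a.y - b.y, a.z - b.z};
}

static Vec3 cross(const Vec3& a, const Vec3& b)
{
	return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

static double dot(const Vec3& a, const Vec3& b)
{
	return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Builds the plane through three sampled points and counts how many of the given
// object points lie within 'maximalDistance' of it -- the inlier test of one iteration.
size_t countPlaneInliers(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                         const std::vector<Vec3>& objectPoints, const double maximalDistance)
{
	Vec3 normal = cross(sub(p1, p0), sub(p2, p0));
	const double length = std::sqrt(dot(normal, normal));

	if (length == 0.0)
	{
		return 0; // degenerate (collinear) sample
	}

	normal = {normal.x / length, normal.y / length, normal.z / length};
	const double d = dot(normal, p0);

	size_t inliers = 0;
	for (const Vec3& point : objectPoints)
	{
		if (std::abs(dot(normal, point) - d) <= maximalDistance)
		{
			++inliers;
		}
	}

	return inliers;
}
```

The sample with the most inliers would then be refined with the selected robust estimator (e.g., ET_HUBER) over its inlier set.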

◆ determinePlane() [2/4]

static bool Ocean::Tracking::Solver3::determinePlane ( const Database &  database,
const Index32  frameIndex,
const CV::SubRegion &  subRegion,
RandomGenerator &  randomGenerator,
Plane3 &  plane,
const RelativeThreshold &  minimalValidObjectPoints = RelativeThreshold(3u, Scalar(0.5), 20u),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_HUBER,
Scalar *  finalError = nullptr,
Indices32 *  usedObjectPointIds = nullptr 
)
static

Determines a 3D plane best fitting to a set of 3D object points which are specified by a given sub-region in the camera frame.

Parameters
database: The database holding the 3D object point locations
frameIndex: The index of the frame in which the plane is visible and for which the given sub-region defines the area of image points whose corresponding object points define the 3D plane; the pose must be valid
subRegion: The sub-region which defines the plane area in the camera frame
randomGenerator: Random number generator
plane: The resulting 3D plane
minimalValidObjectPoints: The minimal number of valid object points so that a valid plane will be determined
estimator: The robust estimator which will be applied to determine the 3D plane
finalError: Optional resulting final error
usedObjectPointIds: Optional resulting ids of the used object points
Returns
True, if succeeded

◆ determinePlane() [3/4]

bool Ocean::Tracking::Solver3::determinePlane ( const Database &  database,
const Indices32 &  objectPointIds,
RandomGenerator &  randomGenerator,
Plane3 &  plane,
const RelativeThreshold &  minimalValidObjectPoints = RelativeThreshold(3u, Scalar(0.5), 20u),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_HUBER,
Scalar *  finalError = nullptr,
Indices32 *  validIndices = nullptr 
)
inlinestatic

Determines a 3D plane best fitting to the 3D object points with the given ids.

Parameters
database: The database holding the 3D object point locations
objectPointIds: The ids of the object points for which the best matching plane will be determined, at least 3, must have valid locations in the database
randomGenerator: Random number generator
plane: The resulting 3D plane
minimalValidObjectPoints: The minimal number of valid object points so that a valid plane will be determined
estimator: The robust estimator which will be applied to determine the 3D plane
finalError: Optional resulting final error
validIndices: Optional resulting indices of all valid object points
Returns
True, if succeeded

◆ determinePlane() [4/4]

static bool Ocean::Tracking::Solver3::determinePlane ( const Database &  database,
const PinholeCamera &  pinholeCamera,
const unsigned int  lowerFrameIndex,
const unsigned int  subRegionFrameIndex,
const unsigned int  upperFrameIndex,
const CV::SubRegion &  subRegion,
RandomGenerator &  randomGenerator,
Plane3 &  plane,
const bool  useDistortionParameters,
const RelativeThreshold &  minimalValidObjectPoints = RelativeThreshold(3u, Scalar(0.5), 20u),
const Scalar  medianDistanceFactor = Scalar(6),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_HUBER,
Scalar *  finalError = nullptr,
Indices32 *  usedObjectPointIds = nullptr 
)
static

Determines a 3D plane best fitting to the image points within a specified sub-region of a specified frame, and to the same area as visible within a specified frame range.

Parameters
database: The database holding the 3D object point locations
pinholeCamera: The pinhole camera profile defining the projection, must be valid
lowerFrameIndex: The index of the frame defining the lower border of camera poses which will be investigated
subRegionFrameIndex: The index of the frame for which the sub-region is specified
upperFrameIndex: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrameIndex, infinity)
subRegion: The sub-region defining the area in the image frame for which the 3D plane will be determined
randomGenerator: The random number generator object
plane: The resulting 3D plane best fitting for the given data
useDistortionParameters: True, to use the distortion parameters of the camera
minimalValidObjectPoints: The minimal number of valid 3D points in relation to the 3D object points which are projected into the sub-region in the sub-region frame
medianDistanceFactor: The factor with which the median distance between the initial 3D plane and the initial 3D object points is multiplied to determine the maximal distance between the final plane and any 3D object point which defines the plane, with range (0, infinity)
estimator: The robust estimator used to determine the initial plane for the sub-region frame
finalError: Optional resulting final square error
usedObjectPointIds: Optional resulting ids of all 3D object points which have been used to determine the 3D plane
Returns
True, if succeeded

◆ determinePose() [1/5]

HomogenousMatrix4 Ocean::Tracking::Solver3::determinePose ( const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const ConstIndexedAccessor< ObjectPoint > &  objectPoints,
const ConstIndexedAccessor< ImagePoint > &  imagePoints,
const HomogenousMatrix4 &  roughPose = HomogenousMatrix4(false),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr,
Indices32 *  validIndices = nullptr 
)
inlinestatic

Determines the camera 6-DOF pose for a set of object point and image point correspondences.

Parameters
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
objectPoints: The object points which are visible in a frame
imagePoints: The image points which are projections of the given object points, one image point corresponds to one object point
roughPose: Optional rough camera pose to speed up the computation and improve accuracy
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal robust squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
validIndices: Optional resulting indices of the valid point correspondences
Returns
The resulting camera pose, an invalid pose if no pose can be determined
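The `maximalSqrError` threshold expresses the RANSAC inlier criterion: the squared pixel distance between each image point and the projection of its 3D object point must stay below the threshold. A simplified, self-contained sketch of that test (hypothetical undistorted pinhole model with the object point already in camera coordinates, not Ocean's API):

```cpp
// Hypothetical 2D/3D point stand-ins, for illustration only.
struct Vec2
{
	double x, y;
};

struct Vec3
{
	double x, y, z;
};

// Projects a 3D point given in camera coordinates (z > 0) with focal length f
// and principal point (cx, cy) -- a simplified pinhole model without distortion.
Vec2 project(const Vec3& p, const double f, const double cx, const double cy)
{
	return {f * p.x / p.z + cx, f * p.y / p.z + cy};
}

// RANSAC inlier test: squared pixel error between projection and observation
// must not exceed 'maximalSqrError' (the documented default is 3.5 * 3.5).
bool isInlier(const Vec3& objectPoint, const Vec2& imagePoint,
              const double f, const double cx, const double cy, const double maximalSqrError)
{
	const Vec2 projected = project(objectPoint, f, cx, cy);

	const double dx = projected.x - imagePoint.x;
	const double dy = projected.y - imagePoint.y;

	return dx * dx + dy * dy <= maximalSqrError;
}
```

The `minimalValidCorrespondenceRatio` parameter then demands that at least this fraction of all correspondences passes the test before the pose is accepted.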

◆ determinePose() [2/5]

HomogenousMatrix4 Ocean::Tracking::Solver3::determinePose ( const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const ConstIndexedAccessor< ObjectPoint > &  objectPoints,
const ConstIndexedAccessor< ImagePoint > &  imagePoints,
const size_t  priorityCorrespondences,
const HomogenousMatrix4 &  roughPose = HomogenousMatrix4(false),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr 
)
inlinestatic

Determines the camera 6-DOF pose for a set of object point and image point correspondences.

The point correspondences are separated into a set of priority correspondences and remaining correspondences, ensuring that the pose mainly matches the priority point correspondences.

Parameters
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
objectPoints: The object points which are visible in a frame, first all priority object points followed by the remaining object points
imagePoints: The image points which are projections of the given object points, one image point corresponds to one object point
priorityCorrespondences: The number of priority point correspondences
roughPose: Optional rough camera pose to speed up the computation and improve accuracy
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal robust squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
Returns
The resulting camera pose, an invalid pose if no pose can be determined

◆ determinePose() [3/5]

HomogenousMatrix4 Ocean::Tracking::Solver3::determinePose ( const Database &  database,
const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const unsigned int  frameId,
const ConstIndexedAccessor< ObjectPoint > &  objectPoints,
const ConstIndexedAccessor< Index32 > &  objectPointIds,
const HomogenousMatrix4 &  roughPose = HomogenousMatrix4(false),
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr 
)
inlinestatic

Determines the camera 6-DOF pose for a specific camera frame.

Parameters
database: The database from which the image points are extracted
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
frameId: The id of the frame for which the camera pose will be determined
objectPoints: The object points which are all visible in the specified frame
objectPointIds: The ids of the object points, one id for each object point
roughPose: Optional rough camera pose to speed up the computation and improve accuracy
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
Returns
The resulting camera pose, an invalid pose if no pose can be determined

◆ determinePose() [4/5]

HomogenousMatrix4 Ocean::Tracking::Solver3::determinePose ( const Database &  database,
const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const unsigned int  frameId,
const HomogenousMatrix4 &  roughPose = HomogenousMatrix4(false),
const unsigned int  minimalCorrespondences = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr,
unsigned int *  correspondences = nullptr 
)
inlinestatic

Determines the camera 6-DOF pose for a specific camera frame.

Parameters
database: The database from which the object point and image point correspondences are extracted
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
frameId: The id of the frame for which the camera pose will be determined
roughPose: Optional rough camera pose to speed up the computation and improve accuracy
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity)
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
correspondences: Optional resulting number of 2D/3D point correspondences which were available
Returns
The resulting camera pose, an invalid pose if no pose can be determined

◆ determinePose() [5/5]

HomogenousMatrix4 Ocean::Tracking::Solver3::determinePose ( const Database &  database,
const AnyCamera &  camera,
RandomGenerator &  randomGenerator,
const unsigned int  frameId,
const IndexSet32 &  priorityObjectPointIds,
const bool  solePriorityPoints,
const HomogenousMatrix4 &  roughPose = HomogenousMatrix4(false),
const unsigned int  minimalCorrespondences = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Scalar *  finalRobustError = nullptr,
unsigned int *  correspondences = nullptr 
)
inlinestatic

Determines the camera 6-DOF pose for a specific camera frame.

Parameters
database: The database from which the object point and image point correspondences are extracted
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
frameId: The id of the frame for which the camera pose will be determined
priorityObjectPointIds: Ids of object points for which the pose will be optimized with higher priority
solePriorityPoints: True, to apply only the priority object points for pose determination
roughPose: Optional rough camera pose to speed up the computation and improve accuracy
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity)
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
maximalSqrError: The maximal squared pixel error between image point and projected object points for the RANSAC algorithm, with range (0, infinity)
finalRobustError: Optional resulting final average robust error, in relation to the defined estimator
correspondences: Optional resulting number of 2D/3D point correspondences which were available
Returns
The resulting camera pose, an invalid pose if no pose can be determined

◆ determinePoses()

static bool Ocean::Tracking::Solver3::determinePoses ( const Database &  database,
const AnyCamera &  camera,
const CameraMotion  cameraMotion,
const IndexSet32 &  priorityObjectPointIds,
const bool  solePriorityPoints,
RandomGenerator &  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const unsigned int  minimalCorrespondences,
ShiftVector< HomogenousMatrix4 > &  poses,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  ransacMaximalSqrError = Scalar(3.5 * 3.5),
const Scalar  maximalRobustError = Scalar(3.5 * 3.5),
Scalar *  finalAverageError = nullptr,
Worker *  worker = nullptr,
bool *  abort = nullptr 
)
static

Determines the camera poses depending on valid 2D/3D point correspondences within a range of camera frames.

The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
The resulting poses will have either a sole rotational motion or a rotational and translational motion, depending on the defined camera motion.

Parameters
database: The database from which the point correspondences are extracted
camera: The camera profile defining the projection, must be valid
cameraMotion: The motion of the camera, use CM_UNKNOWN if the motion is unknown so that 6-DOF poses will be determined
priorityObjectPointIds: Optional ids of the object points for which the poses will be optimized with higher priority, may be zero so that all object points are investigated with the same priority
solePriorityPoints: True, to apply only the priority object points for pose determination, has no meaning if no priority points are provided
randomGenerator: Random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity)
poses: The resulting determined poses starting with the lower frame and ending with the upper frame
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
ransacMaximalSqrError: The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity)
maximalRobustError: The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity)
finalAverageError: Optional resulting average final error for all valid poses, the error depends on the selected robust estimator
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if all poses have been determined (some poses may be invalid)

◆ determinePosesOrientation()

static void Ocean::Tracking::Solver3::determinePosesOrientation ( const Database &  database,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
Scalar *  xOrientations,
Scalar *  yOrientations,
Scalar *  zOrientations 
)
static

Determines the individual cosine values between the mean coordinate axis of a range of poses and the coordinate axis of the individual poses.

The specified range of camera poses must cover a range with valid poses.

Parameters
database: The database providing the camera poses
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
xOrientations: The resulting cosine values for the poses' xAxis, one for each camera pose
yOrientations: The resulting cosine values for the poses' yAxis, one for each camera pose
zOrientations: The resulting cosine values for the poses' zAxis, one for each camera pose
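The reported values can be sketched as follows: accumulate the per-pose axis vectors, normalize their mean, and take the dot product (cosine) between that mean axis and each individual axis. A self-contained illustration with a hypothetical axis type (not Ocean's Vector3):

```cpp
#include <cmath>
#include <vector>

// Hypothetical unit-axis stand-in, for illustration only.
struct Axis
{
	double x, y, z;
};

// For a set of (unit-length) axis vectors, one per camera pose, computes the cosine
// between each axis and the normalized mean axis -- a sketch of the per-pose values
// this function reports separately for the x-, y- and z-axes.
std::vector<double> axisCosines(const std::vector<Axis>& axes)
{
	Axis mean{0.0, 0.0, 0.0};
	for (const Axis& axis : axes)
	{
		mean.x += axis.x;
		mean.y += axis.y;
		mean.z += axis.z;
	}

	const double length = std::sqrt(mean.x * mean.x + mean.y * mean.y + mean.z * mean.z);
	mean = {mean.x / length, mean.y / length, mean.z / length};

	std::vector<double> cosines;
	cosines.reserve(axes.size());

	for (const Axis& axis : axes)
	{
		// cosine of the angle between the mean axis and this pose's axis
		cosines.push_back(mean.x * axis.x + mean.y * axis.y + mean.z * axis.z);
	}

	return cosines;
}
```

A cosine of 1 means a pose's axis is aligned with the mean axis; smaller values indicate how strongly the orientation deviates over the frame range.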

◆ determinePosesSubset()

static void Ocean::Tracking::Solver3::determinePosesSubset ( const Database *  database,
const AnyCamera *  camera,
const IndexSet32 *  priorityObjectPointIds,
const bool  solePriorityPoints,
RandomGenerator *  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const unsigned int  minimalCorrespondences,
ShiftVector< HomogenousMatrix4 > *  poses,
const Geometry::Estimator::EstimatorType  estimator,
const Scalar  minimalValidCorrespondenceRatio,
const Scalar  ransacMaximalSqrError,
const Scalar  maximalRobustError,
Scalar *  totalError,
Lock *  lock,
bool *  abort,
const unsigned int  numberThreads,
const unsigned int  threadIndex,
const unsigned int  numberThreadsOne 
)
staticprotected

Determines a subset of the camera poses depending on valid 2D/3D point correspondences within a range of camera frames.

The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).

Parameters
database: The database from which the point correspondences are extracted and which receives the determined camera poses
camera: The camera profile defining the projection, must be valid
priorityObjectPointIds: Optional ids of the object points for which the poses will be optimized, may be zero so that all object points are investigated with the same priority
solePriorityPoints: True, to apply only the priority object points for pose determination
randomGenerator: Random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity)
poses: The resulting determined poses starting with the lower frame and ending with the upper frame
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
ransacMaximalSqrError: The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity)
maximalRobustError: The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity)
totalError: The resulting accumulated total error for all poses
lock: The lock object which must be defined if this function is executed in parallel on several individual threads
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
numberThreads: The overall number of threads which are used in parallel
threadIndex: The index of the thread executing this function, with range [0, numberThreads)
numberThreadsOne: Must be 1

◆ determineProjectionError()

static bool Ocean::Tracking::Solver3::determineProjectionError ( const Database &  database,
const PinholeCamera &  pinholeCamera,
const Index32  poseId,
const bool  useDistortionParameters,
unsigned int *  validCorrespondences = nullptr,
Scalar *  minimalSqrError = nullptr,
Scalar *  averageSqrError = nullptr,
Scalar *  maximalSqrError = nullptr 
)
static

Determines the accuracy of a camera pose for all valid object points visible in the frame by measuring the projection error between the projected object points and their corresponding image points.

Parameters
database: The database providing the locations of the 3D object points, the camera poses and the image points
pinholeCamera: The pinhole camera profile which is applied
poseId: The id of the camera frame for which the accuracy of the pose will be determined
useDistortionParameters: True, to apply the distortion parameters of the camera
validCorrespondences: Optional resulting number of valid pose correspondences
minimalSqrError: Optional resulting minimal (best) projection error for the pose
averageSqrError: Optional resulting averaged projection error for the pose
maximalSqrError: Optional resulting maximal (worst) projection error for the pose
Returns
True, if the database holds a valid pose for the specified camera frame and at least one valid point correspondence
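The three optional error outputs are aggregates over the per-correspondence squared reprojection errors. A self-contained sketch of that aggregation (a hypothetical helper, not Ocean's API, which also mirrors the at-least-one-correspondence requirement):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Aggregated squared projection errors: minimal (best), average and maximal (worst).
struct ErrorStats
{
	double minimalSqrError;
	double averageSqrError;
	double maximalSqrError;
};

// Reduces per-correspondence squared pixel errors to the three reported statistics.
// Returns false if no correspondence is available, matching the documented behavior.
bool sqrErrorStats(const std::vector<double>& sqrErrors, ErrorStats& stats)
{
	if (sqrErrors.empty())
	{
		return false; // at least one valid point correspondence is required
	}

	double sum = 0.0;
	stats.minimalSqrError = sqrErrors.front();
	stats.maximalSqrError = sqrErrors.front();

	for (const double error : sqrErrors)
	{
		sum += error;
		stats.minimalSqrError = std::min(stats.minimalSqrError, error);
		stats.maximalSqrError = std::max(stats.maximalSqrError, error);
	}

	stats.averageSqrError = sum / double(sqrErrors.size());

	return true;
}
```

In the real function, each squared error would come from projecting a valid 3D object point with the frame's pose and comparing it to the observed image point.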

◆ determineProjectionErrors() [1/2]

static void Ocean::Tracking::Solver3::determineProjectionErrors ( const AnyCamera &  camera,
const Vector3 &  objectPoint,
const ConstIndexedAccessor< HomogenousMatrix4 > &  world_T_cameras,
const ConstIndexedAccessor< Vector2 > &  imagePoints,
Scalar *  minimalSqrError = nullptr,
Scalar *  averageSqrError = nullptr,
Scalar *  maximalSqrError = nullptr 
)
static

Determines the projection errors of a 3D object point in combination with a set of camera poses and image points (the projections of the object point).

Parameters
camera: The camera profile defining the projection, must be valid
objectPoint: The 3D object point for which the quality will be measured
world_T_cameras: The camera poses in which the object point is visible
imagePoints: The individual image points in the individual camera frames
minimalSqrError: Optional resulting minimal (best) projection error for the object point
averageSqrError: Optional resulting averaged projection error for the object point
maximalSqrError: Optional resulting maximal (worst) projection error for the object point

◆ determineProjectionErrors() [2/2]

static bool Ocean::Tracking::Solver3::determineProjectionErrors ( const Database &  database,
const PinholeCamera &  pinholeCamera,
const Indices32 &  objectPointIds,
const bool  useDistortionParameters,
const unsigned int  lowerFrame = (unsigned int)(-1),
const unsigned int  upperFrame = (unsigned int)(-1),
Scalar *  minimalSqrErrors = nullptr,
Scalar *  averagedSqrErrors = nullptr,
Scalar *  maximalSqrErrors = nullptr,
unsigned int *  observations = nullptr,
Worker *  worker = nullptr 
)
static

Determines the averaged and maximal squared pixel errors between the projections of individual 3D object points and their corresponding image points in individual camera frames.

Parameters
database: The database from which the camera poses, the object points and the image points are extracted
pinholeCamera: The pinhole camera profile which is applied
objectPointIds: The ids of all object points for which the maximal squared pixel errors are determined
useDistortionParameters: True, to use the distortion parameters of the camera to distort the projected object points
lowerFrame: Optional index of the frame defining the lower border of camera poses which will be investigated, -1 if no lower and no upper border is defined
upperFrame: Optional index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity), -1 if also 'lowerFrame' is -1
minimalSqrErrors: Optional resulting minimal squared pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error
averagedSqrErrors: Optional resulting averaged pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error
maximalSqrErrors: Optional resulting maximal squared pixel errors, one error for each given object point id, invalid object points or object points without corresponding observation receive Numeric::maxValue() as error
observations: Optional resulting observations for each object point, one number of observations for each given object point id
worker: Optional worker object to distribute the computation
Returns
True, if succeeded

◆ determineProjectionErrorsSubset()

static void Ocean::Tracking::Solver3::determineProjectionErrorsSubset ( const Database *  database,
const PinholeCamera *  pinholeCamera,
const Index32 *  objectPointIds,
const HomogenousMatrix4 *  posesIF,
const Index32  lowerPoseId,
const unsigned int  upperPoseId,
const bool  useDistortionParameters,
Scalar *  minimalSqrErrors,
Scalar *  averagedSqrErrors,
Scalar *  maximalSqrErrors,
unsigned int *  observations,
const unsigned int  firstObjectPoint,
const unsigned int  numberObjectPoints 
)
staticprotected

Determines the maximal squared pixel errors between the projections of a subset of individual 3D object points and their corresponding image points in individual camera frames.

Parameters
database: The database from which the camera poses, the object points and the image points are extracted
pinholeCamera: The pinhole camera profile which is applied
objectPointIds: The ids of all object points for which the maximal squared pixel errors are determined
posesIF: The inverted and flipped poses of all camera frames which will be investigated; the poses can be valid or invalid, the first pose is the camera pose for the frame with id 'lowerPoseId'
lowerPoseId: The id of the first provided pose
upperPoseId: The id of the last provided pose, thus 'posesIF' must store (upperPoseId - lowerPoseId + 1) poses
useDistortionParameters: True, to use the distortion parameters of the camera to distort the projected object points
minimalSqrErrors: Optional resulting minimal squared pixel errors, one error for each given object point id; invalid object points or object points without corresponding observations receive Numeric::maxValue() as error
averagedSqrErrors: Optional resulting averaged squared pixel errors, one error for each given object point id; invalid object points or object points without corresponding observations receive Numeric::maxValue() as error
maximalSqrErrors: Optional resulting maximal squared pixel errors, one error for each given object point id; invalid object points or object points without corresponding observations receive Numeric::maxValue() as error
observations: Optional resulting number of observations for each object point, one number for each given object point id
firstObjectPoint: The first object point to handle
numberObjectPoints: The number of object points to handle

◆ determineRepresentativePoses() [1/2]

static Indices32 Ocean::Tracking::Solver3::determineRepresentativePoses ( const Database & database,
const Indices32 & poseIds,
const size_t  numberRepresentative 
)
static

Determines a set of representative camera poses from a given database from a set of given camera poses.

Parameters
database: The database from which the representative camera poses are extracted
poseIds: The camera pose ids from which the representative camera poses are extracted, all poses must be valid
numberRepresentative: The number of representative poses that will be determined
Returns
The ids of the representative camera poses

◆ determineRepresentativePoses() [2/2]

static Indices32 Ocean::Tracking::Solver3::determineRepresentativePoses ( const Database & database,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const size_t  numberRepresentative 
)
static

Determines a set of representative camera poses from a given database within a specified frame range.

Only valid camera poses from the database will be investigated.

Parameters
database: The database from which the representative camera poses are extracted
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
numberRepresentative: The number of representative poses that will be determined
Returns
The ids of the representative camera poses

◆ determineUnknownObjectPoints() [1/4]

template<bool tVisibleInAllPoses>
bool Ocean::Tracking::Solver3::determineUnknownObjectPoints ( const Database & database,
const AnyCamera & camera,
const CameraMotion  cameraMotion,
const Index32  lowerPoseId,
const Index32  upperPoseId,
Vectors3 & newObjectPoints,
Indices32 & newObjectPointIds,
RandomGenerator & randomGenerator,
Indices32 * newObjectPointObservations = nullptr,
const Scalar  minimalObjectPointPriority = Scalar(-1),
const unsigned int  minimalObservations = 10u,
const bool  useAllObservations = true,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  ransacMaximalSqrError = Scalar(3.5 * 3.5),
const Scalar  averageRobustError = Scalar(3.5 * 3.5),
const Scalar  maximalSqrError = Numeric::maxValue(),
Worker * worker = nullptr,
bool *  abort = nullptr 
)
inlinestatic

Determines the positions of (currently unknown) object points which are visible in specified poses (the poses are specified by a lower and upper frame range).

Only camera frames with a valid camera pose are used to determine the locations of the new object points.
All unknown object points with at least the specified number of observations (in valid poses) will be handled.
However, the number of resulting object points with a valid 3D position may be smaller than the maximal possible number due to, e.g., the defined maximal error parameters.

Parameters
database: The database from which the object point, image point and pose information is extracted
camera: The camera profile defining the projection, must be valid
cameraMotion: The motion of the camera, can be CM_ROTATIONAL or CM_TRANSLATIONAL
lowerPoseId: The lower id of the camera poses in which the unknown object points can/must be visible
upperPoseId: The upper id of the camera poses in which the unknown object points can/must be visible, with range [lowerPoseId, infinity)
newObjectPoints: The resulting 3D locations of the new object points
newObjectPointIds: The ids of the resulting new object points, one id for each resulting new object point
randomGenerator: Random generator object to be used for creating random numbers, must be defined
newObjectPointObservations: Optional resulting number of observations (with valid camera poses) for each determined 3D object point, one number for each resulting 3D object point location
minimalObjectPointPriority: The minimal priority value of the resulting unknown object points
minimalObservations: The minimal number of observations (with valid camera poses) necessary for each new object point to determine its 3D location
useAllObservations: True, to use all observations (with valid camera pose) to determine the 3D locations; False, to apply a RANSAC mechanism taking a subset of all observations to determine the 3D locations
estimator: The robust estimator which is applied during optimization of each individual new 3D location, must be defined
ransacMaximalSqrError: The maximal squared projection error between a new 3D object point and the corresponding image points for the RANSAC mechanism
averageRobustError: The (average) robust error for a new 3D object point after optimization of the 3D location
maximalSqrError: The maximal error for a new valid 3D object point after optimization of the 3D location
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded and not aborted
Template Parameters
tVisibleInAllPoses: True, if the object points must be visible in all poses (frames) of the specified pose range; False, if the object points can be visible in any pose (frame) within the specified pose range

◆ determineUnknownObjectPoints() [2/4]

static bool Ocean::Tracking::Solver3::determineUnknownObjectPoints ( const Database & database,
const AnyCamera & camera,
const CameraMotion  cameraMotion,
const Indices32 & unknownObjectPointIds,
Vectors3 & newObjectPoints,
Indices32 & newObjectPointIds,
RandomGenerator & randomGenerator,
Indices32 * newObjectPointObservations = nullptr,
const unsigned int  minimalObservations = 2u,
const bool  useAllObservations = true,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  ransacMaximalSqrError = Scalar(3.5 * 3.5),
const Scalar  averageRobustError = Scalar(3.5 * 3.5),
const Scalar  maximalSqrError = Numeric::maxValue(),
Worker * worker = nullptr,
bool *  abort = nullptr 
)
static

Determines the positions of a set of (currently unknown) object points.

Only camera frames with a valid camera pose are used to determine the new object points.

Parameters
database: The database from which the object point, image point and pose information is extracted
camera: The camera profile defining the projection, must be valid
cameraMotion: The motion of the camera, can be CM_ROTATIONAL or CM_TRANSLATIONAL
unknownObjectPointIds: The ids of all (currently unknown) object points for which a 3D position will be determined, must all be valid
newObjectPoints: The resulting 3D locations of the new object points
newObjectPointIds: The ids of the resulting new object points, one id for each resulting new object point
randomGenerator: Random generator object to be used for creating random numbers, must be defined
newObjectPointObservations: Optional resulting number of observations for each determined 3D object point, one number for each resulting 3D object point location
minimalObservations: The minimal number of observations necessary for each new object point to determine its 3D location
useAllObservations: True, to use all observations (with valid camera pose) to determine the 3D locations; False, to apply a RANSAC mechanism taking a subset of all observations to determine the 3D locations
estimator: The robust estimator which is applied during optimization of each individual new 3D location, must be defined
ransacMaximalSqrError: The maximal squared projection error between a new 3D object point and the corresponding image points for the RANSAC mechanism
averageRobustError: The (average) robust error for a new 3D object point after optimization of the 3D location
maximalSqrError: The maximal error for a new valid 3D object point after optimization of the 3D location
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded and not aborted

◆ determineUnknownObjectPoints() [3/4]

bool Ocean::Tracking::Solver3::determineUnknownObjectPoints ( const Database & database,
const AnyCamera & camera,
const CameraMotion  cameraMotion,
Vectors3 & newObjectPoints,
Indices32 & newObjectPointIds,
RandomGenerator & randomGenerator,
Indices32 * newObjectPointObservations = nullptr,
const Scalar  minimalObjectPointPriority = Scalar(-1),
const unsigned int  minimalObservations = 10u,
const bool  useAllObservations = true,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  ransacMaximalSqrError = Scalar(3.5 * 3.5),
const Scalar  averageRobustError = Scalar(3.5 * 3.5),
const Scalar  maximalSqrError = Numeric::maxValue(),
Worker * worker = nullptr,
bool *  abort = nullptr 
)
inlinestatic

Determines the positions of all (currently unknown) object points.

Only camera frames with a valid camera pose are used to determine the locations of the new object points.
All unknown object points with at least the specified number of observations (in valid poses) will be handled.
However, the number of resulting object points with a valid 3D position may be smaller than the maximal possible number due to, e.g., the defined maximal error parameters.

Parameters
database: The database from which the object point, image point and pose information is extracted
camera: The camera profile defining the projection, must be valid
cameraMotion: The motion of the camera, can be CM_ROTATIONAL or CM_TRANSLATIONAL
newObjectPoints: The resulting 3D locations of the new object points
newObjectPointIds: The ids of the resulting new object points, one id for each resulting new object point
randomGenerator: Random generator object to be used for creating random numbers, must be defined
newObjectPointObservations: Optional resulting number of observations (with valid camera poses) for each determined 3D object point, one number for each resulting 3D object point location
minimalObjectPointPriority: The minimal priority value of the resulting unknown object points
minimalObservations: The minimal number of observations (with valid camera poses) necessary for each new object point to determine its 3D location, with range [2, infinity)
useAllObservations: True, to use all observations (with valid camera pose) to determine the 3D locations; False, to apply a RANSAC mechanism taking a subset of all observations to determine the 3D locations
estimator: The robust estimator which is applied during optimization of each individual new 3D location, must be defined
ransacMaximalSqrError: The maximal squared projection error between a new 3D object point and the corresponding image points for the RANSAC mechanism
averageRobustError: The (average) robust error for a new 3D object point after optimization of the 3D location
maximalSqrError: The maximal error for a new valid 3D object point after optimization of the 3D location
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded and not aborted

◆ determineUnknownObjectPoints() [4/4]

static bool Ocean::Tracking::Solver3::determineUnknownObjectPoints ( const Database & database,
const AnyCamera & camera,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
Vectors3 & newObjectPoints,
Indices32 & newObjectPointIds,
const unsigned int  minimalKeyFrames = 3u,
const unsigned int  maximalKeyFrames = 10u,
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Worker * worker = nullptr,
bool *  abort = nullptr 
)
static

Determines the positions of new object points from a database within a specified frame range.

Only camera frames with valid camera poses are used to determine the new object points.
This function extracts a subset of representative camera poses and triangulates image points from individual camera poses to determine new 3D object points.
Object points in the database with valid 3D positions are not investigated.

Parameters
database: The database defining the topology of 3D object points, corresponding 2D image points and corresponding camera poses
camera: The camera profile defining the projection, must be valid
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
newObjectPoints: The resulting positions of new object points
newObjectPointIds: The resulting ids of the new object points, each id corresponds with a position from 'newObjectPoints'
minimalKeyFrames: The minimal number of key frames which must be valid for a 3D object point, with range [minimalKeyFrames, upperFrame - lowerFrame + 1]
maximalKeyFrames: The maximal number of key frames which will be used to determine the 3D object point positions, with range [minimalKeyFrames, upperFrame - lowerFrame + 1]
maximalSqrError: The maximal squared error between a projected 3D object point and an image point so that the combination of object point and image point counts as valid
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded

◆ determineUnknownObjectPointsSubset() [1/2]

static void Ocean::Tracking::Solver3::determineUnknownObjectPointsSubset ( const AnyCamera * camera,
const Database * database,
const Database::PoseImagePointTopologyGroups * objectPointsData,
RandomGenerator * randomGenerator,
const Scalar  maximalSqrError,
bool *  abort,
Lock * lock,
Vectors3 * newObjectPoints,
Indices32 * newObjectPointIds,
unsigned int  firstObjectPoint,
unsigned int  numberObjectPoints 
)
staticprotected

Determines the positions of new object points from a database within a specified frame range.

Parameters
camera: The camera profile defining the projection, must be valid
database: The database from which the object point and image point correspondences are extracted
objectPointsData: The data holding groups of pose ids and image point ids for each individual object point
randomGenerator: Random generator object to be used for creating random numbers, must be defined
maximalSqrError: The maximal squared error between a projected 3D object point and an image point so that the combination of object point and image point counts as valid
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
lock: The lock object which must be defined if this function is invoked in parallel
newObjectPoints: The resulting positions of new object points
newObjectPointIds: The resulting ids of the new object points, each id corresponds with a position from 'newObjectPoints'
firstObjectPoint: The first object point to be handled, with range [0, numberObjectPoints)
numberObjectPoints: The number of object points to be handled, with range [0, objectPointsData->size()]

◆ determineUnknownObjectPointsSubset() [2/2]

static void Ocean::Tracking::Solver3::determineUnknownObjectPointsSubset ( const Database * database,
const AnyCamera * camera,
const CameraMotion  cameraMotion,
const Index32 * objectPointIds,
Vectors3 * newObjectPoints,
Indices32 * newObjectPointIds,
Indices32 * newObjectPointObservations,
RandomGenerator * randomGenerator,
const unsigned int  minimalObservations,
const bool  useAllObservations,
const Geometry::Estimator::EstimatorType  estimator,
const Scalar  ransacMaximalSqrError,
const Scalar  averageRobustError,
const Scalar  maximalSqrError,
Lock * look,
bool *  abort,
const unsigned int  firstObjectPoint,
const unsigned int  numberObjectPoints 
)
staticprotected

Determines the positions of a subset of (currently unknown) object points.

Parameters
database: The database from which the object point, image point and pose information is extracted
camera: The camera profile defining the projection, must be valid
cameraMotion: The motion of the camera, can be CM_ROTATIONAL or CM_TRANSLATIONAL
objectPointIds: The ids of all (currently unknown) object points for which a 3D position will be determined, must all be valid
newObjectPoints: The resulting 3D locations of the new object points
newObjectPointIds: The ids of the resulting new object points, one id for each resulting new object point
newObjectPointObservations: Optional resulting number of observations for each resulting new object point, one number for each resulting new object point
randomGenerator: Random generator object to be used for creating random numbers, must be defined
minimalObservations: The minimal number of observations necessary for each new object point to determine its 3D location
useAllObservations: True, to use all observations (with valid camera pose) to determine the 3D locations; False, to apply a RANSAC mechanism taking a subset of all observations to determine the 3D locations
estimator: The robust estimator which is applied during optimization of each individual new 3D location, must be defined
ransacMaximalSqrError: The maximal squared projection error between a new 3D object point and the corresponding image points for the RANSAC mechanism
averageRobustError: The (average) robust error for a new 3D object point after optimization of the 3D location
maximalSqrError: The maximal error for a new valid 3D object point after optimization of the 3D location
look: Lock object which must be defined if this function is executed in parallel on individual threads
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
firstObjectPoint: The first object point to be handled
numberObjectPoints: The number of object points to be handled

◆ determineValidPoses()

static size_t Ocean::Tracking::Solver3::determineValidPoses ( const AnyCamera & camera,
const Vectors3 & objectPoints,
const ImagePointGroups & imagePointGroups,
RandomGenerator & randomGenerator,
const CameraMotion  cameraMotion,
const unsigned int  firstValidPoseIndex,
const HomogenousMatrix4 & firstValidPose,
const unsigned int  secondValidPoseIndex,
const HomogenousMatrix4 & secondValidPose,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  maximalSqrError = Scalar(3.5 * 3.5),
Indices32 * validObjectPointIndices = nullptr,
HomogenousMatrices4 * poses = nullptr,
Indices32 * poseIds = nullptr,
Scalar * totalSqrError = nullptr 
)
static

Determines valid poses for a range of camera frames, where for each frame a group of image points corresponding to the given object points is provided.

Two individual camera poses must be known within the range of camera frames.

Parameters
camera: The camera profile defining the projection, must be valid
objectPoints: The object points with known locations, each object point has a corresponding image point in the groups of image points
imagePointGroups: The groups of image points, each set of image points corresponds to the object points, each group of image points represents one camera pose (the observed object points respectively)
randomGenerator: Random number generator
cameraMotion: The motion of the camera, use CM_UNKNOWN if the motion is unknown so that 6-DOF poses will be determined
firstValidPoseIndex: The index of the frame for which the first pose is known, with range [imagePointGroups.firstIndex(), imagePointGroups.lastIndex()]
firstValidPose: The first known pose, must be valid
secondValidPoseIndex: The index of the frame for which the second pose is known, with range [imagePointGroups.firstIndex(), imagePointGroups.lastIndex()] with firstValidPoseIndex != secondValidPoseIndex
secondValidPose: The second known pose, must be valid
minimalValidCorrespondenceRatio: The minimal ratio of valid correspondences (w.r.t. the given object points); if the number of valid correspondences is too low the pose is not valid, with range (0, 1]
maximalSqrError: The maximal squared pixel error between a projected object point and the corresponding image point so that the correspondence is valid
validObjectPointIndices: Optional resulting indices of the object points which are valid in all determined valid poses
poses: Optional resulting valid poses (corresponding to 'poseIds')
poseIds: Optional resulting ids of all valid poses, each id has a corresponding resulting pose (however the ids themselves have no order)
totalSqrError: Optional resulting sum of squared pixel errors for all valid poses
Returns
The number of valid poses

◆ filterStaticImagePoints()

static size_t Ocean::Tracking::Solver3::filterStaticImagePoints ( ImagePointGroups & imagePointGroups,
Indices32 & objectPointIds,
const Scalar  maximalStaticImagePointFilterRatio 
)
staticprotected

Determines a subset of perfectly static image points which may be image points located (visible) at static logos in the frames.

Parameters
imagePointGroups: Groups of image points where each group holds the projections of the same 3D object points
objectPointIds: The ids of the object points which have the corresponding projected image points in the groups of image points
maximalStaticImagePointFilterRatio: The maximal ratio of static image points in relation to the entire number of image points in each group, with range [0, 1]
Returns
The number of static image points that have been removed

◆ optimizeCamera()

static bool Ocean::Tracking::Solver3::optimizeCamera ( const Database & database,
const PinholeCamera & pinholeCamera,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const bool  findInitialFieldOfView,
const PinholeCamera::OptimizationStrategy  optimizationStrategy,
PinholeCamera & optimizedCamera,
Database & optimizedDatabase,
const unsigned int  minimalObservationsInKeyframes = 2u,
const unsigned int  minimalKeyframes = 3u,
const unsigned int  maximalKeyframes = 20u,
const Scalar  lowerFovX = Numeric::deg2rad(20),
const Scalar  upperFovX = Numeric::deg2rad(140),
Worker * worker = nullptr,
bool *  abort = nullptr,
Scalar * finalMeanSqrError = nullptr 
)
static

Optimizes the camera profile for a given database with stable camera poses determined by initial but stable object points.

This function selects a representative subset of the valid poses within the specified frame range and considers all object points visible in that subset of camera frames.
The resulting optimized database (with optimized object point locations) invalidates all object point locations of object points not visible in the selected subset of camera frames.
Therefore, this function should be invoked after the initial set of stable object points has been determined but before the database stores too many object points (which would otherwise get lost).
Further, this function supposes a translational (and optionally rotational) camera motion.

Parameters
database: The database providing a set of initial 3D object points visible in several valid camera poses
pinholeCamera: The pinhole camera profile which has been used to determine the camera poses and 3D object point locations in the given database
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
findInitialFieldOfView: True, to apply a determination of the initial field of view of the camera (should be done if the camera's field of view is not known)
optimizationStrategy: The optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters
optimizedCamera: The resulting optimized camera profile with adjusted field of view and distortion parameters
optimizedDatabase: The resulting database with optimized camera poses and 3D object point locations
minimalObservationsInKeyframes: The minimal number of observations an object point must have within the selected keyframes so that it will be used to optimize the camera profile and so that this object point will be optimized
minimalKeyframes: The minimal number of key frames (with valid poses) which are necessary for the determination/optimization, with range [2, maximalKeyframes]
maximalKeyframes: The maximal number of key frames (with valid poses) which will be used for the optimization, with range [minimalKeyframes, upperFrame - lowerFrame + 1]
lowerFovX: The lower threshold border for the final (ideal) horizontal field of view of the camera profile, with range (0, upperFovX]
upperFovX: The upper threshold border for the final (ideal) horizontal field of view of the camera profile, with range [lowerFovX, PI)
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
finalMeanSqrError: Optional resulting final mean squared pose error (averaged)
Returns
True, if the camera profile and the 3D object point locations and the camera poses in the given database could be optimized
See also
supposeRotationalCameraMotion().

◆ optimizeCameraWithVariableObjectPointsAndPoses() [1/3]

static bool Ocean::Tracking::Solver3::optimizeCameraWithVariableObjectPointsAndPoses ( const Database & database,
const PinholeCamera & pinholeCamera,
const PinholeCamera::OptimizationStrategy  optimizationStrategy,
const Indices32 & keyFrameIds,
const Indices32 & objectPointIds,
PinholeCamera & optimizedCamera,
Vectors3 * optimizedObjectPoints = nullptr,
Indices32 * optimizedObjectPointIds = nullptr,
HomogenousMatrices4 * optimizedKeyFramePoses = nullptr,
const unsigned int  minimalObservations = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const unsigned int  iterations = 50u,
Scalar * initialRobustError = nullptr,
Scalar * finalRobustError = nullptr 
)
static

Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently.

The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses must be provided, together with a set of object point ids to be used for the optimization; the object points visible in the key frames will be optimized as long as each object point is observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold valid initial 3D object point positions, the image point positions and valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!

Parameters
database: The database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted
pinholeCamera: The pinhole camera profile to be applied
optimizationStrategy: The optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters
keyFrameIds: The ids of all poses defining representative key frames for the optimization, at least two
objectPointIds: The ids of the object points which will be optimized (may be a subset only), at least one
optimizedCamera: The resulting optimized camera profile
optimizedObjectPoints: Optional resulting positions of the optimized object points, nullptr if not of interest
optimizedObjectPointIds: Optional resulting ids of the optimized object points, one id for each position in 'optimizedObjectPoints', nullptr if not of interest
optimizedKeyFramePoses: Optional resulting optimized camera poses, one for each key frame id
minimalObservations: The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity)
estimator: The robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames
iterations: The number of optimization iterations which will be applied, with range [1, infinity)
initialRobustError: Optional resulting initial average robust error before optimization
finalRobustError: Optional resulting final average robust error after optimization
Returns
True, if succeeded
See also
optimizeObjectPointsWithFixedPoses().

◆ optimizeCameraWithVariableObjectPointsAndPoses() [2/3]

static bool Ocean::Tracking::Solver3::optimizeCameraWithVariableObjectPointsAndPoses ( const Database & database,
const PinholeCamera & pinholeCamera,
const PinholeCamera::OptimizationStrategy  optimizationStrategy,
const Indices32 & keyFrameIds,
PinholeCamera & optimizedCamera,
Vectors3 * optimizedObjectPoints = nullptr,
Indices32 * optimizedObjectPointIds = nullptr,
HomogenousMatrices4 * optimizedKeyFramePoses = nullptr,
const unsigned int  minimalObservations = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const unsigned int  iterations = 50u,
Scalar * initialRobustError = nullptr,
Scalar * finalRobustError = nullptr 
)
static

Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently.

The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses must be provided and all object points visible in these key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold valid initial 3D object point positions, the image point positions and valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!

Parameters
databaseThe database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted
pinholeCameraThe pinhole camera profile to be applied
optimizationStrategyThe optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters
keyFrameIdsThe ids of all poses defining representative key frames for the optimization, at least two
optimizedCameraThe resulting optimized camera profile with adjusted field of view and distortion parameters
optimizedObjectPointsThe resulting positions of the optimized object points, at least one
optimizedObjectPointIdsThe ids of the optimized object points, one id for each position in 'optimizedObjectPoints'
optimizedKeyFramePosesOptional resulting optimized camera poses, one for each key frame id
minimalObservationsThe minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity)
estimatorThe robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames
iterationsThe number of optimization iterations which will be applied, with range [1, infinity)
initialRobustErrorOptional resulting initial average robust error before optimization
finalRobustErrorOptional resulting final average robust error after optimization
Returns
True, if succeeded
See also
optimizeObjectPointsWithFixedPoses().

◆ optimizeCameraWithVariableObjectPointsAndPoses() [3/3]

static bool Ocean::Tracking::Solver3::optimizeCameraWithVariableObjectPointsAndPoses ( const Database database,
const PinholeCamera pinholeCamera,
const PinholeCamera::OptimizationStrategy  optimizationStrategy,
PinholeCamera optimizedCamera,
Vectors3 optimizedObjectPoints = nullptr,
Indices32 optimizedObjectPointIds = nullptr,
HomogenousMatrices4 optimizedKeyFramePoses = nullptr,
Indices32 optimizedKeyFramePoseIds = nullptr,
const unsigned int  minimalKeyFrames = 3u,
const unsigned int  maximalKeyFrames = 20u,
const unsigned int  minimalObservations = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const unsigned int  iterations = 50u,
Scalar initialRobustError = nullptr,
Scalar finalRobustError = nullptr 
)
static

Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses and camera profile concurrently.

The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses are selected and all object points visible in these key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!

Parameters
databaseThe database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted
pinholeCameraThe pinhole camera profile to be applied
optimizationStrategyThe optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters
optimizedCameraThe resulting optimized camera profile with adjusted field of view and distortion parameters
optimizedObjectPointsThe resulting positions of the optimized object points
optimizedObjectPointIdsThe ids of the optimized object points, one id for each position in 'optimizedObjectPoints'
optimizedKeyFramePosesOptional resulting camera poses, one for each keyframe which has been used during optimization, nullptr if not of interest
optimizedKeyFramePoseIdsOptional resulting ids of the camera poses which have been used as key frame during optimization, one for each 'optimizedKeyFramePoses', nullptr if not of interest
minimalKeyFramesThe minimal number of key frames (with valid poses) which are necessary for the optimization, with range [2, maximalKeyFrames]
maximalKeyFramesThe maximal number of key frames (with valid poses) which will be used for the optimization, with range [minimalKeyFrames, infinity)
minimalObservationsThe minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity)
estimatorThe robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames
iterationsThe number of optimization iterations which will be applied, with range [1, infinity)
initialRobustErrorOptional resulting initial average robust error before optimization
finalRobustErrorOptional resulting final average robust error after optimization
Returns
True, if succeeded
See also
optimizeObjectPointsWithFixedPoses().

◆ optimizeInitialObjectPoints()

static bool Ocean::Tracking::Solver3::optimizeInitialObjectPoints ( const Database database,
const AnyCamera camera,
RandomGenerator randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  startFrame,
const unsigned int  upperFrame,
const Vectors3 initialObjectPoints,
const Indices32 initialObjectPointIds,
Vectors3 optimizedObjectPoints,
Indices32 optimizedObjectPointIds,
const unsigned int  minimalObjectPoints = 5u,
const unsigned int  minimalKeyFrames = 3u,
const unsigned int  maximalKeyFrames = 10u,
const Scalar  maximalSqrError = Scalar(3.5 *3.5),
Indices32 usedPoseIds = nullptr,
Scalar initialSqrError = nullptr,
Scalar finalSqrError = nullptr,
bool *  abort = nullptr 
)
static

Optimizes the positions of already known initial 3D object points when a given database holds neither valid 3D positions nor valid 6-DOF poses.

The optimization is done by a bundle adjustment between the camera poses of distinct keyframes and the given 3D object points; however, the optimized camera poses are not provided.
This function can optimize a subset of the given initial object points to allow more camera poses (camera frames) to be involved.

Parameters
databaseThe database defining the topology of 3D object points and corresponding 2D image points
cameraThe camera profile defining the projection, must be valid
randomGeneratorRandom generator object
lowerFrameThe index of the frame defining the lower border of camera poses which will be investigated
startFrameThe index of the frame from which the algorithm will start, in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame]
upperFrameThe index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
initialObjectPointsThe already known initial 3D positions of object points
initialObjectPointIdsThe ids of the already known object points, one id for each given initial object point
optimizedObjectPointsThe resulting optimized 3D positions of the given initial object points
optimizedObjectPointIdsThe resulting ids of the optimized object points, one id for each optimized object point
minimalObjectPointsThe minimal number of object points that will be optimized (the higher the number the fewer camera poses may be used as some object points may not be visible in all camera frames), with range [5, initialObjectPoints.size()); however, due to pose inaccuracies the algorithm finally may use fewer object points
minimalKeyFramesThe minimal number of keyframes that will be used, with range [2, maximalKeyFrames]
maximalKeyFramesThe maximal number of keyframes that will be used, with range [minimalKeyFrames, upperFrame - lowerFrame + 1]; however, due to pose inaccuracies the algorithm finally may use more keyframes
maximalSqrErrorThe maximal squared projection error for a 3D object point, points with larger error are excluded after a first optimization iteration
usedPoseIdsOptional resulting ids of all camera poses which have been used to optimize the object points
initialSqrErrorOptional resulting initial average squared error
finalSqrErrorOptional resulting final average squared error
abortOptional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded
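The 'maximalSqrError' criterion above can be illustrated with a minimal, self-contained sketch (not the Ocean implementation; the helper name and the simple 2D point type are hypothetical): after a first optimization pass, an object point is excluded when the squared pixel distance between its projection and the observed image point exceeds the threshold, which defaults to 3.5 * 3.5 square pixels.

```cpp
#include <array>
#include <cassert>

// Hypothetical sketch of the 'maximalSqrError' test: a 3D object point is kept
// only if the squared pixel distance between its projected position and the
// observed image point stays below the threshold (3.5 * 3.5 pixels^2 by default).
using Point2 = std::array<double, 2>;

bool withinMaximalSqrError(const Point2& projected, const Point2& observed,
                           const double maximalSqrError = 3.5 * 3.5)
{
	const double dx = projected[0] - observed[0];
	const double dy = projected[1] - observed[1];

	return dx * dx + dy * dy <= maximalSqrError; // squared projection error test
}
```

A point with a 2-pixel offset passes the default threshold, while a 4-pixel offset (16 > 12.25) fails it.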

◆ optimizeObjectPointsWithFixedPoses()

static bool Ocean::Tracking::Solver3::optimizeObjectPointsWithFixedPoses ( const Database database,
const PinholeCamera pinholeCamera,
const CameraMotion  cameraMotion,
const Indices32 objectPointIds,
Vectors3 optimizedObjectPoints,
Indices32 optimizedObjectPointIds,
unsigned int  minimalObservations = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  maximalRobustError = Scalar(3.5 *3.5),
Worker worker = nullptr,
bool *  abort = nullptr 
)
static

Optimizes a set of 3D object points (having a quite good accuracy already) without optimizing the camera poses concurrently.

The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.

Parameters
databaseThe database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted
pinholeCameraThe pinhole camera profile to be applied
cameraMotionThe motion of the camera, CM_ROTATIONAL if the camera poses do not have a translational part, CM_TRANSLATIONAL otherwise
objectPointIdsThe ids of the object points for which the positions will be optimized (all points must have already initial 3D positions)
optimizedObjectPointsThe resulting positions of the optimized object points
optimizedObjectPointIdsThe ids of the optimized object points, one id for each position in 'optimizedObjectPoints'
minimalObservationsThe minimal number of observations a 3D object point must have so that the position of the object point will be optimized
estimatorThe robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames
maximalRobustErrorThe maximal error between a projected object point and the individual image points; beware the error must be defined w.r.t. the selected estimator
workerOptional worker object to distribute the computation
abortOptional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded
See also
optimizeObjectPointsWithVariablePoses().

◆ optimizeObjectPointsWithFixedPosesSubset()

static void Ocean::Tracking::Solver3::optimizeObjectPointsWithFixedPosesSubset ( const Database database,
const PinholeCamera pinholeCamera,
const CameraMotion  cameraMotion,
const Index32 objectPointIds,
Vectors3 optimizedObjectPoints,
Indices32 optimizedObjectPointIds,
const unsigned int  minimalObservations,
const Geometry::Estimator::EstimatorType  estimator,
const Scalar  maximalRobustError,
Lock look,
bool *  abort,
const unsigned int  firstObjectPoint,
const unsigned int  numberObjectPoints 
)
staticprotected

Optimizes a subset of a set of 3D object points (which already have quite good accuracy) without optimizing the camera poses concurrently.

The database must hold the valid initial 3D object positions and must hold valid camera poses.

Parameters
databaseThe database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted
pinholeCameraThe pinhole camera profile to be applied
cameraMotionThe motion of the camera, CM_ROTATIONAL if the camera poses do not have a translational part, CM_TRANSLATIONAL otherwise
objectPointIdsThe ids of the object points for which the positions will be optimized (all points must have already initial 3D positions)
optimizedObjectPointsThe resulting positions of the optimized object points
optimizedObjectPointIdsThe ids of the optimized object points, one id for each position in 'optimizedObjectPoints'
minimalObservationsThe minimal number of observations a 3D object point must have so that the position of the object point will be optimized
estimatorThe robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames
maximalRobustErrorThe maximal error between a projected object point and the individual image points; beware the error must be defined w.r.t. the selected estimator
lookOptional lock object ensuring a safe distribution of the computation, must be defined if this function is executed in parallel
abortOptional abort statement allowing to stop the execution; True, if the execution has to stop
firstObjectPointFirst object point to be handled
numberObjectPointsThe number of object points to be handled

◆ optimizeObjectPointsWithVariablePoses() [1/4]

static bool Ocean::Tracking::Solver3::optimizeObjectPointsWithVariablePoses ( const Database database,
const PinholeCamera pinholeCamera,
const Index32  lowerPoseId,
const Index32  upperPoseId,
const Indices32 objectPointIds,
Indices32 usedKeyFrameIds,
Vectors3 optimizedObjectPoints,
const unsigned int  minimalObservations = 10u,
const unsigned int  minimalKeyFrames = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const unsigned int  iterations = 50u,
Scalar initialRobustError = nullptr,
Scalar finalRobustError = nullptr 
)
static

◆ optimizeObjectPointsWithVariablePoses() [2/4]

static bool Ocean::Tracking::Solver3::optimizeObjectPointsWithVariablePoses ( const Database database,
const PinholeCamera pinholeCamera,
const Indices32 keyFrameIds,
const Indices32 objectPointIds,
Vectors3 optimizedObjectPoints,
Indices32 optimizedObjectPointIds,
HomogenousMatrices4 optimizedKeyFramePoses = nullptr,
const unsigned int  minimalObservations = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const unsigned int  iterations = 50u,
Scalar initialRobustError = nullptr,
Scalar finalRobustError = nullptr 
)
static

Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently.

The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses must be provided, further a set of object point ids must be provided which should be used for optimization, the object points visible in the key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!

Parameters
databaseThe database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted
pinholeCameraThe pinhole camera profile to be applied
keyFrameIdsThe ids of all poses defining representative key frames for the optimization, at least two
objectPointIdsThe ids of the object points which will be optimized (may be a subset only), at least one
optimizedObjectPointsThe resulting positions of the optimized object points
optimizedObjectPointIdsThe ids of the optimized object points, one id for each position in 'optimizedObjectPoints'
optimizedKeyFramePosesOptional resulting optimized camera poses, one for each key frame id
minimalObservationsThe minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity)
estimatorThe robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames
iterationsThe number of optimization iterations which will be applied, with range [1, infinity)
initialRobustErrorOptional resulting initial average robust error before optimization
finalRobustErrorOptional resulting final average robust error after optimization
Returns
True, if succeeded
See also
optimizeObjectPointsWithFixedPoses().

◆ optimizeObjectPointsWithVariablePoses() [3/4]

static bool Ocean::Tracking::Solver3::optimizeObjectPointsWithVariablePoses ( const Database database,
const PinholeCamera pinholeCamera,
const Indices32 keyFrameIds,
Vectors3 optimizedObjectPoints,
Indices32 optimizedObjectPointIds,
HomogenousMatrices4 optimizedKeyFramePoses = nullptr,
const unsigned int  minimalObservations = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const unsigned int  iterations = 50u,
Scalar initialRobustError = nullptr,
Scalar finalRobustError = nullptr 
)
static

Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently.

The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses must be provided and all object points visible in these key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!

Parameters
databaseThe database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted
pinholeCameraThe pinhole camera profile to be applied
keyFrameIdsThe ids of all poses defining representative key frames for the optimization, at least two
optimizedObjectPointsThe resulting positions of the optimized object points, at least one
optimizedObjectPointIdsThe ids of the optimized object points, one id for each position in 'optimizedObjectPoints'
optimizedKeyFramePosesOptional resulting optimized camera poses, one for each key frame id
minimalObservationsThe minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity)
estimatorThe robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames
iterationsThe number of optimization iterations which will be applied, with range [1, infinity)
initialRobustErrorOptional resulting initial average robust error before optimization
finalRobustErrorOptional resulting final average robust error after optimization
Returns
True, if succeeded
See also
optimizeObjectPointsWithFixedPoses().

◆ optimizeObjectPointsWithVariablePoses() [4/4]

static bool Ocean::Tracking::Solver3::optimizeObjectPointsWithVariablePoses ( const Database database,
const PinholeCamera pinholeCamera,
Vectors3 optimizedObjectPoints,
Indices32 optimizedObjectPointIds,
HomogenousMatrices4 optimizedKeyFramePoses = nullptr,
Indices32 optimizedKeyFramePoseIds = nullptr,
const unsigned int  minimalKeyFrames = 3u,
const unsigned int  maximalKeyFrames = 20u,
const unsigned int  minimalObservations = 10u,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const unsigned int  iterations = 50u,
Scalar initialRobustError = nullptr,
Scalar finalRobustError = nullptr 
)
static

Optimizes 3D object points (having a quite good accuracy already) and optimizes the camera poses concurrently.

The optimization is based on a bundle adjustment for camera poses and object points minimizing the projection error between projected object points and image points located in the camera frames.
Representative key frames with valid camera poses are selected and all object points visible in these key frames will be optimized as long as the object points can be observed in more key frames than the defined threshold 'minimalObservations'.
However, the number of observations for each individual object point and the ids of the key frames in which the object points are visible can be arbitrary (as long as the defined thresholds hold).
The database must hold the valid initial 3D object positions, the image point positions and must hold valid camera poses.
Beware: Neither any pose nor any object point in the database will be updated, use the resulting optimized object point locations to update the database!

Parameters
databaseThe database from which the initial 3D object point positions and the individual camera poses (in which the object points are visible) are extracted
pinholeCameraThe pinhole camera profile to be applied
optimizedObjectPointsThe resulting positions of the optimized object points
optimizedObjectPointIdsThe ids of the optimized object points, one id for each position in 'optimizedObjectPoints'
optimizedKeyFramePosesOptional resulting camera poses, one for each keyframe which has been used during optimization, nullptr if not of interest
optimizedKeyFramePoseIdsOptional resulting ids of the camera poses which have been used as key frame during optimization, one for each 'optimizedKeyFramePoses', nullptr if not of interest
minimalKeyFramesThe minimal number of key frames (with valid poses) which are necessary for the optimization, with range [2, maximalKeyFrames]
maximalKeyFramesThe maximal number of key frames (with valid poses) which will be used for the optimization, with range [minimalKeyFrames, infinity)
minimalObservationsThe minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [minimalKeyFrames, infinity)
estimatorThe robust estimator which is applied to determine the projection error between 3D object point positions and the image points in individual camera frames
iterationsThe number of optimization iterations which will be applied, with range [1, infinity)
initialRobustErrorOptional resulting initial average robust error before optimization
finalRobustErrorOptional resulting final average robust error after optimization
Returns
True, if succeeded
See also
optimizeObjectPointsWithFixedPoses().

◆ removeObjectPointsNotInFrontOfCamera()

static size_t Ocean::Tracking::Solver3::removeObjectPointsNotInFrontOfCamera ( Database database,
Indices32 removedObjectPointIds = nullptr 
)
static

Removes all valid 3D object points (and their corresponding 2D image points) from the database which, in at least one frame, are not in front of the camera while having an existing 2D image point as observation.

Parameters
databaseThe database from which the 3D object points will be removed
removedObjectPointIdsOptional resulting ids of all object points which have been removed, nullptr if not of interest
Returns
The number of removed 3D object points
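The core of the 'in front of the camera' test can be sketched in a self-contained way (a hypothetical helper, not the Ocean implementation; the sketch assumes a pinhole convention in which points with positive depth along the viewing direction count as in front):

```cpp
#include <array>
#include <cassert>

// Minimal sketch: a world point is transformed into the camera coordinate
// system, p_c = R^T * (p_w - t), and only the sign of the resulting depth
// (third component) decides whether the point lies in front of the camera.
using Vector3 = std::array<double, 3>;
using Matrix3 = std::array<std::array<double, 3>, 3>;

bool isInFrontOfCamera(const Matrix3& invertedRotation, // R^T of the camera orientation
                       const Vector3& cameraPosition,   // camera position in world coordinates
                       const Vector3& objectPoint)
{
	const Vector3 relative{objectPoint[0] - cameraPosition[0],
	                       objectPoint[1] - cameraPosition[1],
	                       objectPoint[2] - cameraPosition[2]};

	const double depth = invertedRotation[2][0] * relative[0]
	                   + invertedRotation[2][1] * relative[1]
	                   + invertedRotation[2][2] * relative[2];

	return depth > 0.0; // positive depth: the point lies in front of the camera
}
```

For a camera at the origin with identity orientation, a point at depth +5 passes the test, while a point behind the camera at depth -5 would be removed together with its observations.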

◆ removeObjectPointsWithoutEnoughObservations()

static size_t Ocean::Tracking::Solver3::removeObjectPointsWithoutEnoughObservations ( Database database,
const size_t  minimalNumberObservations,
Indices32 removedObjectPointIds = nullptr 
)
static

Removes any 3D object point (and its corresponding 2D image points) from the database with fewer than a specified number of observations.

Parameters
databaseThe database from which the 3D object points will be removed
minimalNumberObservationsThe minimal number of observations a 3D object point must have to stay in the database, with range [1, infinity)
removedObjectPointIdsOptional resulting ids of all object points which have been removed, nullptr if not of interest
Returns
The number of removed 3D object points

◆ removeObjectPointsWithSmallBaseline()

static size_t Ocean::Tracking::Solver3::removeObjectPointsWithSmallBaseline ( Database database,
const Scalar  minimalBoxDiagonal,
Indices32 removedObjectPointIds = nullptr 
)
static

Removes any 3D object point (and its corresponding 2D image points) from the database if all corresponding camera poses are located within a bounding box that is too small.

The bounding box is determined based on the translational parts of the camera poses.

Parameters
databaseThe database from which the 3D object points will be removed
minimalBoxDiagonalThe minimal diagonal of the bounding box of all camera poses supporting an object point so that the point stays in the database
removedObjectPointIdsOptional resulting ids of all object points which have been removed, nullptr if not of interest
Returns
The number of removed 3D object points
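The baseline criterion can be illustrated with a short, self-contained sketch (a hypothetical helper, not the Ocean implementation): the axis-aligned bounding box of the translational parts of all supporting camera poses is computed, and the object point qualifies for removal when the box diagonal stays below 'minimalBoxDiagonal'.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

using Vector3 = std::array<double, 3>;

// Sketch of the small-baseline test: determine the axis-aligned bounding box of
// all camera positions observing an object point and compare its diagonal with
// the minimal required diagonal.
bool hasSmallBaseline(const std::vector<Vector3>& cameraTranslations,
                      const double minimalBoxDiagonal)
{
	Vector3 lower = cameraTranslations.front();
	Vector3 upper = cameraTranslations.front();

	for (const Vector3& translation : cameraTranslations)
	{
		for (size_t i = 0; i < 3; ++i)
		{
			lower[i] = std::min(lower[i], translation[i]);
			upper[i] = std::max(upper[i], translation[i]);
		}
	}

	const double dx = upper[0] - lower[0];
	const double dy = upper[1] - lower[1];
	const double dz = upper[2] - lower[2];

	// diagonal of the bounding box of all supporting camera positions
	return std::sqrt(dx * dx + dy * dy + dz * dz) < minimalBoxDiagonal;
}
```

Two cameras only 0.1 units apart fail a 1.0 diagonal requirement (the point would be removed), while cameras 5 units apart provide enough baseline.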

◆ removeSparseObjectPoints()

static bool Ocean::Tracking::Solver3::removeSparseObjectPoints ( Database database,
const Scalar  minimalBoundingBoxDiagonal = Scalar(1e+7),
const Scalar  medianFactor = Scalar(100),
const Scalar  maximalSparseObjectPointRatio = Scalar(0.05) 
)
static

Removes very far object points from the database if the number of such points does not exceed a specified ratio (compared to the remaining object points).

Optimization functions for camera poses or bundle adjustment functions may fail if the database holds a large set of dense object points and a small number of very sparse object points.
Thus, this function can be used to improve the 'quality' of a database.

Parameters
databaseThe database from which the very sparse object points will be removed
minimalBoundingBoxDiagonalThe minimal size of the diagonal of the bounding box of the object points so that the database can be modified, with range (0, infinity)
medianFactorThe factor which is multiplied with the median distance between the median object point and the object points of the database to identify very sparse (very far) object points
maximalSparseObjectPointRatioThe maximal ratio between the very sparse object points and the entire number of object points so that the database will be modified
Returns
True, if at least one very sparse object point has been removed from the database
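The median-based criterion can be sketched as follows (a hypothetical helper, not the Ocean implementation; for brevity the sketch works on precomputed distances to the median object point instead of the 3D points themselves): points whose distance exceeds 'medianFactor' times the median distance are treated as very sparse, and removal is only suggested when their ratio stays within the given bound.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the sparse-point criterion: identify points far away from the
// median point and clear the candidate list if they are not a small minority,
// mirroring the 'maximalSparseObjectPointRatio' safeguard described above.
std::vector<size_t> findVerySparsePoints(const std::vector<double>& distancesToMedianPoint,
                                         const double medianFactor,
                                         const double maximalSparseRatio)
{
	std::vector<double> sorted(distancesToMedianPoint);
	std::sort(sorted.begin(), sorted.end());
	const double medianDistance = sorted[sorted.size() / 2];

	std::vector<size_t> sparseIndices;
	for (size_t i = 0; i < distancesToMedianPoint.size(); ++i)
	{
		if (distancesToMedianPoint[i] > medianDistance * medianFactor)
		{
			sparseIndices.push_back(i); // candidate for removal
		}
	}

	// modify the database only if the sparse points are a small minority
	if (double(sparseIndices.size()) > maximalSparseRatio * double(distancesToMedianPoint.size()))
	{
		sparseIndices.clear();
	}

	return sparseIndices;
}
```

With nine points near the median and one point 500 times farther away, only the outlier is flagged; if too many points are flagged relative to the ratio bound, nothing is removed.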

◆ supposeRotationalCameraMotion()

static bool Ocean::Tracking::Solver3::supposeRotationalCameraMotion ( const Database database,
const PinholeCamera pinholeCamera,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const bool  findInitialFieldOfView,
const PinholeCamera::OptimizationStrategy  optimizationStrategy,
PinholeCamera optimizedCamera,
Database optimizedDatabase,
const unsigned int  minimalObservations = 0u,
const unsigned int  minimalKeyframes = 3u,
const unsigned int  maximalKeyframes = 20u,
const Scalar  lowerFovX = Numeric::deg2rad(20),
const Scalar  upperFovX = Numeric::deg2rad(140),
const Scalar  maximalSqrError = (1.5 *1.5),
Worker worker = nullptr,
bool *  abort = nullptr,
Scalar finalMeanSqrError = nullptr 
)
static

Supposes pure rotational camera motion for a given database with stable camera poses determined by initial but stable object points.

If the camera profile is not well approximated during the determination of the camera poses and the initial 3D object points, the camera motion may contain a translational part although the real motion is purely rotational.
Especially if the camera comes with significant distortion, the motion determination may go wrong.
Therefore, this function supposes sole rotational camera motion, determines the new 3D object point locations, selects a set of suitable keyframes best representing the entire set of valid camera poses, and optimizes the camera's field of view and distortion parameters.
If the projection error between 3D object points and 2D image points falls below a defined threshold (which should be strict), then the camera motion can be expected to contain only rotational parts.
Beware: Valid object points (with valid location) not visible within the specified frame range will not be investigated.

Parameters
databaseThe database providing a set of initial 3D object points visible in several valid camera poses
pinholeCameraThe pinhole camera profile which has been used to determine the camera poses and 3D object point locations in the given database
lowerFrameThe index of the frame defining the lower border of camera poses which will be investigated
upperFrameThe index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
findInitialFieldOfViewTrue, to apply a determination of the initial field of view of the camera (should be done if the camera's field of view is not known)
optimizationStrategyThe optimization strategy for the camera parameters which will be applied, OS_INVALID to avoid any optimization of the camera parameters
optimizedCameraThe resulting optimized camera profile with adjusted field of view and distortion parameters
optimizedDatabaseThe resulting database with optimized camera poses and 3D object point locations
minimalObservationsThe minimal number of observations an object point must have so that it will be investigated to measure whether the camera motion is pure rotational
minimalKeyframesThe minimal number of key frames (with valid poses) which are necessary for the determination/optimization, with range [2, maximalKeyframes]
maximalKeyframesThe maximal number of key frames (with valid poses) which will be used for the optimization, with range [minimalKeyFrames, upperFrame - lowerFrame + 1]
lowerFovXThe lower threshold border for the final (ideal) horizontal field of view of the camera profile, with range (0, upperFovX],
upperFovXThe upper threshold border for the final (ideal) horizontal field of view of the camera profile, with range [lowerFoVX, PI)
maximalSqrErrorThe maximal average projection error between the 3D object points and the 2D image points so that a correspondence counts as valid
workerOptional worker object to distribute the computation
abortOptional abort statement allowing to stop the execution; True, if the execution has to stop
finalMeanSqrErrorOptional resulting final mean squared pose error (averaged)
Returns
True, if the camera motion is pure rotational
See also
optimizeCamera().
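The core idea behind the rotational-motion test can be illustrated with a loose, self-contained sketch (hypothetical helpers, not the Ocean implementation, and a simplified pinhole model looking along the positive z-axis): a point is projected once with the full pose and once with the translational part dropped, and if the squared pixel distance between both projections stays below 'maximalSqrError', the translation is negligible.

```cpp
#include <array>
#include <cassert>

using Vector3 = std::array<double, 3>;
using Point2 = std::array<double, 2>;

// simple pinhole projection with focal length 'focal' and principal point (cx, cy);
// the camera is assumed to look along the positive z-axis in this sketch
Point2 project(const Vector3& pointInCamera, const double focal, const double cx, const double cy)
{
	return Point2{focal * pointInCamera[0] / pointInCamera[2] + cx,
	              focal * pointInCamera[1] / pointInCamera[2] + cy};
}

// compare the projection with and without the translational part of the pose
bool translationNegligible(const Vector3& pointInCamera, const Vector3& translation,
                           const double focal, const double cx, const double cy,
                           const double maximalSqrError)
{
	const Point2 full = project({pointInCamera[0] - translation[0],
	                             pointInCamera[1] - translation[1],
	                             pointInCamera[2] - translation[2]}, focal, cx, cy);
	const Point2 rotationOnly = project(pointInCamera, focal, cx, cy);

	const double dx = full[0] - rotationOnly[0];
	const double dy = full[1] - rotationOnly[1];

	return dx * dx + dy * dy <= maximalSqrError; // projection error against threshold
}
```

A millimeter-scale translation at 10 units depth stays well below the default 1.5 * 1.5 threshold, while a one-unit translation clearly exceeds it, so the motion would not be accepted as purely rotational.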

◆ trackObjectPoints() [1/2]

static bool Ocean::Tracking::Solver3::trackObjectPoints ( const Database database,
const Indices32 objectPointIds,
const unsigned int  lowerFrame,
const unsigned int  startFrame,
const unsigned int  upperFrame,
const unsigned int  minimalTrackedObjectPoints,
const unsigned int  minimalTrackedFrames,
const unsigned int  maximalTrackedObjectPoints,
Indices32 trackedObjectPointIds,
ImagePointGroups trackedImagePointGroups,
Indices32 trackedValidIndices = nullptr,
bool *  abort = nullptr 
)
static

This function tracks image points (defined by their object points) from one frame to the sibling frames until the number of tracked points falls below a specified number or until a minimal number of sibling frames has been processed.

Thus, this function supports two individual termination conditions: either the specification of a minimal number of tracked points or the specification of a minimal number of used sibling frames (with at least one tracked point).
If the number of tracked object points exceeds 'maximalTrackedObjectPoints', the most 'interesting' object points (those widely spread over the start frame) are selected and the remaining ones are removed.
The tracking is applied forward and backward starting at a specific frame.

Parameters
database: The database defining the topology of 3D object points and corresponding 2D image points; object point positions and camera poses may be invalid as this information is not used
objectPointIds: The ids of the initial object points defining the image points which will be tracked, each object point should have a corresponding image point
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
startFrame: The index of the frame from which the algorithm will start; in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame]
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalTrackedObjectPoints: One of two termination conditions: the minimal number of tracked points, with range [1, objectPointIds.size()]; must be 0 if minimalTrackedFrames is not 0
minimalTrackedFrames: One of two termination conditions: the minimal number of tracked frames, with range [1, upperFrame - lowerFrame + 1u]; must be 0 if minimalTrackedObjectPoints is not 0
maximalTrackedObjectPoints: The maximal number of tracked points, with range [minimalTrackedObjectPoints, objectPointIds.size()]
trackedObjectPointIds: The resulting ids of the tracked object points, one id for each tracked object point
trackedImagePointGroups: The resulting groups of tracked image points, one group for each camera frame, one image point for each object point
trackedValidIndices: Optional resulting indices of the given object point ids that could be tracked
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded
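The forward/backward traversal described above can be sketched as a frame visiting order. This is a minimal standalone illustration; the exact order inside Solver3 is an assumption (here: a forward pass to the upper border, followed by a backward pass to the lower border):

```cpp
#include <cassert>
#include <vector>

// Sketch: visiting order for a tracker that starts at 'startFrame', runs
// forward to 'upperFrame' and afterwards backward to 'lowerFrame'.
std::vector<unsigned int> frameVisitingOrder(const unsigned int lowerFrame, const unsigned int startFrame, const unsigned int upperFrame)
{
	assert(lowerFrame <= startFrame && startFrame <= upperFrame);

	std::vector<unsigned int> order;
	order.reserve(upperFrame - lowerFrame + 1u);

	for (unsigned int frame = startFrame; frame <= upperFrame; ++frame)
	{
		order.push_back(frame); // forward pass, including the start frame
	}

	for (unsigned int frame = startFrame; frame > lowerFrame; --frame)
	{
		order.push_back(frame - 1u); // backward pass towards the lower border
	}

	return order;
}
```

Each visited frame re-uses the points tracked in its predecessor, which is why a termination condition (too few remaining points, or enough processed frames) can stop either pass early.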

◆ trackObjectPoints() [2/2]

static bool Ocean::Tracking::Solver3::trackObjectPoints ( const Database &  database,
const Indices32 &  priorityObjectPointIds,
const Indices32 &  remainingObjectPointIds,
const unsigned int  lowerFrame,
const unsigned int  startFrame,
const unsigned int  upperFrame,
const unsigned int  minimalTrackedPriorityObjectPoints,
const Scalar  minimalRemainingFramesRatio,
const unsigned int  maximalTrackedPriorityObjectPoints,
const unsigned int  maximalTrackedRemainingObjectPoints,
Indices32 &  trackedObjectPointIds,
ImagePointGroups &  trackedImagePointGroups,
Indices32 *  trackedValidPriorityIndices = nullptr,
Indices32 *  trackedValidRemainingIndices = nullptr,
bool *  abort = nullptr 
)
static

This function tracks two individual (disjoint) groups of image points (defined by their object points) from one frame to the sibling frames until the number of tracked points falls below a specified number.

The tracking is applied forward and backward starting at a specific frame.
First, the priority points are tracked as long as possible, which defines the tracking range for the remaining points.
Afterwards, the remaining points are tracked as long as possible, but not outside the frame range resulting from the tracking of the priority points.
Last, the results of both groups are joined into one large set of tracked object points: first the priority object points, followed by the remaining object points.

Parameters
database: The database defining the topology of 3D object points and corresponding 2D image points; object point positions and camera poses may be invalid as this information is not used
priorityObjectPointIds: The ids of the initial priority object points defining the first group of image points which will be tracked, each object point should have a corresponding image point
remainingObjectPointIds: The ids of the initial remaining object points defining the second group of image points which will be tracked, each object point should have a corresponding image point
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
startFrame: The index of the frame from which the algorithm will start; in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame]
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalTrackedPriorityObjectPoints: The minimal number of tracked priority points, with range [1, priorityObjectPointIds.size())
minimalRemainingFramesRatio: The minimal number of frames in which remaining points must be tracked (must be visible), defined as a ratio of the number of frames in which the priority points are visible, with range (0, 1]
maximalTrackedPriorityObjectPoints: The maximal number of tracked priority points, with range [minimalTrackedPriorityObjectPoints, priorityObjectPointIds.size()]
maximalTrackedRemainingObjectPoints: The maximal number of tracked remaining points, with range [minimalTrackedRemainingObjectPoints, remainingObjectPointIds.size()]
trackedObjectPointIds: The resulting ids of the tracked object points, one id for each tracked object point
trackedImagePointGroups: The resulting groups of tracked image points, one group for each camera frame, one image point for each object point
trackedValidPriorityIndices: Optional resulting indices of the given priority object point ids that could be tracked
trackedValidRemainingIndices: Optional resulting indices of the given remaining object point ids that could be tracked
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if succeeded

◆ trackObjectPointsToNeighborFrames()

static Indices32 Ocean::Tracking::Solver3::trackObjectPointsToNeighborFrames ( const Database &  database,
const Indices32 &  objectPointIds,
const unsigned int  lowerFrame,
const unsigned int  startFrame,
const unsigned int  upperFrame 
)
static

This function tracks a group of object points from one frame to both (if available) neighbor frames and counts the minimal number of tracked points.

Use this function to measure the scene complexity at a specific frame.
The fewer object points can be tracked, the more complex the scene.

Parameters
database: The database defining the topology of 3D object points and corresponding 2D image points; object point positions and camera poses may be invalid as this information is not used
objectPointIds: The ids of the object points which will be tracked, each object point should have a corresponding image point
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
startFrame: The index of the frame from which the algorithm will start; in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame]
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
Returns
The resulting tracked object points (the object points visible in the range of three image frames)
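The return value (object points visible in all three frames around the start frame) can be illustrated with a standalone intersection of id sets; the id vectors below are hypothetical stand-ins for the database lookups the real function performs:

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Sketch: object point ids observed in the previous, current and next frame;
// the result contains only the ids visible in all three frames.
std::vector<unsigned int> visibleInAllThreeFrames(std::vector<unsigned int> previous, std::vector<unsigned int> current, std::vector<unsigned int> next)
{
	std::sort(previous.begin(), previous.end());
	std::sort(current.begin(), current.end());
	std::sort(next.begin(), next.end());

	std::vector<unsigned int> intermediate;
	std::set_intersection(previous.begin(), previous.end(), current.begin(), current.end(), std::back_inserter(intermediate));

	std::vector<unsigned int> result;
	std::set_intersection(intermediate.begin(), intermediate.end(), next.begin(), next.end(), std::back_inserter(result));

	return result; // the smaller this set, the more complex the scene
}
```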

◆ translateCameraMotion()

static std::string Ocean::Tracking::Solver3::translateCameraMotion ( const CameraMotion  cameraMotion)
static

Translates a camera motion value into a readable string describing the motion in detail.

Parameters
cameraMotion: The camera motion for which a readable string is requested
Returns
The readable string of the camera motion
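The translation can be sketched by decoding the CameraMotion bitmask defined above. The bit layout follows the documented enum, while the returned strings are assumptions (the actual strings produced by translateCameraMotion() are not documented here):

```cpp
#include <string>

// Bit layout as documented for Solver3::CameraMotion (the refinement bits for
// tiny/moderate/significant motion are omitted for brevity).
enum CameraMotion : unsigned int
{
	CM_INVALID = 0u,
	CM_STATIC = 1u << 0u,
	CM_ROTATIONAL = 1u << 1u,
	CM_TRANSLATIONAL = 1u << 2u,
	CM_UNKNOWN = CM_ROTATIONAL | CM_TRANSLATIONAL | (1u << 9u)
};

// Sketch: decode the bitmask into a readable string (string values assumed).
std::string translateCameraMotion(const unsigned int cameraMotion)
{
	if (cameraMotion == CM_INVALID)
		return "invalid";

	if (cameraMotion == CM_UNKNOWN)
		return "unknown";

	if (cameraMotion == CM_STATIC)
		return "static";

	std::string result;

	if (cameraMotion & CM_ROTATIONAL)
		result = "rotational";

	if (cameraMotion & CM_TRANSLATIONAL)
	{
		if (!result.empty())
			result += " and ";

		result += "translational";
	}

	return result;
}
```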

◆ updateDatabaseToRotationalMotion()

static bool Ocean::Tracking::Solver3::updateDatabaseToRotationalMotion ( Database &  database,
const PinholeCamera &  pinholeCamera,
RandomGenerator &  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const unsigned int  minimalObservations,
IndexSet32 *  relocatedObjectPointIds 
)
staticprotected

Determines the semi-precise location of 3D object points and the camera poses for a sole rotational camera motion.

Beforehand, the locations and the camera poses may not match a pure rotational camera motion.
Only object points with an already valid location will receive a precise location matching to the rotational motion.
Only valid camera poses will receive a precise pose matching to the rotational motion.

Parameters
database: The database providing already known locations of 3D object points (which may not match a sole rotational camera motion) and already known valid camera poses (which may also not match a sole rotational camera motion)
pinholeCamera: The pinhole camera profile defining the projection
randomGenerator: Random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalObservations: The minimal number of observations a 3D object point must have so that the position of the object point will be optimized, with range [0, infinity)
relocatedObjectPointIds: Optional resulting ids of all object points which have been relocated
Returns
True, if succeeded

◆ updateOrientationsSubset()

static void Ocean::Tracking::Solver3::updateOrientationsSubset ( Database *  database,
const AnyCamera *  camera,
RandomGenerator *  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const unsigned int  minimalCorrespondences,
const Geometry::Estimator::EstimatorType  estimator,
const Scalar  minimalValidCorrespondenceRatio,
const Scalar  ransacMaximalSqrError,
const Scalar  maximalRobustError,
Scalar *  totalError,
size_t *  validPoses,
Lock *  lock,
bool *  abort,
const unsigned int  numberThreads,
const unsigned int  threadIndex,
const unsigned int  numberThreadsOne 
)
staticprotected

Updates a subset of the camera orientations (as the camera has rotational motion only) depending on valid 2D/3D points correspondences within a range of camera frames.

The camera orientations (their poses respectively) will be set to invalid if no valid orientation can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).

Parameters
database: The database from which the point correspondences are extracted and which receives the determined camera orientations (the 6-DOF poses with zero translation)
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera orientation, with range [5, infinity)
estimator: The robust estimator which is applied for the non-linear orientation optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
ransacMaximalSqrError: The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity)
maximalRobustError: The maximal average robust pixel error between image point and projected object points so that an orientation counts as valid, with range (0, infinity)
totalError: The resulting accumulated total error for all poses (orientations)
validPoses: The resulting number of valid poses (orientations)
lock: The lock object which must be defined if this function is executed in parallel on several individual threads
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
numberThreads: The overall number of threads which are used in parallel
threadIndex: The index of the thread executing this function, with range [0, numberThreads)
numberThreadsOne: Must be 1

◆ updatePoses() [1/2]

static bool Ocean::Tracking::Solver3::updatePoses ( Database &  database,
const AnyCamera &  camera,
const CameraMotion  cameraMotion,
RandomGenerator &  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  startFrame,
const unsigned int  upperFrame,
const unsigned int  minimalCorrespondences,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  ransacMaximalSqrError = Scalar(3.5 *3.5),
const Scalar  maximalRobustError = Scalar(3.5 *3.5),
Scalar *  finalAverageError = nullptr,
size_t *  validPoses = nullptr,
bool *  abort = nullptr 
)
static

Updates the camera poses of the database depending on valid 2D/3D points correspondences within a range of camera frames.

The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
Pose determination starts at a specified frame and moves to higher and lower frame indices afterwards.
Poses from successive frames are applied as initial guess for a new frame.
The resulting poses will have either a sole rotational motion or a rotational and translational motion, depending on the defined camera motion.

Parameters
database: The database from which the point correspondences are extracted and which receives the determined camera poses
camera: The camera profile defining the projection, must be valid
cameraMotion: The motion of the camera, use CM_UNKNOWN if the motion is unknown so that 6-DOF poses will be determined
randomGenerator: Random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
startFrame: The index of the frame from which the algorithm will start; in this frame the specified initial object points must all be visible, with range [lowerFrame, upperFrame]
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity)
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
ransacMaximalSqrError: The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity)
maximalRobustError: The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity)
finalAverageError: Optional resulting average final error for all valid poses, the error depends on the selected robust estimator
validPoses: Optional resulting number of valid poses
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if all poses have been updated (the poses may be invalid)

◆ updatePoses() [2/2]

static bool Ocean::Tracking::Solver3::updatePoses ( Database &  database,
const AnyCamera &  camera,
const CameraMotion  cameraMotion,
RandomGenerator &  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const unsigned int  minimalCorrespondences,
const Geometry::Estimator::EstimatorType  estimator = Geometry::Estimator::ET_SQUARE,
const Scalar  minimalValidCorrespondenceRatio = Scalar(1),
const Scalar  ransacMaximalSqrError = Scalar(3.5 *3.5),
const Scalar  maximalRobustError = Scalar(3.5 *3.5),
Scalar *  finalAverageError = nullptr,
size_t *  validPoses = nullptr,
Worker *  worker = nullptr,
bool *  abort = nullptr 
)
static

Updates the camera poses of the database depending on valid 2D/3D points correspondences within a range of camera frames.

The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).
If a worker is provided, every pose is determined independently.
The resulting poses will have either a sole rotational motion or a rotational and translational motion, depending on the defined camera motion.

Parameters
database: The database from which the point correspondences are extracted and which receives the determined camera poses
camera: The camera profile defining the projection, must be valid
cameraMotion: The motion of the camera, use CM_UNKNOWN if the motion is unknown so that 6-DOF poses will be determined
randomGenerator: Random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity)
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
ransacMaximalSqrError: The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity)
maximalRobustError: The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity)
finalAverageError: Optional resulting average final error for all valid poses, the error depends on the selected robust estimator
validPoses: Optional resulting number of valid poses
worker: Optional worker object to distribute the computation
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
Returns
True, if all poses have been updated (the poses may be invalid)

◆ updatePosesSubset()

static void Ocean::Tracking::Solver3::updatePosesSubset ( Database *  database,
const AnyCamera *  camera,
RandomGenerator *  randomGenerator,
const unsigned int  lowerFrame,
const unsigned int  upperFrame,
const unsigned int  minimalCorrespondences,
const Geometry::Estimator::EstimatorType  estimator,
const Scalar  minimalValidCorrespondenceRatio,
const Scalar  ransacMaximalSqrError,
const Scalar  maximalRobustError,
Scalar *  totalError,
size_t *  validPoses,
Lock *  lock,
bool *  abort,
const unsigned int  numberThreads,
const unsigned int  threadIndex,
const unsigned int  numberThreadsOne 
)
staticprotected

Updates a subset of the camera poses depending on valid 2D/3D points correspondences within a range of camera frames.

The camera poses will be set to invalid if no valid pose can be determined (e.g., if not enough valid point correspondences are known for a specific camera frame).

Parameters
database: The database from which the point correspondences are extracted and which receives the determined camera poses
camera: The camera profile defining the projection, must be valid
randomGenerator: Random generator object
lowerFrame: The index of the frame defining the lower border of camera poses which will be investigated
upperFrame: The index of the frame defining the upper border of camera poses which will be investigated, with range [lowerFrame, infinity)
minimalCorrespondences: The minimal number of 2D/3D point correspondences which are necessary to determine a valid camera pose, with range [5, infinity)
estimator: The robust estimator which is applied for the non-linear pose optimization
minimalValidCorrespondenceRatio: The ratio of the minimal number of valid correspondences (the valid correspondences will be determined from a RANSAC iteration), with range [0, 1]
ransacMaximalSqrError: The maximal squared pixel error between image point and projected object points for RANSAC iterations, with range (0, infinity)
maximalRobustError: The maximal average robust pixel error between image point and projected object points so that a pose counts as valid, with range (0, infinity)
totalError: The resulting accumulated total error for all poses
validPoses: The resulting number of valid poses
lock: The lock object which must be defined if this function is executed in parallel on several individual threads
abort: Optional abort statement allowing to stop the execution; True, if the execution has to stop
numberThreads: The overall number of threads which are used in parallel
threadIndex: The index of the thread executing this function, with range [0, numberThreads)
numberThreadsOne: Must be 1
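The numberThreads/threadIndex pair implies a static partition of the pose range across threads. Below is a standalone sketch of one plausible partitioning scheme (an assumption; the subdivision Ocean's Worker actually applies is not documented here):

```cpp
#include <cassert>
#include <utility>

// Sketch: split the inclusive frame range [lowerFrame, upperFrame] across
// 'numberThreads' threads; the first 'remainder' threads get one extra frame.
// Returns the inclusive sub-range [first, last] handled by 'threadIndex'.
std::pair<unsigned int, unsigned int> frameSubset(const unsigned int lowerFrame, const unsigned int upperFrame, const unsigned int numberThreads, const unsigned int threadIndex)
{
	assert(lowerFrame <= upperFrame && numberThreads != 0u && threadIndex < numberThreads);

	const unsigned int frames = upperFrame - lowerFrame + 1u;
	const unsigned int base = frames / numberThreads;
	const unsigned int remainder = frames % numberThreads;

	const unsigned int extra = threadIndex < remainder ? threadIndex : remainder;
	const unsigned int first = lowerFrame + threadIndex * base + extra;
	const unsigned int size = base + (threadIndex < remainder ? 1u : 0u);

	return std::make_pair(first, first + size - 1u);
}
```

Each thread would then update only the poses of its sub-range, accumulating its error contribution into totalError under the lock.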

The documentation for this class was generated from the following file: