Camera Geometry

The experts below are selected from a list of 321 experts worldwide, ranked by the ideXlab platform.

Kwanyee K Wong – 1st expert on this subject based on the ideXlab platform

  • 1D Camera Geometry and Its Application to the Self-Calibration of Circular Motion Sequences
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008
    Co-Authors: Kwanyee K Wong, Guoqiang Zhang, C Liang, Hui Zhang

    Abstract:

    This paper proposes a novel method for robustly recovering the camera geometry of an uncalibrated image sequence taken under circular motion. Under circular motion, all the camera centers lie on a circle and the mapping from the plane containing this circle to the horizon line observed in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points of the motion plane and the rotation angle between the two views can be derived directly from such a homography. This way of recovering the imaged circular points and rotation angles is intrinsically a multiple view approach, as all the sequence geometry embedded in the epipoles is exploited in the estimation of the homography for each view pair. This results in a more robust method compared to those computing the rotation angles using adjacent views only. The proposed method has been applied to self-calibrate turntable sequences using either point features or silhouettes, and highly accurate results have been achieved.

  • 1D Camera Geometry and Its Application to Circular Motion Estimation
    British Machine Vision Conference, 2006
    Co-Authors: Guoqiang Zhang, Hui Zhang, Kwanyee K Wong

    Abstract:

    This paper describes a new and robust method for estimating circular motion geometry from an uncalibrated image sequence. Under circular motion, all the camera centers lie on a circle, and the mapping of the plane containing this circle to the horizon line in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points and the rotation angle between the two views can be derived directly from the eigenvectors and eigenvalues of such a homography, respectively. The proposed 1D geometry can be nicely applied to circular motion estimation using either point correspondences or silhouettes. The method introduced here is intrinsically a multiple view approach, as all the sequence geometry embedded in the epipoles is exploited in the computation of the homography for a view pair. This results in a robust method which gives accurate estimated rotation angles and imaged circular points. Experimental results are presented to demonstrate the simplicity and applicability of the new method.
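
    Note: the eigen-decomposition step described in the two entries above can be illustrated numerically. The sketch below is not taken from the papers; it simply assumes, as the abstracts state, that the 2×2 inter-view homography is conjugate to a planar rotation, and the matrix M and the angle used here are invented for illustration.

        # Minimal numerical sketch: recover the rotation angle and the imaged circular
        # points from a 2x2 homography that is conjugate to a planar rotation.
        # Assumption (from the abstracts above): H ~ M R(theta) M^{-1}, up to scale.
        import numpy as np

        theta_true = np.deg2rad(20.0)                      # ground-truth rotation between the two views
        R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                      [np.sin(theta_true),  np.cos(theta_true)]])
        M = np.array([[1.3, 0.4],                          # arbitrary invertible 2x2 matrix (illustrative)
                      [0.2, 0.9]])
        H = 3.7 * (M @ R @ np.linalg.inv(M))               # synthetic inter-view homography, arbitrary scale

        evals, evecs = np.linalg.eig(H)                    # complex-conjugate eigenvalues and eigenvectors
        theta = 0.5 * abs(np.angle(evals[0] / evals[1]))   # eigenvalue ratio is e^{2i*theta} up to conjugation
        circular_points = evecs.T                          # each eigenvector is an imaged circular point (up to scale)

        print(np.rad2deg(theta))                           # ~20.0, independent of M and of the scale of H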

Hui Zhang – 2nd expert on this subject based on the ideXlab platform

  • 1D Camera Geometry and Its Application to the Self-Calibration of Circular Motion Sequences
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008
    Co-Authors: Kwanyee K Wong, Guoqiang Zhang, C Liang, Hui Zhang

    Abstract:

    This paper proposes a novel method for robustly recovering the camera geometry of an uncalibrated image sequence taken under circular motion. Under circular motion, all the camera centers lie on a circle and the mapping from the plane containing this circle to the horizon line observed in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points of the motion plane and the rotation angle between the two views can be derived directly from such a homography. This way of recovering the imaged circular points and rotation angles is intrinsically a multiple view approach, as all the sequence geometry embedded in the epipoles is exploited in the estimation of the homography for each view pair. This results in a more robust method compared to those computing the rotation angles using adjacent views only. The proposed method has been applied to self-calibrate turntable sequences using either point features or silhouettes, and highly accurate results have been achieved.

  • 1D Camera Geometry and Its Application to Circular Motion Estimation
    British Machine Vision Conference, 2006
    Co-Authors: Guoqiang Zhang, Hui Zhang, Kwanyee K Wong

    Abstract:

    This paper describes a new and robust method for estimating circular motion geometry from an uncalibrated image sequence. Under circular motion, all the camera centers lie on a circle, and the mapping of the plane containing this circle to the horizon line in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points and the rotation angle between the two views can be derived directly from the eigenvectors and eigenvalues of such a homography, respectively. The proposed 1D geometry can be nicely applied to circular motion estimation using either point correspondences or silhouettes. The method introduced here is intrinsically a multiple view approach, as all the sequence geometry embedded in the epipoles is exploited in the computation of the homography for a view pair. This results in a robust method which gives accurate estimated rotation angles and imaged circular points. Experimental results are presented to demonstrate the simplicity and applicability of the new method.

John E. W. Mayhew – 3rd expert on this subject based on the ideXlab platform

  • Model-driven active visual tracking
    Real-time Imaging, 1998
    Co-Authors: Y. Shao, John E. W. Mayhew, Y. Zheng

    Abstract:

    We have previously demonstrated that the performance of tracking algorithms can be improved by integrating information from multiple cues in a model-driven Bayesian reasoning framework. Here we extend our work to active vision tracking with variable camera geometry. Many existing active tracking algorithms avoid the problem of variable camera geometry by tracking view-independent features, such as corners and lines. However, the performance of algorithms based on such single features deteriorates greatly in the presence of specularities and dense clutter. We show that, by integrating multiple cues and updating the camera geometry on-line, it is possible to track a complicated object moving arbitrarily in three-dimensional (3D) space. We use a four degree-of-freedom (4-DoF) binocular camera rig to track three focus features of an industrial object whose complete model is known. The camera geometry is updated using the rig control commands and the kinematic model of the stereo head. The extrinsic parameters are further refined by interpolation from a previously sampled calibration of the head work space. The 2D target position estimates are obtained by a combination of blob detection, edge searching and gray-level matching, aided by projecting the model's geometric structure using the current estimate of the camera geometry. The information is represented in the form of a probability density distribution and propagated in a Bayes net. The Bayesian reasoning performed in the 2D images is coupled through the rigid model geometry constraint in 3D space. An αβ filter is used to smooth the tracking pursuit and to predict the position of the object at the next iteration of data acquisition. The solution of the inverse kinematic problem at the predicted position is used to control the position of the stereo head. Finally, experiments show that a target undergoing arbitrary 3D motion can be successfully tracked in the presence of specularities and dense clutter.
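
    Note: as a concrete illustration of the smoothing-and-prediction step mentioned above, the sketch below implements a generic αβ filter for a single coordinate. The gains, time step and 1D state are invented for illustration; the paper's actual filter parameters and state are not given in the abstract.

        # Generic alpha-beta filter step: smooth the current estimate and predict the
        # target position for the next data-acquisition cycle (illustrative values only).
        def alpha_beta_step(x_est, v_est, z_meas, dt, alpha=0.85, beta=0.05):
            x_pred = x_est + v_est * dt          # constant-velocity prediction
            r = z_meas - x_pred                  # innovation: measurement minus prediction
            x_new = x_pred + alpha * r           # corrected position
            v_new = v_est + (beta / dt) * r      # corrected velocity
            return x_new, v_new, x_pred          # x_pred would drive the head for the next frame

        # Example: filter a noisy 1D track sampled every 40 ms.
        x, v = 0.0, 0.0
        for z in [0.1, 0.5, 0.9, 1.6, 2.1, 2.4]:
            x, v, pred = alpha_beta_step(x, v, z, dt=0.04)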

  • Vertical disparity prediction for model-based stereo correspondence
    Electronics Letters, 1996
    Co-Authors: Y. Shao, John E. W. Mayhew

    Abstract:

    Vertical disparity can be approximated, under certain conditions, by a quadratic expression in the image eccentricities, with coefficients that encode only the camera geometry and can be estimated from model information. This serves to reduce the ambiguity of the epipolar geometry constraint in stereo correspondence.
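
    Note: the sketch below illustrates the idea of the entry above with a generic bivariate quadratic fitted by least squares. The exact quadratic form and coefficient estimation used in the paper are not given in the abstract, so the form, the sample data and the function names here are assumptions for illustration only.

        # Fit vertical disparity as a quadratic in image coordinates (x, y) and use the
        # fitted surface to predict disparity at new points (hypothetical data throughout).
        import numpy as np

        def quadratic_design(xy):
            """Rows of [x^2, x*y, y^2, x, y, 1] for image points xy of shape (N, 2)."""
            x, y = xy[:, 0], xy[:, 1]
            return np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)

        # Hypothetical model-predicted samples: image positions and their vertical disparities.
        pts = np.random.uniform(-1.0, 1.0, size=(50, 2))
        v_disp = 0.02 * pts[:, 0] * pts[:, 1] - 0.01 * pts[:, 1] ** 2 + 0.005 * pts[:, 1]

        coeffs, *_ = np.linalg.lstsq(quadratic_design(pts), v_disp, rcond=None)

        def predict_vertical_disparity(xy):
            """Predicted vertical disparity, used to narrow the correspondence search band."""
            return quadratic_design(np.atleast_2d(xy)) @ coeffs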

  • BMVC – Ground plane obstacle detection of stereo vision under variable camera geometry using neural nets
    Proceedings of the British Machine Vision Conference, 1995
    Co-Authors: Y. Shao, John E. W. Mayhew, S. D. Hippisley-cox

    Abstract:

    We use a stereo disparity predictor, implemented as layered neural nets in the PILUT architecture, to encode the disparity flow field for the ground plane at various viewing positions over the work space. A deviation of the disparity computed by a correspondence algorithm from its prediction may then indicate a potential obstacle. A causal Bayes net model is used to estimate the probability that a point of interest lies on the ground plane.
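
    Note: the sketch below illustrates the detection rule described above in its simplest form. The learned PILUT disparity predictor and the causal Bayes net of the paper are replaced here by a plain residual test with a two-hypothesis Gaussian likelihood ratio; all numbers, names and thresholds are invented for illustration.

        # Flag a pixel as a potential obstacle when its measured disparity deviates from
        # the predicted ground-plane disparity (illustrative stand-in for the paper's model).
        import numpy as np

        def ground_plane_probability(d_measured, d_predicted,
                                     sigma_on=0.5, sigma_off=5.0, prior_on=0.7):
            """Posterior probability that a pixel lies on the ground plane."""
            r = d_measured - d_predicted
            like_on = np.exp(-0.5 * (r / sigma_on) ** 2) / sigma_on     # small residual expected on the plane
            like_off = np.exp(-0.5 * (r / sigma_off) ** 2) / sigma_off  # broad residual model off the plane
            return prior_on * like_on / (prior_on * like_on + (1 - prior_on) * like_off)

        # Example: a 0.2 px residual keeps the ground-plane hypothesis; a 4 px residual does not.
        for residual in (0.2, 4.0):
            p = ground_plane_probability(d_measured=10.0 + residual, d_predicted=10.0)
            print(residual, "obstacle" if p < 0.5 else "ground plane")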