Camera Geometry

The experts below are selected from a list of 321 experts worldwide, ranked by the ideXlab platform.

Kwanyee K Wong - One of the best experts on this subject based on the ideXlab platform.

  • BMVC - 1D Camera Geometry and Its Application to Circular Motion Estimation
    Proceedings of the British Machine Vision Conference, 2006
    Co-Authors: Guoqiang Zhang, Hui Zhang, Kwanyee K Wong
    Abstract:

    This paper describes a new and robust method for estimating circular motion geometry from an uncalibrated image sequence. Under circular motion, all the camera centers lie on a circle, and the mapping of the plane containing this circle to the horizon line in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points and the rotation angle between the two views can be derived directly from the eigenvectors and eigenvalues of such a homography, respectively. The proposed 1D geometry can be readily applied to circular motion estimation using either point correspondences or silhouettes. The method introduced here is intrinsically a multiple-view approach, as all the sequence geometry embedded in the epipoles is exploited in the computation of the homography for a view pair. This results in a robust method which gives accurate estimates of the rotation angles and imaged circular points. Experimental results are presented to demonstrate the simplicity and applicability of the new method.
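    The eigen-decomposition step described above can be illustrated with a short numerical sketch. This is not the authors' implementation; it only assumes that a 2×2 homography H conjugate (up to scale) to a planar rotation has already been estimated, e.g. from the epipoles, and shows how the rotation angle and the fixed points (the imaged circular points in the 1D views) fall out of its eigenvalues and eigenvectors (Python/NumPy, names chosen for illustration):

    import numpy as np

    def rotation_from_1d_homography(H):
        # H is a 2x2 homography assumed conjugate, up to scale, to a planar rotation.
        # Its eigenvalues are s*exp(+/- i*theta); its eigenvectors are the fixed points.
        lam, vec = np.linalg.eig(H)
        theta = 0.5 * abs(np.angle(lam[0] / lam[1]))  # phase separation gives the rotation angle
        fixed_points = vec[:, 0], vec[:, 1]           # imaged circular points of the 1D views
        return theta, fixed_points

    # Self-check on a synthetic H = s * M R(theta) M^-1 with a known 10-degree rotation
    theta_true = np.deg2rad(10.0)
    R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                  [np.sin(theta_true),  np.cos(theta_true)]])
    M = np.array([[1.3, 0.2], [-0.4, 0.9]])           # arbitrary invertible conjugation
    H = 2.0 * M @ R @ np.linalg.inv(M)                # arbitrary overall scale
    theta_est, _ = rotation_from_1d_homography(H)
    print(np.rad2deg(theta_est))                      # ~10.0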

  • 1D Camera Geometry and its application to the self-calibration of circular motion sequences
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008
    Co-Authors: Kwanyee K Wong, Guoqiang Zhang, C Liang, Hui Zhang
    Abstract:

    This paper proposes a novel method for robustly recovering the camera geometry of an uncalibrated image sequence taken under circular motion. Under circular motion, all the camera centers lie on a circle, and the mapping from the plane containing this circle to the horizon line observed in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points of the motion plane and the rotation angle between the two views can be derived directly from such a homography. This way of recovering the imaged circular points and rotation angles is intrinsically a multiple-view approach, as all the sequence geometry embedded in the epipoles is exploited in the estimation of the homography for each view pair. This results in a more robust method compared with those that compute the rotation angles using adjacent views only. The proposed method has been applied to self-calibrate turntable sequences using either point features or silhouettes, and highly accurate results have been achieved.
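    The claim that the circular points and the rotation angle come directly from the homography can be summarised in one relation (notation chosen here for illustration, not taken from the paper): if the two 1D views are related by a homography that is, up to a scale $s$, conjugate to a planar rotation,

    $$ H \sim M\,R(\theta)\,M^{-1}, \qquad R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, $$

    then the eigenvalues of $H$ are $s\,e^{\pm i\theta}$, so the inter-view rotation angle $\theta$ is their common phase, and the eigenvectors are the fixed points of the mapping, i.e. the projections of the two circular points of the motion plane.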

Hui Zhang - One of the best experts on this subject based on the ideXlab platform.

  • BMVC - 1D Camera Geometry and Its Application to Circular Motion Estimation
    Proceedings of the British Machine Vision Conference, 2006
    Co-Authors: Guoqiang Zhang, Hui Zhang, Kwanyee K Wong
    Abstract:

    This paper describes a new and robust method for estimating circular motion geometry from an uncalibrated image sequence. Under circular motion, all the camera centers lie on a circle, and the mapping of the plane containing this circle to the horizon line in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points and the rotation angle between the two views can be derived directly from the eigenvectors and eigenvalues of such a homography, respectively. The proposed 1D geometry can be readily applied to circular motion estimation using either point correspondences or silhouettes. The method introduced here is intrinsically a multiple-view approach, as all the sequence geometry embedded in the epipoles is exploited in the computation of the homography for a view pair. This results in a robust method which gives accurate estimates of the rotation angles and imaged circular points. Experimental results are presented to demonstrate the simplicity and applicability of the new method.

  • 1D Camera Geometry and its application to the self-calibration of circular motion sequences
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008
    Co-Authors: Kwanyee K Wong, Guoqiang Zhang, C Liang, Hui Zhang
    Abstract:

    This paper proposes a novel method for robustly recovering the camera geometry of an uncalibrated image sequence taken under circular motion. Under circular motion, all the camera centers lie on a circle, and the mapping from the plane containing this circle to the horizon line observed in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points of the motion plane and the rotation angle between the two views can be derived directly from such a homography. This way of recovering the imaged circular points and rotation angles is intrinsically a multiple-view approach, as all the sequence geometry embedded in the epipoles is exploited in the estimation of the homography for each view pair. This results in a more robust method compared with those that compute the rotation angles using adjacent views only. The proposed method has been applied to self-calibrate turntable sequences using either point features or silhouettes, and highly accurate results have been achieved.

John E. W. Mayhew - One of the best experts on this subject based on the ideXlab platform.

  • Model-driven active visual tracking
    Real-time Imaging, 1998
    Co-Authors: Y. Shao, John E. W. Mayhew, Y. Zheng
    Abstract:

    We have previously demonstrated that the performance of tracking algorithms can be improved by integrating information from multiple cues in a model-driven Bayesian reasoning framework. Here we extend our work to active vision tracking with variable camera geometry. Many existing active tracking algorithms avoid the problem of variable camera geometry by tracking view-independent features, such as corners and lines. However, the performance of algorithms based on such single features deteriorates greatly in the presence of specularities and dense clutter. We show, by integrating multiple cues and updating the camera geometry on-line, that it is possible to track a complicated object moving arbitrarily in three-dimensional (3D) space. We use a four degree-of-freedom (4-DoF) binocular camera rig to track three focus features of an industrial object whose complete model is known. The camera geometry is updated using the rig control commands and the kinematic model of the stereo head. The extrinsic parameters are further refined by interpolation from a previously sampled calibration of the head work space. The 2D target position estimates are obtained by a combination of blob detection, edge searching and gray-level matching, with the aid of model geometrical structure projection using the current estimates of the camera geometry. The information is represented as a probability density distribution and propagated in a Bayes net. The Bayesian reasoning performed in the 2D images is coupled with the rigid model geometry constraint in 3D space. An α-β filter is used to smooth the tracking pursuit and to predict the position of the object in the next iteration of data acquisition. The solution of the inverse kinematic problem at the predicted position is used to control the position of the stereo head. Finally, experiments show that a target undergoing arbitrary 3D motion can be successfully tracked in the presence of specularities and dense clutter.
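    The α-β filter mentioned above is a standard two-gain tracker; the sketch below is a generic textbook form (the gains, state and units here are illustrative, not taken from the paper), showing how the smoothed state yields the one-step-ahead prediction that drives the head control:

    import numpy as np

    def alpha_beta_predict(measurements, dt, alpha=0.85, beta=0.005):
        # Generic alpha-beta filter over a 1D position track: smooth noisy
        # measurements and return the predicted position for the next time step.
        x_est, v_est = float(measurements[0]), 0.0
        predictions = []
        for z in measurements[1:]:
            x_pred = x_est + dt * v_est             # predict position one step ahead
            r = z - x_pred                          # innovation (measurement residual)
            x_est = x_pred + alpha * r              # correct position estimate
            v_est = v_est + (beta / dt) * r         # correct velocity estimate
            predictions.append(x_est + dt * v_est)  # target for the next control cycle
        return predictions

    # Example: noisy observations of a point moving at 2 units/s
    t = np.arange(0.0, 1.0, 0.04)
    z = 2.0 * t + np.random.normal(0.0, 0.05, t.size)
    print(alpha_beta_predict(z, dt=0.04)[-1])       # close to the true next position (~2.08)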

  • Vertical disparity prediction for model-based stereo correspondence
    Electronics Letters, 1996
    Co-Authors: Y. Shao, John E. W. Mayhew
    Abstract:

    Vertical disparity can be approximated, under certain conditions, by a quadratic expression in the image eccentricities, with the coefficients encoding the camera geometry only and being estimated from model information. This serves to reduce the ambiguity of the epipolar geometry constraint in stereo correspondence.
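    In generic form (coefficient names chosen here for illustration; the paper's exact parameterisation is not reproduced), such an approximation is a quadratic in the image eccentricities $(x, y)$,

    $$ d_v(x, y) \approx a\,x^2 + b\,xy + c\,y^2 + d\,x + e\,y + f, $$

    where the coefficients $a, \dots, f$ depend only on the camera geometry and can therefore be predicted from the head/camera model rather than measured for each image pair.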

  • BMVC - Ground plane obstacle detection of stereo vision under variable Camera Geometry using neural nets
    Proceedings of the British Machine Vision Conference, 1995
    Co-Authors: Y. Shao, John E. W. Mayhew, S. D. Hippisley-cox
    Abstract:

    We use a stereo disparity predictor, implemented as layered neural nets in the PILUT architecture, to encode the disparity flow field of the ground plane at various viewing positions over the work space. A deviation of the disparity, computed using a correspondence algorithm, from its prediction may then indicate a potential obstacle. A causal Bayes net model is used to estimate the probability that a point of interest lies on the ground plane.
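    A minimal sketch of the deviation test described above (the PILUT predictor and the Bayes net are replaced here by a stand-in prediction array and a single Gaussian likelihood, so this is an assumption-laden toy, not the paper's pipeline):

    import numpy as np

    def obstacle_mask(measured_disp, predicted_disp, sigma=0.5, p_thresh=0.5):
        # Compare measured disparities against the predicted ground-plane disparity
        # field; low evidence for "on the ground plane" flags a potential obstacle.
        deviation = measured_disp - predicted_disp
        p_ground = np.exp(-0.5 * (deviation / sigma) ** 2)  # Gaussian evidence for the plane
        return p_ground < p_thresh                          # True where an obstacle is suspected

    # Example: a flat predicted field with one patch of raised (closer) disparity
    predicted = np.full((4, 6), 10.0)
    measured = predicted.copy()
    measured[1:3, 2:4] += 3.0                               # an object sticking up off the plane
    print(obstacle_mask(measured, predicted).astype(int))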

  • The adaptive control of a four-degrees-of-freedom stereo Camera head
    Philosophical Transactions of the Royal Society B, 1992
    Co-Authors: John E. W. Mayhew, Ying Zheng, Stuart M. Cornell
    Abstract:

    The paper describes the use of biologically plausible neural network architectures to address some of the issues associated with the use of stereopsis under variable camera geometry. We report an implementation of a layered (subsumption) architecture for the adaptive control of microsaccadic tracking, and show experimental results demonstrating the use of lattice filter predictors for trajectory modelling. A rather simple, but seemingly adequate, neural network architecture for representing high-dimensional surface approximations (PILUTs) is evaluated as a method of encoding the predictive stereo mapping of the ground plane for different head positions.

  • BMVC - Ground Plane Obstacle Detection under variable Camera Geometry Using a Predictive Stereo Matcher.
    BMVC92, 1992
    Co-Authors: Stuart M. Cornell, John Porrill, John E. W. Mayhew
    Abstract:

    A scheme is proposed for ground plane obstacle detection under conditions of variable camera geometry. It uses a predictive stereo matcher, implemented in the PILUT architecture described below, in which the disparity map of the ground plane is encoded for the different viewing positions required to scan the work space. The research extends Mallot et al.'s (1989) scheme for ground plane obstacle detection, which begins with an inverse perspective mapping of the left and right images that transforms the image locations of all points arising from the ground plane so that they have zero disparity: simple differencing of the resulting images then permits ready detection of obstacles. The essence of this physiologically inspired method is to exploit knowledge of the prevailing camera geometry (to find epipolar lines) and the expectation of a ground plane (to predict the locations along epipolars of corresponding left/right image points of features arising from the ground plane).
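    The inverse perspective mapping step attributed to Mallot et al. above can be sketched with OpenCV, assuming a ground-plane homography H_ground between the two views is already known from the prevailing camera geometry (the name and the single fixed threshold are illustrative; the paper itself predicts ground-plane disparities with the PILUT matcher rather than warping whole images this way):

    import cv2
    import numpy as np

    def ground_plane_obstacles(left, right, H_ground, thresh=25):
        # Warp the left image by the ground-plane homography so that ground-plane
        # points align with the right image (zero disparity for the plane), then
        # difference; large residuals mark points that are likely off the plane.
        h, w = right.shape[:2]
        left_warped = cv2.warpPerspective(left, H_ground, (w, h))
        diff = cv2.absdiff(left_warped, right)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return mask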

Kristian E Waters - One of the best experts on this subject based on the ideXlab platform.

  • Performance analysis of a new positron Camera Geometry for high-speed, fine particle tracking
    Measurement Science and Technology, 2017
    Co-Authors: J M Sovechles, Darryel Boucher, Thomas Leadbeater, Agus P Sasmito, Kristian E Waters
    Abstract:

    A new positron camera arrangement was assembled using 16 ECAT951 modular detector blocks. A closely packed, cross-pattern arrangement was selected to produce a highly sensitive cylindrical region for tracking particles with low activities and high speeds. To determine the capabilities of this system, a comprehensive analysis of the tracking performance was conducted to determine the 3D location error and location frequency as a function of tracer activity and speed. The 3D error was found to range from 0.54 mm for a stationary particle, consistent across all tracer activities, up to 4.33 mm for a tracer with an activity of 3 MBq and a speed of 4 m s⁻¹. For lower-activity tracers (<10⁻² MBq), the error was more sensitive to increases in speed, increasing to 28 mm (at 4 m s⁻¹), indicating that under these conditions a reliable trajectory is not possible. These results expanded on, but correlated well with, previous literature that only contained location errors for tracer speeds up to 1.5 m s⁻¹. The camera was also used to track directly activated mineral particles inside a two-inch hydrocyclone and a 142 mm diameter flotation cell. A detailed trajectory, inside the hydrocyclone, of a −212 +106 µm (10⁻¹ MBq) quartz particle displayed the expected spiralling motion towards the apex. This was the first time a mineral particle of this size had been successfully traced within a hydrocyclone; however, more work is required to develop detailed velocity fields.

Guoqiang Zhang - One of the best experts on this subject based on the ideXlab platform.

  • BMVC - 1D Camera Geometry and Its Application to Circular Motion Estimation
    Proceedings of the British Machine Vision Conference, 2006
    Co-Authors: Guoqiang Zhang, Hui Zhang, Kwanyee K Wong
    Abstract:

    This paper describes a new and robust method for estimating circular motion geometry from an uncalibrated image sequence. Under circular motion, all the camera centers lie on a circle, and the mapping of the plane containing this circle to the horizon line in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points and the rotation angle between the two views can be derived directly from the eigenvectors and eigenvalues of such a homography, respectively. The proposed 1D geometry can be readily applied to circular motion estimation using either point correspondences or silhouettes. The method introduced here is intrinsically a multiple-view approach, as all the sequence geometry embedded in the epipoles is exploited in the computation of the homography for a view pair. This results in a robust method which gives accurate estimates of the rotation angles and imaged circular points. Experimental results are presented to demonstrate the simplicity and applicability of the new method.

  • 1D Camera Geometry and its application to the self-calibration of circular motion sequences
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008
    Co-Authors: Kwanyee K Wong, Guoqiang Zhang, C Liang, Hui Zhang
    Abstract:

    This paper proposes a novel method for robustly recovering the camera geometry of an uncalibrated image sequence taken under circular motion. Under circular motion, all the camera centers lie on a circle, and the mapping from the plane containing this circle to the horizon line observed in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points of the motion plane and the rotation angle between the two views can be derived directly from such a homography. This way of recovering the imaged circular points and rotation angles is intrinsically a multiple-view approach, as all the sequence geometry embedded in the epipoles is exploited in the estimation of the homography for each view pair. This results in a more robust method compared with those that compute the rotation angles using adjacent views only. The proposed method has been applied to self-calibrate turntable sequences using either point features or silhouettes, and highly accurate results have been achieved.
