Imaged Scene

The Experts below are selected from a list of 141 Experts worldwide ranked by the ideXlab platform

Thomas Pock - One of the best experts on this subject based on the ideXlab platform.

  • ICCP - Real-time panoramic tracking for event cameras
    2017 IEEE International Conference on Computational Photography (ICCP), 2017
    Co-Authors: Christian Reinbacher, Gottfried Munda, Thomas Pock
    Abstract:

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, those cameras are able to capture quick movements of objects in the Scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to state-of-the-art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the Imaged Scene point. We verify the robustness to fast camera movements and dynamic objects in the Scene on a recently proposed dataset [18] and self-recorded sequences.
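
    As a rough illustration of the direct, rotation-only tracking idea in this abstract, the sketch below back-projects event pixel coordinates to rays, rotates them onto an equirectangular panoramic map, and greedily picks the small rotation update that best explains the events under the current map. The intrinsics, map resolution, and grid-search update are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch of 3-DoF (rotation-only) event-camera tracking against a
    # panoramic map. All names and parameters are illustrative assumptions.
    import numpy as np

    H_PANO, W_PANO = 512, 1024          # equirectangular map resolution (assumed)
    K = np.array([[200.0, 0.0, 120.0],  # toy pinhole intrinsics for a 240x180 sensor
                  [0.0, 200.0, 90.0],
                  [0.0, 0.0, 1.0]])

    def rot(rx, ry, rz):
        """Rotation matrix from small Euler angles (radians)."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def events_to_pano(ev_xy, R):
        """Back-project event pixels to rays, rotate, map to panorama pixels."""
        ones = np.ones((ev_xy.shape[0], 1))
        rays = np.linalg.inv(K) @ np.hstack([ev_xy, ones]).T
        rays = R @ (rays / np.linalg.norm(rays, axis=0))
        lon = np.arctan2(rays[0], rays[2])        # longitude in [-pi, pi]
        lat = np.arcsin(np.clip(rays[1], -1, 1))  # latitude in [-pi/2, pi/2]
        u = ((lon / (2 * np.pi) + 0.5) * W_PANO).astype(int) % W_PANO
        v = ((lat / np.pi + 0.5) * H_PANO).astype(int).clip(0, H_PANO - 1)
        return u, v

    def track_step(ev_xy, R, event_map, delta=0.002):
        """Greedy update: pick the small rotation that best explains the events
        under the current map (a stand-in for the paper's direct formulation)."""
        best_R, best_score = R, -np.inf
        for d in np.array([[0, 0, 0], [delta, 0, 0], [-delta, 0, 0],
                           [0, delta, 0], [0, -delta, 0],
                           [0, 0, delta], [0, 0, -delta]]):
            Rc = R @ rot(*d)
            u, v = events_to_pano(ev_xy, Rc)
            score = event_map[v, u].sum()         # agreement of events with map
            if score > best_score:
                best_R, best_score = Rc, score
        return best_R
    ```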

  • EMMCVPR - Variational Shape from Light Field
    Lecture Notes in Computer Science, 2013
    Co-Authors: Stefan Heber, Rene Ranftl, Thomas Pock
    Abstract:

    In this paper we propose an efficient method to calculate a high-quality depth map from a single raw image captured by a light field or plenoptic camera. The proposed model combines the main idea of Active Wavefront Sampling (AWS) with the light field technique, i.e., we extract so-called sub-aperture images out of the raw image of a plenoptic camera in such a way that the virtual view points are arranged on circles around a fixed center view. By tracking an Imaged Scene point over a sequence of sub-aperture images corresponding to a common circle, one can observe a virtual rotation of the Scene point on the image plane. Our model is able to measure a dense field of these rotations, which are inversely related to the Scene depth.
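
    As a rough illustration of the sub-aperture extraction described above, the sketch below slices a plenoptic raw image (assumed to have an S x S pixel patch under each microlens) into sub-aperture views whose virtual viewpoints lie on a circle around the center view; tracking a scene point across this sequence traces the virtual rotation that is inversely related to depth. The grid geometry and circle sampling are simplifying assumptions; real plenoptic data requires lens-grid calibration.

    ```python
    # Minimal sketch: sub-aperture images on a circle around the center view.
    # Assumes an ideal, axis-aligned S x S microlens grid (real data differs).
    import numpy as np

    def sub_aperture(raw, S, du, dv):
        """Extract one sub-aperture image: take pixel (dv, du), relative to the
        patch center, from every S x S microlens patch of the raw image."""
        H, W = raw.shape[0] // S, raw.shape[1] // S
        lenslets = raw[:H * S, :W * S].reshape(H, S, W, S)
        return lenslets[:, S // 2 + dv, :, S // 2 + du]

    def circular_views(raw, S, radius, n_views=16):
        """Sequence of sub-aperture images whose virtual viewpoints lie on a
        circle of the given radius (in angular samples) around the center
        view; requires radius < S // 2."""
        views = []
        for phi in np.linspace(0, 2 * np.pi, n_views, endpoint=False):
            du = int(round(radius * np.cos(phi)))
            dv = int(round(radius * np.sin(phi)))
            views.append(sub_aperture(raw, S, du, dv))
        return views
    ```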

Christian Reinbacher - One of the best experts on this subject based on the ideXlab platform.

  • ICCP - Real-time panoramic tracking for event cameras
    2017 IEEE International Conference on Computational Photography (ICCP), 2017
    Co-Authors: Christian Reinbacher, Gottfried Munda, Thomas Pock
    Abstract:

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, those cameras are able to capture quick movements of objects in the Scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to state-of-the-art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the Imaged Scene point. We verify the robustness to fast camera movements and dynamic objects in the Scene on a recently proposed dataset [18] and self-recorded sequences.

Gottfried Munda - One of the best experts on this subject based on the ideXlab platform.

  • ICCP - Real-time panoramic tracking for event cameras
    2017 IEEE International Conference on Computational Photography (ICCP), 2017
    Co-Authors: Christian Reinbacher, Gottfried Munda, Thomas Pock
    Abstract:

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, those cameras are able to capture quick movements of objects in the Scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to state-of-the-art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the Imaged Scene point. We verify the robustness to fast camera movements and dynamic objects in the Scene on a recently proposed dataset [18] and self-recorded sequences.

Peter Corcoran - One of the best experts on this subject based on the ideXlab platform.

  • Total variation-based dense depth from multicamera array
    Society of Photo-optical Instrumentation Engineers (SPIE), 2018
    Co-Authors: Hossein Javidnia, Peter Corcoran
    Abstract:

    Multicamera arrays are increasingly employed in both consumer and industrial applications, and various passive techniques are documented to estimate depth from such camera arrays. Current depth estimation methods provide useful estimates of depth in an Imaged Scene but are often impractical due to significant computational requirements. This paper presents a framework that generates a high-quality continuous depth map from multicamera array/light-field cameras. The proposed framework utilizes analysis of the local epipolar plane image to initialize the depth estimation process. The estimated depth map is then refined using total variation minimization based on Fenchel-Rockafellar duality. Evaluation on a well-known benchmark indicates that the proposed framework performs well in terms of accuracy when compared with the top-ranked depth estimation methods and a baseline algorithm. The test dataset includes both photorealistic and nonphotorealistic Scenes. Notably, the computation required to achieve equivalent accuracy is significantly reduced compared with the top-ranked algorithms, making the framework suitable for deployment in consumer and industrial applications. (C) 2018 Society of Photo-Optical Instrumentation Engineers (SPIE). The research work presented here was funded under the Strategic Partnership Program of Science Foundation Ireland (SFI) and cofunded by SFI and FotoNation Ltd. (Project ID: 13/SPP/I2868, "Next Generation Imaging for Smartphone and Embedded Platforms").
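
    The refinement step lends itself to a compact illustration. Below is a minimal sketch of ROF-style total variation minimization, min_u TV(u) + (lam/2)||u - f||^2, solved with a first-order primal-dual algorithm of the kind built on Fenchel-Rockafellar duality; the data term, step sizes, and parameter values are assumptions for illustration, not the paper's full model (which starts from an EPI-based initialization).

    ```python
    # Minimal sketch: primal-dual TV-L2 refinement of an initial depth map f.
    # Parameters (lam, n_iter) are illustrative assumptions.
    import numpy as np

    def grad(u):
        """Forward-difference gradient with Neumann boundary conditions."""
        gx = np.zeros_like(u)
        gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        return gx, gy

    def div(px, py):
        """Divergence operator, the negative adjoint of grad."""
        dx = np.zeros_like(px)
        dy = np.zeros_like(py)
        dx[:, 0] = px[:, 0]
        dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
        dx[:, -1] = -px[:, -2]
        dy[0, :] = py[0, :]
        dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
        dy[-1, :] = -py[-2, :]
        return dx + dy

    def tv_refine(f, lam=8.0, n_iter=200):
        """Primal-dual (Chambolle-Pock style) iteration for TV-L2 refinement."""
        u = f.copy()
        u_bar = f.copy()
        px = np.zeros_like(f)
        py = np.zeros_like(f)
        tau = sigma = 1.0 / np.sqrt(8.0)   # step sizes: tau * sigma * L^2 <= 1
        for _ in range(n_iter):
            gx, gy = grad(u_bar)           # dual ascent, then project to unit ball
            px += sigma * gx
            py += sigma * gy
            norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
            px /= norm
            py /= norm
            u_old = u                      # primal descent with prox of data term
            u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
            u_bar = 2.0 * u - u_old
        return u
    ```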

Stefano Tebaldini - One of the best experts on this subject based on the ideXlab platform.

  • Phase Calibration of Airborne Tomographic SAR Data via Phase Center Double Localization
    IEEE Transactions on Geoscience and Remote Sensing, 2016
    Co-Authors: Stefano Tebaldini, F. Rocca, Mauro Mariotti d'Alessandro, Laurent Ferro-Famil
    Abstract:

    Synthetic aperture radar (SAR) data collected over a 2-D synthetic aperture can be processed to focus the illuminated scatterers in the 3-D space, using a number of signal processing techniques generally grouped under the name of SAR tomography (TomoSAR). A fundamental requirement for TomoSAR processing is to have precise knowledge of the platform position along the 2-D synthetic aperture. This requirement is not easily met in the case where the 2-D aperture is formed by collecting different flight lines (i.e., 1-D apertures) in a repeat-pass fashion, which is the typical case of airborne and spaceborne TomoSAR. Subwavelength platform position errors give rise to residual phase screens among different passes, which hinder coherent focusing in the 3-D space. In this paper, we propose a strategy for calibrating repeat-pass tomographic SAR data that allows us to accurately estimate and remove such residual phase screens in the absence of reference targets and prior information about terrain topography and even in the absence of any point- or surface-like target within the illuminated Scene. The problem is tackled by observing that multiple flight lines provide enough information to jointly estimate platform and target positions, up to a roto-translation of the coordinate system used for representing the Imaged Scene. The employment of volumetric scatterers in the calibration process is enabled by the phase linking algorithm, which allows us to represent them as equivalent phase centers. The proposed approach is demonstrated through numerical simulations, in order to validate the results based on the exact knowledge of the simulated scatterers, and using real data from the ESA campaigns AlpTomoSAR, BioSAR 2008, and TropiSAR. A cross-check of the results from simultaneous P- and L-band acquisitions from the TropiSAR data set indicates that the dispersion of the retrieved flight trajectories is limited to a few millimeters.
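
    The "equivalent phase center" idea admits a compact numerical illustration. The sketch below simulates a distributed scatterer observed over N repeat passes and compresses it to one phase per pass using a simplified eigendecomposition estimator (the leading eigenvector of the sample covariance matrix); this is a stand-in for, not the exact, maximum-likelihood phase linking used in the paper, and all sizes and coherence values are assumed.

    ```python
    # Minimal sketch: reduce a distributed scatterer to an equivalent phase
    # center via a simplified eigenvector estimator. All values are assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    N, L = 8, 64                               # passes and looks (assumed sizes)

    # Simulate a distributed target: a common speckle realization per look,
    # one true phase per pass, plus decorrelation noise.
    true_phase = rng.uniform(-np.pi, np.pi, N)
    true_phase -= true_phase[0]                # phases are relative to pass 0
    coh = 0.7                                  # interpass coherence (assumed)
    common = rng.standard_normal((1, L)) + 1j * rng.standard_normal((1, L))
    noise = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
    y = np.exp(1j * true_phase)[:, None] * (
        np.sqrt(coh) * common + np.sqrt(1 - coh) * noise)

    # Sample covariance over looks; the leading eigenvector carries the
    # per-pass phases of the equivalent phase center (up to a reference).
    C = (y @ y.conj().T) / L
    w, V = np.linalg.eigh(C)
    v = V[:, -1]                               # eigenvector of largest eigenvalue
    est_phase = np.angle(v * np.conj(v[0]))    # re-reference to pass 0

    err = np.angle(np.exp(1j * (est_phase - true_phase)))
    print("max abs phase error [rad]:", np.abs(err).max())
    ```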

  • On the Role of Phase Stability in SAR Multibaseline Applications
    IEEE Transactions on Geoscience and Remote Sensing, 2010
    Co-Authors: Stefano Tebaldini, Andrea Monti Guarnieri
    Abstract:

    This paper presents a statistical analysis of the role of propagation disturbances (PDs), such as those due to atmospheric effects or residual platform motion, in multibaseline synthetic aperture radar (SAR) interferometry (InSAR) and tomography (T-SAR) applications. The analysis considers both pointlike and distributed targets, so as to cover all cases relevant in applications. To provide a tool for evaluating the impact of PDs in an arbitrary scenario, a definition of signal-to-noise ratio (SNR) is introduced that accounts for both the presence of PDs and the characteristics of the Imaged Scene. For pointlike targets, it is shown that this definition of SNR allows well-known results from Neyman-Pearson theory to be reused, providing a straightforward tool to assess phase-stability requirements for the detection and localization of multiple pointlike targets. For distributed targets, a detailed analysis is provided of the random fluctuations of the reconstructed Scene as a function of the extent of the PDs, the vertical structure of the Imaged Scene, and the number of looks employed. Results from Monte Carlo simulations are presented that fully support the theoretical developments. The most relevant conclusion is that the impact of PDs is more severe when the Imaged Scene has a complex vertical structure or when multiple pointlike targets are present. It follows that T-SAR analyses require either higher phase stability or more accurate phase calibration than InSAR analyses. Finally, an example of phase-stability analysis and phase calibration of a real data set is shown, based on P-band data from the forest site of Remningstorp, Sweden.
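
    The effect studied above can be reproduced in a small Monte Carlo experiment: random propagation phase screens applied across a multibaseline stack blur and bias the tomographic focusing of a pointlike target. The geometry, vertical wavenumbers, and phase-screen standard deviations below are illustrative assumptions, not the paper's setup.

    ```python
    # Minimal Monte Carlo sketch: phase screens vs. tomographic focusing of a
    # pointlike target. Geometry and noise levels are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 16                                     # number of passes/baselines
    kz = np.linspace(0.0, 1.5, N)              # vertical wavenumbers [rad/m] (assumed)
    z_true = 12.0                              # target elevation [m]
    z_grid = np.linspace(0.0, 30.0, 301)

    def focus_peak(sigma_phi, n_trials=500):
        """Mean beamforming peak location and amplitude under i.i.d. Gaussian
        phase screens of standard deviation sigma_phi [rad]."""
        steering = np.exp(-1j * np.outer(z_grid, kz))   # matched filter bank
        peaks, amps = [], []
        for _ in range(n_trials):
            screen = rng.normal(0.0, sigma_phi, N)
            y = np.exp(1j * (kz * z_true + screen))     # point target + disturbance
            profile = np.abs(steering @ y) / N
            peaks.append(z_grid[np.argmax(profile)])
            amps.append(profile.max())
        return np.mean(peaks), np.mean(amps)

    for s in (0.0, 0.3, 1.0):
        z_hat, amp = focus_peak(s)
        print(f"sigma_phi={s:.1f} rad -> mean peak {z_hat:5.2f} m, "
              f"mean peak amplitude {amp:.2f}")
    ```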