3D Imaging

The experts below are selected from a list of 359,781 experts worldwide, ranked by the ideXlab platform.

Gordon Wetzstein - One of the best experts on this subject based on the ideXlab platform.

  • Sub-picosecond photon-efficient 3D Imaging using single-photon sensors
    Scientific Reports, 2018
    Co-Authors: Felix Heide, David B Lindell, Steven Diamond, Gordon Wetzstein
    Abstract:

    Active 3D Imaging systems have broad applications across disciplines, including biological Imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D Imaging in practical scenarios where widely-varying photon counts are observed.
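
    A much simpler classical estimator than the probabilistic model developed in this paper can illustrate the pileup effect described in the abstract above. The Python sketch below is a toy illustration under stated assumptions (the function name, bin count, and simulation parameters are hypothetical): it simulates first-photon-per-cycle histograms, which are the mechanism behind pileup, and applies a Coates-style correction to recover the underlying per-bin photon flux. It is not the authors' inverse method.

      import numpy as np

      def coates_correction(hist, n_cycles):
          """Classical Coates-style pileup correction for a SPAD first-photon
          histogram (illustrative only; not the paper's probabilistic model).

          hist     : first-photon detection counts per time bin
          n_cycles : number of laser illumination cycles
          Returns an estimate of the mean photon flux per bin.
          """
          hist = np.asarray(hist, dtype=float)
          # Cycles still "alive" (no earlier detection) when each bin is reached.
          alive = n_cycles - np.concatenate(([0.0], np.cumsum(hist)[:-1]))
          # Per-cycle detection probability, clipped for numerical safety.
          p = np.clip(hist / np.maximum(alive, 1.0), 0.0, 1.0 - 1e-9)
          return -np.log1p(-p)

      # Toy experiment: a flux peak at bin 40 gets skewed toward earlier bins
      # by pileup; the corrected estimate restores the peak location.
      rng = np.random.default_rng(0)
      bins = np.arange(100)
      true_flux = 0.02 + 0.5 * np.exp(-0.5 * ((bins - 40) / 3.0) ** 2)
      n_cycles = 20000
      hist = np.zeros(100)
      for _ in range(n_cycles):
          detections = rng.random(100) < (1.0 - np.exp(-true_flux))
          first = np.argmax(detections)
          if detections[first]:
              hist[first] += 1   # only the first photon per cycle is recorded
      est_flux = coates_correction(hist, n_cycles)

    In this toy run the raw histogram should peak a few bins early, because background photons detected before the pulse block later signal photons, while the corrected estimate recovers the peak near bin 40.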

  • Single-photon 3D Imaging with deep sensor fusion
    ACM Transactions on Graphics, 2018
    Co-Authors: David B Lindell, Matthew O'Toole, Gordon Wetzstein
    Abstract:

    Sensors that capture 3D scene information provide useful data for tasks in vehicle navigation, gesture recognition, human pose estimation, and geometric reconstruction. Active-illumination time-of-flight sensors in particular have become widely used to estimate a 3D representation of a scene. However, the maximum range, density of acquired spatial samples, and overall acquisition time of these sensors are fundamentally limited by the minimum signal required to estimate depth reliably. In this paper, we propose a data-driven method for photon-efficient 3D Imaging that leverages sensor fusion and computational reconstruction to rapidly and robustly estimate a dense depth map from low photon counts. Our sensor fusion approach uses measurements of single-photon arrival times from a low-resolution single-photon detector array and an intensity image from a conventional high-resolution camera. Using a multi-scale deep convolutional network, we jointly process the raw measurements from both sensors and output a high-resolution depth map. To demonstrate the efficacy of our approach, we implement a hardware prototype and show results using captured data. At low signal-to-background levels, our depth reconstruction algorithm with sensor fusion outperforms other methods for depth estimation from noisy measurements of photon arrival times.
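
    The sensor-fusion idea in this abstract, upsampling a coarse single-photon measurement and refining it with a high-resolution intensity image, can be sketched in a few lines. The PyTorch snippet below is a minimal stand-in with assumed tensor shapes and a toy convolutional stack; it is not the multi-scale architecture described in the paper, and it requires PyTorch to run.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class TinyFusionNet(nn.Module):
          """Toy depth-from-fusion network: a low-resolution SPAD-derived depth
          map is upsampled and concatenated with a high-resolution intensity
          image, then refined by a small convolutional stack."""

          def __init__(self):
              super().__init__()
              self.refine = nn.Sequential(
                  nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1),
              )

          def forward(self, spad_depth_lowres, intensity_highres):
              # Bring the coarse SPAD estimate up to the camera's resolution.
              up = F.interpolate(spad_depth_lowres,
                                 size=intensity_highres.shape[-2:],
                                 mode="bilinear", align_corners=False)
              # Channel-wise fusion of the two modalities, then refinement.
              return self.refine(torch.cat([up, intensity_highres], dim=1))

      # Assumed shapes: a 32x32 SPAD array fused with a 256x256 camera image.
      net = TinyFusionNet()
      depth = net(torch.rand(1, 1, 32, 32), torch.rand(1, 1, 256, 256))
      print(depth.shape)  # torch.Size([1, 1, 256, 256])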

  • Sub-picosecond photon-efficient 3D Imaging using single-photon sensors
    arXiv: Applied Physics, 2018
    Co-Authors: Felix Heide, David B Lindell, Steven Diamond, Gordon Wetzstein
    Abstract:

    Active 3D Imaging systems have broad applications across disciplines, including biological Imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing resolution, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but this approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D Imaging in practical scenarios where widely-varying photon counts are observed.

  • Simultaneous whole-animal 3D Imaging of neuronal activity using light-field microscopy
    Nature Methods, 2014
    Co-Authors: Robert Prevedel, Young-Gyu Yoon, Maximilian Hoffmann, Nikita Pak, Gordon Wetzstein, Saul Kato
    Abstract:

    High-speed, large-scale three-dimensional (3D) Imaging of neuronal activity poses a major challenge in neuroscience. Here we demonstrate simultaneous functional Imaging of neuronal activity at single-neuron resolution in an entire Caenorhabditis elegans and in larval zebrafish brain. Our technique captures the dynamics of spiking neurons in volumes of ∼700 μm × 700 μm × 200 μm at 20 Hz. Its simplicity makes it an attractive tool for high-speed volumetric calcium Imaging.
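
    To put the reported volume and frame rate in perspective, the short calculation below estimates the raw volumetric data throughput. The voxel pitch and sample depth are assumptions chosen for illustration; the abstract specifies only the ∼700 μm × 700 μm × 200 μm volume and the 20 Hz rate.

      # Back-of-the-envelope data-rate estimate for the reported imaging volume.
      volume_um = (700, 700, 200)   # field of view reported in the abstract
      voxel_um = 2.0                # assumed reconstruction voxel pitch
      rate_hz = 20                  # volume rate reported in the abstract
      bytes_per_voxel = 2           # assumed 16-bit samples

      voxels = 1
      for extent in volume_um:
          voxels *= int(extent / voxel_um)

      print(f"{voxels:,} voxels per volume")            # 12,250,000
      print(f"{voxels * rate_hz:,} voxels per second")  # 245,000,000
      print(f"{voxels * rate_hz * bytes_per_voxel / 1e6:.0f} MB/s of raw data")  # 490 MB/s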

David A Feinberg - One of the best experts on this subject based on the ideXlab platform.

  • Single-shot 3D Imaging techniques improve arterial spin labeling perfusion measurements
    Magnetic Resonance in Medicine, 2005
    Co-Authors: Matthias Günther, Koichi Oshio, David A Feinberg
    Abstract:

    Arterial spin labeling (ASL) can be used to measure perfusion without the use of contrast agents. Due to the small volume fraction of blood vessels compared to tissue in the human brain (typically 3–5%), ASL techniques have an intrinsically low signal-to-noise ratio (SNR). In this publication, evidence is presented that the SNR can be improved by using arterial spin labeling in combination with single-shot 3D readout techniques. Specifically, a single-shot 3D-GRASE sequence is presented, which yields a 2.8-fold increase in SNR compared to 2D EPI at the same nominal resolution. Up to 18 slices can be acquired in 2 min with an SNR of 10 or more for gray matter perfusion. A method is proposed to increase the reliability of perfusion quantification using QUIPSS II derivatives by acquiring low-resolution maps of the bolus arrival time, which allows differentiation between lack of perfusion and delayed arrival of the labeled blood. For arterial spin labeling, single-shot 3D Imaging techniques are optimal in terms of efficiency and might prove beneficial for improving the reliability of perfusion quantitation in a clinical setting. Magn Reson Med 54:491–498, 2005. © 2005 Wiley-Liss, Inc.
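
    For context on how such ASL label/control difference images are turned into perfusion values, the sketch below applies the standard single-compartment PASL (QUIPSS II) quantification formula. The parameter values are typical defaults chosen for illustration, not the acquisition settings of this paper, and the function name is hypothetical.

      import math

      def pasl_quipss2_cbf(delta_m, m0, ti1=0.7, ti2=1.8,
                           t1_blood=1.65, alpha=0.98, lam=0.9):
          """Standard PASL (QUIPSS II) perfusion quantification.

          delta_m  : label/control difference signal
          m0       : equilibrium tissue signal
          ti1, ti2 : bolus duration and readout inversion time [s]
          t1_blood : longitudinal relaxation time of blood [s]
          alpha    : labeling efficiency
          lam      : blood-brain partition coefficient [mL/g]
          Returns CBF in mL / 100 g / min.
          """
          return (6000.0 * lam * delta_m * math.exp(ti2 / t1_blood)
                  / (2.0 * alpha * ti1 * m0))

      # Example: a ~0.5% label/control difference maps to roughly 59 mL/100g/min,
      # which is in the expected gray-matter range.
      print(round(pasl_quipss2_cbf(delta_m=0.005, m0=1.0), 1))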

David B Lindell - One of the best experts on this subject based on the ideXlab platform.

  • Sub-picosecond photon-efficient 3D Imaging using single-photon sensors
    Scientific Reports, 2018
    Co-Authors: Felix Heide, David B Lindell, Steven Diamond, Gordon Wetzstein
    Abstract:

    Active 3D Imaging systems have broad applications across disciplines, including biological Imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D Imaging in practical scenarios where widely-varying photon counts are observed.

  • Single-photon 3D Imaging with deep sensor fusion
    ACM Transactions on Graphics, 2018
    Co-Authors: David B Lindell, Matthew O'Toole, Gordon Wetzstein
    Abstract:

    Sensors that capture 3D scene information provide useful data for tasks in vehicle navigation, gesture recognition, human pose estimation, and geometric reconstruction. Active-illumination time-of-flight sensors in particular have become widely used to estimate a 3D representation of a scene. However, the maximum range, density of acquired spatial samples, and overall acquisition time of these sensors are fundamentally limited by the minimum signal required to estimate depth reliably. In this paper, we propose a data-driven method for photon-efficient 3D Imaging that leverages sensor fusion and computational reconstruction to rapidly and robustly estimate a dense depth map from low photon counts. Our sensor fusion approach uses measurements of single-photon arrival times from a low-resolution single-photon detector array and an intensity image from a conventional high-resolution camera. Using a multi-scale deep convolutional network, we jointly process the raw measurements from both sensors and output a high-resolution depth map. To demonstrate the efficacy of our approach, we implement a hardware prototype and show results using captured data. At low signal-to-background levels, our depth reconstruction algorithm with sensor fusion outperforms other methods for depth estimation from noisy measurements of photon arrival times.

  • Sub-picosecond photon-efficient 3D Imaging using single-photon sensors
    arXiv: Applied Physics, 2018
    Co-Authors: Felix Heide, David B Lindell, Steven Diamond, Gordon Wetzstein
    Abstract:

    Active 3D Imaging systems have broad applications across disciplines, including biological Imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing resolution, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but this approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D Imaging in practical scenarios where widely-varying photon counts are observed.

Matthias Günther - One of the best experts on this subject based on the ideXlab platform.

  • Single-shot 3D Imaging techniques improve arterial spin labeling perfusion measurements
    Magnetic Resonance in Medicine, 2005
    Co-Authors: Matthias Günther, Koichi Oshio, David A Feinberg
    Abstract:

    Arterial spin labeling (ASL) can be used to measure perfusion without the use of contrast agents. Due to the small volume fraction of blood vessels compared to tissue in the human brain (typically 3–5%), ASL techniques have an intrinsically low signal-to-noise ratio (SNR). In this publication, evidence is presented that the SNR can be improved by using arterial spin labeling in combination with single-shot 3D readout techniques. Specifically, a single-shot 3D-GRASE sequence is presented, which yields a 2.8-fold increase in SNR compared to 2D EPI at the same nominal resolution. Up to 18 slices can be acquired in 2 min with an SNR of 10 or more for gray matter perfusion. A method is proposed to increase the reliability of perfusion quantification using QUIPSS II derivatives by acquiring low-resolution maps of the bolus arrival time, which allows differentiation between lack of perfusion and delayed arrival of the labeled blood. For arterial spin labeling, single-shot 3D Imaging techniques are optimal in terms of efficiency and might prove beneficial for improving the reliability of perfusion quantitation in a clinical setting. Magn Reson Med 54:491–498, 2005. © 2005 Wiley-Liss, Inc.

Andres G Marrugo - One of the best experts on this subject based on the ideXlab platform.

  • Robust automated reading of the skin prick test via 3D Imaging and parametric surface fitting
    PLOS ONE, 2019
    Co-Authors: Jesus Pineda, Raul Vargas, Lenny A Romero, Javier Marrugo, Jaime Meneses, Andres G Marrugo
    Abstract:

    The conventional reading of the skin prick test (SPT) for diagnosing allergies is prone to inter- and intra-observer variation. Drawing the contours of the skin wheals from the SPT and scanning them for computer processing is cumbersome, whereas 3D scanning technology promises the best results in terms of accuracy, acquisition speed, and processing. In this work, we present a wide-field 3D Imaging system for the 3D reconstruction of the SPT, and we propose an automated method for measuring the skin wheals. The automated measurement is based on pyramidal decomposition and parametric 3D surface fitting to estimate the wheal sizes directly. We propose two parametric models for diameter estimation: model 1 is based on an inverted elliptical paraboloid function, and model 2 on a super-Gaussian function. The accuracy of the 3D Imaging system was evaluated with validation objects, yielding transversal and depth accuracies within ± 0.1 mm and ± 0.01 mm, respectively. We tested the method on 80 SPTs conducted in volunteer subjects, which resulted in 61 detected wheals. Compared against manual reference measurements from a physician, parametric model 2 on average yields diameters closer to the reference (model 1: -0.398 mm vs. model 2: -0.339 mm), with narrower 95% limits of agreement (model 1: [-1.58, 0.78] mm vs. model 2: [-1.39, 0.71] mm) in a Bland-Altman analysis. In one subject, we tested the reproducibility of the method by registering the forearm in five different poses, obtaining a maximum coefficient of variation of 5.24% in the estimated wheal diameters. The proposed method delivers accurate and reproducible measurements of the SPT.
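
    The parametric-fitting idea behind model 2 can be sketched as a rotationally symmetric super-Gaussian fit to a wheal height map. The paper's actual model and its diameter definition may differ, so the functional form below, the synthetic data, and the use of the fitted full width at half maximum as the diameter are illustrative assumptions (requires NumPy and SciPy).

      import numpy as np
      from scipy.optimize import curve_fit

      def super_gaussian(coords, a, x0, y0, sigma, p, offset):
          """Rotationally symmetric super-Gaussian surface (simplified stand-in
          for the paper's model 2)."""
          x, y = coords
          r2 = (x - x0) ** 2 + (y - y0) ** 2
          return a * np.exp(-((r2 / (2.0 * sigma ** 2)) ** p)) + offset

      # Synthetic wheal height map on a 20 mm x 20 mm patch (~0.1 mm grid).
      xs, ys = np.meshgrid(np.linspace(0, 20, 200), np.linspace(0, 20, 200))
      truth = super_gaussian((xs, ys), 1.2, 10.0, 10.0, 2.5, 1.5, 0.0)
      noisy = truth + 0.02 * np.random.default_rng(1).standard_normal(truth.shape)

      p0 = (1.0, 10.0, 10.0, 2.0, 1.0, 0.0)  # rough initial guess
      popt, _ = curve_fit(super_gaussian, (xs.ravel(), ys.ravel()),
                          noisy.ravel(), p0=p0)
      a, x0, y0, sigma, p, offset = popt

      # Full width at half maximum of the fitted bump, used here as the diameter.
      fwhm = 2.0 * np.sqrt(2.0) * sigma * np.log(2.0) ** (1.0 / (2.0 * p))
      print(f"estimated wheal diameter = {fwhm:.2f} mm")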