Image Estimation

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 131,544 Experts worldwide ranked by the ideXlab platform

Dinggang Shen - One of the best experts on this subject based on the ideXlab platform.

  • 3D conditional generative adversarial networks for high-quality PET Image Estimation at low dose
    NeuroImage, 2018
    Co-Authors: Yan Wang, D S Lalush, Weili Lin, Dinggang Shen, Lei Wang, Jiliu Zhou, Luping Zhou
    Abstract:

    Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET Images for clinical needs, which inevitably raises concerns about potential health hazards. On the other hand, dose reduction increases noise in the reconstructed PET Images, degrading Image quality. In this paper, in order to reduce radiation exposure while maintaining high PET Image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET Images from low-dose ones. Generative adversarial networks (GANs) consist of a generator network and a discriminator network trained simultaneously, each with the goal of beating the other. As in standard GANs, the proposed 3D c-GANs condition the model on an input low-dose PET Image and generate a corresponding output full-dose PET Image. Specifically, to preserve the underlying information shared between low-dose and full-dose PET Images, a 3D U-net-like deep architecture that combines hierarchical features through skip connections is designed as the generator network to synthesize the full-dose Image. To keep the synthesized PET Image close to the real one, we take the Estimation error loss into account, in addition to the discriminator feedback, when training the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is proposed to further improve the quality of the estimated Images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that the proposed 3D c-GANs method outperforms both benchmark and state-of-the-art methods in qualitative and quantitative measures.
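    The generator objective described in the abstract (adversarial feedback plus an Estimation error term) can be sketched as follows. This is a toy NumPy illustration, not the authors' implementation; the volume shapes, the weight `lambda_l1`, and all function names are assumptions:

    ```python
    import numpy as np

    def l1_estimation_error(synth, real):
        """Mean absolute voxel-wise error between synthesized and real PET volumes."""
        return np.mean(np.abs(synth - real))

    def adversarial_generator_loss(disc_scores):
        """Non-saturating GAN loss: the generator wants D(G(x)) -> 1."""
        eps = 1e-12
        return -np.mean(np.log(disc_scores + eps))

    def generator_objective(synth, real, disc_scores, lambda_l1=100.0):
        """Combined objective: adversarial feedback + weighted L1 Estimation error."""
        return adversarial_generator_loss(disc_scores) + lambda_l1 * l1_estimation_error(synth, real)

    rng = np.random.default_rng(0)
    real = rng.random((8, 8, 8))                          # stand-in full-dose PET patch
    synth = real + 0.01 * rng.standard_normal((8, 8, 8))  # near-perfect synthesis
    scores = np.full(16, 0.9)                             # discriminator finds synth plausible
    loss = generator_objective(synth, real, scores)
    ```

    The L1 term keeps the synthesized volume voxel-wise close to the real full-dose Image, while the adversarial term rewards outputs the discriminator accepts.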

  • Deep auto-context convolutional neural networks for standard-dose PET Image Estimation from low-dose PET/MRI
    Neurocomputing, 2017
    Co-Authors: Lei Xiang, Weili Lin, Yu Qiao, Dong Nie, Qian Wang, Dinggang Shen
    Abstract:

    Positron emission tomography (PET) is an essential technique in many clinical applications such as tumor detection and brain disorder diagnosis. In order to obtain high-quality PET Images, a standard-dose radioactive tracer is needed, which inevitably carries a risk of radiation damage. To reduce the patient's radiation exposure while maintaining high PET Image quality, in this paper we propose a deep learning architecture to estimate the high-quality standard-dose PET (SPET) Image from the combination of the low-quality low-dose PET (LPET) Image and the accompanying T1-weighted acquisition from magnetic resonance imaging (MRI). Specifically, we adapt the convolutional neural network (CNN) to accept the two-channel input of LPET and T1, and directly learn the end-to-end mapping between the inputs and the SPET output. Then, we integrate multiple CNN modules following the auto-context strategy, such that the tentatively estimated SPET of an early CNN can be iteratively refined by subsequent CNNs. Validations on real human brain PET/MRI data show that our proposed method provides competitive Estimation quality of the PET Images compared to the state-of-the-art methods. Meanwhile, our method is highly efficient at test time on a new subject, e.g., taking 2 s to estimate an entire SPET Image, in contrast to 16 min for the state-of-the-art method. These results demonstrate the potential of our method in real clinical applications.
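    The auto-context cascade described above can be sketched schematically. The stand-in stage functions below are simple per-voxel blends, not the trained CNN modules from the paper, and all weights and names are illustrative assumptions:

    ```python
    import numpy as np

    def make_stage(weight_lpet, weight_t1, weight_prev):
        """A stand-in for one trained CNN module: a fixed per-voxel blend."""
        def stage(lpet, t1, prev_estimate):
            return weight_lpet * lpet + weight_t1 * t1 + weight_prev * prev_estimate
        return stage

    def auto_context_cascade(lpet, t1, stages):
        """Run the cascade: stage k sees the inputs plus stage k-1's SPET estimate."""
        estimate = np.zeros_like(lpet)   # the first stage has no prior context
        for stage in stages:
            estimate = stage(lpet, t1, estimate)
        return estimate

    lpet = np.full((4, 4, 4), 0.5)       # stand-in low-dose PET volume
    t1 = np.full((4, 4, 4), 0.2)         # stand-in T1-weighted MRI volume
    stages = [make_stage(0.8, 0.1, 0.0), # first module: LPET + T1 only
              make_stage(0.3, 0.1, 0.6)] # later module: leans on the prior estimate
    spet = auto_context_cascade(lpet, t1, stages)
    ```

    The key structural point is the extra input channel: every module after the first receives the previous tentative SPET estimate alongside LPET and T1, so each stage refines rather than restarts the Estimation.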

  • Multi-level canonical correlation analysis for standard-dose PET Image Estimation
    IEEE Transactions on Image Processing, 2016
    Co-Authors: Pei Zhang, Ehsan Adeli, Yan Wang, Feng Shi, D S Lalush, Weili Lin, Dinggang Shen
    Abstract:

    Positron emission tomography (PET) Images are widely used in many clinical applications, such as tumor detection and brain disorder diagnosis. To obtain PET Images of diagnostic quality, a sufficient amount of radioactive tracer has to be injected into a living body, which inevitably increases the risk of radiation exposure. On the other hand, if the tracer dose is considerably reduced, the quality of the resulting Images is significantly degraded. It is therefore of great interest to estimate a standard-dose PET (S-PET) Image from a low-dose one, reducing the risk of radiation exposure while preserving Image quality. This may be achieved by mapping both S-PET and low-dose PET data into a common space and then performing patch-based sparse representation. However, a one-size-fits-all common space built from all training patches is unlikely to be optimal for each target S-PET patch, which limits the Estimation accuracy. In this paper, we propose a data-driven multi-level canonical correlation analysis scheme to solve this problem. In particular, the subset of training data that is most useful in estimating a target S-PET patch is identified at each level, and then used in the next level to update the common space and improve the Estimation. In addition, we use multi-modal magnetic resonance Images to further improve the Estimation with complementary information. Validations on phantom and real human brain data sets show that our method effectively estimates S-PET Images and preserves critical clinical quantification measures, such as the standard uptake value.
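    The multi-level refinement loop can be sketched as follows. For brevity, an ordinary least-squares mapping stands in for the CCA common-space step, and the patch dimension, training-set size, and per-level subset sizes are all illustrative assumptions:

    ```python
    import numpy as np

    def select_subset(train_low, target_low, k):
        """Pick the k training patches closest to the target in low-dose space."""
        dists = np.linalg.norm(train_low - target_low, axis=1)
        return np.argsort(dists)[:k]

    def fit_mapping(low, high):
        """Least-squares map from low-dose to standard-dose patches
        (a stand-in for the CCA common-space mapping)."""
        coef, *_ = np.linalg.lstsq(low, high, rcond=None)
        return coef

    def multi_level_estimate(train_low, train_high, target_low, levels=(64, 16)):
        """Each level refits the mapping on a smaller, more relevant subset."""
        estimate = None
        for k in levels:
            idx = select_subset(train_low, target_low, k)
            coef = fit_mapping(train_low[idx], train_high[idx])
            estimate = target_low @ coef
        return estimate

    rng = np.random.default_rng(1)
    train_low = rng.standard_normal((128, 9))   # 128 low-dose training patches
    W = rng.standard_normal((9, 9))             # toy ground-truth linear relation
    train_high = train_low @ W                  # matching standard-dose patches
    target_low = rng.standard_normal(9)
    est = multi_level_estimate(train_low, train_high, target_low)
    ```

    The structure mirrors the abstract: level k selects the patches most relevant to this particular target, refits the shared space on that subset, and hands the improved estimate to level k+1.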

Richard G Baraniuk - One of the best experts on this subject based on the ideXlab platform.

  • Bayesian tree-structured Image modeling using wavelet-domain hidden Markov models
    IEEE Transactions on Image Processing, 2001
    Co-Authors: Justin Romberg, Hyeokho Choi, Richard G Baraniuk
    Abstract:

    Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and Image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training to fit an HMT model to a given data set (e.g., using the expectation-maximization algorithm). We greatly simplify the HMT model by exploiting the inherent self-similarity of real-world Images. The simplified model specifies the HMT parameters with just nine meta-parameters (independent of the size of the Image and the number of wavelet scales). We also introduce a Bayesian universal HMT (uHMT) that fixes these nine parameters; the uHMT requires no training of any kind. Although extremely simple, these new models retain nearly all of the key Image structure modeled by the full HMT, as we show using a series of Image Estimation/denoising experiments. Finally, we propose a fast shift-invariant HMT Estimation algorithm that outperforms other wavelet-based estimators in the current literature, both visually and in mean square error.
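    The self-similarity idea behind the nine-meta-parameter model can be sketched as follows. The parameter names and values below are illustrative assumptions, not the paper's actual nine meta-parameters: each wavelet coefficient is modeled as a two-state ("small"/"large") Gaussian mixture, the parent-child state transitions are shared across scales, and the mixture variances decay geometrically with scale, so per-scale parameters never need per-scale training:

    ```python
    import numpy as np

    def uhmt_scale_params(n_scales, p_ss=0.9, p_ll=0.8,
                          c_small=1.0, c_large=8.0,
                          alpha_small=2.0, alpha_large=1.5):
        """Expand a handful of meta-parameters into per-scale HMT parameters.

        Returns a scale-independent 2x2 state-transition matrix and, for each
        scale j, the 'small' and 'large' mixture variances, which decay as
        c * 2**(-alpha * j) by the assumed self-similarity across scale."""
        transition = np.array([[p_ss, 1.0 - p_ss],
                               [1.0 - p_ll, p_ll]])   # same matrix at every scale
        scales = np.arange(n_scales)
        var_small = c_small * 2.0 ** (-alpha_small * scales)
        var_large = c_large * 2.0 ** (-alpha_large * scales)
        return transition, var_small, var_large

    T, vs, vl = uhmt_scale_params(4)
    ```

    Because everything is generated from the meta-parameters, the model size is independent of the Image size and the number of wavelet scales, which is exactly what removes the EM training step.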

  • Bayesian tree-structured Image modeling using wavelet-domain hidden Markov models
    Proceedings of the 1999 Mathematical Modeling Bayesian Estimation and Inverse Problems, 1999
    Co-Authors: Justin Romberg, Hyeokho Choi, Richard G Baraniuk
    Abstract:

    Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and Image processing. The hidden Markov tree (HMT) model captures the key features of the joint density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training (using the Expectation-Maximization algorithm, for example). In this paper, we propose two reduced-parameter HMT models that capture the general structure of a broad class of real-world Images. In the Image HMT (iHMT) model we use the fact that for a large class of Images the structure of the HMT is self-similar across scale. This allows us to reduce the complexity of the iHMT to just nine easily trained parameters (independent of the size of the Image and the number of wavelet scales). In the universal HMT (uHMT) we take a Bayesian approach and fix these nine parameters. The uHMT requires no training of any kind. While simple, we show using a series of Image Estimation/denoising experiments that these two new models retain nearly all of the key structure modeled by the full HMT. Finally, we propose a fast shift-invariant HMT Estimation algorithm that outperforms all other wavelet-based estimators in the current literature, both in mean-square error and visual metrics.

Luping Zhou - One of the best experts on this subject based on the ideXlab platform.

  • 3D conditional generative adversarial networks for high-quality PET Image Estimation at low dose
    NeuroImage, 2018
    Co-Authors: Yan Wang, D S Lalush, Weili Lin, Dinggang Shen, Lei Wang, Jiliu Zhou, Luping Zhou
    Abstract:

    Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET Images for clinical needs, which inevitably raises concerns about potential health hazards. On the other hand, dose reduction increases noise in the reconstructed PET Images, degrading Image quality. In this paper, in order to reduce radiation exposure while maintaining high PET Image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET Images from low-dose ones. Generative adversarial networks (GANs) consist of a generator network and a discriminator network trained simultaneously, each with the goal of beating the other. As in standard GANs, the proposed 3D c-GANs condition the model on an input low-dose PET Image and generate a corresponding output full-dose PET Image. Specifically, to preserve the underlying information shared between low-dose and full-dose PET Images, a 3D U-net-like deep architecture that combines hierarchical features through skip connections is designed as the generator network to synthesize the full-dose Image. To keep the synthesized PET Image close to the real one, we take the Estimation error loss into account, in addition to the discriminator feedback, when training the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is proposed to further improve the quality of the estimated Images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that the proposed 3D c-GANs method outperforms both benchmark and state-of-the-art methods in qualitative and quantitative measures.

Peter Kellman - One of the best experts on this subject based on the ideXlab platform.

  • motion correction for myocardial t1 mapping using Image registration with synthetic Image Estimation
    Magnetic Resonance in Medicine, 2012
    Co-Authors: Hui Xue, Saurabh Shah, Andreas Greiser, Christoph Guetter, Arne Littmann, Marie-pierre Jolly, Andrew E. Arai, Sven Zuehlsdorff, Jens Guehring, Peter Kellman
    Abstract:

    Quantification of myocardial T1 relaxation has potential value in the diagnosis of both ischemic and nonischemic cardiomyopathies. Image acquisition using the modified Look-Locker inversion recovery technique is clinically feasible for T1 mapping. However, respiratory motion limits its applicability and degrades the accuracy of T1 Estimation. The robust registration of acquired inversion recovery Images is particularly challenging due to the large changes in Image contrast, especially for those Images acquired near the signal null point of the inversion recovery and other inversion times for which there is little tissue contrast. In this article, we propose a novel motion correction algorithm. This approach is based on estimating synthetic Images presenting contrast changes similar to the acquired Images. The Estimation of synthetic Images is formulated as a variational energy minimization problem. Validation on a consecutive patient data cohort shows that this strategy can perform robust nonrigid registration to align inversion recovery Images experiencing significant motion and lead to suppression of motion-induced artifacts in the T1 map.
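    The signal model underlying the synthetic Images can be sketched as follows, assuming the standard three-parameter Look-Locker inversion-recovery model; the paper's variational energy minimization for estimating the parameter maps is not reproduced here, and the parameter values are illustrative:

    ```python
    import numpy as np

    def ir_signal(ti, a, b, t1_star):
        """Magnitude of the three-parameter inversion-recovery model |A - B*exp(-TI/T1*)|."""
        return np.abs(a - b * np.exp(-ti / t1_star))

    def synthesize_images(ti_list, a_map, b_map, t1_star_map):
        """Evaluate the fitted model per pixel to synthesize one image per inversion time.

        The synthetic frames reproduce the acquired frames' contrast behavior,
        so each acquired frame can be registered against a motion-free synthetic
        counterpart instead of against another acquired frame."""
        return [ir_signal(ti, a_map, b_map, t1_star_map) for ti in ti_list]

    # Uniform toy parameter maps: A=1, B=2, T1* = 1000 ms for every pixel.
    a = np.full((2, 2), 1.0)
    b = np.full((2, 2), 2.0)
    t1s = np.full((2, 2), 1000.0)
    tis = [100.0, 693.1, 2000.0]   # the second TI sits near the signal null (T1* * ln 2)
    frames = synthesize_images(tis, a, b, t1s)
    ```

    The second synthetic frame is nearly zero everywhere, which illustrates why frames acquired near the null point are so hard to register directly and why contrast-matched synthetic targets help.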

  • Motion correction for myocardial T1 mapping using Image registration with synthetic Image Estimation.
    Magnetic resonance in medicine, 2011
    Co-Authors: Hui Xue, Saurabh Shah, Andreas Greiser, Christoph Guetter, Arne Littmann, Marie-pierre Jolly, Andrew E. Arai, Sven Zuehlsdorff, Jens Guehring, Peter Kellman
    Abstract:

    Quantification of myocardial T1 relaxation has potential value in the diagnosis of both ischemic and nonischemic cardiomyopathies. Image acquisition using the modified Look-Locker inversion recovery technique is clinically feasible for T1 mapping. However, respiratory motion limits its applicability and degrades the accuracy of T1 Estimation. The robust registration of acquired inversion recovery Images is particularly challenging due to the large changes in Image contrast, especially for those Images acquired near the signal null point of the inversion recovery and other inversion times for which there is little tissue contrast. In this article, we propose a novel motion correction algorithm. This approach is based on estimating synthetic Images presenting contrast changes similar to the acquired Images. The Estimation of synthetic Images is formulated as a variational energy minimization problem. Validation on a consecutive patient data cohort shows that this strategy can perform robust nonrigid registration to align inversion recovery Images experiencing significant motion and lead to suppression of motion-induced artifacts in the T1 map. Magn Reson Med 67:1644–1655, 2012. © 2011 Wiley.

Weili Lin - One of the best experts on this subject based on the ideXlab platform.

  • 3D conditional generative adversarial networks for high-quality PET Image Estimation at low dose
    NeuroImage, 2018
    Co-Authors: Yan Wang, D S Lalush, Weili Lin, Dinggang Shen, Lei Wang, Jiliu Zhou, Luping Zhou
    Abstract:

    Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET Images for clinical needs, which inevitably raises concerns about potential health hazards. On the other hand, dose reduction increases noise in the reconstructed PET Images, degrading Image quality. In this paper, in order to reduce radiation exposure while maintaining high PET Image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET Images from low-dose ones. Generative adversarial networks (GANs) consist of a generator network and a discriminator network trained simultaneously, each with the goal of beating the other. As in standard GANs, the proposed 3D c-GANs condition the model on an input low-dose PET Image and generate a corresponding output full-dose PET Image. Specifically, to preserve the underlying information shared between low-dose and full-dose PET Images, a 3D U-net-like deep architecture that combines hierarchical features through skip connections is designed as the generator network to synthesize the full-dose Image. To keep the synthesized PET Image close to the real one, we take the Estimation error loss into account, in addition to the discriminator feedback, when training the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is proposed to further improve the quality of the estimated Images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that the proposed 3D c-GANs method outperforms both benchmark and state-of-the-art methods in qualitative and quantitative measures.

  • Deep auto-context convolutional neural networks for standard-dose PET Image Estimation from low-dose PET/MRI
    Neurocomputing, 2017
    Co-Authors: Lei Xiang, Weili Lin, Yu Qiao, Dong Nie, Qian Wang, Dinggang Shen
    Abstract:

    Positron emission tomography (PET) is an essential technique in many clinical applications such as tumor detection and brain disorder diagnosis. In order to obtain high-quality PET Images, a standard-dose radioactive tracer is needed, which inevitably carries a risk of radiation damage. To reduce the patient's radiation exposure while maintaining high PET Image quality, in this paper we propose a deep learning architecture to estimate the high-quality standard-dose PET (SPET) Image from the combination of the low-quality low-dose PET (LPET) Image and the accompanying T1-weighted acquisition from magnetic resonance imaging (MRI). Specifically, we adapt the convolutional neural network (CNN) to accept the two-channel input of LPET and T1, and directly learn the end-to-end mapping between the inputs and the SPET output. Then, we integrate multiple CNN modules following the auto-context strategy, such that the tentatively estimated SPET of an early CNN can be iteratively refined by subsequent CNNs. Validations on real human brain PET/MRI data show that our proposed method provides competitive Estimation quality of the PET Images compared to the state-of-the-art methods. Meanwhile, our method is highly efficient at test time on a new subject, e.g., taking 2 s to estimate an entire SPET Image, in contrast to 16 min for the state-of-the-art method. These results demonstrate the potential of our method in real clinical applications.

  • Multi-level canonical correlation analysis for standard-dose PET Image Estimation
    IEEE Transactions on Image Processing, 2016
    Co-Authors: Pei Zhang, Ehsan Adeli, Yan Wang, Feng Shi, D S Lalush, Weili Lin, Dinggang Shen
    Abstract:

    Positron emission tomography (PET) Images are widely used in many clinical applications, such as tumor detection and brain disorder diagnosis. To obtain PET Images of diagnostic quality, a sufficient amount of radioactive tracer has to be injected into a living body, which inevitably increases the risk of radiation exposure. On the other hand, if the tracer dose is considerably reduced, the quality of the resulting Images is significantly degraded. It is therefore of great interest to estimate a standard-dose PET (S-PET) Image from a low-dose one, reducing the risk of radiation exposure while preserving Image quality. This may be achieved by mapping both S-PET and low-dose PET data into a common space and then performing patch-based sparse representation. However, a one-size-fits-all common space built from all training patches is unlikely to be optimal for each target S-PET patch, which limits the Estimation accuracy. In this paper, we propose a data-driven multi-level canonical correlation analysis scheme to solve this problem. In particular, the subset of training data that is most useful in estimating a target S-PET patch is identified at each level, and then used in the next level to update the common space and improve the Estimation. In addition, we use multi-modal magnetic resonance Images to further improve the Estimation with complementary information. Validations on phantom and real human brain data sets show that our method effectively estimates S-PET Images and preserves critical clinical quantification measures, such as the standard uptake value.