Brain Image - Explore the Science & Experts | ideXlab


Brain Image

The experts below are selected from a list of 80,382 experts worldwide, ranked by the ideXlab platform.

Arno Klein – 1st expert on this subject based on the ideXlab platform

  • A reproducible evaluation of ANTs similarity metric performance in Brain Image registration
    NeuroImage, 2011
    Co-Authors: Brian B Avants, Nicholas J. Tustison, Gang Song, Philip A. Cook, Arno Klein, James C. Gee

    Abstract:

    The United States National Institutes of Health (NIH) commit significant support to open-source data and software resources in order to foment reproducibility in the biomedical imaging sciences. Here, we report and evaluate a recent product of this commitment: Advanced Neuroimaging Tools (ANTs), which is approaching its 2.0 release. The ANTs open source software library consists of a suite of state-of-the-art image registration, segmentation and template building tools for quantitative morphometric analysis. In this work, we use ANTs to quantify, for the first time, the impact of similarity metrics on the affine and deformable components of a template-based normalization study. We detail the ANTs implementation of three similarity metrics: squared intensity difference, a new and faster cross-correlation, and voxel-wise mutual information. We then use two-fold cross-validation to compare their performance on openly available, manually labeled, T1-weighted MRI brain image data of 40 subjects (UCLA’s LPBA40 dataset). We report evaluation results on cortical and whole brain labels for both the affine and deformable components of the registration. Results indicate that the best ANTs methods are competitive with existing brain extraction results (Jaccard = 0.958) and cortical labeling approaches. Mutual information affine mapping combined with cross-correlation diffeomorphic mapping gave the best cortical labeling results (Jaccard = 0.669 ± 0.022). Furthermore, our two-fold cross-validation allows us to quantify the similarity of templates derived from different subgroups. Our open code, data and evaluation scripts set performance benchmark parameters for this state-of-the-art toolkit. This is the first study to use a consistent transformation framework to provide a reproducible evaluation of the isolated effect of the similarity metric on optimal template construction and brain labeling.
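The three similarity metrics the abstract names (squared intensity difference, cross-correlation, and voxel-wise mutual information) can be sketched for a pair of image arrays as follows. This is an illustrative NumPy version only, not the optimized ITK/ANTs implementation the paper evaluates:

```python
import numpy as np

def ssd(a, b):
    """Squared intensity difference (sum of squared differences): lower is more similar."""
    return float(np.sum((a - b) ** 2))

def cross_correlation(a, b):
    """Normalized cross-correlation: 1.0 for images identical up to linear intensity scaling."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0

def mutual_information(a, b, bins=32):
    """Mutual information (in nats), estimated from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal over a's intensities
    py = pxy.sum(axis=0, keepdims=True)  # marginal over b's intensities
    nz = pxy > 0                         # avoid log(0) in empty histogram cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In registration, one of these is plugged into an optimizer as the cost (or its negative) over the transform parameters; the paper's point is that which metric you choose measurably changes the quality of the resulting labels.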

  • Evaluation of volume-based and surface-based brain image registration methods
    NeuroImage, 2010
    Co-Authors: Arno Klein, Brian B Avants, Satrajit S Ghosh, Bruce Fischl, Babak A Ardekani, John J Mann, Ramin V Parsey

    Abstract:

    Establishing correspondences across brains for the purposes of comparison and group analysis is almost universally done by registering images to one another either directly or via a template. However, there are many registration algorithms to choose from. A recent evaluation of fully automated nonlinear deformation methods applied to brain image registration was restricted to volume-based methods. The present study is the first that directly compares some of the most accurate of these volume registration methods with surface registration methods, as well as the first study to compare registrations of whole-head and brain-only (de-skulled) images. We used permutation tests to compare the overlap or Hausdorff distance performance for more than 16,000 registrations between 80 manually labeled brain images. We compared every combination of volume-based and surface-based labels, registration, and evaluation. Our primary findings are the following: (1) de-skulling aids volume registration methods; (2) custom-made optimal average templates improve registration over direct pairwise registration; and (3) resampling volume labels on surfaces or converting surface labels to volumes introduces distortions that preclude a fair comparison between the highest-ranking volume and surface registration methods using present resampling methods. From the results of this study, we recommend constructing a custom template from a limited sample drawn from the same or a similar representative population, using the same algorithm used for registering brains to the template.
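The two evaluation criteria this study uses, label overlap and Hausdorff distance, can be illustrated with small helpers. A minimal sketch, assuming boolean label masks and arrays of voxel coordinates (the study itself scores many anatomical labels per brain and aggregates over thousands of registrations):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard overlap |A ∩ B| / |A ∪ B| between two boolean label masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, D) coordinate arrays:
    the largest distance from any point in one set to its nearest point in the other."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

The brute-force pairwise distance matrix here is fine for label boundaries of modest size; for full-resolution 3-D surfaces a KD-tree-based nearest-neighbor query would be the practical choice.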

Brian B Avants – 2nd expert on this subject based on the ideXlab platform

  • A reproducible evaluation of ANTs similarity metric performance in Brain Image registration
    NeuroImage, 2011
    Co-Authors: Brian B Avants, Nicholas J. Tustison, Gang Song, Philip A. Cook, Arno Klein, James C. Gee

    Abstract:

    The United States National Institutes of Health (NIH) commit significant support to open-source data and software resources in order to foment reproducibility in the biomedical imaging sciences. Here, we report and evaluate a recent product of this commitment: Advanced Neuroimaging Tools (ANTs), which is approaching its 2.0 release. The ANTs open source software library consists of a suite of state-of-the-art image registration, segmentation and template building tools for quantitative morphometric analysis. In this work, we use ANTs to quantify, for the first time, the impact of similarity metrics on the affine and deformable components of a template-based normalization study. We detail the ANTs implementation of three similarity metrics: squared intensity difference, a new and faster cross-correlation, and voxel-wise mutual information. We then use two-fold cross-validation to compare their performance on openly available, manually labeled, T1-weighted MRI brain image data of 40 subjects (UCLA’s LPBA40 dataset). We report evaluation results on cortical and whole brain labels for both the affine and deformable components of the registration. Results indicate that the best ANTs methods are competitive with existing brain extraction results (Jaccard = 0.958) and cortical labeling approaches. Mutual information affine mapping combined with cross-correlation diffeomorphic mapping gave the best cortical labeling results (Jaccard = 0.669 ± 0.022). Furthermore, our two-fold cross-validation allows us to quantify the similarity of templates derived from different subgroups. Our open code, data and evaluation scripts set performance benchmark parameters for this state-of-the-art toolkit. This is the first study to use a consistent transformation framework to provide a reproducible evaluation of the isolated effect of the similarity metric on optimal template construction and brain labeling.

  • Evaluation of volume-based and surface-based brain image registration methods
    NeuroImage, 2010
    Co-Authors: Arno Klein, Brian B Avants, Satrajit S Ghosh, Bruce Fischl, Babak A Ardekani, John J Mann, Ramin V Parsey

    Abstract:

    Establishing correspondences across brains for the purposes of comparison and group analysis is almost universally done by registering images to one another either directly or via a template. However, there are many registration algorithms to choose from. A recent evaluation of fully automated nonlinear deformation methods applied to brain image registration was restricted to volume-based methods. The present study is the first that directly compares some of the most accurate of these volume registration methods with surface registration methods, as well as the first study to compare registrations of whole-head and brain-only (de-skulled) images. We used permutation tests to compare the overlap or Hausdorff distance performance for more than 16,000 registrations between 80 manually labeled brain images. We compared every combination of volume-based and surface-based labels, registration, and evaluation. Our primary findings are the following: (1) de-skulling aids volume registration methods; (2) custom-made optimal average templates improve registration over direct pairwise registration; and (3) resampling volume labels on surfaces or converting surface labels to volumes introduces distortions that preclude a fair comparison between the highest-ranking volume and surface registration methods using present resampling methods. From the results of this study, we recommend constructing a custom template from a limited sample drawn from the same or a similar representative population, using the same algorithm used for registering brains to the template.

N. Kehtarnavaz – 3rd expert on this subject based on the ideXlab platform

  • Spatial Mutual Information as Similarity Measure for 3-D Brain Image Registration
    IEEE Journal of Translational Engineering in Health and Medicine, 2014
    Co-Authors: Qolamreza R. Razlighi, N. Kehtarnavaz

    Abstract:

    Information theoretic-based similarity measures, in particular mutual information, are widely used for intermodal/intersubject 3-D brain image registration. However, conventional mutual information does not consider spatial dependency between adjacent voxels in images, thus reducing its efficacy as a similarity measure in image registration. This paper first presents a review of the existing attempts to incorporate spatial dependency into the computation of mutual information (MI). Then, a recently introduced spatially dependent similarity measure, named spatial MI, is extended to 3-D brain image registration. This extension also eliminates its artifact for translational misregistration. Finally, the effectiveness of the proposed 3-D spatial MI as a similarity measure is compared with three existing MI measures by applying controlled levels of noise degradation to 3-D simulated brain images.
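The limitation this abstract describes is easy to demonstrate: conventional MI is computed from the joint intensity histogram alone, so applying the same voxel permutation to both images, which destroys all spatial structure, leaves the score unchanged. A minimal NumPy sketch of that invariance (this is the conventional estimator, not the paper's spatial MI):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Conventional MI (in nats) from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Shuffle both images with the same permutation: the voxel-pair multiset,
# and hence the joint histogram, is identical, so conventional MI cannot
# tell scrambled images from intact ones -- the motivation for spatial MI.
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = a + 0.1 * rng.standard_normal((32, 32))
perm = rng.permutation(a.size)
mi_original = mutual_information(a, b)
mi_shuffled = mutual_information(a.ravel()[perm], b.ravel()[perm])
```

Spatial MI variants address this by making the histogram (or the random variables behind it) depend on neighborhoods of voxels rather than single voxels.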

  • Evaluating similarity measures for Brain Image registration
    Journal of Visual Communication and Image Representation, 2013
    Co-Authors: Qolamreza R. Razlighi, N. Kehtarnavaz, Siamak Yousefi

    Abstract:

    Evaluation of similarity measures for image registration is a challenging problem due to its complex interaction with the underlying optimization, regularization, image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for brain image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded images and are also more successful in performing intermodal image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D brain image registration, whose robustness is shown to be much higher than that of the existing measures. Consequently, it tolerates greater image degradation and provides more consistent outcomes for intermodal brain image registration.
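For context on normalization, a widely used normalized variant of MI is Studholme's normalized mutual information, (H(A) + H(B)) / H(A, B). Note this standard measure is only an illustrative stand-in here; the paper's normalized *spatial* MI is a different construction whose definition is not reproduced in the abstract. The kind of degradation test the abstract describes can be sketched by scoring an image against increasingly noisy copies of itself:

```python
import numpy as np

def nmi(a, b, bins=32):
    """Studholme's normalized mutual information (H(A) + H(B)) / H(A, B).
    Ranges from about 1 (independent) to 2 (identical), estimated from histograms."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    entropy = lambda p: float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy)

# Score the image against progressively degraded copies: a well-behaved
# similarity measure should fall off gracefully as noise increases.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
scores = [nmi(a, a + s * rng.standard_normal(a.shape)) for s in (0.0, 0.1, 0.3)]
```

A "robust" measure, in the paper's sense, is one whose ranking of candidate alignments survives this kind of degradation.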