The experts below are selected from a list of 80,382 experts worldwide ranked by the ideXlab platform.
Arno Klein - One of the best experts on this subject based on the ideXlab platform.
-
A reproducible evaluation of ANTs similarity metric performance in Brain Image registration
NeuroImage, 2011. Co-Authors: Brian B Avants, Philip A. Cook, Arno Klein, Gang Song, Nicholas J. Tustison, James C. Gee. Abstract: The United States National Institutes of Health (NIH) commit significant support to open-source data and software resources in order to foment reproducibility in the biomedical imaging sciences. Here, we report and evaluate a recent product of this commitment: Advanced Neuroimaging Tools (ANTs), which is approaching its 2.0 release. The ANTs open source software library consists of a suite of state-of-the-art Image registration, segmentation and template building tools for quantitative morphometric analysis. In this work, we use ANTs to quantify, for the first time, the impact of similarity metrics on the affine and deformable components of a template-based normalization study. We detail the ANTs implementation of three similarity metrics: squared intensity difference, a new and faster cross-correlation, and voxel-wise mutual information. We then use two-fold cross-validation to compare their performance on openly available, manually labeled, T1-weighted MRI Brain Image data of 40 subjects (UCLA's LPBA40 dataset). We report evaluation results on cortical and whole Brain labels for both the affine and deformable components of the registration. Results indicate that the best ANTs methods are competitive with existing Brain extraction results (Jaccard = 0.958) and cortical labeling approaches. Mutual information affine mapping combined with cross-correlation diffeomorphic mapping gave the best cortical labeling results (Jaccard = 0.669 ± 0.022). Furthermore, our two-fold cross-validation allows us to quantify the similarity of templates derived from different subgroups. Our open code, data and evaluation scripts set performance benchmark parameters for this state-of-the-art toolkit. This is the first study to use a consistent transformation framework to provide a reproducible evaluation of the isolated effect of the similarity metric on optimal template construction and Brain labeling. © 2010 Elsevier Inc.
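A minimal NumPy sketch of the three similarity metrics named above (squared intensity difference, cross-correlation, mutual information) and of the Jaccard overlap used for evaluation; the function names, the global form of the cross-correlation, and the histogram bin count are illustrative assumptions rather than the ANTs implementation.

```python
# Hedged NumPy sketch of the three similarity metrics and the Jaccard overlap.
# Not the ANTs implementation: ANTs' CC metric is computed over local
# neighborhoods, while the cross-correlation here is global for brevity.
import numpy as np

def squared_intensity_difference(fixed, moving):
    """Mean squared intensity difference between two volumes."""
    return float(np.mean((fixed.astype(float) - moving.astype(float)) ** 2))

def cross_correlation(fixed, moving):
    """Global normalized cross-correlation (simplified stand-in for ANTs' CC)."""
    f = fixed.ravel().astype(float) - fixed.mean()
    m = moving.ravel().astype(float) - moving.mean()
    return float(np.dot(f, m) / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-12))

def mutual_information(fixed, moving, bins=32):
    """Histogram-based mutual information between voxel intensities."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def jaccard(labels_a, labels_b, label):
    """Jaccard overlap for one label, the evaluation score reported above."""
    a, b = labels_a == label, labels_b == label
    return float(np.logical_and(a, b).sum() / np.logical_or(a, b).sum())
```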
-
Evaluation of volume-based and surface-based Brain Image registration methods
NeuroImage, 2010. Co-Authors: Arno Klein, Brian B Avants, Satrajit S Ghosh, Bruce Fischl, Babak A Ardekani, John J Mann, Ramin V Parsey. Abstract: Establishing correspondences across Brains for the purposes of comparison and group analysis is almost universally done by registering Images to one another either directly or via a template. However, there are many registration algorithms to choose from. A recent evaluation of fully automated nonlinear deformation methods applied to Brain Image registration was restricted to volume-based methods. The present study is the first that directly compares some of the most accurate of these volume registration methods with surface registration methods, as well as the first study to compare registrations of whole-head and Brain-only (de-skulled) Images. We used permutation tests to compare the overlap or Hausdorff distance performance for more than 16,000 registrations between 80 manually labeled Brain Images. We compared every combination of volume-based and surface-based labels, registration, and evaluation. Our primary findings are the following: (1) de-skulling aids volume registration methods; (2) custom-made optimal average templates improve registration over direct pairwise registration; and (3) resampling volume labels on surfaces or converting surface labels to volumes introduces distortions that preclude a fair comparison between the highest-ranking volume and surface registration methods using present resampling methods. From the results of this study, we recommend constructing a custom template from a limited sample drawn from the same or a similar representative population, using the same algorithm used for registering Brains to the template.
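The abstract does not spell out the permutation scheme, so the sketch below only illustrates the general idea: a paired sign-flip permutation test on per-registration overlap scores from two methods evaluated on the same image pairs; the score arrays, permutation count, and seed are assumptions.

```python
# Hedged sketch of a paired sign-flip permutation test comparing overlap
# scores from two registration methods on the same image pairs. The study's
# exact permutation design may differ; this is only the generic idea.
import numpy as np

def paired_permutation_test(scores_a, scores_b, n_perm=10000, seed=0):
    """Two-sided sign-flip permutation test on paired overlap scores."""
    rng = np.random.default_rng(seed)
    d = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    observed = d.mean()
    count = 0
    for _ in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=d.size)  # random sign flips
        if abs((d * flips).mean()) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)  # p-value with add-one correction
```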
Brian B Avants - One of the best experts on this subject based on the ideXlab platform.
-
A reproducible evaluation of ANTs similarity metric performance in Brain Image registration
NeuroImage, 2011. Co-Authors: Brian B Avants, Philip A. Cook, Arno Klein, Gang Song, Nicholas J. Tustison, James C. Gee. Abstract: The United States National Institutes of Health (NIH) commit significant support to open-source data and software resources in order to foment reproducibility in the biomedical imaging sciences. Here, we report and evaluate a recent product of this commitment: Advanced Neuroimaging Tools (ANTs), which is approaching its 2.0 release. The ANTs open source software library consists of a suite of state-of-the-art Image registration, segmentation and template building tools for quantitative morphometric analysis. In this work, we use ANTs to quantify, for the first time, the impact of similarity metrics on the affine and deformable components of a template-based normalization study. We detail the ANTs implementation of three similarity metrics: squared intensity difference, a new and faster cross-correlation, and voxel-wise mutual information. We then use two-fold cross-validation to compare their performance on openly available, manually labeled, T1-weighted MRI Brain Image data of 40 subjects (UCLA's LPBA40 dataset). We report evaluation results on cortical and whole Brain labels for both the affine and deformable components of the registration. Results indicate that the best ANTs methods are competitive with existing Brain extraction results (Jaccard = 0.958) and cortical labeling approaches. Mutual information affine mapping combined with cross-correlation diffeomorphic mapping gave the best cortical labeling results (Jaccard = 0.669 ± 0.022). Furthermore, our two-fold cross-validation allows us to quantify the similarity of templates derived from different subgroups. Our open code, data and evaluation scripts set performance benchmark parameters for this state-of-the-art toolkit. This is the first study to use a consistent transformation framework to provide a reproducible evaluation of the isolated effect of the similarity metric on optimal template construction and Brain labeling. © 2010 Elsevier Inc.
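As a complement to the metric formulas sketched earlier, a hedged ANTsPy sketch of the metric combination the abstract reports as best (mutual-information affine followed by cross-correlation deformable mapping). File paths are placeholders, and the argument names and values (type_of_transform, aff_metric, syn_metric, interpolator) should be checked against the installed ANTsPy release.

```python
# Hedged ANTsPy sketch: MI-driven affine stage plus CC-driven SyN stage,
# then label propagation with nearest-neighbour interpolation.
# Paths are placeholders; verify parameter names against your ANTsPy version.
import ants

fixed = ants.image_read("template_T1.nii.gz")      # placeholder path
moving = ants.image_read("subject_T1.nii.gz")      # placeholder path
labels = ants.image_read("subject_labels.nii.gz")  # placeholder path

reg = ants.registration(
    fixed=fixed,
    moving=moving,
    type_of_transform="SyN",  # affine initialization followed by SyN
    aff_metric="mattes",      # mutual information for the affine stage
    syn_metric="CC",          # cross-correlation for the deformable stage
)

# Propagate the subject's labels with the forward transforms, keeping
# label values discrete via nearest-neighbour interpolation.
warped_labels = ants.apply_transforms(
    fixed=fixed,
    moving=labels,
    transformlist=reg["fwdtransforms"],
    interpolator="nearestNeighbor",
)
```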
-
Evaluation of volume-based and surface-based Brain Image registration methods
NeuroImage, 2010. Co-Authors: Arno Klein, Brian B Avants, Satrajit S Ghosh, Bruce Fischl, Babak A Ardekani, John J Mann, Ramin V Parsey. Abstract: Establishing correspondences across Brains for the purposes of comparison and group analysis is almost universally done by registering Images to one another either directly or via a template. However, there are many registration algorithms to choose from. A recent evaluation of fully automated nonlinear deformation methods applied to Brain Image registration was restricted to volume-based methods. The present study is the first that directly compares some of the most accurate of these volume registration methods with surface registration methods, as well as the first study to compare registrations of whole-head and Brain-only (de-skulled) Images. We used permutation tests to compare the overlap or Hausdorff distance performance for more than 16,000 registrations between 80 manually labeled Brain Images. We compared every combination of volume-based and surface-based labels, registration, and evaluation. Our primary findings are the following: (1) de-skulling aids volume registration methods; (2) custom-made optimal average templates improve registration over direct pairwise registration; and (3) resampling volume labels on surfaces or converting surface labels to volumes introduces distortions that preclude a fair comparison between the highest-ranking volume and surface registration methods using present resampling methods. From the results of this study, we recommend constructing a custom template from a limited sample drawn from the same or a similar representative population, using the same algorithm used for registering Brains to the template.
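The other evaluation criterion mentioned above is the Hausdorff distance; a hedged SciPy sketch follows, computing a symmetric Hausdorff distance between the boundaries of two label masks in voxel index space (scaling by voxel size is omitted, and the mask names are placeholders).

```python
# Hedged sketch of a symmetric Hausdorff distance between two 3-D label
# masks. Boundaries are extracted by erosion; distances are in voxel units.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance between the boundaries of two masks."""
    def boundary_points(mask):
        mask = mask.astype(bool)
        boundary = mask & ~binary_erosion(mask)  # voxels on the mask surface
        return np.argwhere(boundary)
    pts_a, pts_b = boundary_points(mask_a), boundary_points(mask_b)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)
```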
N. Kehtarnavaz - One of the best experts on this subject based on the ideXlab platform.
-
Spatial Mutual Information as Similarity Measure for 3-D Brain Image Registration
IEEE Journal of Translational Engineering in Health and Medicine, 2014. Co-Authors: Qolamreza R. Razlighi, N. Kehtarnavaz. Abstract: Information-theoretic similarity measures, in particular mutual information, are widely used for intermodal/intersubject 3-D Brain Image registration. However, conventional mutual information does not consider spatial dependency between adjacent voxels in Images, thus reducing its efficacy as a similarity measure in Image registration. This paper first presents a review of the existing attempts to incorporate spatial dependency into the computation of mutual information (MI). Then, a recently introduced spatially dependent similarity measure, named spatial MI, is extended to 3-D Brain Image registration. This extension also eliminates its artifact for translational misregistration. Finally, the effectiveness of the proposed 3-D spatial MI as a similarity measure is compared with that of three existing MI measures by applying controlled levels of noise degradation to 3-D simulated Brain Images.
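The limitation this paper starts from, that conventional MI ignores the spatial arrangement of voxels, can be demonstrated in a few lines: applying the same spatial shuffle to both volumes leaves histogram-based MI unchanged. The paper's spatial MI itself is not reproduced here; the synthetic volumes and bin count are assumptions.

```python
# Demonstration of why a spatially aware measure is needed: histogram-based
# MI depends only on the joint intensity distribution, so a shared spatial
# shuffle of both volumes does not change it. Synthetic data for illustration.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
fixed = rng.normal(size=(32, 32, 32))
moving = fixed + 0.1 * rng.normal(size=fixed.shape)

perm = rng.permutation(fixed.size)  # one shared spatial shuffle
shuffled_fixed = fixed.ravel()[perm].reshape(fixed.shape)
shuffled_moving = moving.ravel()[perm].reshape(moving.shape)

# The two printed values are identical: conventional MI ignores spatial structure.
print(mutual_information(fixed, moving))
print(mutual_information(shuffled_fixed, shuffled_moving))
```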
-
Evaluating similarity measures for Brain Image registration
Journal of Visual Communication and Image Representation, 2013. Co-Authors: Qolamreza R. Razlighi, N. Kehtarnavaz, Siamak Yousefi. Abstract: Evaluation of similarity measures for Image registration is a challenging problem due to their complex interaction with the underlying optimization, regularization, Image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for Brain Image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded Images and are also more successful in performing intermodal Image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D Brain Image registration whose robustness is shown to be much higher than that of existing measures. Consequently, it tolerates greater Image degradation and provides more consistent outcomes for intermodal Brain Image registration.
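The abstract introduces robustness as a single performance metric but does not give its formula, so the sketch below only illustrates the surrounding protocol of probing a similarity measure under controlled degradation; the noise model, degradation levels, and the use of normalized cross-correlation are assumptions.

```python
# Hedged sketch of probing a similarity measure under controlled degradation.
# This is not the paper's robustness metric, only the evaluation protocol
# it is built around: score an image against progressively noisier copies.
import numpy as np

def normalized_cross_correlation(a, b):
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32, 32))  # stand-in for a brain volume

for sigma in (0.0, 0.5, 1.0, 2.0):     # controlled degradation levels
    degraded = image + sigma * rng.normal(size=image.shape)
    ncc = normalized_cross_correlation(image, degraded)
    print(f"noise sigma={sigma:.1f}  NCC={ncc:.3f}")
```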
Qolamreza R. Razlighi - One of the best experts on this subject based on the ideXlab platform.
-
Spatial Mutual Information as Similarity Measure for 3-D Brain Image Registration
IEEE Journal of Translational Engineering in Health and Medicine, 2014. Co-Authors: Qolamreza R. Razlighi, N. Kehtarnavaz. Abstract: Information-theoretic similarity measures, in particular mutual information, are widely used for intermodal/intersubject 3-D Brain Image registration. However, conventional mutual information does not consider spatial dependency between adjacent voxels in Images, thus reducing its efficacy as a similarity measure in Image registration. This paper first presents a review of the existing attempts to incorporate spatial dependency into the computation of mutual information (MI). Then, a recently introduced spatially dependent similarity measure, named spatial MI, is extended to 3-D Brain Image registration. This extension also eliminates its artifact for translational misregistration. Finally, the effectiveness of the proposed 3-D spatial MI as a similarity measure is compared with that of three existing MI measures by applying controlled levels of noise degradation to 3-D simulated Brain Images.
-
Evaluating similarity measures for Brain Image registration
Journal of Visual Communication and Image Representation, 2013. Co-Authors: Qolamreza R. Razlighi, N. Kehtarnavaz, Siamak Yousefi. Abstract: Evaluation of similarity measures for Image registration is a challenging problem due to their complex interaction with the underlying optimization, regularization, Image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for Brain Image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded Images and are also more successful in performing intermodal Image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D Brain Image registration whose robustness is shown to be much higher than that of existing measures. Consequently, it tolerates greater Image degradation and provides more consistent outcomes for intermodal Brain Image registration.
Shuihua Wang - One of the best experts on this subject based on the ideXlab platform.
-
Magnetic resonance Brain Image classification based on weighted-type fractional Fourier transform and nonparallel support vector machine
International Journal of Imaging Systems and Technology, 2015. Co-Authors: Yudong Zhang, Shuihua Wang, Shufang Chen, Jianfei Yang, Preetha Phillips. Abstract: Classifying Brain Images as pathological or healthy is a key preclinical step for patients. Manual classification is tiresome, expensive, time-consuming, and irreproducible. In this study, we aimed to present an automatic computer-aided system for Brain-Image classification. We used 90 T2-weighted Images obtained by magnetic resonance imaging. First, we used the weighted-type fractional Fourier transform (WFRFT) to extract spectra from each magnetic resonance Image. Second, we used principal component analysis (PCA) to reduce the spectral features to only 26. Third, those reduced spectral features of different samples were combined and fed into a support vector machine (SVM) and its two variants: the generalized eigenvalue proximal SVM and the twin SVM. The 5 × 5-fold cross-validation results showed that the proposed "WFRFT+PCA+generalized eigenvalue proximal SVM" yielded a sensitivity of 99.53%, specificity of 92.00%, precision of 99.53%, and accuracy of 99.11%, which are comparable with those of the proposed "WFRFT+PCA+twin SVM" and better than those of the proposed "WFRFT+PCA+SVM." In addition, all three proposed methods were superior to eight state-of-the-art algorithms. Thus, WFRFT is effective, and the proposed methods can be used in practice. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 317-327, 2015.
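A hedged scikit-learn sketch of the pipeline shape described above (spectral features, PCA to 26 components, SVM classification with stratified cross-validation); no Python implementation of the weighted-type fractional Fourier transform is assumed, so an FFT log-magnitude feature stands in for the WFRFT spectrum, and a standard SVC stands in for the generalized eigenvalue proximal and twin SVM variants. The synthetic data are placeholders.

```python
# Hedged sketch: placeholder spectral features -> PCA (26 components) -> SVM,
# scored with 5-fold stratified cross-validation. Not the paper's WFRFT or
# nonparallel SVM variants; those are replaced by standard stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def spectral_features(volume):
    """Placeholder for the WFRFT spectrum: log-magnitude of the 3-D FFT."""
    return np.log1p(np.abs(np.fft.fftn(volume))).ravel()

rng = np.random.default_rng(0)
volumes = rng.normal(size=(90, 16, 16, 16))  # stand-in for 90 T2-weighted images
labels = rng.integers(0, 2, size=90)         # 0 = healthy, 1 = pathological

X = np.stack([spectral_features(v) for v in volumes])
clf = make_pipeline(StandardScaler(), PCA(n_components=26), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=StratifiedKFold(n_splits=5))
print(scores.mean())
```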
-
A hybrid method for MRI Brain Image classification
Expert Systems With Applications, 2011. Co-Authors: Yudong Zhang, Zhengchao Dong, Lenan Wu, Shuihua Wang. Abstract: Automated and accurate classification of MR Brain Images is of importance for the analysis and interpretation of these Images, and many methods have been proposed. In this paper, we present a neural network (NN) based method to classify a given MR Brain Image as normal or abnormal. The method first employs the wavelet transform to extract features from Images, and then applies principal component analysis (PCA) to reduce the dimensionality of the features. The reduced features are sent to a back-propagation (BP) NN, in which the scaled conjugate gradient (SCG) method is adopted to find the optimal weights of the NN. We applied this method to 66 Images (18 normal, 48 abnormal). The classification accuracy on both training and test Images is 100%, and the computation time per Image is only 0.0451 s.
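A hedged sketch of the stated pipeline (wavelet features, PCA, neural-network classifier) using PyWavelets and scikit-learn; scikit-learn's MLPClassifier does not offer scaled conjugate gradient, so its default solver stands in for SCG, and the wavelet choice, decomposition level, and synthetic data are assumptions.

```python
# Hedged sketch: 2-D wavelet approximation coefficients -> PCA -> MLP classifier.
# The SCG training rule from the paper is not available in scikit-learn;
# the default solver is used as a stand-in. Synthetic data for illustration.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def wavelet_features(image_2d, wavelet="haar", level=3):
    """Approximation coefficients of a 2-D discrete wavelet decomposition."""
    coeffs = pywt.wavedec2(image_2d, wavelet, level=level)
    return coeffs[0].ravel()

rng = np.random.default_rng(0)
images = rng.normal(size=(66, 64, 64))  # stand-in for 66 MR slices
labels = rng.integers(0, 2, size=66)    # 0 = normal, 1 = abnormal

X = np.stack([wavelet_features(img) for img in images])
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000))
clf.fit(X, labels)
print(clf.score(X, labels))
```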
-
Magnetic resonance Brain Image classification by an improved artificial bee colony algorithm
Progress in Electromagnetics Research, 2011. Co-Authors: Yangjun Zhang, Lingfeng Wu, Shuihua Wang. Abstract: Automated and accurate classification of magnetic resonance (MR) Brain Images is a hot topic in the field of neuroimaging. Recently, many different and innovative methods have been proposed to improve upon this technology. In this study, we presented a hybrid method based on a forward neural network (FNN) to classify an MR Brain Image as normal or abnormal. The method first employed a discrete wavelet transform to extract features from Images, and then applied principal component analysis (PCA) to reduce the size of the features. The reduced features were sent to an FNN, whose parameters were optimized via an improved artificial bee colony (ABC) algorithm based on both fitness scaling and chaos theory. We referred to the improved algorithm as scaled chaotic artificial bee colony (SCABC). Moreover, K-fold stratified cross-validation was employed to avoid overfitting. In the experiment, we applied the proposed method to a data set of T2-weighted MRI Images consisting of 66 Brain Images (18 normal and 48 abnormal). The proposed SCABC was compared with traditional training methods such as BP, momentum BP, the genetic algorithm, the elite genetic algorithm with migration, simulated annealing, and ABC. Each algorithm was run 20 times to reduce randomness. The results show that SCABC obtains the lowest mean MSE and 100% classification accuracy. © 2011 EMW Publishing.
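The abstract names two ingredients of the SCABC variant, fitness scaling and chaos, without giving the full update rules, so the sketch below only shows generic versions of those ingredients: a logistic-map chaotic sequence and rank-based fitness scaling of a candidate population. All constants are assumptions, and this is not the paper's algorithm.

```python
# Hedged sketch of two generic ingredients named in the abstract.
# Not the SCABC update rules: only a standard chaotic generator and a
# standard rank-based fitness scaling, shown for illustration.
import numpy as np

def logistic_map_sequence(n, x0=0.7, r=4.0):
    """Chaotic sequence from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1.0 - x)
        xs[k] = x
    return xs

def rank_scaled_fitness(raw_fitness):
    """Rank-based fitness scaling (assumes higher raw fitness is better)."""
    ranks = np.argsort(np.argsort(raw_fitness))  # 0 = worst, n-1 = best
    return (ranks + 1) / ranks.size              # scaled to (0, 1]

raw = np.array([0.12, 3.4, 0.8, 2.1])            # example raw fitness values
print(rank_scaled_fitness(raw))
print(logistic_map_sequence(5))
```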
-
A novel method for magnetic resonance Brain Image classification based on adaptive chaotic PSO
Progress in Electromagnetics Research, 2010. Co-Authors: Yudong Zhang, Shuihua Wang, Lenan Wu. Abstract: Automated and accurate classification of magnetic resonance (MR) Brain Images is an integral component of the analysis and interpretation of neuroimaging. Many different and innovative methods have been proposed to improve upon this technology. In this study, we presented a forward neural network (FNN) based method to classify a given MR Brain Image as normal or abnormal. The method first employs a wavelet transform to extract features from Images, and then applies principal component analysis (PCA) to reduce the dimensionality of the features. The reduced features are sent to an FNN, whose parameters are optimized via adaptive chaotic particle swarm optimization (ACPSO). K-fold stratified cross-validation was used to enhance generalization. We applied the proposed method to 160 Images (20 normal, 140 abnormal) and found that the classification accuracy is as high as 98.75%, while the computation time per Image is only 0.0452 s.
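A minimal plain particle swarm optimization sketch, in the spirit of the FNN parameter optimization described above; the adaptive and chaotic modifications of ACPSO are not reproduced, and the objective, bounds, and hyperparameters are assumptions for illustration.

```python
# Hedged sketch of a basic global-best PSO for minimizing an objective over
# a weight vector. The ACPSO modifications from the paper are not included.
import numpy as np

def pso(objective, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over R^dim with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n_particles, dim))  # assumed bounds
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()
    for _ in range(n_iter):
        r1 = rng.random(size=pos.shape)
        r2 = rng.random(size=pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest, gbest_val = pos[np.argmin(vals)].copy(), vals.min()
    return gbest, gbest_val

# Toy usage: a sphere function stands in for the network training error.
best, best_val = pso(lambda x: float(np.sum(x ** 2)), dim=5)
print(best_val)
```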