Spatial Normalization

The Experts below are selected from a list of 9,249 Experts worldwide, ranked by the ideXlab platform.

Peter T. Fox - One of the best experts on this subject based on the ideXlab platform.

  • Anatomical Global Spatial Normalization
    Neuroinformatics, 2010
    Co-Authors: Jack L. Lancaster, Peter T. Fox, Peter Kochunov, Matthew D. Cykowski, David Reese Mckay, William E. Rogers, Arthur W. Toga, Karl Zilles, Katrin Amunts, John C. Mazziotta
    Abstract:

    Anatomical global Spatial Normalization (aGSN) is presented as a method to scale high-resolution brain images to control for variability in brain size without altering the mean size of other brain structures. Two types of mean-preserving scaling methods were investigated, “shape preserving” and “shape standardizing”. aGSN was tested by examining 56 brain structures from an adult brain atlas of 40 individuals (LPBA40) before and after Normalization, with detailed analyses of cerebral hemispheres, all gyri collectively, cerebellum, brainstem, and left and right caudate, putamen, and hippocampus. Mean sizes of brain structures as measured by volume, distance, and area were preserved and variance reduced for both types of scale factors. An interesting finding was that scale factors derived from each of the ten brain structures were also mean-preserving. However, variance was best reduced using whole brain hemispheres as the reference structure, and this reduction was related to its high average correlation with other brain structures. The fractional reduction in variance of structure volumes was directly related to ρ², the square of the reference-to-structure correlation coefficient. The average reduction in variance of volumes by aGSN with whole brain hemispheres as the reference structure was approximately 32%. An analytical method was provided to directly convert between conventional and aGSN scale factors to support adaptation of aGSN to popular Spatial Normalization software packages.
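
    A minimal simulation can illustrate the ρ² relationship described above. The sketch below is not the authors' implementation and uses invented volumes rather than LPBA40 data: it applies a "shape standardizing" volume scale factor derived from a hypothetical reference structure and checks that the mean of a correlated structure volume is approximately preserved while its between-subject variance shrinks by roughly the squared reference-to-structure correlation.

    ```python
    # Illustrative simulation (not the authors' code, not LPBA40 data): apply a
    # "shape standardizing" volume scale factor derived from a reference
    # structure and check mean preservation and variance reduction (~ rho^2).
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects = 40
    rho = 0.8                                   # assumed reference-structure correlation

    # Hypothetical reference volumes (e.g., hemispheres) and a correlated
    # structure volume (e.g., putamen), in cm^3.
    ref = rng.normal(1200.0, 120.0, n_subjects)
    noise = rng.normal(0.0, np.sqrt(1.0 - rho ** 2), n_subjects)
    structure = 8.0 + 0.8 * (rho * (ref - 1200.0) / 120.0 + noise)

    # Scale every subject's reference volume to the group mean and apply the
    # same factor to the structure volume.
    scale = ref.mean() / ref
    structure_norm = structure * scale

    print("mean before/after:", structure.mean(), structure_norm.mean())
    print("variance before/after:", structure.var(), structure_norm.var())
    print("rho^2 (approximate predicted fractional reduction):",
          np.corrcoef(ref, structure)[0, 1] ** 2)
    ```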

  • Coordinate-based voxel-wise meta-analysis: dividends of Spatial Normalization. Report of a virtual workshop.
    Human brain mapping, 2005
    Co-Authors: Peter T. Fox, Angela R. Laird, Jack L. Lancaster
    Abstract:

    Spatial Normalization transforms a brain image from its natural form (“native space”) into a standardized form defined by a reference brain [Fox, 1995a]. The original motivation for introducing this technique was to allow the brain locations of task-induced functional activations to be reported in a “precise and unambiguous” manner, thereby “facilitating direct comparison of experimental results from different laboratories” [Fox et al., 1985]. The prospect of clear communication as a “dividend” from a community commitment to Spatial Normalization, however, proved largely unconvincing to the still-nascent brain mapping community of the middle 1980s. Improvement in the signal-to-noise ratio of functional brain maps that could be achieved by intersubject image averaging in standardized space [Fox et al., 1988; Friston et al., 1991] proved to be a very salient motivation, leading to widespread adoption of this data analysis standard. We estimate the human functional brain mapping (HFBM) literature reporting brain activations as x-y-z coordinates in standardized space to be no less than 2,500 articles (~10,000 experiments), with ~500 new articles (~2,000 experiments) published per year (Fig. 1). Fortunately, regardless of the motivation for adoption of this standard, the widespread use of Spatial standardization makes the HFBM literature fertile ground for quantitative meta-analysis methods based on Spatial concordance [Fox and Lancaster, 1996a,b; Fox et al., 1998]. In reference to the title of this article, voxel-based, function-location meta-analysis can be considered a dividend that the HFBM community is now receiving from its long-term investment in the development and promulgation of community standards for data analysis and, in particular, Spatial Normalization. Meta-analysis is defined most generally as the post-hoc combination of results from independently performed studies to estimate better a parameter of interest. The original and by far the most prevalent form of meta-analysis pools studies with nonsignificant effects to test for significance in the collective, using the increase in n to increase statistical power [Pearson, 1904]. Effect-size meta-analyses have come under criticism for a variety of misuses, but are growing steadily in power and acceptance [Fox et al., 1998]. In the HFBM community, fundamentally new forms of meta-analysis are emerging, in which statistically significant effects are pooled and contrasted to estimate better such parameters as the Spatial location, Spatial distribution, activation likelihood, co-occurrence patterns, and underlying cognitive operations for specific categories of task. In the first published meta-analysis in cognitive neuroimaging, coordinates from three prior reports were tabulated and plotted to guide interpretation of results in a primary (non-meta-analytic) study [Frith et al., 1991]. Shortly thereafter, “stand-alone” HFBM meta-analyses began to appear in the literature [Buckner and Petersen, 1996; Fox, 1995b; Paus, 1996; Picard and Strick, 1996; Tulving et al., 1994]. To date, more than 50 meta-analyses of coordinate-based HFBM studies have appeared in the peer-reviewed literature. Although most of these meta-analyses are semiquantitative and statistically informal, this is changing. The trend toward quantitative, statistically formal HFBM meta-analysis began with Paus [1996], who computed and interpreted means and standard deviations of the x-y-z addresses in a review of studies of the frontal eye fields. Fox et al. [1997, 2001] extended this initiative by correcting raw estimates of Spatial location and variance for sample size to create scalable models of location probabilities (functional volumes models; FVM) and suggesting uses of such models. Hum. Brain Mapping 25:1–5, 2005.
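
    As a concrete illustration of the Paus-style quantitative summary mentioned above (means and standard deviations of reported x-y-z addresses), the sketch below pools a few invented peak coordinates in a standardized space; it is a toy example, not any of the cited meta-analysis tools (e.g., FVM or activation-likelihood methods).

    ```python
    # Toy Paus-style summary of reported activation peaks: mean location and
    # per-axis spread of x-y-z coordinates in a standardized space. The
    # coordinates below are invented for illustration.
    import numpy as np

    peaks = np.array([          # one row per reported peak (x, y, z) in mm
        [30, -4, 50],
        [28, -6, 48],
        [32, -2, 52],
        [26, -8, 46],
    ], dtype=float)

    print("mean peak location (x, y, z):", peaks.mean(axis=0))
    print("per-axis standard deviation:", peaks.std(axis=0, ddof=1))
    ```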

  • Improvement in variability of the horizontal meridian of the primary visual area following high-resolution Spatial Normalization.
    Human brain mapping, 2003
    Co-Authors: Peter Kochunov, Jack L. Lancaster, M. Hasnain, Thomas J. Grabowski, Peter T. Fox
    Abstract:

    We investigated the decrease in intersubject functional variability in the horizontal meridian (HM) of the primary visual area (V1) before and after individual anatomical variability was significantly reduced using a high-resolution Spatial Normalization (HRSN) method. The analyzed dataset consisted of 10 normal, right-handed volunteers who had undergone both an O-15 PET study, which localized the retinotopic visual area V1, and a high-resolution anatomical MRI. Individual occipital lobes were manually segmented from the anatomical images and transformed into a common space using an in-house high-resolution regional Spatial Normalization method called OSN. Individual anatomical and functional variability was quantified before and after HRSN processing. The reduction of individual anatomical variability was judged by the reduction in gray matter (GM) mismatch and by the improvement in overlap frequency between individual calcarine sulci. The reduction in intersubject functional variability of the HM was determined by measurements of the overlap frequency between individual HM areas and by improvement in intersubject Z-score maps. HRSN processing significantly reduced individual anatomical variability: GM mismatch was reduced by a factor of two and the mean calcarine sulcus overlap frequency improved from 37% to 68%. The reduction in functional variability was more subtle. However, both the HM mean overlap (increased from 18% to 28%) and the average Z-score (increased from 2.2 to 2.55) were significantly improved. Although functional registration was significantly improved by matching sulci, there was still residual variability; this is believed to reflect the variability of individual functional areas within the calcarine sulcus, which cannot be resolved by sulcal matching. Thus, the proposed methodology provides an efficient, unbiased, and automated way to study structure-function relationships in the human brain. Hum. Brain Mapping 18:123–134, 2003.
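
    The "overlap frequency" measurements described above can be sketched as follows for binary regions of interest already resampled into a common space. The masks here are random stand-ins and the summary statistic is one plausible choice; the paper's exact definition may differ.

    ```python
    # Sketch of a voxelwise "overlap frequency" computation for binary masks
    # already resampled into a common space. Masks are random stand-ins; the
    # paper's exact summary statistic may differ.
    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, shape = 10, (16, 16, 16)
    masks = rng.random((n_subjects, *shape)) > 0.7   # hypothetical per-subject ROIs

    overlap_freq = masks.mean(axis=0)                # fraction of subjects per voxel
    covered = masks.any(axis=0)                      # voxels labeled by anyone

    print("mean overlap frequency over covered voxels:",
          overlap_freq[covered].mean())
    ```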

  • Evaluation of octree regional Spatial Normalization method for regional anatomical matching.
    Human brain mapping, 2000
    Co-Authors: Peter Kochunov, Jack L. Lancaster, Paul M. Thompson, A. Boyer, Jean Hardies, Peter T. Fox
    Abstract:

    The goal of regional Spatial Normalization is to remove anatomical differences between individual three-dimensional (3D) brain images by warping them to match features of a standard brain atlas. Processing to fit features at the limiting resolution of a 3D MR image volume is computationally intensive, limiting the broad use of full-resolution regional Spatial Normalization. In Kochunov et al. (1999: NeuroImage 10:724-737), we proposed a regional Spatial Normalization algorithm called octree Spatial Normalization (OSN) that reduces processing time to minutes while targeting the accuracy of previous methods. In the current study, modifications of the OSN algorithm for use in human brain images are described and tested. An automated brain tissue segmentation procedure was adopted to create anatomical templates to drive feature matching in white matter, gray matter, and cerebrospinal fluid. Three similarity measurement functions (fast cross-correlation (CC), sum-square error, and centroid) were evaluated in a group of six subjects. A combination of fast CC and centroid was found to provide the best feature matching and speed. Multiple iterations and multiple applications of the OSN algorithm were evaluated to improve fit quality. Two applications of the OSN algorithm with two iterations per application were found to significantly reduce volumetric mismatch (up to six-fold for the lateral ventricle) while keeping processing time under 30 min. The refined version of OSN was tested with anatomical landmarks from several major sulci in a group of nine subjects. Anatomical variability was appreciably reduced for every sulcus investigated, and mean sulcal tracings accurately followed sulcal tracings in the target brain.
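
    The three block-similarity measures named above can be written compactly. The sketch below shows generic implementations of cross-correlation, sum-square error, and a centroid-distance measure on a pair of toy 3-D blocks; it illustrates the measures only, not the octree machinery or the authors' fast-CC implementation.

    ```python
    # Generic implementations of the three block-similarity measures named in
    # the abstract, applied to a pair of toy 3-D blocks. Not the OSN octree
    # machinery or the authors' fast-CC code.
    import numpy as np

    def cross_correlation(a, b):
        # Normalized cross-correlation of two equally sized blocks.
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def sum_square_error(a, b):
        return float(((a - b) ** 2).sum())

    def centroid_distance(a, b):
        # Distance between the intensity-weighted centroids of the two blocks.
        grids = np.indices(a.shape, dtype=float)
        ca = np.array([(g * a).sum() / (a.sum() + 1e-12) for g in grids])
        cb = np.array([(g * b).sum() / (b.sum() + 1e-12) for g in grids])
        return float(np.linalg.norm(ca - cb))

    rng = np.random.default_rng(2)
    block_a = rng.random((8, 8, 8))
    block_b = np.roll(block_a, shift=1, axis=0)      # slightly displaced copy

    print("CC:", cross_correlation(block_a, block_b))
    print("SSE:", sum_square_error(block_a, block_b))
    print("centroid distance:", centroid_distance(block_a, block_b))
    ```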

  • Global Spatial Normalization of human brain using convex hulls
    The Journal of Nuclear Medicine, 1999
    Co-Authors: Jack L. Lancaster, Peter T. Fox, Hunter Downs, Daniel Nickerson, Trish A. Hander, Mohammed El Mallah, Peter Kochunov, Frank Zamarripa
    Abstract:

    Global Spatial Normalization transforms a brain image so that its principal global Spatial features (position, orientation and dimensions) match those of a standard or atlas brain, supporting consistent analysis and referencing of brain locations. The convex hull (CH), derived from the brain's surface, was selected as the basis for automating and standardizing global Spatial Normalization. The accuracy and precision of CH global Spatial Normalization of PET and MR brain images were evaluated in normal human subjects. METHODS: Software was developed to extract CHs of brain surfaces from tomographic brain images. Pelizzari's hat-to-head least-square-error surface-fitting method was modified to fit individual CHs (hats) to a template CH (head) and calculate a nine-parameter coordinate transformation to perform Spatial Normalization. A template CH was refined using MR images from 12 subjects to optimize global Spatial feature conformance to the 1988 Talairach Atlas brain. The template was tested in 12 additional subjects. Three major performance characteristics were evaluated: (a) quality of Spatial Normalization with anatomical MR images, (b) optimal threshold for PET, and (c) quality of Spatial Normalization for functional PET images. RESULTS: As a surface model of the human brain, the CH was shown to be highly consistent across subjects and imaging modalities. In MR images (n = 24), mean errors for the anterior and posterior commissures generally were <1 mm, with SDs <1.5 mm. Mean brain-dimension errors generally were <1.3 mm, and bounding limits were within 1-2 mm of the Talairach Atlas values. The optimal threshold for defining brain boundaries in both 18F-fluorodeoxyglucose (n = 8) and 15O-water (n = 12) PET images was 40% of the brain maximum value. The accuracy of global Spatial Normalization of PET images was shown to be similar to that of MR images. CONCLUSION: The global features of CH-Spatially normalized brain images (position, orientation and size) were consistently transformed to match the Talairach Atlas in both MR and PET images. The CH method supports intermodality and intersubject global Spatial Normalization of tomographic brain images.
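
    A hedged sketch of the nine-parameter (translation, rotation, scale) hull-to-hull surface fit described above follows. It uses generic SciPy tools, random point clouds in place of real brain surfaces, and a nearest-point residual rather than the authors' exact hat-to-head cost; all names and numbers are illustrative assumptions.

    ```python
    # Hedged sketch of a nine-parameter (translation, rotation, scale) fit of a
    # subject convex hull to a template convex hull, in the spirit of hat-to-head
    # surface fitting. Random point clouds stand in for brain surfaces, and the
    # nearest-point residual is a simplifying assumption.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial import ConvexHull, cKDTree
    from scipy.spatial.transform import Rotation

    rng = np.random.default_rng(3)
    template_pts = rng.normal(size=(500, 3)) * np.array([70.0, 85.0, 60.0])  # "brain-sized"
    subject_pts = template_pts * 1.1 + np.array([5.0, -3.0, 2.0])            # scaled, shifted

    template_hull = template_pts[ConvexHull(template_pts).vertices]
    subject_hull = subject_pts[ConvexHull(subject_pts).vertices]
    tree = cKDTree(template_hull)

    def residuals(p):
        # p = [tx, ty, tz, rx, ry, rz, sx, sy, sz]: scale, rotate, translate the
        # subject hull and measure distances to the nearest template hull points.
        t, angles, s = p[:3], p[3:6], p[6:9]
        moved = Rotation.from_euler("xyz", angles).apply(subject_hull * s) + t
        return tree.query(moved)[0]

    p0 = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=float)
    fit = least_squares(residuals, p0)
    print("recovered scales (expect roughly 1/1.1):", fit.x[6:9])
    print("recovered translation:", fit.x[:3])
    ```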

Jack L. Lancaster - One of the best experts on this subject based on the ideXlab platform.

  • Anatomical Global Spatial Normalization
    Neuroinformatics, 2010
    Co-Authors: Jack L. Lancaster, Peter T. Fox, Peter Kochunov, Matthew D. Cykowski, David Reese Mckay, William E. Rogers, Arthur W. Toga, Karl Zilles, Katrin Amunts, John C. Mazziotta
    Abstract:

    Anatomical global Spatial Normalization (aGSN) is presented as a method to scale high-resolution brain images to control for variability in brain size without altering the mean size of other brain structures. Two types of mean-preserving scaling methods were investigated, “shape preserving” and “shape standardizing”. aGSN was tested by examining 56 brain structures from an adult brain atlas of 40 individuals (LPBA40) before and after Normalization, with detailed analyses of cerebral hemispheres, all gyri collectively, cerebellum, brainstem, and left and right caudate, putamen, and hippocampus. Mean sizes of brain structures as measured by volume, distance, and area were preserved and variance reduced for both types of scale factors. An interesting finding was that scale factors derived from each of the ten brain structures were also mean-preserving. However, variance was best reduced using whole brain hemispheres as the reference structure, and this reduction was related to its high average correlation with other brain structures. The fractional reduction in variance of structure volumes was directly related to ρ², the square of the reference-to-structure correlation coefficient. The average reduction in variance of volumes by aGSN with whole brain hemispheres as the reference structure was approximately 32%. An analytical method was provided to directly convert between conventional and aGSN scale factors to support adaptation of aGSN to popular Spatial Normalization software packages.

  • Coordinate-based voxel-wise meta-analysis: dividends of Spatial Normalization. Report of a virtual workshop.
    Human brain mapping, 2005
    Co-Authors: Peter T. Fox, Angela R. Laird, Jack L. Lancaster
    Abstract:

    Spatial Normalization transforms a brain image from its natural form (“native space”) into a standardized form defined by a reference brain [Fox, 1995a]. The original motivation for introducing this technique was to allow the brain locations of task-induced functional activations to be reported in a “precise and unambiguous” manner, thereby “facilitating direct comparison of experimental results from different laboratories” [Fox et al., 1985]. The prospect of clear communication as a “dividend” from a community commitment to Spatial Normalization, however, proved largely unconvincing to the still-nascent brain mapping community of the middle 1980s. Improvement in the signal-to-noise ratio of functional brain maps that could be achieved by intersubject image averaging in standardized space [Fox et al., 1988; Friston et al., 1991] proved to be a very salient motivation, leading to widespread adoption of this data analysis standard. We estimate the human functional brain mapping (HFBM) literature reporting brain activations as x-y-z coordinates in standardized space to be no less than 2,500 articles (~10,000 experiments), with ~500 new articles (~2,000 experiments) published per year (Fig. 1). Fortunately, regardless of the motivation for adoption of this standard, the widespread use of Spatial standardization makes the HFBM literature fertile ground for quantitative meta-analysis methods based on Spatial concordance [Fox and Lancaster, 1996a,b; Fox et al., 1998]. In reference to the title of this article, voxel-based, function-location meta-analysis can be considered a dividend that the HFBM community is now receiving from its long-term investment in the development and promulgation of community standards for data analysis and, in particular, Spatial Normalization. Meta-analysis is defined most generally as the post-hoc combination of results from independently performed studies to estimate better a parameter of interest. The original and by far the most prevalent form of meta-analysis pools studies with nonsignificant effects to test for significance in the collective, using the increase in n to increase statistical power [Pearson, 1904]. Effect-size meta-analyses have come under criticism for a variety of misuses, but are growing steadily in power and acceptance [Fox et al., 1998]. In the HFBM community, fundamentally new forms of meta-analysis are emerging, in which statistically significant effects are pooled and contrasted to estimate better such parameters as the Spatial location, Spatial distribution, activation likelihood, co-occurrence patterns, and underlying cognitive operations for specific categories of task. In the first published meta-analysis in cognitive neuroimaging, coordinates from three prior reports were tabulated and plotted to guide interpretation of results in a primary (non-meta-analytic) study [Frith et al., 1991]. Shortly thereafter, “stand-alone” HFBM meta-analyses began to appear in the literature [Buckner and Petersen, 1996; Fox, 1995b; Paus, 1996; Picard and Strick, 1996; Tulving et al., 1994]. To date, more than 50 meta-analyses of coordinate-based HFBM studies have appeared in the peer-reviewed literature. Although most of these meta-analyses are semiquantitative and statistically informal, this is changing. The trend toward quantitative, statistically formal HFBM meta-analysis began with Paus [1996], who computed and interpreted means and standard deviations of the x-y-z addresses in a review of studies of the frontal eye fields. Fox et al. [1997, 2001] extended this initiative by correcting raw estimates of Spatial location and variance for sample size to create scalable models of location probabilities (functional volumes models; FVM) and suggesting uses of such models. Hum. Brain Mapping 25:1–5, 2005.

  • Improvement in variability of the horizontal meridian of the primary visual area following high-resolution Spatial Normalization.
    Human brain mapping, 2003
    Co-Authors: Peter Kochunov, Jack L. Lancaster, M. Hasnain, Thomas J. Grabowski, Peter T. Fox
    Abstract:

    We investigated the decrease in intersubject functional variability in the horizontal meridian (HM) of the primary visual area (V1) before and after individual anatomical variability was significantly reduced using a high-resolution Spatial Normalization (HRSN) method. The analyzed dataset consisted of 10 normal, right-handed volunteers who had undergone both an O-15 PET study, which localized the retinotopic visual area V1, and a high-resolution anatomical MRI. Individual occipital lobes were manually segmented from the anatomical images and transformed into a common space using an in-house high-resolution regional Spatial Normalization method called OSN. Individual anatomical and functional variability was quantified before and after HRSN processing. The reduction of individual anatomical variability was judged by the reduction in gray matter (GM) mismatch and by the improvement in overlap frequency between individual calcarine sulci. The reduction in intersubject functional variability of the HM was determined by measurements of the overlap frequency between individual HM areas and by improvement in intersubject Z-score maps. HRSN processing significantly reduced individual anatomical variability: GM mismatch was reduced by a factor of two and the mean calcarine sulcus overlap frequency improved from 37% to 68%. The reduction in functional variability was more subtle. However, both the HM mean overlap (increased from 18% to 28%) and the average Z-score (increased from 2.2 to 2.55) were significantly improved. Although functional registration was significantly improved by matching sulci, there was still residual variability; this is believed to reflect the variability of individual functional areas within the calcarine sulcus, which cannot be resolved by sulcal matching. Thus, the proposed methodology provides an efficient, unbiased, and automated way to study structure-function relationships in the human brain. Hum. Brain Mapping 18:123–134, 2003.

  • High-speed high degree-of-freedom Spatial Normalization for human brain imaging
    Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Cat. No.00CH37143), 2000
    Co-Authors: Peter Kochunov, Jack L. Lancaster
    Abstract:

    Regional Spatial Normalization is an important preliminary step in the analysis of 3-D brain images. The goal is to remove anatomical differences by warping each brain image to match corresponding features in a standard brain atlas. We are developing a very efficient regional Spatial Normalization algorithm based on octree volume decomposition. The original Octree Spatial Normalization (OSN) algorithm was shown to perform regional Spatial Normalization in binary brain phantoms in less than 8 minutes with accuracy similar to previously published methods. Several modifications were made to the OSN algorithm to optimize it for use with human brain images, including automated brain tissue segmentation for tissue classification and feature-matching methods with fast cross-correlation. Even with these modifications, Spatial Normalization can still be done in less than 15 minutes for 256 arrays.
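
    The octree volume decomposition that gives OSN its name can be sketched as a recursive split of a 3-D volume into octants, refining only where a block is still inhomogeneous. The stopping rule (block standard deviation) and block sizes below are simplifying assumptions, not the published algorithm.

    ```python
    # Sketch of octree volume decomposition: recursively split a 3-D volume into
    # octants, refining only where a block is still inhomogeneous. The stopping
    # rule and sizes are simplifying assumptions (power-of-two dimensions).
    import numpy as np

    def octree_blocks(volume, origin=(0, 0, 0), min_size=8, tol=0.05):
        """Yield (origin, shape) of leaf blocks; split while intensity varies."""
        if min(volume.shape) <= min_size or volume.std() <= tol:
            yield origin, volume.shape
            return
        half = tuple(s // 2 for s in volume.shape)
        for dz, dy, dx in np.ndindex(2, 2, 2):
            sub = volume[dz * half[0]:(dz + 1) * half[0],
                         dy * half[1]:(dy + 1) * half[1],
                         dx * half[2]:(dx + 1) * half[2]]
            sub_origin = (origin[0] + dz * half[0],
                          origin[1] + dy * half[1],
                          origin[2] + dx * half[2])
            yield from octree_blocks(sub, sub_origin, min_size, tol)

    rng = np.random.default_rng(4)
    vol = np.zeros((64, 64, 64))
    vol[20:40, 20:40, 20:40] = 1.0 + 0.1 * rng.random((20, 20, 20))  # a "structure"
    print("number of leaf blocks:", sum(1 for _ in octree_blocks(vol)))
    ```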

  • Evaluation of octree regional Spatial Normalization method for regional anatomical matching.
    Human brain mapping, 2000
    Co-Authors: Peter Kochunov, Jack L. Lancaster, Paul M. Thompson, A. Boyer, Jean Hardies, Peter T. Fox
    Abstract:

    The goal of regional Spatial Normalization is to remove anatomical differences between individual three-dimensional (3D) brain images by warping them to match features of a standard brain atlas. Processing to fit features at the limiting resolution of a 3D MR image volume is computationally intensive, limiting the broad use of full-resolution regional Spatial Normalization. In Kochunov et al. (1999: NeuroImage 10:724-737), we proposed a regional Spatial Normalization algorithm called octree Spatial Normalization (OSN) that reduces processing time to minutes while targeting the accuracy of previous methods. In the current study, modifications of the OSN algorithm for use in human brain images are described and tested. An automated brain tissue segmentation procedure was adopted to create anatomical templates to drive feature matching in white matter, gray matter, and cerebrospinal fluid. Three similarity measurement functions (fast cross-correlation (CC), sum-square error, and centroid) were evaluated in a group of six subjects. A combination of fast CC and centroid was found to provide the best feature matching and speed. Multiple iterations and multiple applications of the OSN algorithm were evaluated to improve fit quality. Two applications of the OSN algorithm with two iterations per application were found to significantly reduce volumetric mismatch (up to six-fold for the lateral ventricle) while keeping processing time under 30 min. The refined version of OSN was tested with anatomical landmarks from several major sulci in a group of nine subjects. Anatomical variability was appreciably reduced for every sulcus investigated, and mean sulcal tracings accurately followed sulcal tracings in the target brain.

Bradley S Peterson - One of the best experts on this subject based on the ideXlab platform.

  • A highly accurate symmetric optical flow based high-dimensional nonlinear Spatial Normalization of brain images
    Magnetic Resonance Imaging, 2015
    Co-Authors: Lianghua He, Bradley S Peterson, Dongrong Xu
    Abstract:

    Spatial Normalization plays a key role in voxel-based analyses of brain images. We propose a highly accurate algorithm for high-dimensional Spatial Normalization of brain images based on the technique of symmetric optical flow. We first construct a three-dimensional optical flow model with assumptions of intensity consistency and consistency of the gradient of intensity, under a constraint of discontinuity-preserving spatio-temporal smoothness. Then, an efficient inverse-consistent optical flow is proposed with the aim of higher registration accuracy, in which the flow is naturally symmetric. By employing a hierarchical strategy ranging from coarse to fine scales of resolution and a method of Euler-Lagrange numerical analysis, our algorithm is capable of registering brain image data. Experiments using both simulated and real datasets demonstrated that the accuracy of our algorithm is not only better than that of traditional optical flow algorithms, but also comparable to other registration methods used extensively in the medical imaging community. Moreover, our registration algorithm is fully automated, requiring a very limited number of parameters and no manual intervention.
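
    The ingredients named above (intensity consistency, consistency of the intensity gradient, and discontinuity-preserving smoothness) are commonly combined in a variational energy of the following generic form; the paper's exact functional, weights, and symmetrization may differ. An inverse-consistent variant typically adds the mirrored data term with the roles of the two images exchanged, or penalizes the composition of the forward and backward flows.

    ```latex
    % Generic variational energy with intensity-consistency, gradient-consistency,
    % and robust (discontinuity-preserving) smoothness terms. I: moving image,
    % J: target image, w: 3-D displacement field, \Psi(s^2) = \sqrt{s^2 + \epsilon^2}
    % a robust penalty, \gamma and \alpha weights. Not the paper's exact functional.
    \[
    \begin{aligned}
    E(w) = {}& \int_{\Omega} \Psi\!\Big( \big( I(x + w(x)) - J(x) \big)^{2}
             + \gamma \, \big\lVert \nabla I(x + w(x)) - \nabla J(x) \big\rVert^{2} \Big)\, dx \\
           & + \alpha \int_{\Omega} \Psi\!\big( \lVert \nabla w(x) \rVert^{2} \big)\, dx .
    \end{aligned}
    \]
    ```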

  • A highly accurate symmetric optical flow based high-dimensional nonlinear Spatial Normalization of brain images
    Magnetic resonance imaging, 2015
    Co-Authors: Ying Wen, Lili Hou, Bradley S Peterson
    Abstract:

    Spatial Normalization plays a key role in voxel-based analyses of brain images. We propose a highly accurate algorithm for high-dimensional Spatial Normalization of brain images based on the technique of symmetric optical flow. We first construct a three-dimensional optical flow model with assumptions of intensity consistency and consistency of the gradient of intensity, under a constraint of discontinuity-preserving spatio-temporal smoothness. Then, an efficient inverse-consistent optical flow is proposed with the aim of higher registration accuracy, in which the flow is naturally symmetric. By employing a hierarchical strategy ranging from coarse to fine scales of resolution and a method of Euler-Lagrange numerical analysis, our algorithm is capable of registering brain image data. Experiments using both simulated and real datasets demonstrated that the accuracy of our algorithm is not only better than that of traditional optical flow algorithms, but also comparable to other registration methods used extensively in the medical imaging community. Moreover, our registration algorithm is fully automated, requiring a very limited number of parameters and no manual intervention.

  • A highly accurate optical flow based algorithm for nonlinear Spatial Normalization of diffusion tensor images
    International Joint Conference on Neural Network, 2013
    Co-Authors: Ying Wen, Bradley S Peterson
    Abstract:

    Spatial Normalization plays a key role in voxel-based analyses of diffusion tensor images (DTI). We propose a highly accurate algorithm for high-dimensional Spatial Normalization of DTI data based on the technique of 3D optical flow. The theory of conventional optical flow assumes consistency of intensity and consistency of the gradient of intensity under a constraint of discontinuity-preserving spatio-temporal smoothness. By employing a hierarchical strategy ranging from coarse to fine scales of resolution and a method of Euler-Lagrange numerical analysis, our algorithm is capable of registering DTI data. Experiments using both simulated and real datasets demonstrated that the accuracy of our algorithm is better not only than that of traditional optical flow algorithms or of affine alignment, but also better than results obtained with popular tools such as the statistical parametric mapping (SPM) software package. Moreover, our registration algorithm is fully automated, requiring a very limited number of parameters and no manual intervention.

  • Spatial Normalization of diffusion tensor images with voxel-wise reconstruction of the diffusion gradient direction
    Medical Image Computing and Computer-Assisted Intervention, 2012
    Co-Authors: Wei Liu, Ying Wen, Xiaozheng Liu, Zhenyu Zhou, Yongdi Zhou, Bradley S Peterson
    Abstract:

    We propose a reconstructed diffusion gradient (RDG) method for Spatial Normalization of diffusion tensor imaging (DTI) data that warps the raw imaging data and then estimates the associated gradient direction for reconstruction of normalized DTI in the template space. The RDG method adopts the backward mapping strategy for DTI Normalization, with a specially designed approach to reconstruct a specific gradient direction in combination with the local deformation force. The method provides a voxel-based strategy to make the gradient direction align with the raw diffusion weighted imaging (DWI) volumes, ensuring correct estimation of the tensors in the warped space and thereby retaining the orientation information of the underlying structure. Compared with existing tensor reorientation methods, experiments using both simulated and human data demonstrated that the RDG method provided more accurate tensor information. Our method can properly estimate the gradient direction in the template space that has been changed by the image transformation, and subsequently use the warped imaging data to directly reconstruct the warped tensor field in the template space, achieving the same goal as directly warping the tensor image. Moreover, the RDG method can also be used to Spatially normalize data using the Q-ball imaging (QBI) model.
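
    One standard ingredient of handling diffusion-gradient directions under a spatial transform is extracting the local rotation from the voxel's Jacobian and applying it to the gradient vector. The sketch below shows that step only, with a made-up Jacobian; the RDG method itself goes further, re-estimating tensors voxel-wise from the warped DWI volumes.

    ```python
    # Sketch of one ingredient of gradient handling under a spatial transform:
    # extract the local rotation from a voxel's Jacobian by polar decomposition
    # and apply it to a diffusion-gradient direction. The Jacobian below is made
    # up; the RDG method additionally re-estimates tensors from the warped DWIs.
    import numpy as np
    from scipy.linalg import polar

    J = np.array([[1.10, 0.15, 0.00],     # hypothetical local Jacobian (scale + shear)
                  [0.00, 0.95, 0.05],
                  [0.02, 0.00, 1.05]])

    R, _ = polar(J)                        # J = R @ P, with R orthogonal, P symmetric
    g = np.array([1.0, 0.0, 0.0])          # original unit gradient direction
    g_new = R @ g
    g_new /= np.linalg.norm(g_new)

    print("rotation part of the Jacobian:\n", np.round(R, 3))
    print("reoriented gradient direction:", np.round(g_new, 3))
    ```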

Gilles Karcher - One of the best experts on this subject based on the ideXlab platform.

  • Voxel-based quantitative analysis of brain images from F-18 Fluorodeoxyglucose Positron Emission Tomography with a Block-Matching algorithm for Spatial Normalization
    Clinical Nuclear Medicine, 2012
    Co-Authors: Christophe Person, Valérie Louis-dorr, Sylvain Poussier, Olivier Commowick, Grégoire Malandain, Louis Maillard, Didier Wolf, Véronique Roch, Nicolas Gilet, Gilles Karcher
    Abstract:

    PURPOSE OF THE REPORT: Statistical Parametric Mapping (SPM) is widely used for the quantitative analysis of brain images from F-18 fluorodeoxyglucose Positron Emission Tomography (FDG-PET). SPM requires an initial step of Spatial Normalization to align all images to a standard anatomical model (the template), but this may lead to image distortion and artefacts, especially in cases of marked brain abnormalities. This study aimed at assessing a Block-Matching (BM) Normalization algorithm, where most transformations are not directly computed on the overall brain volume but through small blocks, a principle that is likely to minimize artefacts.
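
    The block-matching principle referred to above can be sketched generically: for each small block of one image, search a neighbourhood of the other image for the best-correlating block, yielding local displacements that a robust estimator would then combine into the normalization transform. The 2-D toy example below omits that aggregation step and is not the implementation evaluated in the paper.

    ```python
    # Generic 2-D block-matching illustration: for each block of the subject
    # image, search a neighbourhood of the template for the best-correlating
    # block. The robust aggregation of block displacements into a normalization
    # transform (as in the evaluated method) is omitted here.
    import numpy as np

    def best_match(block, template, top_left, search=3):
        """Return the displacement (dy, dx) maximizing normalized correlation."""
        by, bx = block.shape
        y0, x0 = top_left
        best_score, best_d = -np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0 or y + by > template.shape[0] or x + bx > template.shape[1]:
                    continue
                cand = template[y:y + by, x:x + bx]
                a, b = block - block.mean(), cand - cand.mean()
                score = float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
                if score > best_score:
                    best_score, best_d = score, (dy, dx)
        return best_d

    rng = np.random.default_rng(5)
    template = rng.random((64, 64))
    subject = np.roll(template, shift=(2, -1), axis=(0, 1))   # known shift

    block = subject[16:24, 16:24]
    print("estimated block displacement (expect (-2, 1)):",
          best_match(block, template, (16, 16)))
    ```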

  • Voxel-Based Quantitative Analysis of Brain Images From 18F-FDG PET With a Block-Matching Algorithm for Spatial Normalization
    Clinical nuclear medicine, 2012
    Co-Authors: Christophe Person, Valérie Louis-dorr, Sylvain Poussier, Olivier Commowick, Grégoire Malandain, Louis Maillard, Didier Wolf, Nicolas Gillet, Véronique Roch, Gilles Karcher
    Abstract:

    PURPOSE OF THE REPORT: Statistical Parametric Mapping (SPM) is widely used for the quantitative analysis of brain images from F-18 fluorodeoxyglucose Positron Emission Tomography (FDG-PET). SPM requires an initial step of Spatial Normalization to align all images to a standard anatomical model (the template), but this may lead to image distortion and artefacts, especially in cases of marked brain abnormalities. This study aimed at assessing a Block-Matching (BM) Normalization algorithm, where most transformations are not directly computed on the overall brain volume but through small blocks, a principle that is likely to minimize artefacts.

Hsiao-wen Chung - One of the best experts on this subject based on the ideXlab platform.

  • Effects of interpolation methods in Spatial Normalization of diffusion tensor imaging data on group comparison of fractional anisotropy
    Magnetic Resonance Imaging, 2009
    Co-Authors: Tzu Cheng Chao, Ming Chung Chou, Pinchen Yang, Hsiao-wen Chung
    Abstract:

    This study investigated the effects on the measurement of fractional anisotropy (FA) during interpolation of diffusion tensor images in Spatial Normalization, which is required for voxel-based statistics. Diffusion tensor imaging data were obtained from nine male patients with attention deficit/hyperactivity disorder and nine age-matched control subjects. Regions of interest were selected from the genu of corpus callosum (GCC) and the right anterior corona radiata (RACR), with FA values measured before and after Spatial Normalization using two interpolation algorithms: linear and rotationally linear. Computer simulations were performed to verify the experimental findings. Between-group difference in FA was observed in the GCC and RACR before Spatial Normalization (P<.00001). Interpolation reduced the measured FA values significantly (P<.00001 for both algorithms) but did not affect the group difference in the GCC. For the RACR, the between-group difference vanished (P=.968) after linear interpolation but was relatively unaffected by using rotationally linear interpolation (P=.00001). FA histogram analysis and computer simulations confirmed these findings. This work suggests that caution should be exercised in voxel-based group comparisons as Spatial Normalization may affect the FA value in nonnegligible degrees, particularly in brain areas with predominantly crossing fibers.
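
    The sensitivity of FA to interpolation discussed above can be illustrated with two synthetic tensors: voxel-wise linear averaging of anisotropic tensors whose principal directions differ (as in crossing-fiber regions) produces a more isotropic tensor with lower FA. The eigenvalues below are typical white-matter values chosen for illustration; the paper's "rotationally linear" scheme is not reproduced.

    ```python
    # Synthetic illustration of why voxel-wise linear interpolation can depress
    # FA where fibre orientations differ: averaging two anisotropic tensors with
    # crossing principal directions gives a more isotropic tensor. Eigenvalues
    # are typical white-matter values chosen for illustration only.
    import numpy as np

    def fractional_anisotropy(tensor):
        ev = np.linalg.eigvalsh(tensor)
        md = ev.mean()
        return float(np.sqrt(1.5 * ((ev - md) ** 2).sum() / (ev ** 2).sum()))

    tensor_x = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # principal axis along x (mm^2/s)
    tensor_y = np.diag([0.3e-3, 1.7e-3, 0.3e-3])   # same shape, axis along y
    blended = 0.5 * (tensor_x + tensor_y)           # plain linear interpolation

    print("FA of each input tensor:", fractional_anisotropy(tensor_x))
    print("FA after linear blending:", fractional_anisotropy(blended))
    ```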

  • Effects of interpolation methods in Spatial Normalization of diffusion tensor imaging data on group comparison of fractional anisotropy
    Magnetic resonance imaging, 2008
    Co-Authors: Tzu Cheng Chao, Ming Chung Chou, Pinchen Yang, Hsiao-wen Chung
    Abstract:

    This study investigated the effects on the measurement of fractional anisotropy (FA) during interpolation of diffusion tensor images in Spatial Normalization, which is required for voxel-based statistics. Diffusion tensor imaging data were obtained from nine male patients with attention deficit/hyperactivity disorder and nine age-matched control subjects. Regions of interest were selected from the genu of corpus callosum (GCC) and the right anterior corona radiata (RACR), with FA values measured before and after Spatial Normalization using two interpolation algorithms: linear and rotationally linear. Computer simulations were performed to verify the experimental findings. Between-group difference in FA was observed in the GCC and RACR before Spatial Normalization (P<.00001). Interpolation reduced the measured FA values significantly (P<.00001 for both algorithms) but did not affect the group difference in the GCC. For the RACR, the between-group difference vanished (P=.968) after linear interpolation but was relatively unaffected by using rotationally linear interpolation (P=.00001). FA histogram analysis and computer simulations confirmed these findings. This work suggests that caution should be exercised in voxel-based group comparisons as Spatial Normalization may affect the FA value in nonnegligible degrees, particularly in brain areas with predominantly crossing fibers.