Image Deformation

The experts below are selected from a list of 51,231 experts worldwide, ranked by the ideXlab platform.

Siegfried H Stiehl - One of the best experts on this subject based on the ideXlab platform.

  • Non-Rigid Image Registration Using a Parameter-Free Elastic Model
    British Machine Vision Conference, 1998
    Co-Authors: Wladimir Peckar, Christoph Schnorr, Karl Rohr, Siegfried H Stiehl
    Abstract:

    The paper presents a new parameter-free approach to non-rigid image registration, where displacements, obtained through a mapping of boundary structures in the source and target image, are incorporated as hard constraints into elastic image deformation. As a consequence, our approach does not contain any parameters of the deformation model (elastic constants). The approach guarantees the exact correspondence of boundary structures after elastic transformation, provided that correct input data are available. We describe a linear and an incremental model; the latter also copes with large deformations. Experimental results for 2-D and 3-D synthetic as well as real medical images are presented.
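The core idea above — prescribed boundary displacements entering the solve as hard constraints rather than as image-derived forces, so no elastic constants appear — can be sketched in 1-D, with a discrete Laplacian standing in for the elastic operator (an illustrative simplification; the paper works with full 2-D/3-D elasticity, and all names here are hypothetical):

```python
import numpy as np

# Illustrative 1-D sketch: prescribed displacements enter the linear system as
# hard-constraint rows, and a discrete Laplacian stands in for the elastic
# operator, so no elastic constants appear anywhere.

def constrained_displacement(n, prescribed):
    """Solve u'' = 0 on n nodes, with u fixed at the nodes in `prescribed`."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if i in prescribed:
            A[i, i] = 1.0                    # hard constraint row: u[i] = value
            b[i] = prescribed[i]
        else:
            A[i, i - 1] = A[i, i + 1] = 1.0  # interior row: discrete Laplacian
            A[i, i] = -2.0
    return np.linalg.solve(A, b)

u = constrained_displacement(5, {0: 0.0, 4: 4.0})
# u is the linear interpolant [0, 1, 2, 3, 4]: the constrained nodes are
# matched exactly and the interior follows the smoothness model.
```

The constrained nodes are reproduced exactly by construction, which is the "exact correspondence of boundary structures" property the abstract describes.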

  • Two-Step Parameter-Free Elastic Image Registration with Prescribed Point Displacements
    International Conference on Image Analysis and Processing, 1997
    Co-Authors: Wladimir Peckar, Christoph Schnorr, Karl Rohr, Siegfried H Stiehl
    Abstract:

    A two-step parameter-free approach for non-rigid medical image registration is presented. Displacements of boundary structures are computed in the first step and then incorporated as hard constraints for elastic image deformation in the second step. In comparison to traditional non-parametric methods, no driving forces have to be computed from image data. The approach guarantees the exact correspondence of certain structures in the images and does not depend on parameters of the deformation model such as elastic constants. Numerical examples with synthetic and real images are presented.
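The two-step structure — sparse prescribed point displacements in, dense deformation out — can be illustrated with a thin-plate-spline interpolant in place of the paper's elastic model (so this is only an analogous sketch, with made-up points):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical output of step one: displacements prescribed at a few
# boundary points (2-D, arbitrary units).
pts  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
disp = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [0.05, 0.02]])

# Step-two stand-in: extend the sparse displacements to a dense field. With
# zero smoothing the interpolant reproduces the prescribed displacements
# exactly -- the "hard constraint" behaviour, here via a thin-plate spline
# rather than the paper's elastic model.
field = RBFInterpolator(pts, disp, kernel='thin_plate_spline')
dense = field(np.array([[0.25, 0.25], [0.75, 0.75]]))  # evaluate anywhere
```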

Jerry L. Prince - One of the best experts on this subject based on the ideXlab platform.

  • Deformation Field Correction for Spatial Normalization of PET Images Using a Population-Derived Partial Least Squares Model
    Machine learning in medical imaging. MLMI (Workshop), 2014
    Co-Authors: Murat Bilgel, Aaron Carass, Susan M Resnick, Jerry L. Prince, Dean F Wong
    Abstract:

    Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet work on anatomically accurate PET-to-PET registration is limited. We present a method for the spatial normalization of PET images that improves their anatomical alignment based on a deformation correction model learned from structural image registration. To generate the model, we first create a population-based PET template with a corresponding structural image template. We register each PET image onto the PET template using deformable registration that consists of an affine step followed by a diffeomorphic mapping. Constraining the affine step to be the same as that obtained from the PET registration, we find the diffeomorphic mapping that will align the structural image with the structural template. We train partial least squares (PLS) regression models within small neighborhoods to relate the PET intensities and deformation fields obtained from the diffeomorphic mapping to the structural image deformation fields. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross-validation-based evaluation on 79 subjects shows that our method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration, as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground-truth segmentations.

Wladimir Peckar - One of the best experts on this subject based on the ideXlab platform.

  • Parameter-Free Elastic Deformation Approach for 2D and 3D Registration Using Prescribed Displacements
    Journal of Mathematical Imaging and Vision, 1999
    Co-Authors: Wladimir Peckar, Christoph Schnorr, Karl Rohr, H. Siegfried Stiehl
    Abstract:

    A parameter-free approach for non-rigid image registration based on elasticity theory is presented. In contrast to traditional physically-based numerical registration methods, no forces have to be computed from image data to drive the elastic deformation. Instead, displacements obtained with the help of mapping boundary structures in the source and target image are incorporated as hard constraints into elastic image deformation. As a consequence, our approach does not contain any parameters of the deformation model such as elastic constants. The approach guarantees the exact correspondence of boundary structures in the images, assuming that correct input data are available. The implemented incremental method copes with large deformations. The theoretical background, the finite element discretization of the elastic model, and experimental results for 2D and 3D synthetic as well as real medical images are presented.
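The incremental idea — applying a large deformation as a sequence of small steps, re-evaluating the displacement field at the current positions each time — can be sketched with a toy 1-D field (hypothetical, not the paper's FEM solution):

```python
import numpy as np

# Toy sketch of the incremental model: split a large deformation into k small
# steps, re-evaluating the (position-dependent) displacement field each step.
def warp_incremental(points, displacement, k):
    p = points.astype(float).copy()
    for _ in range(k):
        p += displacement(p) / k        # one small, position-dependent step
    return p

u = lambda p: 0.5 * p                   # hypothetical field u(x) = 0.5 x
p = warp_incremental(np.array([1.0, 2.0]), u, k=1000)
# As k grows this approaches the flow of dx/dt = 0.5 x, i.e. x * exp(0.5),
# rather than the one-shot result x * 1.5.
```

The distinction matters for large deformations: composing many small, nearly rigid steps stays well-behaved where a single large linear-elastic step would not.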

  • Non-Rigid Image Registration Using a Parameter-Free Elastic Model
    British Machine Vision Conference, 1998
    Co-Authors: Wladimir Peckar, Christoph Schnorr, Karl Rohr, Siegfried H Stiehl
    Abstract:

    The paper presents a new parameter-free approach to non-rigid image registration, where displacements, obtained through a mapping of boundary structures in the source and target image, are incorporated as hard constraints into elastic image deformation. As a consequence, our approach does not contain any parameters of the deformation model (elastic constants). The approach guarantees the exact correspondence of boundary structures after elastic transformation, provided that correct input data are available. We describe a linear and an incremental model; the latter also copes with large deformations. Experimental results for 2-D and 3-D synthetic as well as real medical images are presented.

  • Two-Step Parameter-Free Elastic Image Registration with Prescribed Point Displacements
    International Conference on Image Analysis and Processing, 1997
    Co-Authors: Wladimir Peckar, Christoph Schnorr, Karl Rohr, Siegfried H Stiehl
    Abstract:

    A two-step parameter-free approach for non-rigid medical image registration is presented. Displacements of boundary structures are computed in the first step and then incorporated as hard constraints for elastic image deformation in the second step. In comparison to traditional non-parametric methods, no driving forces have to be computed from image data. The approach guarantees the exact correspondence of certain structures in the images and does not depend on parameters of the deformation model such as elastic constants. Numerical examples with synthetic and real images are presented.

Aaron D Ward - One of the best experts on this subject based on the ideXlab platform.

  • Assessment of Image Registration Accuracy in Three-Dimensional Transrectal Ultrasound-Guided Prostate Biopsy
    Medical Physics, 2010
    Co-Authors: Vaishali Karnik, Aaron Fenster, Jeffrey Bax, Derek W Cool, Lori Gardi, Igor Gyacskov, Cesare Romagnoli, Aaron D Ward
    Abstract:

    Purpose: Prostate biopsy, performed using two-dimensional (2D) transrectal ultrasound (TRUS) guidance, is the clinical standard for a definitive diagnosis of prostate cancer. Histological analysis of the biopsies can reveal cancerous, noncancerous, or suspicious, possibly precancerous, tissue. During subsequent biopsy sessions, noncancerous regions should be avoided, and suspicious regions should be precisely rebiopsied, requiring accurate needle guidance. It is challenging to precisely guide a needle using 2D TRUS due to the limited anatomic information provided, and a three-dimensional (3D) record of biopsy locations for use in subsequent biopsy procedures cannot be collected. Our tracked, 3D TRUS-guided prostate biopsy system provides additional anatomic context and permits a 3D record of biopsies. However, targets determined based on a previous biopsy procedure must be transformed during the procedure to compensate for intraprocedure prostate shifting due to patient motion and prostate deformation due to transducer probe pressure. Thus, registration is a critically important step required to determine these transformations so that correspondence is maintained between the prebiopsied image and the real-time image. Registration must not only be performed accurately, but also quickly, since correction for prostate motion and deformation must be carried out during the biopsy procedure. The authors evaluated the accuracy, variability, and speed of several surface-based and image-based intrasession 3D-to-3D TRUS image registration techniques, for both rigid and nonrigid cases, to find the required transformations. Methods: Our surface-based rigid and nonrigid registrations of the prostate were performed using the iterative-closest-point algorithm and a thin-plate spline algorithm, respectively. For image-based rigid registration, the authors used a block matching approach, and for nonrigid registration, the authors define the moving image deformation using a regular, 3D grid of B-spline control points. The authors measured the target registration error (TRE) as the postregistration misalignment of 60 manually marked, corresponding intrinsic fiducials. The authors also measured the fiducial localization error (FLE), the effect of segmentation variability, and the effect of fiducial distance from the transducer probe tip. Lastly, the authors performed 3D principal component analysis (PCA) on the x, y, and z components of the TREs to examine the 95% confidence ellipsoids describing the errors for each registration method. Results: Using surface-based registration, the authors found mean TREs of 2.13 ± 0.80 and 2.09 ± 0.77 mm for rigid and nonrigid techniques, respectively. Using image-based rigid and nonrigid registration, the authors found mean TREs of 1.74 ± 0.84 and 1.50 ± 0.83 mm, respectively. Our FLE was 0.21 mm and did not dominate the overall TRE. However, segmentation variability contributed substantially (∼50%) to the TRE of the surface-based techniques. PCA showed that the 95% confidence ellipsoid encompassing fiducial distances between the source and target registration images was reduced from 3.05 cm³ to 0.14 and 0.05 cm³ for the surface-based and image-based techniques, respectively. The run times for both registration methods were comparable at less than 60 s. Conclusions: Our results compare favorably with a clinical need for a TRE of less than 2.5 mm, and suggest that image-based registration is superior to surface-based registration for 3D TRUS-guided prostate biopsies, since it does not require segmentation.
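The evaluation metrics above are straightforward to compute; a sketch with hypothetical fiducial coordinates (the study used 60 manually marked intrinsic fiducials):

```python
import numpy as np

# Hypothetical fiducial data, not the study's measurements.
rng = np.random.default_rng(1)
target = rng.uniform(0.0, 50.0, size=(60, 3))   # fiducials in target image (mm)
errors = rng.normal(0.0, 0.8, size=(60, 3))     # residual misalignment (mm)
registered = target + errors                    # fiducials after registration

# Target registration error: per-fiducial post-registration misalignment.
tre = np.linalg.norm(registered - target, axis=1)
mean_tre = tre.mean()

# 95% confidence ellipsoid of the error components via PCA (eigen-decomposition
# of the error covariance); volume in mm^3, chi-square quantile for 3 dof.
cov = np.cov((registered - target).T)
eigvals = np.linalg.eigvalsh(cov)
chi2_95 = 7.815
volume_mm3 = (4.0 / 3.0) * np.pi * np.prod(np.sqrt(chi2_95 * eigvals))
```

The ellipsoid volume summarizes both the magnitude and the anisotropy of the residual errors, which is why the paper reports it alongside the mean TRE.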

S Stathakis - One of the best experts on this subject based on the ideXlab platform.

  • SU-E-J-89: Comparative Analysis of MIM and Velocity's Image Deformation Algorithm Using Simulated kV CBCT Images for Quality Assurance
    Medical Physics, 2015
    Co-Authors: K Cline, G Narayanasamy, M Obediat, Dennis N Stanley, S Stathakis
    Abstract:

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and a physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar accuracy for the CT-CT registration, but was not as accurate for the CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT-CBCT registration.
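The QA principle — deform an image with a known field, run DIR, and score the predicted field against the ground truth — reduces to a simple error computation (hypothetical fields below; in the study the predicted field would come from Velocity or MIM):

```python
import numpy as np

# Known (simulated) displacement field on an 8x8x8 grid, 3 components, in mm.
known = np.zeros((8, 8, 8, 3))
known[..., 0] = 2.0                     # e.g. a uniform 2 mm shift in x

# Stand-in for the DIR algorithm's prediction: ground truth plus random error.
predicted = known + np.random.default_rng(2).normal(0.0, 0.5, known.shape)

# Per-voxel vector error between predicted and known deformations, then the
# mean error used as the QA metric.
voxel_err = np.linalg.norm(predicted - known, axis=-1)
mean_err = voxel_err.mean()
```

Because the simulated deformation is known exactly, any nonzero `mean_err` is attributable to the DIR algorithm (plus the realism of the simulated noise and artifacts), which is what makes this a usable QA test.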