Motion Models

The Experts below are selected from a list of 217,167 Experts worldwide, ranked by the ideXlab platform.

Salam Dhou - One of the best experts on this subject based on the ideXlab platform.

  • SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients
    Medical Physics, 2016
    Co-Authors: Salam Dhou, John H Lewis, D Ionascu, Christopher S Williams
    Abstract:

    Purpose: To study the variability of patient-specific Motion Models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients.

    Methods: Motion Models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a Motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The derived Motion Models were compared using patient 4DCT scans.

    Results: Motion Models were derived and their variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors; 2) the dot product between the eigenvectors, which measures their angular difference in space; and 3) the Euclidean Model Norm (EMN), calculated by summing in quadrature the dot products of an eigenvector with the first three eigenvectors of the reference Motion model. EMN measures how well an eigenvector can be reconstructed using a Motion model derived with a different DIR algorithm. Compared to a reference Motion model (derived using the Demons algorithm), the eigenvectors of the Motion model derived using the iterative optical flow algorithm had smaller RMS differences, larger dot products, and larger EMN values than those of the Motion model derived using the Horn-Schunck algorithm.

    Conclusion: The study showed that Motion Models vary depending on which DIR algorithm is used to derive them. The choice of DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the chosen algorithm for a particular application. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA.
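
As a rough illustration of the workflow described above (DIR-derived DVFs, PCA, and the RMS, dot-product, and EMN comparison criteria), here is a minimal Python sketch. The array shapes, synthetic DVFs, and helper names are assumptions for illustration only, not the authors' implementation, and eigenvector sign ambiguity is ignored.

```python
import numpy as np

def build_motion_model(dvfs, n_modes=3):
    """dvfs: (n_phases, 3 * n_voxels) array, one flattened DVF per 4DCT phase."""
    mean_dvf = dvfs.mean(axis=0)
    # PCA via SVD of the mean-centred DVFs; rows of vt are the eigenvectors (motion modes).
    _, _, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
    return mean_dvf, vt[:n_modes]

def compare_eigenvector(ref_modes, test_vec):
    """RMS difference, dot product, and Euclidean Model Norm (EMN) of one eigenvector
    against a reference motion model, following the criteria in the abstract."""
    ref = ref_modes / np.linalg.norm(ref_modes, axis=1, keepdims=True)
    t = test_vec / np.linalg.norm(test_vec)
    rms = float(np.sqrt(np.mean((ref[0] - t) ** 2)))   # component-wise difference vs. mode 1
    dot = float(abs(ref[0] @ t))                        # angular agreement with mode 1
    emn = float(np.sqrt(np.sum((ref[:3] @ t) ** 2)))    # reconstruction by the first 3 reference modes
    return rms, dot, emn

# Toy usage with synthetic DVFs standing in for two DIR algorithms.
rng = np.random.default_rng(0)
dvfs_a = rng.normal(size=(10, 300))                     # e.g. Demons (reference)
dvfs_b = dvfs_a + 0.1 * rng.normal(size=(10, 300))      # e.g. Horn-Schunck
_, modes_a = build_motion_model(dvfs_a)
_, modes_b = build_motion_model(dvfs_b)
print(compare_eigenvector(modes_a, modes_b[0]))
```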

  • WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images
    Medical Physics, 2015
    Co-Authors: Salam Dhou, Weixing Cai, Mark D Hurwitz, Christopher S Williams, J Rottmann, Pankaj Mishra, Marios Myronakis, F Cifter, R Berbeco, D Ionascu
    Abstract:

    Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient Motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific Motion Models from 4DCBCT images acquired with existing clinical equipment, and use them to generate time-varying volumetric images (3D fluoroscopic images) representing Motion during treatment delivery.

    Methods: Motion Models are derived by deformably registering each 4DCBCT phase to a reference phase and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by iteratively optimizing the PCA coefficients through comparison of cone-beam projections simulating kV treatment imaging with digitally reconstructed radiographs generated from the Motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground-truth positions.

    Results: 4DCBCT-based Motion Models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13, respectively, in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78, respectively, in two datasets. 4DCBCT-based Motion Models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time.

    Conclusion: This study showed the feasibility of deriving 4DCBCT-based Motion Models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings. 4DCBCT-based Motion Models were found to account for the 3D non-rigid Motion of the patient anatomy during treatment and have the potential to localize the tumor and other anatomical structures at treatment time even when inter-fractional changes occur. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA. The project was also supported, in part, by Award Number R21CA156068 from the National Cancer Institute.
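
The PCA-coefficient optimization step can be pictured as a small least-squares fitting loop. The sketch below is only illustrative: a random linear operator stands in for warping the reference phase and forward-projecting it to a cone-beam view, and all names and data are assumptions rather than the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_coefficients(kv_proj, mean_dvf, modes, deform_and_project):
    """Optimise the PCA coefficients w so that the simulated projection of the
    deformed reference volume matches the measured kV projection."""
    def cost(w):
        dvf = mean_dvf + w @ modes                      # DVF reconstructed from the motion model
        return np.sum((deform_and_project(dvf) - kv_proj) ** 2)
    return minimize(cost, x0=np.zeros(len(modes)), method="Powell").x

# Toy usage: a random linear operator stands in for "warp the reference phase with the
# DVF, then forward-project to a cone-beam view" (an assumption, not the real geometry).
rng = np.random.default_rng(1)
modes = rng.normal(size=(3, 50))                        # 3 motion modes over a 50-element DVF
mean_dvf = rng.normal(size=50)
A = rng.normal(size=(20, 50))                           # placeholder projection operator
deform_and_project = lambda dvf: A @ dvf
true_w = np.array([1.0, -0.5, 0.2])
kv_proj = deform_and_project(mean_dvf + true_w @ modes)
print(estimate_coefficients(kv_proj, mean_dvf, modes, deform_and_project))  # recovers ~ true_w
```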

  • SU-E-I-03: Lateral Truncation Artifact Correction for 4DCBCT-Based Motion Modeling and Dose Assessment
    Medical Physics, 2015
    Co-Authors: Salam Dhou, Marios Myronakis, F Cifter, R Berbeco, J Lewis, Weixing Cai
    Abstract:

    Purpose: To allow accurate Motion modeling and dose assessment based on 4DCBCT by addressing the limited field of view (FOV) and lateral truncation artifacts in current clinical CBCT systems. Due to the size and geometry of onboard flat-panel detectors, CBCT often cannot cover the entire thorax of adult patients. We implement a method to extend the images generated from 4DCBCT-based Motion Models and correct lateral truncation artifacts.

    Methods: The method is based on deforming a reference 4DCT image containing the entire patient anatomy to the (smaller) CBCT image within the higher-quality CBCT FOV. Next, the displacement vector field (DVF) derived inside the CBCT FOV is smoothly extrapolated out to the edges of the body. These extrapolated displacement vectors are used to generate a new body contour and HU values outside of the CBCT FOV. This method is applied to time-varying volumetric images (3D fluoroscopic images) generated from a 4DCBCT-based Motion model at 2 Hz. Six XCAT phantoms are used to test this approach, and reconstruction accuracy is investigated.

    Results: The normalized root mean square error between the corrected images generated from the 4DCBCT-based Motion model and the ground-truth XCAT phantom at each time point is generally less than 20%. These results are comparable to results from 4DCT-based Motion Models. The anatomical structures outside the CBCT FOV can be reconstructed with an error comparable to that inside the FOV. The resulting noise is comparable to that of 4DCT.

    Conclusions: The proposed approach can effectively correct the artifact due to lateral truncation in 4DCBCT-based Motion Models. The quality of the resulting images is comparable to images generated from 4DCT-based Motion Models. Capturing the body contour and anatomy outside the CBCT FOV makes more reasonable dose calculations possible.
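
One crude way to approximate the DVF extension step is to copy each outside-FOV voxel's displacement from its nearest in-FOV voxel and then smooth the result. This is a simplification of the smooth extrapolation described above; the masks, shapes, and parameters below are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def extend_dvf(dvf, fov_mask, body_mask, smooth_sigma=2.0):
    """dvf: (3, z, y, x) displacement field valid inside fov_mask; extend it to body_mask."""
    # For every voxel outside the FOV, find the index of the nearest in-FOV voxel.
    _, nearest = distance_transform_edt(~fov_mask, return_indices=True)
    extended = np.empty_like(dvf)
    for c in range(3):
        filled = dvf[c][tuple(nearest)]                    # copy the nearest in-FOV displacement outward
        filled = gaussian_filter(filled, smooth_sigma)     # crude smoothing of the extension
        extended[c] = np.where(fov_mask, dvf[c], filled)   # keep original values inside the FOV
        extended[c] *= body_mask                           # displacements defined only within the body
    return extended

# Synthetic example: a cylindrical FOV inside a larger cylindrical body mask.
z, y, x = np.indices((20, 20, 20))
fov = (y - 10) ** 2 + (x - 10) ** 2 < 6 ** 2
body = (y - 10) ** 2 + (x - 10) ** 2 < 9 ** 2
dvf = np.stack([np.where(fov, np.sin(z / 5.0), 0.0)] * 3)
print(extend_dvf(dvf, fov, body).shape)                    # (3, 20, 20, 20)
```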

Weixing Cai - One of the best experts on this subject based on the ideXlab platform.

  • WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images
    Medical Physics, 2015
    Co-Authors: Salam Dhou, Weixing Cai, Mark D Hurwitz, Christopher S Williams, J Rottmann, Pankaj Mishra, Marios Myronakis, F Cifter, R Berbeco, D Ionascu
    Abstract:

    See the identical abstract under Salam Dhou above.

  • SU-E-I-03: Lateral Truncation Artifact Correction for 4DCBCT-Based Motion Modeling and Dose Assessment
    Medical Physics, 2015
    Co-Authors: Salam Dhou, Marios Myronakis, F Cifter, R Berbeco, J Lewis, Weixing Cai
    Abstract:

    See the identical abstract under Salam Dhou above.

D Ionascu - One of the best experts on this subject based on the ideXlab platform.

  • SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients
    Medical Physics, 2016
    Co-Authors: Salam Dhou, John H Lewis, D Ionascu, Christopher S Williams
    Abstract:

    See the identical abstract under Salam Dhou above.

  • WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images
    Medical Physics, 2015
    Co-Authors: Salam Dhou, Weixing Cai, Mark D Hurwitz, Christopher S Williams, J Rottmann, Pankaj Mishra, Marios Myronakis, F Cifter, R Berbeco, D Ionascu
    Abstract:

    See the identical abstract under Salam Dhou above.

David J Hawkes - One of the best experts on this subject based on the ideXlab platform.

  • Respiratory Motion Models: A Review
    Medical Image Analysis, 2013
    Co-Authors: J Mcclelland, David J Hawkes, Tobias Schaeffter, Andrew P King
    Abstract:

    The problem of respiratory Motion has proved a serious obstacle in developing techniques to acquire images or guide interventions in abdominal and thoracic organs. Motion Models offer a possible solution to these problems, and as a result the field of respiratory Motion modelling has become an active one over the past 15 years. A Motion model can be defined as a process that takes some surrogate data as input and produces a Motion estimate as output. Many techniques have been proposed in the literature, differing in the data used to form the Models, the type of model employed, how this model is computed, the type of surrogate data used as input to the model in order to make Motion estimates and what form this output should take. In addition, a wide range of different application areas have been proposed. In this paper we summarise the state of the art in this important field and in the process highlight the key papers that have driven its advance. The intention is that this will serve as a timely review and comparison of the different techniques proposed to date and as a basis to inform future research in this area.

  • Inter-Fraction Variations in Respiratory Motion Models
    Physics in Medicine and Biology, 2011
    Co-Authors: J Mcclelland, Marc Modat, Simon Hughes, S Ahmad, A Qureshi, David Landau, Sebastien Ourselin, David J Hawkes
    Abstract:

    Respiratory Motion can vary dramatically between the planning stage and the different fractions of radiotherapy treatment. Motion predictions used when constructing the radiotherapy plan may be unsuitable for later fractions of treatment. This paper presents a methodology for constructing patient-specific respiratory Motion Models and uses these Models to evaluate and analyse the inter-fraction variations in the respiratory Motion. The internal respiratory Motion is determined from the deformable registration of cine CT data and related to a respiratory surrogate signal derived from 3D skin surface data. Three different Models for relating the internal Motion to the surrogate signal have been investigated in this work. Data were acquired from six lung cancer patients. Two full datasets were acquired for each patient, one before the course of radiotherapy treatment and one at the end (approximately 6 weeks later). Separate Models were built for each dataset. All Models could accurately predict the respiratory Motion in the same dataset, but had large errors when predicting the Motion in the other dataset. Analysis of the inter-fraction variations revealed that most variations were spatially varying baseline shifts, but changes to the anatomy and the Motion trajectories were also observed.
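
To make the fit-and-test idea concrete, the toy sketch below fits a simple linear correspondence model (not necessarily one of the three models investigated in the paper) between a surrogate signal and internal motion from one session, then applies it to a second session containing a synthetic baseline shift. All data are made up for illustration.

```python
import numpy as np

def fit_linear_model(surrogate, internal):
    """Least-squares fit internal ~= a * surrogate + b for one anatomical point/direction."""
    A = np.column_stack([surrogate, np.ones_like(surrogate)])
    coeffs, *_ = np.linalg.lstsq(A, internal, rcond=None)
    return coeffs

def rms_error(coeffs, surrogate, internal):
    A = np.column_stack([surrogate, np.ones_like(surrogate)])
    return float(np.sqrt(np.mean((A @ coeffs - internal) ** 2)))

rng = np.random.default_rng(3)
s1 = np.sin(np.linspace(0, 6 * np.pi, 100))                 # surrogate signal, first session
m1 = 8.0 * s1 + rng.normal(scale=0.3, size=100)             # internal motion (mm), first session
s2 = np.sin(np.linspace(0, 6 * np.pi, 100))                 # surrogate signal, second session
m2 = 8.0 * s2 + 4.0 + rng.normal(scale=0.3, size=100)       # second session with a 4 mm baseline shift

coeffs = fit_linear_model(s1, m1)
print("intra-fraction RMS error:", rms_error(coeffs, s1, m1))   # small
print("inter-fraction RMS error:", rms_error(coeffs, s2, m2))   # dominated by the baseline shift
```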

  • A Continuous 4D Motion Model From Multiple Respiratory Cycles for Use in Lung Radiotherapy
    Medical Physics, 2006
    Co-Authors: J Mcclelland, J Blackall, S Tarte, Adam C Chandler, Simon Hughes, S Ahmad, D Landau, David J Hawkes
    Abstract:

    Respiratory Motion causes errors when planning and delivering radiotherapy treatment to lung cancer patients. To reduce these errors, methods of acquiring and using four-dimensional computed tomography (4DCT) datasets have been developed. We have developed a novel method of constructing computational Motion Models from 4DCT. The Motion Models attempt to describe an average respiratory cycle, which reduces the effects of variation between different cycles. They require substantially less memory than a 4DCT dataset, are continuous in space and time, and facilitate automatic target propagation and the combining of doses over the respiratory cycle.

    The Motion Models are constructed from CT data acquired in cine mode while the patient is free breathing (free-breathing CT, FBCT). A "slab" of data is acquired at each couch position, with 3-4 contiguous slabs being acquired per patient. For each slab, a sequence of 20 or 30 volumes was acquired over 20 seconds. A respiratory signal is simultaneously recorded in order to calculate the position in the respiratory cycle for each FBCT. Additionally, a high-quality reference CT volume is acquired at breath hold. The reference volume is nonrigidly registered to each of the FBCT volumes. A Motion model is then constructed for each slab by temporally fitting the nonrigid registration results. The value of each of the registration parameters is related to the position in the respiratory cycle by fitting an approximating B-spline to the registration results. Because an approximating function is used and the data are acquired over several respiratory cycles, the function should model an average respiratory cycle. This can then be used to calculate the value of each degree of freedom at any desired position in the respiratory cycle. The resulting nonrigid transformation will deform the reference volume to predict the contents of the slab at the desired position in the respiratory cycle. The slab model predictions are then concatenated to produce a combined prediction over the entire region of interest.

    We have performed a number of experiments to assess the accuracy of the nonrigid registration results and the Motion model predictions. The individual slab Models were evaluated by expert visual assessment and the tracking of easily identifiable anatomical points. The combined Models were evaluated by calculating the discontinuities between the transformations at the slab boundaries. The experiments were performed on five patients with a total of 18 slabs between them. For the point tracking experiments, the mean distance between where a clinician manually identified a point and where the registration results located the point, the target registration error (TRE), was 1.3 mm. The mean distance between a manually identified point and the model's prediction of the point's location, the target model error (TME), was 1.6 mm. The mean discontinuity between model predictions at the slab boundaries, the continuity error, was 2.2 mm. The results show that the Motion Models perform with a level of accuracy comparable to the slice thickness of 1.5 mm. © 2006 American Association of Physicists in Medicine.
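
A minimal sketch of the temporal fitting step, assuming SciPy's approximating spline fit as a stand-in for the authors' B-spline fitting: each registration parameter is related to the position in the respiratory cycle, and a nonzero smoothing factor means the spline averages over cycle-to-cycle variation rather than interpolating it. The data and smoothing value below are synthetic.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def fit_parameter_spline(phase, values, smoothing):
    """phase: position in the respiratory cycle (0..1) for each FBCT volume;
    values: the corresponding value of one nonrigid registration parameter."""
    order = np.argsort(phase)
    # smoothing > 0 gives an approximating (not interpolating) spline, so the fit
    # averages over the several breathing cycles rather than following each one.
    return splrep(phase[order], values[order], s=smoothing)

def predict_parameter(tck, query_phase):
    """Evaluate the fitted spline at any desired position in the respiratory cycle."""
    return splev(query_phase, tck)

# Synthetic example: one parameter sampled over several noisy breathing cycles.
rng = np.random.default_rng(2)
phase = rng.uniform(0.0, 1.0, size=60)
values = 5.0 * np.sin(2 * np.pi * phase) + rng.normal(scale=0.5, size=60)
tck = fit_parameter_spline(phase, values, smoothing=60 * 0.25)
print(predict_parameter(tck, [0.0, 0.25, 0.5, 0.75]))
```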

J Mcclelland - One of the best experts on this subject based on the ideXlab platform.

  • Respiratory Motion Models: A Review
    Medical Image Analysis, 2013
    Co-Authors: J Mcclelland, David J Hawkes, Tobias Schaeffter, Andrew P King
    Abstract:

    See the identical abstract under David J Hawkes above.

  • Inter-Fraction Variations in Respiratory Motion Models
    Physics in Medicine and Biology, 2011
    Co-Authors: J Mcclelland, Marc Modat, Simon Hughes, S Ahmad, A Qureshi, David Landau, Sebastien Ourselin, David J Hawkes
    Abstract:

    See the identical abstract under David J Hawkes above.

  • A Continuous 4D Motion Model From Multiple Respiratory Cycles for Use in Lung Radiotherapy
    Medical Physics, 2006
    Co-Authors: J Mcclelland, J Blackall, S Tarte, Adam C Chandler, Simon Hughes, S Ahmad, D Landau, David J Hawkes
    Abstract:

    See the identical abstract under David J Hawkes above.