Outcome Modeling

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 43995 Experts worldwide ranked by ideXlab platform

Issam El Naqa - One of the best experts on this subject based on the ideXlab platform.

  • Fundamentals of radiomics in nuclear medicine and hybrid imaging
    2021
    Co-Authors: Lise Wei, Issam El Naqa
    Abstract:

    Positron emission tomography (PET) and single-photon emission computerized tomography (SPECT) are nuclear diagnostic imaging modalities for different diseases, including cardiac failure and cancer. They hold the advantage of detecting disease-related biochemical and physiologic abnormalities in advance of anatomical changes, and are thus widely used for staging of disease progression, identification of the treatment gross tumor volume, monitoring of disease, as well as prediction of Outcomes and personalization of treatment regimens. Among the arsenal of different functional imaging modalities, nuclear imaging has benefited from early adoption of quantitative image analysis, starting from simple standard uptake value (SUV) normalization and progressing to the extraction of more complex imaging uptake patterns, thanks chiefly to the application of sophisticated image processing and machine learning algorithms. In this chapter, we discuss the application of image processing and machine/deep learning techniques to PET/SPECT imaging, with special focus on the oncological radiotherapy domain as a case study. We start with basic feature extraction and proceed to its application in image-based Outcome Modeling in the radiomics and deep learning fields.
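The SUV normalization mentioned in the abstract is a simple ratio: tissue activity concentration divided by injected dose per unit of body weight. A minimal Python sketch of the body-weight variant follows; the function name and example values are illustrative, not from the chapter, and it assumes tissue density of about 1 g/mL so that the SUV comes out dimensionless:

```python
def suv_bw(voxel_activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalized SUV: tissue activity concentration divided
    by injected dose per gram of body weight (tissue density ~1 g/mL)."""
    injected_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    body_weight_g = body_weight_kg * 1000.0     # kg  -> g
    return voxel_activity_kbq_per_ml / (injected_kbq / body_weight_g)

# hypothetical example: 5 kBq/mL uptake, 370 MBq injected, 70 kg patient
print(round(suv_bw(5.0, 370.0, 70.0), 3))
```

More advanced radiomic features build on such normalized uptake maps, for example by summarizing the texture or shape of the uptake distribution.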

  • Radiation therapy Outcomes models in the era of radiomics and radiogenomics: uncertainties and validation
    International Journal of Radiation Oncology Biology Physics, 2018
    Co-Authors: Issam El Naqa, Gaurav Pandey, Hugo J W L Aerts, Jen-Tzung Chien, Christian Nicolaj Andreassen, Andrzej Niemierko, Randall K Ten Haken
    Abstract:

    Recent advances in imaging and biotechnology have tremendously improved the availability of quantitative imaging (radiomics) and molecular data (radiogenomics) for radiotherapy patients. This big data development, with its comprehensive nature, promises to transform Outcome Modeling in radiotherapy from models built on a few dose-volume metrics into more data-driven analytics. However, it also presents profound new challenges and creates new tasks for alleviating the uncertainties that arise from dealing with heterogeneous data and complex big data analytics. Therefore, more rigorous validation procedures need to be devised for these radiomics/radiogenomics models, compared to the traditional Outcome Modeling approaches previously utilized in radiation oncology, before they can be safely deployed in clinical trials or incorporated into daily practice. This editorial highlights current affairs, identifies some of the frequent sources of uncertainty, and presents some of the recommended practices for the evaluation and validation of radiomics/radiogenomics models.

  • Outcome Modeling techniques for prostate cancer radiotherapy: data, models, and validation
    Physica Medica, 2016
    Co-Authors: James Coates, Issam El Naqa
    Abstract:

    Prostate cancer is a frequently diagnosed malignancy worldwide, and radiation therapy is a first-line approach in treating localized as well as locally advanced cases. The limiting factor in modern radiotherapy regimens is dose to normal structures, an excess of which can lead to aberrant radiation-induced toxicities. Conversely, reducing dose to spare adjacent normal structures risks underdosing target volumes and compromising local control. As a result, efforts aimed at predicting the effects of radiotherapy could be invaluable for optimizing patient treatments, mitigating such toxicities while simultaneously maximizing biochemical control. In this work, we review the types of data, frameworks, and techniques used for prostate radiotherapy Outcome Modeling. Consideration is given to clinical and dose-volume metrics, such as those amassed by the QUANTEC initiative, as well as to newer methods for integrating biological and genetic factors to improve prediction performance. We furthermore highlight trends in machine learning that may help to elucidate the complex pathophysiological mechanisms of tumor control and radiation-induced normal tissue side effects.

Andrew Gelman - One of the best experts on this subject based on the ideXlab platform.

  • Bayesian inference under cluster sampling with probability proportional to size
    Statistics in Medicine, 2018
    Co-Authors: Susanna Makela, Andrew Gelman
    Abstract:

    Cluster sampling is common in survey practice, and the corresponding inference has been predominantly design-based. We develop a Bayesian framework for cluster sampling and account for the design effect in the Outcome Modeling. We consider a two-stage cluster sampling design where the clusters are first selected with probability proportional to cluster size, and then units are randomly sampled inside selected clusters. Challenges arise when the sizes of the nonsampled clusters are unknown. We propose nonparametric and parametric Bayesian approaches for predicting the unknown cluster sizes, with this inference performed simultaneously with the model for the survey Outcome and with computation performed in the open-source Bayesian inference engine Stan. Simulation studies show that the integrated Bayesian approach outperforms classical methods with efficiency gains, especially under informative cluster sampling designs with a small number of selected clusters. We apply the method to the Fragile Families and Child Wellbeing study as an illustration of inference for complex health surveys.
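As a rough illustration of the two-stage design described in the abstract (not the authors' Stan model), the sampling mechanism itself can be sketched in Python with hypothetical cluster sizes; clusters are drawn with probability proportional to size (with replacement, for simplicity), then units are sampled at random within each selected cluster:

```python
import random

def two_stage_pps_sample(cluster_sizes, n_clusters, n_units, seed=0):
    """Stage 1: draw clusters with probability proportional to size
    (PPS, with replacement for simplicity). Stage 2: simple random
    sample of units inside each selected cluster."""
    rng = random.Random(seed)
    ids = list(range(len(cluster_sizes)))
    chosen = rng.choices(ids, weights=cluster_sizes, k=n_clusters)
    return [(c, sorted(rng.sample(range(cluster_sizes[c]),
                                  min(n_units, cluster_sizes[c]))))
            for c in chosen]

# hypothetical population of 6 clusters with known sizes
sample = two_stage_pps_sample([120, 45, 300, 80, 10, 200],
                              n_clusters=3, n_units=5)
print(sample)
```

In the paper's setting the sizes of the nonsampled clusters are unknown and must themselves be modeled; the sketch assumes all sizes are known, which is precisely the simplification the Bayesian approach relaxes.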

  • Bayesian inference under cluster sampling with probability proportional to size
    arXiv: Methodology, 2017
    Co-Authors: Susanna Makela, Andrew Gelman
    Abstract:

    Cluster sampling is common in survey practice, and the corresponding inference has been predominantly design-based. We develop a Bayesian framework for cluster sampling and account for the design effect in the Outcome Modeling. We consider a two-stage cluster sampling design where the clusters are first selected with probability proportional to cluster size, and then units are randomly sampled inside selected clusters. Challenges arise when the sizes of nonsampled clusters are unknown. We propose nonparametric and parametric Bayesian approaches for predicting the unknown cluster sizes, with this inference performed simultaneously with the model for the survey Outcome. Simulation studies show that the integrated Bayesian approach outperforms classical methods with efficiency gains. We use Stan for computing and apply the proposal to the Fragile Families and Child Wellbeing study as an illustration of complex survey inference in health surveys.

Susanna Makela - One of the best experts on this subject based on the ideXlab platform.

  • Bayesian inference under cluster sampling with probability proportional to size
    Statistics in Medicine, 2018
    Co-Authors: Susanna Makela, Andrew Gelman
    Abstract:

    Cluster sampling is common in survey practice, and the corresponding inference has been predominantly design-based. We develop a Bayesian framework for cluster sampling and account for the design effect in the Outcome Modeling. We consider a two-stage cluster sampling design where the clusters are first selected with probability proportional to cluster size, and then units are randomly sampled inside selected clusters. Challenges arise when the sizes of the nonsampled clusters are unknown. We propose nonparametric and parametric Bayesian approaches for predicting the unknown cluster sizes, with this inference performed simultaneously with the model for the survey Outcome and with computation performed in the open-source Bayesian inference engine Stan. Simulation studies show that the integrated Bayesian approach outperforms classical methods with efficiency gains, especially under informative cluster sampling designs with a small number of selected clusters. We apply the method to the Fragile Families and Child Wellbeing study as an illustration of inference for complex health surveys.

  • Bayesian inference under cluster sampling with probability proportional to size
    arXiv: Methodology, 2017
    Co-Authors: Susanna Makela, Andrew Gelman
    Abstract:

    Cluster sampling is common in survey practice, and the corresponding inference has been predominantly design-based. We develop a Bayesian framework for cluster sampling and account for the design effect in the Outcome Modeling. We consider a two-stage cluster sampling design where the clusters are first selected with probability proportional to cluster size, and then units are randomly sampled inside selected clusters. Challenges arise when the sizes of nonsampled clusters are unknown. We propose nonparametric and parametric Bayesian approaches for predicting the unknown cluster sizes, with this inference performed simultaneously with the model for the survey Outcome. Simulation studies show that the integrated Bayesian approach outperforms classical methods with efficiency gains. We use Stan for computing and apply the proposal to the Fragile Families and Child Wellbeing study as an illustration of complex survey inference in health surveys.

Ralph B D'Agostino - One of the best experts on this subject based on the ideXlab platform.

  • Tutorial in biostatistics: data-driven subgroup identification and analysis in clinical trials
    Statistics in Medicine, 2017
    Co-Authors: Ilya Lipkovich, Alex Dmitrienko, Ralph B D'Agostino
    Abstract:

    It is well known that both the direction and magnitude of the treatment effect in clinical trials are often affected by baseline patient characteristics (generally referred to as biomarkers). Characterization of treatment effect heterogeneity plays a central role in the field of personalized medicine and facilitates the development of tailored therapies. This tutorial focuses on a general class of problems arising in data-driven subgroup analysis, namely, identification of biomarkers with strong predictive properties and of patient subgroups with desirable characteristics such as improved benefit and/or safety. Limitations of ad hoc approaches to biomarker exploration and subgroup identification in clinical trials are discussed, and these approaches are contrasted with principled approaches to exploratory subgroup analysis based on recent advances in machine learning and data mining. A general framework for evaluating predictive biomarkers and identifying the associated subgroups is introduced. The tutorial provides a review of a broad class of statistical methods used in subgroup discovery, including global Outcome Modeling methods, global treatment effect Modeling methods, optimal treatment regimes, and local Modeling methods. Commonly used subgroup identification methods are illustrated using two case studies based on clinical trials with binary and survival endpoints. Copyright © 2016 John Wiley & Sons, Ltd.
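As one toy instance of the treatment effect heterogeneity idea surveyed in the tutorial, a single candidate biomarker split can be evaluated by estimating the treatment effect separately within each biomarker stratum. The simulated data and helper function below are hypothetical, not from the tutorial's case studies:

```python
import random

def subgroup_effects(data):
    """Treatment effect (mean outcome, treated minus control) within
    each level of a binary biomarker: the simplest subgroup evaluation
    with one pre-specified candidate split."""
    effects = {}
    for level in {d["biomarker"] for d in data}:
        arm = lambda t: [d["y"] for d in data
                         if d["biomarker"] == level and d["treated"] == t]
        treated, control = arm(True), arm(False)
        effects[level] = sum(treated) / len(treated) - sum(control) / len(control)
    return effects

# simulated trial in which only biomarker-positive patients benefit
rng = random.Random(1)
data = [{"biomarker": b, "treated": t,
         "y": rng.gauss(1.0 if (b and t) else 0.0, 0.1)}
        for b in (True, False) for t in (True, False) for _ in range(50)]
print(subgroup_effects(data))
```

Principled subgroup identification generalizes this by searching over many candidate splits with multiplicity control, which is where the machine learning and data mining methods reviewed in the tutorial come in.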

Joseph O Deasy - One of the best experts on this subject based on the ideXlab platform.

  • Registering study analysis plans (SAPs) before dissecting your data: updating and standardizing Outcome Modeling
    Frontiers in Oncology, 2020
    Co-Authors: Maria Thor, A Apte, Joseph O Deasy
    Abstract:

    Public preregistration of statistical analysis plans (SAPs) is widely recognized for clinical trials but adopted to a much lesser extent in observational studies. Registration of SAPs prior to analysis is encouraged not only to increase transparency and exactness but also to avoid positive-finding bias and to better standardize Outcome Modeling. Efforts to standardize Outcome Modeling in general, whether based on clinical trial and/or observational data, have recently gained momentum. We suggest a three-step SAP concept in which investigators are encouraged to (1) design the SAP and circulate it among the co-investigators; (2) log the SAP with a public repository, which recognizes the SAP with a digital object identifier (DOI); and (3) cite (using the DOI), briefly summarize, and motivate any deviations from the SAP in the associated manuscript. More specifically, the SAP should include the scope (brief data and study description, co-investigators, hypotheses, primary Outcome measure, study title), in addition to step-by-step details of the analysis (handling of missing data, resampling, defined significance level, statistical function, validation, and variables and parametrization).

  • CERR: a computational environment for radiotherapy research
    Medical Physics, 2003
    Co-Authors: Joseph O Deasy, Angel I Blanco, V Clark
    Abstract:

    A software environment is described, called the computational environment for radiotherapy research (CERR, pronounced "sir"). CERR partially addresses four broad needs in treatment planning research: (a) it provides a convenient and powerful software environment to develop and prototype treatment planning concepts; (b) it serves as a software integration environment to combine treatment planning software written in multiple languages (MATLAB, FORTRAN, C/C++, JAVA, etc.) together with treatment plan information (computed tomography scans, outlined structures, dose distributions, digital films, etc.); (c) it provides the ability to extract treatment plans from disparate planning systems using the widely available AAPM/RTOG archiving mechanism; and (d) it provides a convenient and powerful tool for sharing and reproducing treatment planning research results. The functional components currently being distributed, including source code, include: (1) an import program which converts the widely available AAPM/RTOG treatment planning format into a MATLAB cell-array data object, facilitating manipulation; (2) viewers which display axial, coronal, and sagittal computed tomography images, structure contours, digital films, and isodose lines or dose colorwash; (3) a suite of contouring tools to edit and/or create anatomical structures; (4) dose–volume and dose–surface histogram calculation and display tools; and (5) various predefined commands. CERR allows the user to retrieve any AAPM/RTOG key word information about the treatment plan archive. The code is relatively self-describing, because it relies on MATLAB structure field name definitions based on the AAPM/RTOG standard. New structure field names can be added dynamically or permanently. New components of arbitrary data type can be stored and accessed without disturbing system operation. CERR has been applied to aid research in dose–volume Outcome Modeling, Monte Carlo dose calculation, and treatment planning optimization. In summary, CERR provides a powerful, convenient, and common framework which allows researchers to use common patient data sets, and to compare and share research results.
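CERR's dose-volume histogram tools are MATLAB-based; purely to illustrate the underlying computation, a cumulative DVH over a structure's voxel doses can be sketched in Python (hypothetical helper, not CERR code):

```python
def cumulative_dvh(doses, bin_width=1.0):
    """Cumulative dose-volume histogram: for each dose level, the
    fraction of structure voxels receiving at least that dose."""
    n = len(doses)
    max_dose = max(doses)
    levels, volumes = [], []
    d = 0.0
    while d <= max_dose:
        levels.append(d)
        volumes.append(sum(1 for v in doses if v >= d) / n)
        d += bin_width
    return levels, volumes

# toy voxel doses (Gy) for a hypothetical structure
levels, vols = cumulative_dvh([10, 20, 20, 30, 40], bin_width=10)
print(list(zip(levels, vols)))
```

Each point gives the fraction of the structure receiving at least the corresponding dose, which is the quantity that dose-volume Outcome models are typically built on.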