Database Error

The experts below are selected from a list of 33 experts worldwide, ranked by the ideXlab platform.

T D Gardiner - One of the best experts on this subject based on the ideXlab platform.

  • Sensitivity of model-based quantitative FTIR to instrumental and spectroscopic database error sources
    Vibrational Spectroscopy, 2009
    Co-Authors: M D Coleman, T D Gardiner
    Abstract:

    We report a theoretical study of the error sources associated with the quantification of gas-phase FTIR spectra using synthetic calibrations. A forward model was constructed based on a Bruker IFS66 FTIR spectrometer, modelling the instrument line shape from theory and taking line parameters from a line-by-line spectral database. Default values were set in the forward model for an ‘ideal’ system in which the spectrometer is perfectly aligned and all spectroscopic parameters are exactly known. Using a re-iterative non-linear least-squares routine, input values were perturbed, allowing the model's sensitivity to each parameter to be assessed. Combining the sensitivity information with knowledge of the possible absolute uncertainties (e.g. the accuracy to which a user might be able to axially align the field stop aperture), the potential quantitative contribution of each parameter was ranked in the following order (greatest first): field stop aperture axial alignment, line intensity, line centre, air-broadening line width, pressure, lateral field stop aperture/collimating optic alignment, pathlength, temperature. Whilst this ranking is specific to the type of measurement modelled, we discuss which elements would remain consistent for different measurement acquisition set-ups and which would alter. Consequently, we propose that the error analysis presented here can be used to determine which parameters in a forward model should be set as variables (i.e. those with the highest potential error contribution) and which should remain fixed, so as to minimise the errors incurred when using optimisation routines for gas quantification.
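
    As an illustration of the perturbation approach described in this abstract, the sketch below builds a deliberately simplified forward model (a single pressure-broadened Lorentzian line under the Beer-Lambert law, not the paper's full Bruker IFS66 instrument model), offsets each parameter by an assumed uncertainty, and retrieves the gas concentration by non-linear least squares against the unperturbed model. All parameter values and uncertainties are placeholders, and only a subset of the parameters ranked in the paper is modelled, so the printed ranking illustrates the method rather than reproducing the paper's results.

    import numpy as np
    from scipy.optimize import curve_fit

    def forward_model(wavenumber, concentration, line_centre, line_intensity,
                      air_width, pressure, pathlength):
        """Toy transmittance model: one pressure-broadened Lorentzian line with
        Beer-Lambert absorption, standing in for the full instrument model."""
        gamma = air_width * pressure  # pressure-broadened half-width (cm^-1)
        lineshape = (gamma / np.pi) / ((wavenumber - line_centre) ** 2 + gamma ** 2)
        return np.exp(-line_intensity * lineshape * concentration * pathlength)

    # Nominal parameter values and assumed absolute uncertainties -- placeholder
    # numbers, not values taken from the paper or a spectroscopic database.
    nominal = {"line_centre": 2100.0, "line_intensity": 1.0e-20,
               "air_width": 0.07, "pressure": 1.0, "pathlength": 10.0}
    uncertainty = {"line_centre": 5.0e-4, "line_intensity": 2.0e-22,
                   "air_width": 3.5e-3, "pressure": 1.0e-3, "pathlength": 0.05}

    wn = np.linspace(2099.0, 2101.0, 2001)  # wavenumber grid (cm^-1)
    true_conc = 1.0e18                      # 'true' gas amount, arbitrary units

    results = []
    for name, delta in uncertainty.items():
        # Simulate a 'measured' spectrum with one parameter offset by its
        # assumed uncertainty, everything else held at the nominal value.
        perturbed = dict(nominal, **{name: nominal[name] + delta})
        measured = forward_model(wn, true_conc, **perturbed)

        # Retrieve the concentration with the *unperturbed* model via
        # non-linear least squares, in the spirit of a synthetic calibration.
        fit_func = lambda w, c: forward_model(w, c, **nominal)
        retrieved, _ = curve_fit(fit_func, wn, measured, p0=[true_conc])
        results.append((name, abs(retrieved[0] - true_conc) / true_conc))

    # Rank parameters by the fractional concentration error they induce.
    for name, frac_err in sorted(results, key=lambda r: r[1], reverse=True):
        print(f"{name:15s} fractional concentration error: {frac_err:.2e}")

    The paper applies the same idea to a much richer model, including instrument line shape, alignment and temperature parameters, which is what yields the ranking quoted in the abstract above.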

M D Coleman - One of the best experts on this subject based on the ideXlab platform.

  • Sensitivity of model-based quantitative FTIR to instrumental and spectroscopic database error sources
    Vibrational Spectroscopy, 2009
    Co-Authors: M D Coleman, T D Gardiner

Maureen M Cunningham - One of the best experts on this subject based on the ideXlab platform.

  • Quantifying data quality for clinical trials using electronic data capture
    PLOS ONE, 2008
    Co-Authors: Meredith Nahm, Carl F Pieper, Maureen M Cunningham
    Abstract:

    Background: Historically, only partial assessments of data quality have been performed in clinical trials, and the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as the paper CRFs typically leveraged for quality measurement are not used in EDC processes.

    Methods and Principal Findings: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated a methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to the CRF-to-database error rates reported in the published literature. We attribute this largely to the absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions.

    Conclusions: Historically, medical record abstraction has been the most significant source of error by an order of magnitude, and it should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
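
    The headline figure in this abstract is an error rate normalised to errors per 10,000 fields. The short sketch below shows that calculation together with a normal-approximation confidence interval; the audit counts are hypothetical values chosen for illustration, not the Clinical Trials Network data behind the reported 14.3 errors per 10,000 fields.

    import math

    def error_rate_per_10k(errors_found, fields_audited):
        """Error rate normalised to errors per 10,000 data fields inspected."""
        return 10_000 * errors_found / fields_audited

    def rate_confidence_interval(errors_found, fields_audited, z=1.96):
        """Approximate 95% CI for the per-10,000-field rate using the normal
        approximation to the binomial; an exact binomial (or Poisson) interval
        would be preferable when very few errors are found."""
        p = errors_found / fields_audited
        half_width = z * math.sqrt(p * (1.0 - p) / fields_audited)
        return 10_000 * max(p - half_width, 0.0), 10_000 * (p + half_width)

    # Hypothetical audit counts -- illustrative only.
    errors_found, fields_audited = 43, 30_000

    rate = error_rate_per_10k(errors_found, fields_audited)
    low, high = rate_confidence_interval(errors_found, fields_audited)
    print(f"{rate:.1f} errors per 10,000 fields "
          f"(approx. 95% CI {low:.1f}-{high:.1f})")

    Rates computed this way are only comparable across trials when the error definition and the set of audited fields are held constant, which is the dependency the authors highlight in their conclusions.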

Meredith Nahm - One of the best experts on this subject based on the ideXlab platform.

  • Quantifying data quality for clinical trials using electronic data capture
    PLOS ONE, 2008
    Co-Authors: Meredith Nahm, Carl F Pieper, Maureen M Cunningham

Carl F Pieper - One of the best experts on this subject based on the ideXlab platform.

  • Quantifying data quality for clinical trials using electronic data capture
    PLOS ONE, 2008
    Co-Authors: Meredith Nahm, Carl F Pieper, Maureen M Cunningham