Predictive Capability

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Jongmoon Baik - One of the best experts on this subject based on the ideXlab platform.

  • On the Long-Term Predictive Capability of Data-Driven Software Reliability Model: An Empirical Evaluation
    International Symposium on Software Reliability Engineering, 2014
    Co-Authors: Jinhee Park, Nakwon Lee, Jongmoon Baik
    Abstract:

    In recent years, data-driven software reliability models have been proposed to address problematic issues of existing software reliability growth models (i.e., unrealistic underlying assumptions and model selection problems). However, previous data-driven approaches mostly focused on sample fitting or next-step prediction without adequately evaluating their long-term predictive capability. This paper investigates three multi-step-ahead prediction strategies for data-driven software reliability models and compares their predictive performance on failure count data and time-between-failure data. The model with the best-performing strategy on each data type is then compared with conventional software reliability growth models. We found that the recursive strategy gives better predictions for fault count data, while no strategy is superior to the others for time-between-failure data. The data-driven approach with the best input domain performed as well in long-term prediction as the best of the software reliability growth models. These results indicate the applicability of data-driven methods even in long-term prediction and help reliability practitioners identify an appropriate multi-step prediction strategy for software reliability.
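
A minimal sketch of the recursive multi-step-ahead strategy described above, assuming a scikit-learn-style one-step regressor; the window size, network shape, and fault counts are illustrative, not the authors' exact setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def recursive_forecast(series, horizon, window=5):
    """Recursive multi-step-ahead prediction: train a one-step model
    on lagged windows, then feed each prediction back in as input."""
    # Build lagged training pairs (x = last `window` values, y = next value)
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    model.fit(X, y)

    history = list(series[-window:])
    preds = []
    for _ in range(horizon):
        nxt = model.predict(np.array(history[-window:]).reshape(1, -1))[0]
        preds.append(nxt)          # each prediction is recycled as a future input
        history.append(nxt)
    return preds

# Example: cumulative fault counts observed so far -> 5-step-ahead forecast
faults = [2, 5, 9, 12, 16, 19, 21, 24, 26, 27, 29, 30]
print(recursive_forecast(faults, horizon=5))
```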

William L Oberkampf - One of the best experts on this subject based on the ideXlab platform.

  • Verification and Validation in Scientific Computing: Fundamental Concepts and Terminology
    2010
    Co-Authors: William L Oberkampf, Christopher J Roy
    Abstract:

    This chapter discusses the fundamental concepts and terminology associated with verification and validation (V&V) of models and simulations. We begin with a brief history of the philosophical foundations so that the reader can better understand why there are a wide variety of views toward V&V principles and procedures. Various perspectives of V&V have also generated different formal definitions of the terms verification and validation in important communities. Although the terminology is moving toward convergence within some communities, there are still significant differences. The reader needs to be aware of these differences in terminology to help minimize confusion and unnecessary disagreements, as well as to anticipate possible difficulties in contractual obligations in business and government. We also discuss a number of important and closely related terms in modeling and simulation (M&S). Examples are predictive capability, calibration, certification, uncertainty, and error. We end the chapter with a discussion of a conceptual framework for integrating verification, validation, and predictive capability. Although there are different frameworks for integrating these concepts, the framework discussed here has proven very helpful in understanding how the various activities in scientific computing are related.

  • Model Validation and Predictive Capability for the Thermal Challenge Problem
    Computer Methods in Applied Mechanics and Engineering, 2008
    Co-Authors: Scott Ferson, William L Oberkampf, Lev R Ginzburg
    Abstract:

    We address the thermal problem posed at the Sandia Validation Challenge Workshop. Unlike traditional approaches that confound calibration with validation and prediction, our approach strictly distinguishes these activities and produces a quantitative measure of model-form uncertainty in the face of available data. We introduce a general validation metric that can be used to characterize the disagreement between the quantitative predictions from a model and relevant empirical data when either or both are expressed as probability distributions. By considering entire distributions, this approach generalizes traditional approaches to validation that focus only on the mean behaviors of predictions and observations. The proposed metric has several desirable properties that should make it practically useful in engineering, including objectivity, robustness, retaining the units of the data themselves, and generalizing the deterministic difference. The metric can be used to assess the overall performance of a model against all the experimental observations in the validation domain, and it can be extrapolated to express the predictive capability of the model under conditions for which direct experimental observations are not available. We apply the metric and the scheme for characterizing predictive capability to the thermal problem.
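
Read as an area metric, the disagreement measure can be computed as the area between the model's predictive CDF and the empirical CDF of the observations, i.e. the integral of |F_model(x) - S_data(x)| dx, which keeps the units of the data. A minimal sketch under that reading (the sample sizes and temperature values are illustrative):

```python
import numpy as np

def area_validation_metric(model_samples, data_samples, grid_points=2000):
    """Area between the model CDF and the empirical data CDF:
    d = integral of |F_model(x) - S_data(x)| dx.
    The result keeps the units of the data (e.g., kelvin)."""
    lo = min(model_samples.min(), data_samples.min())
    hi = max(model_samples.max(), data_samples.max())
    x = np.linspace(lo, hi, grid_points)
    # Empirical CDFs: fraction of samples <= x at each grid point
    F_model = np.searchsorted(np.sort(model_samples), x, side="right") / len(model_samples)
    F_data = np.searchsorted(np.sort(data_samples), x, side="right") / len(data_samples)
    return np.trapz(np.abs(F_model - F_data), x)

# Example: model prediction distribution vs. sparse experimental observations
rng = np.random.default_rng(0)
pred = rng.normal(500.0, 20.0, 10_000)          # model output distribution
obs = np.array([480.0, 495.0, 510.0, 525.0])    # validation measurements
print(area_validation_metric(pred, obs))        # mismatch in the data's units
```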

  • Predictive Capability Maturity Model for Computational Modeling and Simulation
    2007
    Co-Authors: William L Oberkampf, Timothy G Trucano, Martin Pilch
    Abstract:

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
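
As a rough illustration of the structure the abstract describes, a PCMM assessment can be recorded as a scorecard of the six elements, each rated on four maturity levels. A hypothetical sketch (the element names come from the abstract; the scores and helper method are invented for illustration):

```python
from dataclasses import dataclass

# The six contributing elements named in the PCMM
PCMM_ELEMENTS = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

@dataclass
class PCMMAssessment:
    """Maturity score per element, each on the PCMM's four levels (0-3)."""
    scores: dict  # element name -> maturity level in {0, 1, 2, 3}

    def weakest_elements(self):
        # The lowest-rated elements are where the M&S effort needs work
        low = min(self.scores.values())
        return [e for e, s in self.scores.items() if s == low]

assessment = PCMMAssessment(scores={e: 1 for e in PCMM_ELEMENTS})
assessment.scores["code verification"] = 2  # hypothetical rating
print(assessment.weakest_elements())
```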

  • Verification, Validation, and Predictive Capability in Computational Engineering and Physics
    Applied Mechanics Reviews, 2004
    Co-Authors: William L Oberkampf, Timothy G Trucano, Charles Hirsch
    Abstract:

    Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.
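
A standard code-verification exercise in this spirit is checking a discretization's observed order of accuracy against its formal order, using errors measured on two grid resolutions. A minimal sketch, not taken from the paper, using a second-order central difference as the "code" under test:

```python
import numpy as np

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two grids:
    p = log(e_coarse / e_fine) / log(r)."""
    return np.log(err_coarse / err_fine) / np.log(refinement_ratio)

# Verification example: central difference of sin(x) at x = 1,
# compared against the exact derivative cos(x)
def deriv_error(h):
    x = 1.0
    approx = (np.sin(x + h) - np.sin(x - h)) / (2 * h)
    return abs(approx - np.cos(x))

e_coarse, e_fine = deriv_error(0.1), deriv_error(0.05)
print(observed_order(e_coarse, e_fine))  # approaches 2.0, the formal order
```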

Inge Sandholt - One of the best experts on this subject based on the ideXlab platform.

  • Evaluation of Remote-Sensing-Based Rainfall Products through Predictive Capability in Hydrological Runoff Modelling
    Hydrological Processes, 2010
    Co-Authors: Simon Stisen, Inge Sandholt
    Abstract:

    The emergence of regional and global satellite-based rainfall products with high spatial and temporal resolution has opened up new large-scale hydrological applications in data-sparse or ungauged catchments. In particular, distributed hydrological models can benefit from the good spatial coverage and distributed nature of satellite-based rainfall estimates (SRFEs). In this study, five SRFEs with a temporal resolution of 24 h and spatial resolutions between 8 and 27 km were evaluated through their predictive capability in a distributed hydrological model of the Senegal River basin in West Africa. The main advantage of this evaluation methodology is the integration of the rainfall model input in time and space when evaluated at the sub-catchment scale. An initial data analysis revealed significant biases in the SRFE products and large variations in rainfall amounts between SRFEs, although the spatial patterns were similar. The results showed that the Climate Prediction Center/Famine Early Warning System (CPC-FEWS) and cold cloud duration (CCD) products, which are partly based on rain gauge data and produced specifically for the African continent, performed better in the modelling context than the global SRFEs: the Climate Prediction Center MORPHing technique (CMORPH), the Tropical Rainfall Measuring Mission (TRMM), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN). The best-performing SRFE, CPC-FEWS, produced good results, with Nash-Sutcliffe efficiency values (R²_NS) between 0.84 and 0.87 after bias correction and model recalibration. This was comparable to model simulations based on traditional rain gauge data. The study highlights the need for input-specific calibration of hydrological models, since major differences were observed in model performance even when all SRFEs were scaled to the same mean rainfall amounts. This is mainly attributed to differences in temporal dynamics between products.
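
For reference, the two quantitative steps mentioned here, Nash-Sutcliffe efficiency and scaling a rainfall product to a reference mean, can be sketched as follows (the runoff numbers are illustrative, and the bias correction shown is a simple multiplicative scaling, which may differ from the authors' exact procedure):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values around 0.84-0.87 indicate a good runoff model."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def bias_correct(satellite_rain, gauge_rain):
    """Multiplicative bias correction: scale the satellite product so its
    mean matches the gauge reference over the calibration period."""
    return np.asarray(satellite_rain) * (np.mean(gauge_rain) / np.mean(satellite_rain))

obs_runoff = np.array([10.0, 40.0, 120.0, 60.0, 20.0])
sim_runoff = np.array([12.0, 35.0, 110.0, 65.0, 25.0])
print(nash_sutcliffe(obs_runoff, sim_runoff))
```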

Aitor Atencia - One of the best experts on this subject based on the ideXlab platform.

  • Effect of Radar Rainfall Time Resolution on the Predictive Capability of a Distributed Hydrologic Model
    Hydrology and Earth System Sciences, 2011
    Co-Authors: Aitor Atencia, Luis Mediero, M C Llasat, Luis Garrote
    Abstract:

    The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, the use of a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to perform a sensitivity analysis of the rainfall time resolution on the results of a distributed hydrologic model in a flash-flood-prone basin. Within such a catchment, floods are produced by heavy rainfall events with a large convective component. A second objective of the paper is to propose a methodology that improves radar rainfall estimation at a higher spatial and temporal resolution. Composite radar data from a network of three C-band radars with 6-min temporal and 2 × 2 km² spatial resolution were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (a gauge-adjustment method) was applied to four cases of heavy rainfall to correct the observed rainfall underestimation by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions, from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. A sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological basin behaviour.
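
The Z/R relationships referred to here are power laws of the form Z = a * R^b linking radar reflectivity Z to rain rate R. A minimal sketch of the conversion, using the classic Marshall-Palmer coefficients (a = 200, b = 1.6) as placeholders for the convective and stratiform pairs the authors actually fit:

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert the Z/R power law Z = a * R**b for rain rate R (mm/h).
    a=200, b=1.6 are the Marshall-Palmer defaults; the paper fits separate
    (a, b) pairs for convective and stratiform echoes via probability matching."""
    z = 10.0 ** (dbz / 10.0)          # reflectivity in mm^6/m^3 from dBZ
    return (z / a) ** (1.0 / b)

# Example: reflectivities from one 6-min radar scan mapped to rain rates
scan_dbz = np.array([20.0, 35.0, 45.0, 52.0])
print(rain_rate_from_dbz(scan_dbz))  # heavier rain for convective cores
```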

Luis Garrote - One of the best experts on this subject based on the ideXlab platform.

  • Effect of Radar Rainfall Time Resolution on the Predictive Capability of a Distributed Hydrologic Model
    Hydrology and Earth System Sciences, 2011
    Co-Authors: Aitor Atencia, Luis Mediero, M C Llasat, Luis Garrote
    Abstract:

    This is the same paper as the one listed under Aitor Atencia above; see that entry for the abstract.