Model Data

The experts below are selected from a list of 6,710,541 experts worldwide, ranked by the ideXlab platform.

Andrew D Richardson - One of the best experts on this subject based on the ideXlab platform.

  • Using model-data fusion to interpret past trends and quantify uncertainties in future projections of terrestrial ecosystem carbon cycling
    Global Change Biology, 2012
    Co-Authors: Trevor F. Keenan, Eric A Davidson, Antje Moffat, William Munger, Andrew D Richardson
    Abstract:

    Uncertainties in model projections of carbon cycling in terrestrial ecosystems stem from inaccurate parameterization of incorporated processes (endogenous uncertainties) and from processes or drivers that are not accounted for by the model (exogenous uncertainties). Here, we assess endogenous and exogenous uncertainties using a model-data fusion framework benchmarked with an artificial neural network (ANN). We used 18 years of eddy-covariance carbon flux data from the Harvard Forest, where ecosystem carbon uptake has doubled over the measurement period, along with 15 ancillary ecological data sets relevant to the carbon cycle. We test the ability of combinations of diverse data to constrain projections of a process-based carbon cycle model, both against the measured decadal trend and under future long-term climate change. The use of high-frequency eddy-covariance data alone is shown to be insufficient to constrain model projections at the annual or longer time step. Future projections of carbon cycling under climate change in particular are shown to be highly dependent on the data used to constrain the model. Endogenous uncertainties in long-term model projections of future carbon stocks and fluxes were greatly reduced by the use of aggregated flux budgets in conjunction with ancillary data sets. The data-informed model, however, poorly reproduced interannual variability in net ecosystem carbon exchange and biomass increments, and did not reproduce the long-term trend. Furthermore, we use the model-data fusion framework, and the ANN, to show that the long-term doubling of the rate of carbon uptake at Harvard Forest cannot be explained by meteorological drivers and is driven by changes during the growing season. By integrating all available data with the model-data fusion framework, we show that the observed trend can only be reproduced with temporal changes in model parameters. Together, the results show that exogenous uncertainty dominates uncertainty in future projections from a data-informed process-based model.
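As a minimal, hedged sketch of the uncertainty-weighted model-data fusion described in this abstract, the snippet below fits a toy two-parameter carbon model to synthetic annual fluxes plus an aggregated flux budget, weighting each data stream by its own uncertainty. The toy model structure, parameter names, and synthetic observations are assumptions for illustration only; this is not the process-based model or the Harvard Forest data used in the paper.

```python
# Sketch of uncertainty-weighted model-data fusion with two data streams.
# The two-parameter model and all numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
years = np.arange(18)                      # 18 years of annual time steps
temp = 10 + rng.normal(0, 1, years.size)   # toy annual temperature driver (degC)

def toy_model(params, temp):
    """Toy carbon balance: GPP scales with temperature, respiration with a Q10 term."""
    gpp_base, q10 = params
    gpp = gpp_base * (1 + 0.05 * (temp - 10))
    resp = 0.5 * gpp_base * q10 ** ((temp - 10) / 10)
    return resp - gpp                      # net ecosystem exchange (negative = uptake)

true_nee = toy_model([15.0, 2.0], temp)
obs_nee = true_nee + rng.normal(0, 0.5, years.size)   # annual flux "data"
obs_budget = true_nee.sum() + rng.normal(0, 1.0)      # aggregated flux budget
sd_nee, sd_budget = 0.5, 1.0                          # assumed data uncertainties

def cost(params):
    nee = toy_model(params, temp)
    # Each data stream contributes a term weighted by its own uncertainty.
    j_flux = np.sum(((nee - obs_nee) / sd_nee) ** 2)
    j_budget = ((nee.sum() - obs_budget) / sd_budget) ** 2
    return j_flux + j_budget

fit = minimize(cost, x0=[10.0, 1.5], method="Nelder-Mead")
print("estimated parameters:", fit.x)
```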

  • The model-data fusion pitfall: assuming certainty in an uncertain world
    Oecologia, 2011
    Co-Authors: Trevor F. Keenan, Mariah S. Carbone, Markus Reichstein, Andrew D Richardson
    Abstract:

    Model-data fusion is a powerful framework by which to combine models with various data streams (including observations at different spatial or temporal scales) and account for associated uncertainties. The approach can be used to constrain estimates of model states, rate constants, and driver sensitivities. The number of applications of model-data fusion in environmental biology and ecology has been rising steadily, offering insights into both model and data strengths and limitations. For reliable model-data fusion-based results, however, the approach taken must fully account for both model and data uncertainties in a statistically rigorous and transparent manner. Here we review and outline the cornerstones of a rigorous model-data fusion approach, highlighting the importance of properly accounting for uncertainty. We conclude by suggesting a code of best practices, which should serve to guide future efforts.
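To make the pitfall concrete, here is a minimal sketch (with a toy linear model and error magnitudes chosen purely for illustration) contrasting a fit that treats every observation as equally certain with one that weights each observation by its stated uncertainty; when data uncertainty varies across the record, the two estimates can differ noticeably.

```python
# Sketch of the pitfall: assuming all observations are equally certain.
# The linear "model" and the error magnitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
true_slope = 2.0
sigma = np.where(x > 5, 5.0, 0.5)          # second half of the record is much noisier
y = true_slope * x + rng.normal(0, sigma)

# Unweighted fit: every point treated as equally certain.
slope_unweighted = np.polyfit(x, y, 1)[0]

# Uncertainty-weighted fit: residuals scaled by the known data uncertainty.
slope_weighted = np.polyfit(x, y, 1, w=1.0 / sigma)[0]

print(f"true slope {true_slope:.2f}, unweighted {slope_unweighted:.2f}, weighted {slope_weighted:.2f}")
```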

P. D. Jones - One of the best experts on this subject based on the ideXlab platform.

  • Regional climate model data used within the SWURVE project – 2: addressing uncertainty in regional climate model data for five European case study areas
    Hydrology and Earth System Sciences Discussions, 2007
    Co-Authors: M. Ekström, B. Hingray, A. Mezghani, P. D. Jones
    Abstract:

    To aid assessments of the impact of climate change on water-related activities in the case study regions (CSRs) of the EC-funded project SWURVE, estimates of uncertainty in climate model data need to be developed. This paper compares two methods for estimating uncertainty in annual surface temperature and precipitation for the period 2070–2099. Both combine probability distribution functions for global temperature increase and for scaling variables (i.e. the change in regional temperature/precipitation per degree of global annual average temperature change) to produce a probability distribution for regional temperature and precipitation. The methods differ in terms of the distribution used for the respective probability distribution function. For scaling variables, the first method assumes a uniform distribution, whilst the second method assumes a normal distribution. For the probability distribution function of global annual average temperature change, the first method uses a uniform distribution and the second uses a log-normal approximation to a distribution derived from Wigley and Raper (2001). Although the methods give somewhat different ranges of change, they agree on how temperature and precipitation in each of the CSRs are likely to change relative to each other. For annual surface temperature, both methods predict increases in all CSRs, although somewhat less so for NW England (5th and 95th percentiles varying between 1.1–1.9°C and 3.8–5.7°C) than for the others (about 1.7–3.1°C to 5.3–8.6°C). For precipitation, most probability distributions (except for NW England) show predominantly decreasing precipitation, particularly so for the Iberian CSR (5th and 95th percentiles varying from −29.3 to −44% and from −9.6 to −4%).
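The core calculation described above, combining a probability distribution for global warming with one for a regional scaling variable to obtain a regional change distribution, can be sketched with a simple Monte Carlo draw. The distribution parameters below are placeholders rather than the SWURVE values; the sketch only illustrates how the uniform versus normal/log-normal assumptions propagate into the 5th-95th percentile range.

```python
# Monte Carlo sketch: regional change = global warming x regional scaling variable,
# under the two distributional assumptions compared in the paper.
# All numeric ranges are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Method 1: uniform distributions for both terms.
global_dt_1 = rng.uniform(1.5, 4.5, n)      # global warming by 2070-2099 (degC)
scaling_1 = rng.uniform(0.8, 1.6, n)        # regional degC per degC of global warming
regional_1 = global_dt_1 * scaling_1

# Method 2: log-normal global warming, normal scaling variable.
global_dt_2 = rng.lognormal(mean=np.log(2.8), sigma=0.25, size=n)
scaling_2 = rng.normal(loc=1.2, scale=0.2, size=n)
regional_2 = global_dt_2 * scaling_2

for name, sample in [("uniform/uniform", regional_1), ("log-normal/normal", regional_2)]:
    p5, p95 = np.percentile(sample, [5, 95])
    print(f"{name}: 5th-95th percentile regional warming {p5:.1f}-{p95:.1f} degC")
```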

Trevor F. Keenan - One of the best experts on this subject based on the ideXlab platform.

  • Using model-data fusion to interpret past trends and quantify uncertainties in future projections of terrestrial ecosystem carbon cycling
    Global Change Biology, 2012
    Co-Authors: Trevor F. Keenan, Eric A Davidson, Antje Moffat, William Munger, Andrew D Richardson
    Abstract:

    Uncertainties in model projections of carbon cycling in terrestrial ecosystems stem from inaccurate parameterization of incorporated processes (endogenous uncertainties) and from processes or drivers that are not accounted for by the model (exogenous uncertainties). Here, we assess endogenous and exogenous uncertainties using a model-data fusion framework benchmarked with an artificial neural network (ANN). We used 18 years of eddy-covariance carbon flux data from the Harvard Forest, where ecosystem carbon uptake has doubled over the measurement period, along with 15 ancillary ecological data sets relevant to the carbon cycle. We test the ability of combinations of diverse data to constrain projections of a process-based carbon cycle model, both against the measured decadal trend and under future long-term climate change. The use of high-frequency eddy-covariance data alone is shown to be insufficient to constrain model projections at the annual or longer time step. Future projections of carbon cycling under climate change in particular are shown to be highly dependent on the data used to constrain the model. Endogenous uncertainties in long-term model projections of future carbon stocks and fluxes were greatly reduced by the use of aggregated flux budgets in conjunction with ancillary data sets. The data-informed model, however, poorly reproduced interannual variability in net ecosystem carbon exchange and biomass increments, and did not reproduce the long-term trend. Furthermore, we use the model-data fusion framework, and the ANN, to show that the long-term doubling of the rate of carbon uptake at Harvard Forest cannot be explained by meteorological drivers and is driven by changes during the growing season. By integrating all available data with the model-data fusion framework, we show that the observed trend can only be reproduced with temporal changes in model parameters. Together, the results show that exogenous uncertainty dominates uncertainty in future projections from a data-informed process-based model.
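The ANN benchmark mentioned in this abstract (asking how much of the flux record can be explained by meteorological drivers alone) can be sketched as follows. The synthetic drivers, the imposed non-meteorological trend, and the network size are assumptions for illustration, not the Harvard Forest analysis.

```python
# Sketch of an ANN benchmark: train a neural network on meteorological drivers
# alone and check whether an unexplained trend remains in the residuals.
# Synthetic data only; not the Harvard Forest record.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 18 * 365                                   # roughly 18 years of daily values
phase = np.linspace(0, 18 * 2 * np.pi, n)
temp = 10 + 10 * np.sin(phase) + rng.normal(0, 2, n)      # toy temperature driver
par = np.clip(20 + 15 * np.sin(phase), 0, None)           # toy light driver
trend = np.linspace(0, 3, n)                               # non-meteorological trend in uptake

nee = -(0.3 * par + 0.1 * temp + trend) + rng.normal(0, 1, n)

X = np.column_stack([temp, par])
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, nee)

print("R2 from meteorology alone:", round(r2_score(nee, ann.predict(X)), 2))
# A trend left in the residuals points to a non-meteorological driver of the change.
resid = nee - ann.predict(X)
print("trend in residuals (per step):", round(np.polyfit(np.arange(n), resid, 1)[0], 5))
```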

  • The model-data fusion pitfall: assuming certainty in an uncertain world
    Oecologia, 2011
    Co-Authors: Trevor F. Keenan, Mariah S. Carbone, Markus Reichstein, Andrew D Richardson
    Abstract:

    Model-data fusion is a powerful framework by which to combine models with various data streams (including observations at different spatial or temporal scales) and account for associated uncertainties. The approach can be used to constrain estimates of model states, rate constants, and driver sensitivities. The number of applications of model-data fusion in environmental biology and ecology has been rising steadily, offering insights into both model and data strengths and limitations. For reliable model-data fusion-based results, however, the approach taken must fully account for both model and data uncertainties in a statistically rigorous and transparent manner. Here we review and outline the cornerstones of a rigorous model-data fusion approach, highlighting the importance of properly accounting for uncertainty. We conclude by suggesting a code of best practices, which should serve to guide future efforts.

Christiane C. Schmullius - One of the best experts on this subject based on the ideXlab platform.

  • Model-data synthesis in terrestrial carbon observation: methods, data requirements and data uncertainty specifications
    Global Change Biology, 2005
    Co-Authors: Michael R. Raupach, D J Barrett, Martin Heimann, P.j. Rayner, Dennis S. Ojima, Shaun Quegan, Ruth S Defries, Christiane C. Schmullius
    Abstract:

    Systematic, operational, long-term observations of the terrestrial carbon cycle (including its interactions with water, energy and nutrient cycles and ecosystem dynamics) are important for the prediction and management of climate, water resources, food resources, biodiversity and desertification. To contribute to these goals, a terrestrial carbon observing system requires the synthesis of several kinds of observation into terrestrial biosphere models encompassing the coupled cycles of carbon, water, energy and nutrients. Relevant observations include atmospheric composition (concentrations of CO2 and other gases); remote sensing; flux and process measurements from intensive study sites; in situ vegetation and soil monitoring; weather, climate and hydrological data; and contemporary and historical data on land use, land use change and disturbance (grazing, harvest, clearing, fire). A review of model-data synthesis tools for terrestrial carbon observation identifies 'nonsequential' and 'sequential' approaches as major categories, differing according to whether data are treated all at once or sequentially. The structure underlying both approaches is reviewed, highlighting several basic commonalities in formalism and data requirements. An essential commonality is that for all model-data synthesis problems, both nonsequential and sequential, data uncertainties are as important as data values themselves and have a comparable role in determining the outcome. Given the importance of data uncertainties, there is an urgent need for soundly based uncertainty characterizations for the main kinds of data used in terrestrial carbon observation. The first requirement is a specification of the main properties of the error covariance matrix. As a step towards this goal, semi-quantitative estimates are made of the main properties of the error covariance matrix for four kinds of data essential for terrestrial carbon observation: remote sensing of land surface properties, atmospheric composition measurements, direct flux measurements, and measurements of carbon stores.
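As a hedged illustration of the sequential category reviewed above, the scalar Kalman-filter sketch below shows how the specified observation error variance (one entry of the error covariance matrix) directly sets the weight an observation receives; the random-walk carbon-store model and all variances are assumptions chosen only for illustration.

```python
# Scalar Kalman filter sketch: data uncertainty r determines how strongly each
# observation pulls the state estimate. Toy random-walk "carbon store" only.
import numpy as np

rng = np.random.default_rng(3)
n_steps = 20
true_store = 100.0
x_est, p_est = 80.0, 25.0          # prior state estimate and its error variance
q, r = 1.0, 4.0                    # model (process) and observation error variances

for _ in range(n_steps):
    true_store += rng.normal(0, np.sqrt(q))       # "true" store drifts with process noise
    obs = true_store + rng.normal(0, np.sqrt(r))  # noisy observation of the store

    # Forecast step: model error inflates the state uncertainty.
    p_fcst = p_est + q

    # Analysis step: the gain depends only on the two error variances, so the
    # specified data uncertainty controls the weight given to the observation.
    k = p_fcst / (p_fcst + r)
    x_est = x_est + k * (obs - x_est)
    p_est = (1 - k) * p_fcst

print(f"final estimate {x_est:.1f} vs truth {true_store:.1f}, error variance {p_est:.2f}")
```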

Ted L Briggs - One of the best experts on this subject based on the ideXlab platform.

  • An alternate approach to the exchange of ship product model data
    Journal of Ship Production, 2008
    Co-Authors: Ben Kassel, Ted L Briggs
    Abstract:

    This paper considers an alternate approach to the exchange of ship product model data based on general-purpose STEP application protocols. The vision is to provide the functionality defined in the shipbuilding application protocols using a combination of STEP AP239, AP214, and reference data libraries. It is expected that AP239 translators will soon be available, thus enabling the exchange of significant portions of ship product model data.

  • Reuse of ship product model data for life-cycle support
    Journal of Ship Production, 2006
    Co-Authors: Ted L Briggs, Thomas C Rando, Thomas A Daggett
    Abstract:

    With the advent of full-service contracts by the Navy, shipyards have become responsible for the life-cycle support of ships, including maintenance and logistics data over the life of the ship. Hence, it will become increasingly important for shipyards to efficiently integrate acquisition data with life-cycle support products. In particular, the use of an Integrated Data Environment (IDE), mandated for all ACAT1 acquisition programs, serves to collect and configure design, engineering, and production information during acquisition. This information is also required to develop the logistics data for the ship, including technical publications, as well as to support life-cycle support systems. The shipyard's cost and performance on these new Navy contracts will depend on the efficient incorporation of this engineering and design information. The Navy and the aerospace domains are moving toward the adoption of a new life-cycle support standard for technical publications: the International Specification for Technical Publications Utilizing a Common Source Database (also known as S1000D). This standard applies to both land- and sea-specific applications, as well as defense and commercial uses. The purpose of the specification was to address the dramatically rising costs of managing life-cycle support information. The specification adopts ISO, CALS, and W3C standards. In fact, it uses Standard for the Exchange of Product Model Data (STEP) AP239, Product Life Cycle Support (PLCS), as one of its normative standards. The Integrated Shipbuilding Environment (ISE) project has published a technical architecture, including XML-based information models, for the sharing of product model data to exchange design, engineering, and production data. The direct use of such data in the population of technical publications could result in significant savings. This paper discusses the requirements and use cases necessary to define the architecture and process to populate portions of the common source database for ship life-cycle support using product model data in ISE format. Specifically, it addresses the issues involved in generating PLCS technical data directly from ISE product model data and populating a database in accordance with the S1000D standard. Both the S1000D standard for interactive technical documentation and AP239 life-cycle support data are analyzed based on the ISE technical architecture. The integration of the document-centric S1000D standard with the data-centric AP239 and ISE standards is discussed.
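A hedged sketch of the kind of transformation discussed above, pulling fields out of an XML product-model fragment and emitting an S1000D-style data module, is given below. The element and attribute names are hypothetical placeholders and do not follow the actual ISE schemas or S1000D data module definitions.

```python
# Sketch: extract part data from a toy XML product-model fragment and emit a
# simplified, S1000D-flavored data module. Element names are placeholders.
import xml.etree.ElementTree as ET

product_model_xml = """
<shipProductModel>
  <part id="PIPE-0421">
    <description>Seawater cooling pipe, 100 mm</description>
    <material>CuNi 90/10</material>
  </part>
</shipProductModel>
"""

source = ET.fromstring(product_model_xml)

data_module = ET.Element("dataModule")            # placeholder root, not the S1000D DTD
for part in source.findall("part"):
    item = ET.SubElement(data_module, "identAndStatusSection")
    ET.SubElement(item, "partNumber").text = part.get("id")
    ET.SubElement(item, "shortName").text = part.findtext("description")
    ET.SubElement(item, "material").text = part.findtext("material")

print(ET.tostring(data_module, encoding="unicode"))
```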

  • STEP for shipbuilding: a solution for product model data exchange
    Journal of Ship Production, 2003
    Co-Authors: L Benthall, Ted L Briggs, B Downie, B Gischner, B Kassel, R Wood
    Abstract:

    An international standard (ISO 10303) has been created to facilitate the exchange of product models between diverse computer-aided design (CAD) systems. Informally known as STEP (Standard for the Exchange of Product Model Data), this specification has been under development since the mid-1980s, and parts of it were approved as international standards beginning in 1994. Efforts to expand STEP to meet the needs of the shipbuilding industry have been in work for many years and are nearing completion. By early 2003, it is expected that four application protocols to facilitate the transfer of information relating to ship structures, piping, and heating, ventilation, and air-conditioning will have been approved as international standards and become part of the overall STEP standard. This article discusses the successful efforts to expand STEP to meet the needs of the shipbuilding industry, as well as outlining the various implementation and testing projects that have been undertaken to ensure the validity and success of these new standards.