Data Reconciliation

The experts below are selected from a list of 17,421 experts worldwide, ranked by the ideXlab platform.

Zhengjiang Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Dynamic Data Reconciliation to improve the result of controller performance assessment based on GMVC
    ISA Transactions, 2021
    Co-Authors: Wangwang Zhu, Zhengjiang Zhang, Antonios Armaou, Sheng Zhao, Shipei Huang
    Abstract:

    Due to the complexity of the industrial working environment, controllers are susceptible to various disturbance signals, resulting in unsatisfactory control performance. Therefore, it is especially important to assess the controller performance. Considering the harmful effect of measurement noise on controller performance assessment (CPA) based on generalized minimum variance control (GMVC), this paper proposes dynamic Data Reconciliation (DDR) to improve the accuracy of CPA based on GMVC. The paper first introduces CPA based on GMVC, and then analyzes the influence of measurement noise on the GMVC-based CPA index. DDR combined with GMVC-based CPA is then proposed and analyzed in both SISO and MIMO systems to weaken the impact of measurement noise on the CPA index. For both Gaussian and non-Gaussian distributed noise, the formulation of DDR is derived from the Bayesian formula and the maximum likelihood estimate. The effectiveness of the proposed method is verified in different case studies (involving both SISO and MIMO systems), and further verified on the control process of a DC-AC converter. The simulation and experimental results demonstrate that the results of CPA based on GMVC can be clearly improved by using DDR.
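
To make the role of DDR concrete, here is a minimal, hypothetical sketch of dynamic Data Reconciliation under the Gaussian assumption mentioned above: with Gaussian measurement noise, the maximum-likelihood reconciliation of a noisy controlled variable reduces to a weighted least-squares problem constrained (here, softly) by the process model. The first-order model, its coefficients, and the noise level are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical first-order process x[k+1] = a*x[k] + b*u[k]; all values illustrative
a, b = 0.9, 0.5
N = 50
rng = np.random.default_rng(0)
u = np.ones(N)
x_true = np.zeros(N)
for k in range(N - 1):
    x_true[k + 1] = a * x_true[k] + b * u[k]
sigma = 0.3
y = x_true + rng.normal(0.0, sigma, N)            # noisy controlled variable

def cost(x):
    # Gaussian maximum-likelihood (weighted least-squares) measurement term
    meas = np.sum(((y - x) / sigma) ** 2)
    # Soft enforcement of the dynamic model over the whole horizon
    model = 1e4 * np.sum((x[1:] - a * x[:-1] - b * u[:-1]) ** 2)
    return meas + model

x_rec = minimize(cost, y, method="L-BFGS-B").x    # reconciled trajectory
print("RMSE raw       :", np.sqrt(np.mean((y - x_true) ** 2)))
print("RMSE reconciled:", np.sqrt(np.mean((x_rec - x_true) ** 2)))
```

In the spirit of the abstract, the reconciled trajectory, rather than the raw measurement, would then be fed to the CPA index.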

  • Programming strategies of sequential incremental-scale subproblems for large-scale Data Reconciliation and parameter estimation with multi-operational conditions
    Industrial & Engineering Chemistry Research, 2015
    Co-Authors: Zhengjiang Zhang, Zhijiang Shao, Junghui Chen
    Abstract:

    Data Reconciliation and parameter estimation (DRPE) is a crucial issue in model-based applications, such as real-time optimization and process control. In order to obtain more reliable parameter estimates, a series of measurement Data sets from different operational conditions will be used for DRPE problems. However, the dimensionality of DRPE problems increases directly with the number of measurement Data sets. The number of degrees of freedom in DRPE problems is usually very large. Therefore, it is very difficult to solve the DRPE problem with multioperational conditions. On the basis of the characteristics of the DRPE problem, two directions, including the direction of incremental objectives of the DRPE problem and the direction of incremental parameters of the DRPE problem, are considered to decompose the original DRPE optimization problem into a series of incremental-scale subproblems. Three programming strategies are proposed to solve a series of incremental-scale subproblems one by one. The solutio...
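
A toy illustration of the incremental idea (not the authors' formulation): the DRPE problem is solved over a growing number of Data sets, one subproblem at a time, and each subproblem is warm-started at the previous optimum. The exponential model, parameter values, and noise levels below are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative model y = p0 * exp(-p1 * t), measured under several operating conditions
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 30)
p_true = np.array([2.0, 0.7])
data_sets = [p_true[0] * np.exp(-p_true[1] * t) + rng.normal(0, 0.05, t.size)
             for _ in range(5)]

def residuals(p, sets):
    # Stack the residuals of all data sets included in the current subproblem
    return np.concatenate([y - p[0] * np.exp(-p[1] * t) for y in sets])

p = np.array([1.0, 1.0])                     # crude initial guess
for i in range(1, len(data_sets) + 1):
    # Incremental-scale subproblem: first i data sets, warm-started at the
    # previous subproblem's optimum
    p = least_squares(residuals, p, args=(data_sets[:i],)).x
    print(f"after {i} data set(s): p = {p.round(3)}")
```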

  • Correntropy-based Data Reconciliation and gross error detection and identification for nonlinear dynamic processes
    Computers & Chemical Engineering, 2015
    Co-Authors: Zhengjiang Zhang, Junghui Chen
    Abstract:

    Measurement information in dynamic chemical processes is subject to corruption. Although nonlinear dynamic Data Reconciliation (NDDR) utilizes enhanced simultaneous optimization and solution techniques associated with a finite calculation horizon, it is still affected by different types of gross errors. In this paper, two Data-processing algorithms, correntropy-based NDDR (CNDDR) and gross error detection and identification (GEDI), are developed to improve the quality of the Data measurements. CNDDR's Reconciliation and estimation remain accurate in spite of the presence of gross errors. In addition to CNDDR, GEDI, based on hypothesis testing and a distance–time-step criterion, identifies the types of gross errors in dynamic systems. Through a case study of the free-radical polymerization of styrene in a complex nonlinear dynamic chemical process, CNDDR greatly decreases the influence of the gross errors on the reconciled results and GEDI successfully classifies the types of gross errors in the measured Data.
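
A minimal sketch of the correntropy idea on a deliberately simple problem: instead of minimizing squared residuals, the estimate maximizes the Gaussian-kernel correntropy of the residuals, which makes a single gross error almost irrelevant. The measurements and kernel width are illustrative and unrelated to the styrene case study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Repeated measurements of one steady value, with a single gross error
y = np.array([5.02, 4.97, 5.01, 4.99, 9.5])      # last point is an outlier
sigma_kernel = 0.5                                # correntropy kernel width

def neg_correntropy(x):
    # Maximizing the Gaussian-kernel correntropy of the residuals
    return -np.mean(np.exp(-((y - x) ** 2) / (2 * sigma_kernel ** 2)))

x_corr = minimize_scalar(neg_correntropy, bounds=(0, 10), method="bounded").x
x_ls = y.mean()                                   # least-squares estimate
print("least squares :", round(x_ls, 3))          # pulled toward the outlier
print("correntropy   :", round(x_corr, 3))        # stays near 5.0
```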

  • Simultaneous Data Reconciliation and gross error detection for dynamic systems using particle filter and measurement test
    Computers & Chemical Engineering, 2014
    Co-Authors: Zhengjiang Zhang, Junghui Chen
    Abstract:

    Good dynamic model estimation plays an important role in feedforward and feedback control, fault detection, and system optimization. Attempts to successfully implement model estimators are often hindered by severe process nonlinearities, complicated state constraints, systematic modeling errors, unmeasurable perturbations, and irregular measurements with possibly abnormal behaviors. Thus, simultaneous Data Reconciliation and gross error detection (DRGED) for dynamic systems is fundamental and important. In this research, a novel particle filter (PF) algorithm based on the measurement test (MT), called PFMT-DRGED, is used to solve the dynamic DRGED problem. This strategy can effectively solve the DRGED problem from measurements that contain gross errors in nonlinear dynamic process systems. The performance of PFMT-DRGED is demonstrated through two statistical performance indices in a classical nonlinear dynamic system. The effectiveness of the proposed PFMT-DRGED applied to a nonlinear dynamic system and a large-scale polymerization process is illustrated.
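
The sketch below shows, under strong simplifications, how a particle filter can be combined with a measurement test: if the standardized innovation exceeds a threshold, the measurement is flagged as a gross error and the update step is skipped. The scalar model, noise levels, and threshold are hypothetical and do not reproduce the PFMT-DRGED algorithm in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
T, Np = 60, 500
q, r = 0.05, 0.2                                   # process / measurement noise std
f = lambda x: 0.8 * x + 0.2 * np.sin(x) + 0.5      # illustrative nonlinear dynamics

# Simulate the plant; inject one gross error (a bias) into the measurement at k = 30
x_true = np.zeros(T)
for k in range(1, T):
    x_true[k] = f(x_true[k - 1]) + rng.normal(0, q)
y = x_true + rng.normal(0, r, T)
y[30] += 2.0                                       # gross error

particles = rng.normal(0.0, 0.5, Np)
x_rec = np.zeros(T)
for k in range(T):
    if k > 0:
        particles = f(particles) + rng.normal(0, q, Np)    # propagate
    innov = y[k] - particles.mean()
    # Measurement test: a standardized innovation above the threshold flags a gross error
    if abs(innov) / np.sqrt(particles.var() + r ** 2) > 3.0:
        x_rec[k] = particles.mean()                # skip the corrupted update
        continue
    w = np.exp(-0.5 * ((y[k] - particles) / r) ** 2)
    w /= w.sum()
    x_rec[k] = np.sum(w * particles)               # reconciled state estimate
    particles = rng.choice(particles, Np, p=w)     # resample

print("at k = 30  measurement:", y[30].round(3), " reconciled:", x_rec[30].round(3))
```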

  • Sequential sub-problem programming strategies for Data Reconciliation and parameter estimation with multiple Data sets
    49th IEEE Conference on Decision and Control (CDC), 2010
    Co-Authors: Zhengjiang Zhang, Zhijiang Shao, Pengfei Jiang, Xi Chen, Yuhong Zhao, Jixin Qian
    Abstract:

    Data Reconciliation and parameter estimation (DRPE) is a key problem in real-time optimization. The dimensionality of the DRPE problem increases directly with the number of Data sets, and the number of degrees of freedom in DRPE is very large. Therefore, solving a DRPE problem is very difficult. Sequential sub-problem programming strategies for Data Reconciliation and parameter estimation with multiple Data sets are proposed in this paper. Based on the characteristics of a DRPE optimization problem, we construct a series of sub-problems depending on objective and model parameters. The solution of each sub-problem provides a good initial guess for the optimum of the next sub-problem. By solving the series of sub-problems, the optimum of the DRPE optimization problem can be obtained. The proposed sequential sub-problem programming strategies are applied to an industrial purified terephthalic acid (PTA) oxidation process system. The effectiveness of the proposed strategies is demonstrated by the results of numerical experiments.

Shankar Narasimhan - One of the best experts on this subject based on the ideXlab platform.

  • Data Reconciliation for chemical reaction systems using vessel extents and shape constraints
    Computers & Chemical Engineering, 2017
    Co-Authors: Sriniketh Srinivasan, Shankar Narasimhan, Julien Billeter, Dominique Bonvin
    Abstract:

    Concentration measurements are typically corrupted by noise. Data Reconciliation techniques improve the accuracy of measurements by using redundancies in the material and energy balances expressed as relationships between measurements. Since in the absence of kinetic models these relationships cannot integrate information regarding past measurements, they are expressed in the form of algebraic constraints. This paper shows that, even in the absence of a kinetic model, one can use shape constraints to relate measurements at different time instants, thereby improving the accuracy of reconciled estimates. The construction of shape constraints depends on the operating mode of the reactor. Moreover, it is shown that the representation of the reaction system in terms of vessel extents helps identify additional shape constraints. A procedure for deriving shape constraints from measurements is also described. Data Reconciliation using both numbers of moles and extents is illustrated via a simulated case study.
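
An illustrative example of a shape constraint used without any kinetic model: the concentration of a reactant in a batch reactor can only decrease, so reconciliation can impose monotonicity across time instants. The exponential profile and noise level are invented; the vessel-extent formulation itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Noisy concentration of a reactant in a batch reactor: it can only decrease,
# so monotonicity is a valid shape constraint even without a kinetic model.
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 25)
c_true = 1.0 * np.exp(-0.3 * t)
c_meas = c_true + rng.normal(0, 0.05, t.size)

obj = lambda c: np.sum((c - c_meas) ** 2)                    # least-squares fit
cons = [{"type": "ineq", "fun": lambda c: c[:-1] - c[1:]},   # non-increasing
        {"type": "ineq", "fun": lambda c: c}]                # non-negative
c_rec = minimize(obj, np.clip(c_meas, 0, None), constraints=cons,
                 method="SLSQP").x

print("RMSE raw       :", np.sqrt(np.mean((c_meas - c_true) ** 2)).round(4))
print("RMSE reconciled:", np.sqrt(np.mean((c_rec - c_true) ** 2)).round(4))
```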

  • Deconstructing principal component analysis using a Data Reconciliation perspective
    Computers & Chemical Engineering, 2015
    Co-Authors: Shankar Narasimhan, Nirav Bhatt
    Abstract:

    Data Reconciliation (DR) and principal component analysis (PCA) are two popular Data analysis techniques in process industries. Data Reconciliation is used to obtain accurate and consistent estimates of variables and parameters from erroneous measurements. PCA is primarily used as a method for reducing the dimensionality of high dimensional Data and as a preprocessing technique for denoising measurements. These techniques have been developed and deployed independently of each other. The primary purpose of this article is to elucidate the close relationship between these two seemingly disparate techniques. This leads to a unified framework for applying PCA and DR. Further, we show how the two techniques can be deployed together in a collaborative and consistent manner to process Data. The framework has been extended to deal with partially measured systems and to incorporate partial knowledge available about the process model.
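
A toy sketch of the DR–PCA connection described above, assuming one linear constraint and equal measurement variances: the trailing principal component of the Data estimates the constraint direction, and Data Reconciliation is then the projection of each sample onto the model subspace. The splitter-node Data are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
# True linear process model: A @ x = 0 with A = [1, -1, -1] (a splitter node)
A_true = np.array([[1.0, -1.0, -1.0]])
s = rng.uniform(1.0, 3.0, (200, 2))               # two independent product streams
X_true = np.column_stack([s.sum(axis=1), s[:, 0], s[:, 1]])
X = X_true + rng.normal(0, 0.05, X_true.shape)    # noisy measurements

# PCA: the trailing principal component spans the estimated constraint subspace
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
A_hat = Vt[-1:]                                   # one constraint assumed known

# Classical linear Data Reconciliation with the PCA-estimated constraint:
# x_rec = x - A^T (A A^T)^-1 (A x - b), with b = A_hat @ mean(X)
b = A_hat @ X.mean(axis=0)
corr = A_hat.T @ np.linalg.solve(A_hat @ A_hat.T, (X @ A_hat.T - b).T)
X_rec = X - corr.T

print("constraint direction (PCA):", np.round(A_hat / np.abs(A_hat).max(), 3))
print("mean abs imbalance before :", np.abs(X @ A_true.T).mean().round(4))
print("mean abs imbalance after  :", np.abs(X_rec @ A_true.T).mean().round(4))
```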

  • Robust and reliable estimation via unscented recursive nonlinear dynamic Data Reconciliation
    Journal of Process Control, 2006
    Co-Authors: Pramod Vachhani, Shankar Narasimhan, Raghunathan Rengaswamy
    Abstract:

    The quality of process Data in a chemical plant significantly affects the performance and benefits gained from activities like performance monitoring, online optimization and control. Since chemical processes often exhibit nonlinear dynamics, techniques like the Extended Kalman Filter (EKF) and Nonlinear Dynamic Data Reconciliation (NDDR) have been developed to improve the Data quality. There are various issues that arise with the use of either of these techniques: EKF cannot handle inequality or equality constraints, while the NDDR has high computational cost. Recently, a recursive estimation technique for nonlinear dynamic processes has been proposed that combines the merits of EKF and NDDR techniques. This technique, named Recursive Nonlinear Dynamic Data Reconciliation (RNDDR), provides state and parameter estimates that satisfy bounds and other constraints imposed on them. However, the estimate error covariance matrix in RNDDR is computed in the same manner as in EKF, that is, the effects of both nonlinearity and constraints are neglected in the computation of the estimate error covariance matrix. A relatively new method known as the Unscented Kalman Filter has been developed for nonlinear processes, in which the statistical properties of the estimates are computed without resorting to linearization of the nonlinear equations. This leads to improved accuracy of the estimates. In this paper, we combine the merits of the Unscented Kalman Filter and the RNDDR to obtain the Unscented Recursive Nonlinear Dynamic Data Reconciliation (URNDDR) technique. This technique addresses all concerns arising due to the presence of nonlinearity and constraints within a recursive estimation framework, resulting in an efficient, accurate and stable method for real-time state and parameter estimation for nonlinear dynamic processes.
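
The core ingredient of URNDDR is the unscented transform. The sketch below propagates a mean and covariance through a nonlinear map using sigma points, clipping each sigma point to its bounds so that the propagated statistics respect the constraints; it is a simplified illustration of the idea, not the full URNDDR recursion, and the scalar reaction model is hypothetical.

```python
import numpy as np

def unscented_transform(mean, cov, func, lb=None, ub=None, kappa=1.0):
    """Propagate (mean, cov) through func with sigma points, clipping to bounds."""
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])     # 2n + 1 sigma points
    if lb is not None or ub is not None:
        sigma = np.clip(sigma, lb, ub)                    # enforce constraints
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([func(s) for s in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Illustrative: a concentration bounded in [0, 1], one Euler step of dc/dt = -k c^2
f = lambda x: x - 0.05 * x ** 2
mean, cov = np.array([0.05]), np.array([[0.01]])
m, P = unscented_transform(mean, cov, f, lb=0.0, ub=1.0)
print("propagated mean:", m.round(4), "covariance:", P.round(5))
```

Note how the lower sigma point would be negative for this mean and covariance; clipping it to the bound is what keeps the propagated statistics physically meaningful.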

  • Recursive estimation in constrained nonlinear dynamical systems
    AIChE Journal, 2005
    Co-Authors: Pramod Vachhani, Raghunathan Rengaswamy, Vikrant Gangwal, Shankar Narasimhan
    Abstract:

    In any modern chemical plant or refinery, process operation and the quality of product depend on the reliability of Data used for process monitoring and control. The task of improving the quality of Data to be consistent with material and energy balances is called Reconciliation. Because chemical processes often operate dynamically in nonlinear regimes, techniques such as the extended Kalman filter (EKF) and nonlinear dynamic Data Reconciliation (NDDR) have been developed for Reconciliation. There are various issues that arise with the use of either of these techniques. EKF cannot handle inequality or equality constraints, whereas the NDDR has high computational cost. Therefore, a more efficient and robust method is required for reconciling process measurements and estimating parameters involved in nonlinear dynamic processes. Two solution techniques are presented: recursive nonlinear dynamic Data Reconciliation (RNDDR) and a combined predictor–corrector optimization (CPCO) method for efficient state and parameter estimation in nonlinear systems. The proposed approaches combine the efficiency of EKF and the ability of NDDR to handle algebraic inequality and equality constraints. Moreover, the CPCO technique allows deterministic parameter variation, thus relaxing another restriction of EKF where the parameter changes are modeled through a discrete stochastic equation. The proposed techniques are compared against the EKF and the NDDR formulations through simulation studies on a continuous stirred tank reactor and a polymerization reactor. In general, the RNDDR performs as well as the two traditional approaches, whereas the CPCO formulation provides more accurate results than RNDDR at a marginal increase in computational cost.

  • Robust constrained estimation via unscented transformation
    IFAC Proceedings Volumes, 2004
    Co-Authors: Pramod Vachhani, Shankar Narasimhan, Raghunathan Rengaswamy
    Abstract:

    The task of improving the quality of the Data so that it is consistent with material and energy balances is called Reconciliation. Since chemical processes often operate dynamically in nonlinear regimes, techniques like the Extended Kalman Filter (EKF) and Nonlinear Dynamic Data Reconciliation (NDDR) have been developed. There are various issues that arise with the use of either of these techniques: EKF cannot handle inequality or equality constraints, while the NDDR has high computational cost. In this paper, first, a recursive nonlinear dynamic Data Reconciliation (RNDDR) formulation is discussed. The RNDDR formulation extends the capability of the EKF by allowing for the incorporation of algebraic constraints and bounds during correction. The covariance calculations arising in the RNDDR are the same as in the EKF, i.e., both nonlinearity and constraints are neglected during covariance propagation and calculation of uncertainty in filtered estimates. The use of the Unscented Transformation with the RNDDR gives the Unscented Recursive Nonlinear Dynamic Data Reconciliation (URNDDR) formulation, which addresses all the aspects of nonlinearity and constraints in a recursive estimation framework, thus proving to be an efficient tool for real-time estimation.

Junghui Chen - One of the best experts on this subject based on the ideXlab platform.

  • Programming strategies of sequential incremental-scale subproblems for large-scale Data Reconciliation and parameter estimation with multi-operational conditions
    Industrial & Engineering Chemistry Research, 2015
    Co-Authors: Zhengjiang Zhang, Zhijiang Shao, Junghui Chen
    Abstract:

    Data Reconciliation and parameter estimation (DRPE) is a crucial issue in model-based applications, such as real-time optimization and process control. In order to obtain more reliable parameter estimates, a series of measurement Data sets from different operational conditions will be used for DRPE problems. However, the dimensionality of DRPE problems increases directly with the number of measurement Data sets. The number of degrees of freedom in DRPE problems is usually very large. Therefore, it is very difficult to solve the DRPE problem with multioperational conditions. On the basis of the characteristics of the DRPE problem, two directions, including the direction of incremental objectives of the DRPE problem and the direction of incremental parameters of the DRPE problem, are considered to decompose the original DRPE optimization problem into a series of incremental-scale subproblems. Three programming strategies are proposed to solve a series of incremental-scale subproblems one by one. The solutio...

  • Correntropy-based Data Reconciliation and gross error detection and identification for nonlinear dynamic processes
    Computers & Chemical Engineering, 2015
    Co-Authors: Zhengjiang Zhang, Junghui Chen
    Abstract:

    Measurement information in dynamic chemical processes is subject to corruption. Although nonlinear dynamic Data Reconciliation (NDDR) utilizes enhanced simultaneous optimization and solution techniques associated with a finite calculation horizon, it is still affected by different types of gross errors. In this paper, two Data-processing algorithms, correntropy-based NDDR (CNDDR) and gross error detection and identification (GEDI), are developed to improve the quality of the Data measurements. CNDDR's Reconciliation and estimation remain accurate in spite of the presence of gross errors. In addition to CNDDR, GEDI, based on hypothesis testing and a distance–time-step criterion, identifies the types of gross errors in dynamic systems. Through a case study of the free-radical polymerization of styrene in a complex nonlinear dynamic chemical process, CNDDR greatly decreases the influence of the gross errors on the reconciled results and GEDI successfully classifies the types of gross errors in the measured Data.

  • Simultaneous Data Reconciliation and gross error detection for dynamic systems using particle filter and measurement test
    Computers & Chemical Engineering, 2014
    Co-Authors: Zhengjiang Zhang, Junghui Chen
    Abstract:

    Good dynamic model estimation plays an important role in feedforward and feedback control, fault detection, and system optimization. Attempts to successfully implement model estimators are often hindered by severe process nonlinearities, complicated state constraints, systematic modeling errors, unmeasurable perturbations, and irregular measurements with possibly abnormal behaviors. Thus, simultaneous Data Reconciliation and gross error detection (DRGED) for dynamic systems is fundamental and important. In this research, a novel particle filter (PF) algorithm based on the measurement test (MT), called PFMT-DRGED, is used to solve the dynamic DRGED problem. This strategy can effectively solve the DRGED problem from measurements that contain gross errors in nonlinear dynamic process systems. The performance of PFMT-DRGED is demonstrated through two statistical performance indices in a classical nonlinear dynamic system. The effectiveness of the proposed PFMT-DRGED applied to a nonlinear dynamic system and a large-scale polymerization process is illustrated.

Jose Carlos Pinto - One of the best experts on this subject based on the ideXlab platform.

  • Simultaneous robust Data Reconciliation and gross error detection through particle swarm optimization for an industrial polypropylene reactor
    Chemical Engineering Science, 2010
    Co-Authors: Diego Martinez Prata, Marcio Schwaab, E L Lima, Jose Carlos Pinto
    Abstract:

    In a previous study, a nonlinear dynamic Data Reconciliation procedure (NDDR) based on the particle swarm optimization (PSO) method was developed and validated in line and in real time with actual industrial Data obtained for an industrial polypropylene reactor (Prata et al., 2009, 2008b). The procedure is modified to allow for robust implementation of the NDDR problem with simultaneous detection of gross errors and estimation of model parameters. The negative effects of the less frequent gross errors are eliminated with the implementation of the Welsch robust estimator, avoiding the computation of biased estimates and implementation of iterative procedures for detection and removal of gross errors. The performance of the proposed procedure was tested in line and in real time in an industrial bulk propylene polymerization process. A phenomenological model of the real process, based on the detailed mass and energy balances and constituted by a set of algebraic-differential equations, was implemented and used for interpretation of the actual plant behavior. The resulting nonlinear dynamic optimization problem was solved iteratively on a moving time window, in order to capture the current process behavior and allow for dynamic adaptation of model parameters. Results indicate that the proposed procedure, based on the combination of the PSO method and the robust Welsch estimator, can be implemented in real time in real industrial environments, allowing for the simultaneous detection of gross errors and estimation of process states and model parameters, leading to more robust and reproducible numerical performance.
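
A highly simplified sketch of the combination described above: a small particle swarm minimizes a Welsch-robust reconciliation objective, so the two gross errors barely influence the estimate, whereas the least-squares value is pulled toward them. The measurements, swarm settings, and tuning constant are invented and have nothing to do with the industrial polypropylene reactor.

```python
import numpy as np

rng = np.random.default_rng(5)
# Repeated measurements of one steady flow, two of them carrying gross errors
y = np.array([10.1, 9.9, 10.05, 20.0, 9.95, 10.0, 25.0])
c = 2.0                                            # Welsch tuning constant

def welsch_cost(x):
    r = y - x
    return np.sum((c ** 2 / 2) * (1 - np.exp(-(r / c) ** 2)))

# Minimal particle swarm over a single decision variable
n_part, n_iter = 30, 80
pos = rng.uniform(0.0, 30.0, n_part)
vel = np.zeros(n_part)
pbest, pbest_f = pos.copy(), np.array([welsch_cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(n_iter):
    r1, r2 = rng.random(n_part), rng.random(n_part)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([welsch_cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("least-squares estimate:", y.mean().round(3))   # dragged toward gross errors
print("Welsch + PSO estimate :", gbest.round(3))      # stays near 10
```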

  • Nonlinear dynamic Data Reconciliation and parameter estimation through particle swarm optimization: application for an industrial polypropylene reactor
    Chemical Engineering Science, 2009
    Co-Authors: Diego Martinez Prata, Marcio Schwaab, E L Lima, Jose Carlos Pinto
    Abstract:

    This work presents a procedure to solve nonlinear dynamic Data Reconciliation (NDDR) problems with simultaneous parameter estimation based on particle swarm optimization (PSO). The performance of the proposed procedure is compared to the performance of a standard Gauss–Newton (GN) scheme in a real industrial problem, as presented previously by Prata et al. [2006. Simultaneous Data Reconciliation and parameter estimation in bulk polypropylene polymerizations in real time. Macromolecular Symposia 243, 91–103; 2008. In-line monitoring of bulk polypropylene reactors based on Data Reconciliation procedures. Macromolecular Symposia 271, 26–37]. Both methods are used to solve the NDDR problem in an industrial bulk propylene polymerization process, using real Data in real time for the simultaneous estimation of model parameters and process states. A phenomenological model of the real process, based on the detailed mass and energy balances and constituted by a set of algebraic–differential equations, was implemented and used for interpretation of the actual plant behavior in real time. The resulting nonlinear dynamic optimization problem was solved iteratively on a moving time window, in order to capture the current process behavior and allow for dynamic adaptation of model parameters. Obtained results indicate that the proposed PSO procedure can be implemented in real time, allowing for estimation of more reliable process states and model parameters and leading to much more robust and reproducible numerical performance.

Daniel Hodouin - One of the best experts on this subject based on the ideXlab platform.

  • Mineral processing plant Data Reconciliation including mineral mass balance constraints
    Minerals Engineering, 2018
    Co-Authors: Maryam Sadeghi, Daniel Hodouin, Claude Bazin
    Abstract:

    The operation of mineral processing units or plants is related to the mineral composition of the ore. However, unit performances are usually characterized in terms of metal content or recovery, as this Data is easier to obtain than the mineral content. This paper presents a Data Reconciliation method that combines material balancing calculations and mineral stoichiometric information to estimate balanced mineral compositions, chemical assays and flow rates in various streams of a mineral processing plant. The advantage of this method is evaluated by comparing the variances of the reconciled variables from this method with those obtained from usual Data Reconciliation methods. The estimated mineral composition leads to improved process performance evaluation.
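
For context, the material-balancing calculation underlying such methods has a closed form for linear constraints. The sketch below reconciles three measured flow rates around a single separation node; the stream values, uncertainties, and node structure are invented for illustration, and the mineral stoichiometric constraints of the paper are not included.

```python
import numpy as np

# Measured flow rates (t/h) around one node: feed -> concentrate + tailings
x_meas = np.array([100.0, 28.0, 68.0])             # feed, concentrate, tailings
sigma = np.array([2.0, 1.0, 2.0])                  # measurement standard deviations
V = np.diag(sigma ** 2)                            # measurement covariance
A = np.array([[1.0, -1.0, -1.0]])                  # mass balance: F - C - T = 0

# Weighted least-squares reconciliation (closed form for linear constraints):
# x_rec = x - V A^T (A V A^T)^-1 (A x)
imbalance = A @ x_meas                             # node imbalance (= 4 t/h here)
x_rec = x_meas - V @ A.T @ np.linalg.solve(A @ V @ A.T, imbalance)

print("imbalance before:", float(A @ x_meas))      # 4.0
print("imbalance after :", float(A @ x_rec))       # ~0 up to round-off
print("reconciled flows:", x_rec.round(2))
```

Note how the correction is distributed in proportion to the measurement variances: the better-instrumented concentrate stream is adjusted least.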

  • Determining a dynamic model for flotation circuits using plant Data to implement a Kalman filter for Data Reconciliation
    Minerals Engineering, 2015
    Co-Authors: Amir Vasebi, Eric Poulin, Daniel Hodouin
    Abstract:

    Data Reconciliation is extensively applied to improve the accuracy and reliability of plant measurements. It relies on process models ranging from simple mass and energy conservation equations to complete causal models. The precision of reconciled Data mainly depends on the complexity and quality of plant models used to develop Data Reconciliation observers. In practice, the difficulty of obtaining detailed models prevents the application of powerful observers like the Kalman filter. The objective of this study is to propose a methodology to build a model for a flotation circuit to support the implementation of a Kalman filter for dynamic Data Reconciliation. This modeling approach extracts essential information from the plant topology, nominal operating conditions, and historical Data. Simulation results illustrate that applying a Kalman filter based on a rough empirical model that has been correctly tuned gives better estimates than those obtained with sub-model based observers.
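
A minimal example of a Kalman filter used as a dynamic Data Reconciliation observer, assuming a crude two-state model (a hold-up level driven by a slowly varying inflow) rather than the empirically identified flotation-circuit model of the paper; the matrices and noise levels are placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 100
# Illustrative 2-state model: x = [level, inflow]; outflow proportional to level
dt, k_out = 1.0, 0.1
F = np.array([[1.0 - k_out * dt, dt], [0.0, 1.0]])
H = np.eye(2)                                      # both states are measured
Q = np.diag([1e-4, 1e-3])                          # process noise covariance
R = np.diag([0.04, 0.04])                          # measurement noise covariance

# Simulate the "plant" and its noisy measurements
x = np.array([5.0, 0.5])
X_true, Y = [], []
for _ in range(T):
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
    X_true.append(x)
    Y.append(x + rng.multivariate_normal(np.zeros(2), R))
X_true, Y = np.array(X_true), np.array(Y)

# Kalman filter acting as the dynamic Data Reconciliation observer
m, P = np.array([4.0, 0.4]), np.eye(2)
X_rec = []
for y in Y:
    m, P = F @ m, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    m, P = m + K @ (y - H @ m), (np.eye(2) - K @ H) @ P   # update
    X_rec.append(m)
X_rec = np.array(X_rec)

print("RMSE raw       :", np.sqrt(np.mean((Y - X_true) ** 2)).round(4))
print("RMSE reconciled:", np.sqrt(np.mean((X_rec - X_true) ** 2)).round(4))
```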

  • Selecting a proper uncertainty model for steady-state Data Reconciliation: application to mineral and metal processing industries
    Minerals Engineering, 2014
    Co-Authors: Amir Vasebi, Eric Poulin, Daniel Hodouin
    Abstract:

    Data Reconciliation is widely applied in mineral and metal processing plants to improve information quality. Imprecision, unreliability and incompleteness of measurements are common problems motivating the implementation of the technique. Current practices rely on mass and energy conservation constraints to estimate the underlying steady-state values of process variables. Typically, the Gaussian context is assumed and a Maximum-Likelihood estimator is selected. The performance of such an estimator depends on the covariance matrices used to characterize model and measurement uncertainties. In practice, determining these covariance matrices is a challenging task that is often overlooked. Using inappropriate uncertainty models, based on simplistic or improper hypotheses, can lead to unexpected underperformances. The objective of the paper is to illustrate the impact of correctly selecting uncertainty covariance matrices for steady-state Data Reconciliation. Different case studies involving a combustion chamber, a hydrocyclone, a flotation circuit, and a separation unit are used for investigating the sensitivity of the algorithm to the structure of covariance matrices. An example based on Monte-Carlo simulations is presented to assess the importance of assigning right values to variance terms. Simulation results show that the adjustment of uncertainty covariance matrices has a significant influence on the precision of estimates and reveal that some common tuning practices can have detrimental effects.
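
A small Monte-Carlo sketch of the paper's point, using invented numbers: when one sensor is much noisier than the others, reconciling with the correct measurement covariance matrix gives clearly smaller estimation errors than naively using identity weights.

```python
import numpy as np

rng = np.random.default_rng(7)
A = np.array([[1.0, -1.0, -1.0]])                  # single mass-balance node
sig_true = np.array([3.0, 0.5, 0.5])               # the feed sensor is far noisier
x_true = np.array([100.0, 30.0, 70.0])

def reconcile(x, V):
    # Linear weighted least-squares reconciliation with covariance V
    return x - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ x)

err_good, err_bad = [], []
for _ in range(2000):
    x = x_true + rng.normal(0, sig_true)
    err_good.append(reconcile(x, np.diag(sig_true ** 2)) - x_true)
    err_bad.append(reconcile(x, np.eye(3)) - x_true)      # naive identity weights

print("RMSE, correct covariance :", np.sqrt(np.mean(np.square(err_good))).round(3))
print("RMSE, identity covariance:", np.sqrt(np.mean(np.square(err_bad))).round(3))
```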

  • Dynamic Data Reconciliation based on node imbalance autocovariance functions
    Computers & Chemical Engineering, 2012
    Co-Authors: Amir Vasebi, Eric Poulin, Daniel Hodouin
    Abstract:

    To reduce impacts of measurement errors on plant variables, Data Reconciliation is widely applied in process industries. Reconciled measurements are used in applications such as performance monitoring, process control, or real-time optimization. However, precise estimation generally relies on accurate and detailed process models which could be difficult to build in practice. The trade-off between estimate precision and model complexity is a relevant challenge motivating the development of effective observers with limited modeling efforts. This paper proposes a Data Reconciliation method based on a simple mass and/or energy conservation sub-model that also considers the autocovariance function of plant node imbalances. The observer is applied to simulated benchmark plants and its performance is evaluated in terms of variance reduction and robustness against modeling errors. Results show a superior performance in comparison with classical sub-model based methods and reveal less performance degradation than the Kalman filter in presence of model uncertainties.

  • Methods for automatic control, observation and optimization in mineral processing plants
    Journal of Process Control, 2011
    Co-Authors: Daniel Hodouin
    Abstract:

    For controlling strongly disturbed, poorly modeled, and difficult to measure processes, such as those involved in the mineral processing industry, the peripheral tools of the control loop (fault detection and isolation system, Data Reconciliation procedure, observers, soft sensors, optimizers, model parameter tuners) are as important as the controller itself. The paper briefly describes each element of this generalized control loop, while putting emphasis on mineral processing specific cases.