Data Quality Management

Boris Otto - One of the best experts on this subject based on the ideXlab platform.

  • Controlling Customer Master Data Quality: Findings from a Case Study
    International Conference on Information Resources Management (Conf-IRM), 2013
    Co-Authors: Ehsan Baghi, Boris Otto, Hubert Österle
    Abstract:

    Data quality management plays a critical role in all kinds of organizations. High-quality data is one of the most important prerequisites for making strategic business decisions and executing business processes. In order to assess data quality, ensure efficient process execution, and verify the effectiveness of data quality initiatives, data quality has to be monitored and controlled. This can be achieved by implementing a comprehensive controlling system for data quality. However, only a few organizations have managed to implement such a system. This paper presents a single-case study describing the process of implementing a comprehensive data quality controlling system. The study focuses on controlling activities as defined in the field of business management.
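
As an illustration only (not taken from the paper), the following minimal Python sketch shows one way a data quality controlling system could compute rule-based quality metrics for customer master data and compare them against target thresholds; the records, rules, and thresholds are hypothetical.

```python
# Minimal, hypothetical sketch of rule-based data quality controlling for
# customer master data: each rule is a predicate, and the "controlling"
# step compares measured pass rates against target thresholds.
import re

customers = [  # toy master data records (invented)
    {"id": "C001", "name": "Acme GmbH", "country": "DE", "vat_id": "DE811234567"},
    {"id": "C002", "name": "",          "country": "CH", "vat_id": ""},
    {"id": "C003", "name": "Rho AG",    "country": "XX", "vat_id": "CHE-123.456.789"},
]

rules = {  # completeness / validity rules (invented)
    "name_present":  lambda r: bool(r["name"].strip()),
    "country_valid": lambda r: r["country"] in {"DE", "CH", "AT"},
    "vat_id_format": lambda r: bool(re.match(r"^(DE\d{9}|CHE-\d{3}\.\d{3}\.\d{3})$", r["vat_id"])),
}

thresholds = {"name_present": 0.98, "country_valid": 0.95, "vat_id_format": 0.90}

def controlling_report(records, rules, thresholds):
    """Measure the pass rate per rule and flag rules below their target."""
    report = {}
    for rule_name, predicate in rules.items():
        rate = sum(1 for r in records if predicate(r)) / len(records)
        report[rule_name] = {"rate": rate, "target": thresholds[rule_name],
                             "ok": rate >= thresholds[rule_name]}
    return report

for rule, result in controlling_report(customers, rules, thresholds).items():
    print(f"{rule}: {result['rate']:.0%} (target {result['target']:.0%}) "
          f"{'OK' if result['ok'] else 'BELOW TARGET'}")
```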

  • Integrating a Data Quality Perspective into Business Process Management
    Business Process Management Journal, 2012
    Co-Authors: Martin Ofner, Boris Otto, Hubert Österle
    Abstract:

    Purpose – The purpose of this paper is to conceptualize data quality (DQ) in the context of business process management and to propose a DQ-oriented approach for business process modeling. The approach is based on key concepts and metrics from the data quality management domain and supports decision-making in process re-design projects on the basis of process models.
    Design/methodology/approach – The paper applies a design-oriented research approach, in the course of which a modeling method is developed as a design artifact. To do so, method engineering is used as a design technique. The artifact is theoretically founded and incorporates DQ considerations into process re-design. Furthermore, the paper uses a case study to evaluate the suggested approach.
    Findings – The paper shows that the DQ-oriented process modeling approach facilitates and improves managerial decision-making in the context of process re-design. Data quality is considered a success factor for business processes and is conceptualized using a rule-based approach.
    Research limitations/implications – The paper presents design research and a case study. More research is needed to triangulate the findings and to allow generalizability of the results.
    Practical implications – The paper supports decision-makers in enterprises in taking a DQ perspective in business process re-design initiatives.
    Originality/value – The paper reports on integrating DQ considerations into business process management in general and into process modeling in particular, in order to provide more comprehensive decision-making support in process re-design projects. The paper represents one of the first contributions to the literature regarding a contemporary phenomenon of high practical and scientific relevance.
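
Purely as a hypothetical illustration of the rule-based idea (not the authors' method), the sketch below attaches data quality rules to process activities and scores each activity by the share of rule checks its input data satisfies, which is the kind of figure a process re-design decision could draw on; all activity and rule names are invented.

```python
# Hypothetical sketch: annotate process activities with data quality rules
# and compute a per-activity DQ score over the records the activity consumes.

orders = [  # toy input data for the process (invented)
    {"order_id": 1, "customer_id": "C001", "amount": 120.0, "delivery_date": "2024-05-02"},
    {"order_id": 2, "customer_id": "",     "amount": -5.0,  "delivery_date": ""},
]

# Each activity in the (simplified) process model lists the DQ rules its inputs must meet.
process_model = {
    "check_credit":  [lambda o: bool(o["customer_id"]), lambda o: o["amount"] > 0],
    "plan_shipment": [lambda o: bool(o["delivery_date"])],
}

def dq_score(activity_rules, records):
    """Fraction of (record, rule) checks that pass for one activity."""
    checks = [rule(rec) for rec in records for rule in activity_rules]
    return sum(checks) / len(checks)

for activity, activity_rules in process_model.items():
    print(f"{activity}: DQ score {dq_score(activity_rules, orders):.2f}")
```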

  • Toward a functional reference model for master Data Quality Management
    Information Systems and E-business Management, 2011
    Co-Authors: Boris Otto, Kai M. Hüner, Hubert Österle
    Abstract:

    The quality of master data has become an issue of increasing prominence in companies. One reason for that is the growing number of regulatory and legal provisions companies need to comply with. Another reason is the growing importance of information systems supporting decision-making, which requires master data that is up to date, accurate, and complete. While improving and maintaining master data quality is an organizational task that cannot be addressed by simply implementing a suitable software system, system support is mandatory in order to meet these challenges efficiently and achieve good results. This paper describes the design process toward a functional reference model for master data quality management (MDQM). The model design process spanned several iterations comprising multiple design and evaluation cycles, including the model's application in a participative case study at the consumer goods manufacturer Beiersdorf. Practitioners may use the reference model as an instrument for the analysis, design, and implementation of a company's MDQM system landscape. Moreover, the reference model facilitates the evaluation of software systems and supports company-internal and external communication. From a scientific perspective, the reference model is a design artifact; hence it represents a theory for designing information systems in the area of MDQM.

  • Quality Management of corporate Data assets
    2011
    Co-Authors: Boris Otto
    Abstract:

    Corporate data assets such as customer, material, and supplier master data are critical when it comes to fulfilling business requirements such as compliance with regulations, integrated customer management, and global business process integration. Surprisingly, companies do not treat corporate data as an asset, but rather focus on reactive quality management measures only. In response to that, the chapter reports on the design of a reference model for Corporate Data Quality Management (CDQM). Following the principles of Design Science Research (DSR), the design process involved professionals from fourteen corporations headquartered in Germany and Switzerland. The reference model consists of six design areas which altogether comprise fifteen goals and 43 related practices. It supports practitioners during the establishment and optimization of their CDQM initiatives.
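
As a purely illustrative sketch (the design areas, goals, and practices named here are invented, not the ones in the chapter), a reference model of this shape can be represented as a small hierarchy and used to check which recommended practices an organization has already implemented.

```python
# Hypothetical sketch: represent a CDQM-style reference model as
# design areas -> goals -> practices, and compute practice coverage
# against the set of practices an organization reports as implemented.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    practices: list[str] = field(default_factory=list)

@dataclass
class DesignArea:
    name: str
    goals: list[Goal] = field(default_factory=list)

reference_model = [  # invented example content
    DesignArea("Data governance", [
        Goal("Define accountabilities", ["Appoint data owners", "Publish a RACI chart"]),
    ]),
    DesignArea("Data architecture", [
        Goal("Standardize master data objects", ["Maintain a business object catalog"]),
    ]),
]

implemented = {"Appoint data owners"}  # practices the organization already has

for area in reference_model:
    practices = [p for g in area.goals for p in g.practices]
    done = sum(1 for p in practices if p in implemented)
    print(f"{area.name}: {done}/{len(practices)} recommended practices implemented")
```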

  • Towards a maturity model for corporate Data Quality Management
    Proceedings of the 2009 ACM symposium on Applied Computing - SAC '09, 2009
    Co-Authors: Kai M. Hüner, Martin Hubert Ofner, Boris Otto
    Abstract:

    High-quality corporate data is a prerequisite for world-wide business process harmonization, global spend analysis, integrated service management, and compliance with regulatory and legal requirements. Corporate Data Quality Management (CDQM) describes the quality-oriented organization and control of a company's key data assets such as material, customer, and vendor data. With regard to the aforementioned business drivers, companies demand an instrument to assess the progress and performance of their CDQM initiative. This paper proposes a reference model for CDQM maturity assessment. The model is intended to be used for supporting the build process of CDQM. A case study shows how the model has been successfully implemented in a real-world scenario.
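
To make the idea of a maturity assessment concrete, here is a minimal, hypothetical scoring sketch (not the paper's assessment model): each practice is rated on a 0-4 scale and aggregated into per-area and overall maturity scores.

```python
# Hypothetical sketch of a CDQM maturity assessment: practices are rated
# 0 (not performed) .. 4 (fully established) and averaged per design area.
from statistics import mean

ratings = {  # invented assessment input: design area -> {practice: rating 0..4}
    "Data governance": {"Data ownership defined": 3, "DQ KPIs reported": 1},
    "Data architecture": {"Master data model documented": 2},
    "Operations": {"DQ checks automated": 0, "Issue resolution process": 2},
}

area_scores = {area: mean(practice_ratings.values())
               for area, practice_ratings in ratings.items()}
overall = mean(area_scores.values())

for area, score in area_scores.items():
    print(f"{area}: {score:.1f} / 4")
print(f"Overall CDQM maturity: {overall:.1f} / 4")
```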

Kumanan Wilson - One of the best experts on this subject based on the ideXlab platform.

  • Evaluation of the quality of clinical data collection for a pan-Canadian cohort of children affected by inherited metabolic diseases: lessons learned from the Canadian Inherited Metabolic Diseases Research Network
    Orphanet Journal of Rare Diseases, 2020
    Co-Authors: Kylie Tingley, Monica Lamoureux, Michael Pugliese, Michael T Geraghty, Jonathan B Kronick, Beth K Potter, Doug Coyle, Kumanan Wilson
    Abstract:

    Background: The Canadian Inherited Metabolic Diseases Research Network (CIMDRN) is a pan-Canadian practice-based research network of 14 Hereditary Metabolic Disease Treatment Centres and over 50 investigators. CIMDRN aims to develop evidence to improve health outcomes for children with inherited metabolic diseases (IMD). We describe the development of our clinical data collection platform, discuss our data quality management plan, and present the findings to date from our data quality assessment, highlighting key lessons that can serve as a resource for future clinical research initiatives relating to rare diseases.
    Methods: At participating centres, children born from 2006 to 2015 who were diagnosed with one of 31 targeted IMD were eligible to participate in CIMDRN's clinical research stream. For all participants, we collected a minimum data set that includes information about demographics and diagnosis. For children with five prioritized IMD, we collected longitudinal data including interventions, clinical outcomes, and indicators of disease management. The data quality management plan included: design of user-friendly and intuitive clinical data collection forms; validation measures at the point of data entry, designed to minimize data entry errors; regular communications with each CIMDRN site; and routine review of aggregate data.
    Results: As of June 2019, CIMDRN has enrolled 798 participants, of whom 764 (96%) have complete minimum data set information. Results from our data quality assessment revealed that potential data quality issues were related to interpretation of definitions of some variables, participants who transferred care across institutions, and the organization of information within the patient charts (e.g., neuropsychological test results). Little information was missing regarding disease ascertainment and diagnosis (e.g., ascertainment method – 0% missing).
    Conclusions: Using several data quality management strategies, we have established a comprehensive clinical database that provides information about care and outcomes for Canadian children affected by IMD. We describe quality issues and lessons for consideration in future clinical research initiatives for rare diseases, including accurately accommodating different clinic workflows and balancing comprehensiveness of data collection with available resources. Integrating data collection within clinical care, leveraging electronic medical records, and implementing core outcome sets will be essential for achieving sustainability.
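
A minimal, hypothetical sketch of two of the strategies mentioned above, validation at the point of data entry and routine completeness review of a minimum data set, might look like the following; the field names and rules are invented for illustration, not CIMDRN's actual forms.

```python
# Hypothetical sketch: point-of-entry validation for a registry record and
# a completeness report over the minimum data set fields.
from datetime import date

MINIMUM_DATA_SET = ["participant_id", "date_of_birth", "sex", "diagnosis", "ascertainment_method"]

def validate_entry(record: dict) -> list[str]:
    """Return a list of validation errors for one data entry form submission."""
    errors = [f"missing: {f}" for f in MINIMUM_DATA_SET if not record.get(f)]
    dob = record.get("date_of_birth")
    if dob and not (date(2006, 1, 1) <= dob <= date(2015, 12, 31)):
        errors.append("date_of_birth outside eligible birth years (2006-2015)")
    return errors

def completeness_report(records: list[dict]) -> dict:
    """Percentage of records with a non-missing value, per minimum data set field."""
    return {f: 100 * sum(1 for r in records if r.get(f)) / len(records)
            for f in MINIMUM_DATA_SET}

records = [  # invented example submissions
    {"participant_id": "P01", "date_of_birth": date(2010, 3, 4), "sex": "F",
     "diagnosis": "PKU", "ascertainment_method": "newborn screening"},
    {"participant_id": "P02", "date_of_birth": date(2016, 7, 1), "sex": "M",
     "diagnosis": "", "ascertainment_method": "clinical"},
]

print(validate_entry(records[1]))    # flags the missing diagnosis and the out-of-range birth date
print(completeness_report(records))  # per-field completeness percentages
```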

Ali Sunyaev - One of the best experts on this subject based on the ideXlab platform.

  • Process-Driven Data Quality Management: An Application of the Combined Conceptual Life Cycle Model
    Hawaii International Conference on System Sciences, 2014
    Co-Authors: Paul Glowalla, Patryk Balazy, Dirk Basten, Ali Sunyaev
    Abstract:

    Process-driven data quality management, which allows sustaining data quality improvements within and beyond the IS domain, is increasingly important. The emphasis on and the integration of data quality into process models allows for a detailed, context-specific definition as well as understanding of data quality (dimensions) and thus supports communication across stakeholders. Extant process modeling approaches lack an explicit reference from data quality dimensions to context-specific information product (IP) production. Therefore, we provide a process-driven application of the combined conceptual life cycle (CCLC) model for process exploration and data quality improvement. The paper presents an interpretive, in-depth case study in a medium-sized company, which launched a process optimization initiative to improve data quality. The results show benefits and limitations of the approach, allowing practitioners to tailor the approach to their needs. Based on our insights, suggestions for further improvements of the CCLC model for a process-driven IP production approach are provided.

  • Process-Driven Data Quality Management through Integration of Data Quality into Existing Process Models: Application of Complexity-Reducing Patterns and the Impact on Complexity Metrics
    Web Intelligence, 2013
    Co-Authors: Paul Glowalla, Ali Sunyaev
    Abstract:

    The importance of high data quality and the need to consider data quality in the context of business processes are well acknowledged. Process modeling is mandatory for process-driven data quality management, which seeks to improve and sustain data quality by redesigning processes that create or modify data. A variety of process modeling languages exist, which organizations apply heterogeneously. The purpose of this article is to present a context-independent approach to integrating data quality into the variety of existing process models. The authors aim to improve communication of data quality issues across stakeholders while considering process model complexity. They build on a keyword-based literature review in 74 IS journals and three conferences, reviewing 1,555 articles from 1995 onwards; 26 articles, including 46 process models, were examined in detail. The literature review reveals the need for a context-independent and visible integration of data quality into process models. First, the authors present the enhancement of existing process models with data quality characteristics. Second, they present the integration of a data-quality-centric process model with existing process models. Since process models are mainly used for communicating processes, they consider the impact of integrating data quality and of applying complexity-reduction patterns on the models' complexity metrics. There is a need for further research on complexity metrics to improve the applicability of complexity-reduction patterns: lacking knowledge about the interdependencies between metrics and missing complexity metrics impede the assessment and prediction of process model complexity and thus understandability. Finally, the context-independent approach can be used complementarily for data quality integration with specific process modeling languages.
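
As a hypothetical illustration of what "complexity metrics" can mean in this setting (simple size counts and a Cardoso-style control-flow measure are assumed here; this is not the metric set from the article), the sketch below measures a toy process model before and after adding an explicit data quality check task.

```python
# Hypothetical sketch: simple complexity metrics for a process model graph.
# Nodes carry a type ("task", "xor", "or", "and"); edges are (source, target) pairs.

def complexity_metrics(nodes: dict, edges: list[tuple[str, str]]) -> dict:
    fan_out = {n: sum(1 for s, _ in edges if s == n) for n in nodes}
    cfc = 0  # Cardoso-style control-flow complexity contributed by split gateways
    for n, kind in nodes.items():
        if kind == "xor":
            cfc += fan_out[n]
        elif kind == "or":
            cfc += 2 ** fan_out[n] - 1
        elif kind == "and":
            cfc += 1
    return {"nodes": len(nodes), "edges": len(edges), "cfc": cfc}

# Toy model: order handling with one XOR split (invented).
base_nodes = {"receive": "task", "decide": "xor", "approve": "task", "reject": "task"}
base_edges = [("receive", "decide"), ("decide", "approve"), ("decide", "reject")]

# Same model after inserting an explicit data quality check before the split.
dq_nodes = dict(base_nodes, dq_check="task")
dq_edges = [("receive", "dq_check"), ("dq_check", "decide"),
            ("decide", "approve"), ("decide", "reject")]

print("base model:        ", complexity_metrics(base_nodes, base_edges))
print("with DQ annotation:", complexity_metrics(dq_nodes, dq_edges))
```

Comparing the two outputs shows how integrating data quality elements grows the size-based metrics even when the control-flow complexity stays constant, which is the trade-off the article's complexity-reduction patterns address.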

Hubert Österle - One of the best experts on this subject based on the ideXlab platform.

  • One Size Does Not Fit All – A Contingency Approach to Data Governance
    Journal of Data and Information Quality, 2009
    Co-Authors: Kristin Weber, Boris Otto, Hubert Österle
    Abstract:

    Enterprises need data quality management (DQM) to respond to strategic and operational challenges demanding high-quality corporate data. Hitherto, companies have mostly assigned accountabilities for DQM to information technology (IT) departments. They have thereby neglected the organizational issues critical to successful DQM. With data governance, however, companies may implement corporate-wide accountabilities for DQM that encompass professionals from business and IT departments. This research aims at starting a scientific discussion on data governance by transferring concepts from IT governance and organizational theory to the previously largely ignored field of data governance. The article presents the first results of a community action research project on data governance comprising six international companies from various industries. It outlines a data governance model that consists of three components (data quality roles, decision areas, and responsibilities), which together form a responsibility assignment matrix. The data governance model documents data quality roles and their type of interaction with DQM activities. In addition, the article describes a data governance contingency model and demonstrates the influence of performance strategy, diversification breadth, organization structure, competitive strategy, degree of process harmonization, degree of market regulation, and decision-making style on data governance. Based on these findings, companies can structure their specific data governance model.
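
To illustrate what a responsibility assignment matrix of roles versus decision areas can look like in practice, here is a small hypothetical sketch; the roles, decision areas, and assignments are invented, and RACI-style codes are assumed rather than taken from the article.

```python
# Hypothetical sketch: a data governance responsibility assignment matrix
# mapping data quality roles to decision areas with RACI-style codes
# (R = responsible, A = accountable, C = consulted, I = informed).

roles = ("Chief data steward", "Business data steward", "Technical data steward", "IT department")

matrix = {
    # decision area            ->  one code per role, in the order above
    "Define DQ metrics":          ("A", "R", "C", "I"),
    "Approve data standards":     ("A", "C", "R", "I"),
    "Resolve DQ incidents":       ("I", "A", "R", "R"),
}

def assignments_for(role_name: str) -> dict:
    """Return each decision area with the given role's RACI code."""
    idx = roles.index(role_name)
    return {area: codes[idx] for area, codes in matrix.items()}

print(assignments_for("Business data steward"))
# -> {'Define DQ metrics': 'R', 'Approve data standards': 'C', 'Resolve DQ incidents': 'A'}
```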

Bo Pedersen Weidema - One of the best experts on this subject based on the ideXlab platform.

  • Multi-user test of the data quality matrix for product life cycle inventory data
    International Journal of Life Cycle Assessment, 1998
    Co-Authors: Bo Pedersen Weidema
    Abstract:

    The data quality matrix for product life cycle inventory data proposed in Weidema & Wesnæs (J. Cleaner Prod. (1996), 4: 167-174) was subjected to a multi-user test, in which 7 persons scored the same 10 data sets representing 10 different processes. Deviations among scores were listed, and the causes for deviations were determined and grouped into a limited number of well-defined classes. For the majority of the scores, the different test persons arrived at the same score. Deviations occur most often among neighbouring scores. Only a small number of the deviations (less than 10% of all scores) affect the overall assessment of the data quality and/or uncertainty of the corresponding data set. Based on the analysis of the causes of the deviations, improvements to the matrix and its accompanying explanations were suggested and implemented (reported in the appendix to this paper). The average time consumption for the scoring by the different test persons was less than 10 minutes per data set. It is concluded that the time consumption and the number of deviating scores can be kept at an acceptable level for the pedigree matrix to be recommended for internal data quality management and for comprehensive communication of quality assessments of large amounts of data.
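
The following hypothetical sketch illustrates the kind of analysis such a multi-user test involves: several reviewers score one data set on pedigree-style indicators (1 = best, 5 = worst), and the spread per indicator shows where scorers disagree by more than one level. The indicator names follow the commonly cited pedigree indicators; the scores themselves are invented.

```python
# Hypothetical sketch: compare pedigree-matrix scores (1 = best .. 5 = worst)
# given by several reviewers to the same data set and flag large deviations.

indicators = ["reliability", "completeness", "temporal", "geographical", "technological"]
scores_by_reviewer = {          # invented scores for one data set
    "reviewer_1": [2, 3, 1, 2, 3],
    "reviewer_2": [2, 4, 1, 2, 3],
    "reviewer_3": [3, 3, 1, 4, 3],
}

for i, indicator in enumerate(indicators):
    values = [scores[i] for scores in scores_by_reviewer.values()]
    spread = max(values) - min(values)
    note = "needs clarification in the matrix" if spread > 1 else "agreement"
    print(f"{indicator}: scores {values}, spread {spread} -> {note}")
```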

  • Data quality management for life cycle inventories – an example of using data quality indicators
    Journal of Cleaner Production, 1996
    Co-Authors: Bo Pedersen Weidema, Marianne Suhr Wesnæs
    Abstract:

    A formal procedure for data quality management in life cycle inventory is described. The procedure is applied to the example of an energy inventory for 1 kg of rye bread. Five independent data quality indicators are suggested as necessary and sufficient to describe those aspects of data quality which influence the reliability of the result. Listing these data quality indicators for all data gives an improved understanding of the typical data quality problems of a particular study. This may subsequently be used for improving the data collection strategy during a life cycle study. To give an assessment of the reliability of the overall result of a life cycle inventory, the data quality indicators are transformed into estimates of the additional uncertainty due to the insufficient data quality. It is shown how low data quality can both increase the uncertainty and change the mean value. After assigning additional uncertainties to all data in the study, a calculation of the uncertainty of the overall result is made by the use of simulations. The use of default estimates of additional uncertainties is suggested as a way to both simplify and improve the procedure.
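
A minimal, hypothetical sketch of the simulation step described above: data quality indicators are mapped to additional uncertainty factors, and the overall inventory result is simulated by sampling each input. The flows, factors, and lognormal error model are assumptions for illustration, not the paper's actual numbers.

```python
# Hypothetical sketch: propagate data-quality-based additional uncertainty
# through a tiny inventory by Monte Carlo simulation (a lognormal error
# model is assumed here for illustration).
import random
from math import log
from statistics import mean, stdev

random.seed(42)

# Inventory inputs: best-estimate energy use (MJ per kg of bread) and an
# "uncertainty factor" derived from data quality indicator scores
# (a larger factor means worse data quality). All numbers are invented.
inputs = {
    "milling":   {"value": 1.2, "factor": 1.05},
    "baking":    {"value": 3.4, "factor": 1.30},
    "transport": {"value": 0.8, "factor": 1.60},
}

def simulate_total(n_runs: int = 10_000) -> list[float]:
    """Sample each input from a lognormal around its best estimate and sum them."""
    totals = []
    for _ in range(n_runs):
        total = sum(flow["value"] * random.lognormvariate(0.0, log(flow["factor"]))
                    for flow in inputs.values())
        totals.append(total)
    return totals

results = simulate_total()
print(f"deterministic total: {sum(f['value'] for f in inputs.values()):.2f} MJ")
print(f"simulated mean: {mean(results):.2f} MJ, std dev: {stdev(results):.2f} MJ")
```

Because the assumed lognormal error is skewed, the simulated mean comes out slightly above the deterministic total, which mirrors the paper's observation that poor data quality can shift the mean value as well as widen the uncertainty.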