Data Quality Practitioner



The Experts below are selected from a list of 3 Experts worldwide, ranked by the ideXlab platform

Mukesh Mohania - One of the best experts on this subject based on the ideXlab platform.

  • Data Cleansing Techniques for Large Enterprise Datasets
    2011 Annual SRII Global Conference, 2011
    Co-Authors: K. Hima Prasad, Tanveer Afzal Faruquie, Sachindra Joshi, Snigdha Chaturvedi, L. Venkata Subramaniam, Mukesh Mohania
    Abstract:

    Data Quality improvement is an important aspect of enterprise Data management. Data characteristics can change with customer, domain, and geography, making Data Quality improvement a challenging task. It is often an iterative process that mainly involves writing a set of Data Quality rules for standardization and for eliminating duplicates present within the Data. Existing Data cleansing tools require a fair amount of customization when moving from one customer or domain to another. In this paper, we present a Data Quality improvement tool that helps the Data Quality Practitioner by showing the characteristics of the entities present in the Data. The tool identifies the variants and synonyms of a given entity in the Data, an important step in writing Data Quality rules for standardizing the Data. We present a ripple-down rule framework for maintaining Data Quality rules that helps reduce the services effort of adding new rules. We also present a typical workflow of the Data Quality improvement process and show the usefulness of the tool at each step, along with experimental results and a discussion of how the tool reduces services effort in a Data Quality improvement engagement.
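    The ripple-down rule framework mentioned in the abstract keeps rule maintenance local: when a rule produces a wrong output for some case, the correction is attached as an exception to that rule alone, so existing behavior for all other cases is untouched. The sketch below is a minimal illustration of that idea in Python; the `Rule` class and the address-token rules are illustrative assumptions, not the paper's actual implementation.

    ```python
    from dataclasses import dataclass, field
    from typing import Callable, Optional

    @dataclass
    class Rule:
        condition: Callable[[str], bool]
        action: Callable[[str], str]
        # Exceptions are consulted only when this rule fires, so adding a
        # new correction never disturbs rules elsewhere in the tree.
        exceptions: list["Rule"] = field(default_factory=list)

        def apply(self, token: str) -> Optional[str]:
            if not self.condition(token):
                return None  # rule does not cover this token
            for exc in self.exceptions:
                result = exc.apply(token)
                if result is not None:
                    return result  # a more specific exception wins
            return self.action(token)

    # Hypothetical standardization rule for address tokens:
    # expand "st"/"St." variants to "Street".
    base = Rule(lambda t: t.lower().startswith("st"), lambda t: "Street")

    # A misclassified case ("St.Mary" should read "Saint Mary") is fixed
    # by attaching a narrow exception, not by editing the base rule.
    base.exceptions.append(
        Rule(lambda t: t == "St.Mary", lambda t: "Saint Mary")
    )

    print(base.apply("st"))       # Street
    print(base.apply("St.Mary"))  # Saint Mary
    ```

    The appeal for a services setting is that each new customer-specific correction is a local, additive change, which is why the abstract credits the framework with reducing the effort of adding new rules.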

K. Hima Prasad - One of the best experts on this subject based on the ideXlab platform.

  • Data Cleansing Techniques for Large Enterprise Datasets
    2011 Annual SRII Global Conference, 2011
    Co-Authors: K. Hima Prasad, Tanveer Afzal Faruquie, Sachindra Joshi, Snigdha Chaturvedi, L. Venkata Subramaniam, Mukesh Mohania

Tanveer Afzal Faruquie - One of the best experts on this subject based on the ideXlab platform.

  • Data Cleansing Techniques for Large Enterprise Datasets
    2011 Annual SRII Global Conference, 2011
    Co-Authors: K. Hima Prasad, Tanveer Afzal Faruquie, Sachindra Joshi, Snigdha Chaturvedi, L. Venkata Subramaniam, Mukesh Mohania

Sachindra Joshi - One of the best experts on this subject based on the ideXlab platform.

  • Data Cleansing Techniques for Large Enterprise Datasets
    2011 Annual SRII Global Conference, 2011
    Co-Authors: K. Hima Prasad, Tanveer Afzal Faruquie, Sachindra Joshi, Snigdha Chaturvedi, L. Venkata Subramaniam, Mukesh Mohania

Snigdha Chaturvedi - One of the best experts on this subject based on the ideXlab platform.

  • Data Cleansing Techniques for Large Enterprise Datasets
    2011 Annual SRII Global Conference, 2011
    Co-Authors: K. Hima Prasad, Tanveer Afzal Faruquie, Sachindra Joshi, Snigdha Chaturvedi, L. Venkata Subramaniam, Mukesh Mohania