Granular Data

The experts below are selected from a list of 35,745 experts worldwide, ranked by the ideXlab platform.

Witold Pedrycz - One of the best experts on this subject based on the ideXlab platform.

  • A Two-Stage Approach for Constructing Type-2 Information Granules
    IEEE Transactions on Cybernetics, 2020
    Co-Authors: Xiubin Zhu, Witold Pedrycz
    Abstract:

    In this article, we are concerned with the formation of type-2 information granules in a two-stage approach. We present a comprehensive algorithmic framework which gives rise to information granules of a higher type (type-2, to be specific) such that the key structure of the local granular data, their topologies, and their diversities become fully reflected and quantified. In contrast to traditional collaborative clustering, where local structures (information granules) are obtained by running algorithms on the local datasets and communicating findings across sites, we propose a way of characterizing granular data by forming a suite of higher-type information granules that reveal an overall structure of a collection of locally available datasets. Information granules built at the lower level on the basis of local sources of data are weighted by the number of data points they represent, while the information granules formed at the higher level of the hierarchy are more abstract and general, thus facilitating a hierarchical description of data realized at different levels of detail. The construction of information granules is completed by resorting to fuzzy clustering algorithms (more specifically, the well-known Fuzzy C-Means). In the formation of information granules, we follow the fundamental principle of granular computing, viz., the principle of justifiable granularity. Experimental studies concerning selected publicly available machine-learning datasets are reported.
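
    Since the principle of justifiable granularity recurs throughout the entries below, a minimal one-dimensional sketch may help: it builds an interval granule around the median by trading coverage against specificity. The function names, the grid search, and the plain product criterion are illustrative assumptions, not Pedrycz's exact (type-2) formulation.

    import numpy as np

    def pjg_interval(data, n_grid=200):
        """Interval granule [a, b] around the median, maximizing
        coverage * specificity for each bound independently (a sketch)."""
        med = float(np.median(data))
        lo, hi = float(data.min()), float(data.max())

        def best_bound(candidates, upper):
            best, best_score = med, -1.0
            for c in candidates:
                if upper:
                    coverage = np.mean((data >= med) & (data <= c))
                    specificity = 1.0 - (c - med) / (hi - med + 1e-12)
                else:
                    coverage = np.mean((data >= c) & (data <= med))
                    specificity = 1.0 - (med - c) / (med - lo + 1e-12)
                score = coverage * specificity
                if score > best_score:
                    best, best_score = c, score
            return best

        a = best_bound(np.linspace(lo, med, n_grid), upper=False)
        b = best_bound(np.linspace(med, hi, n_grid), upper=True)
        return a, b

    rng = np.random.default_rng(0)
    print(pjg_interval(rng.normal(size=500)))  # a tight interval around the data's core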

  • Development and Analysis of Neural Networks Realized in the Presence of Granular Data
    IEEE Transactions on Neural Networks and Learning Systems, 2019
    Co-Authors: Xiubin Zhu, Witold Pedrycz
    Abstract:

    In this article, we propose a design and evaluation framework for granular neural networks realized in the presence of information granules. Neural networks realized in this manner are able to process both nonnumerical data, such as information granules, and numerical data. Information granules are meaningful and semantically sound entities formed by organizing existing knowledge and available experimental data. The directional nature of the mapping between the input and output data needs to be considered when building information granules. The development of the neural networks advocated in this article is realized as a two-phase process. First, a collection of information granules is formed through granulation of numeric data in the input and output spaces. Second, neural networks are constructed on the basis of information granules rather than the original (numeric) data. The proposed method leads to the construction of neural networks in a completely new way. In comparison with traditional (numeric) neural networks, networks developed in the presence of granular data require shorter learning time. They also produce results (outputs) that are information granules rather than numeric entities. The quality of the granular outputs generated by our neural networks is evaluated in terms of the coverage and specificity criteria that are pertinent to the characterization of information granules.
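
    The coverage and specificity criteria mentioned at the end of this abstract can be made concrete with a small sketch. The definitions below are the commonly used ones and the names are invented here, so treat this as an approximation of the paper's evaluation protocol rather than its exact formulas.

    import numpy as np

    def coverage(y_true, y_lo, y_hi):
        """Fraction of numeric targets falling inside their predicted interval."""
        return float(np.mean((y_true >= y_lo) & (y_true <= y_hi)))

    def specificity(y_lo, y_hi, y_range):
        """Average narrowness of the intervals; 1 would mean point predictions."""
        widths = np.clip((y_hi - y_lo) / y_range, 0.0, 1.0)
        return float(np.mean(1.0 - widths))

    y_true = np.array([0.2, 0.5, 0.9])  # numeric targets
    y_lo = np.array([0.1, 0.4, 0.7])    # lower bounds of granular (interval) outputs
    y_hi = np.array([0.3, 0.7, 0.8])    # upper bounds
    span = y_true.max() - y_true.min()
    print(coverage(y_true, y_lo, y_hi), specificity(y_lo, y_hi, span))  # 2/3 covered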

  • Clustering Homogeneous Granular Data: Formation and Evaluation
    IEEE Transactions on Cybernetics, 2018
    Co-Authors: Yinghua Shen, Witold Pedrycz, Xianmin Wang
    Abstract:

    In this paper, we develop a comprehensive conceptual and algorithmic framework to cope with the problem of clustering homogeneous information granules. While there have been several approaches to coping with granular (viz., non-numeric) data, the origin of the granular data considered there is somewhat unclear and, as a consequence, the results of clustering lack a full-fledged interpretation. In this paper, we offer a holistic view of clustering information granules and an evaluation of the results of clustering. We start with a process of forming information granules with the use of the principle of justifiable granularity (PJG). In this regard, we discuss a number of parameters used in the development of information granules and quantify the quality of the granules produced in this manner. In the sequel, Fuzzy C-Means is applied to cluster the derived information granules, which are represented in a parametric manner and associated with weights resulting from the usage of the PJG. The quality of the clustering results is evaluated through the use of the reconstruction criterion (quantifying the concepts of information granulation and degranulation). A suite of experiments using synthetic and publicly available datasets is reported to quantify the performance of the proposed approach and highlight its key features.
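
    Because the paper clusters PJG-weighted granules with Fuzzy C-Means, a compact weighted-FCM sketch follows. Granules are reduced here to plain numeric vectors carrying weights; the paper's parametric granule representation is richer, and all names are illustrative.

    import numpy as np

    def weighted_fcm(X, w, c=3, m=2.0, iters=100, seed=0):
        """Fuzzy C-Means in which each sample carries a weight w[i] (a sketch)."""
        rng = np.random.default_rng(seed)
        V = X[rng.choice(len(X), size=c, replace=False)]  # initial prototypes
        for _ in range(iters):
            d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
            U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
            Um = (U ** m) * w[:, None]                # weight the fuzzified memberships
            V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted prototype update
        return V, U

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc=k, size=(30, 2)) for k in (0.0, 3.0, 6.0)])
    w = np.ones(len(X))  # stand-in for PJG-derived weights
    V, U = weighted_fcm(X, w)
    print(V)  # three prototypes, one per cloud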

  • Granular Data Aggregation: An Adaptive Principle of the Justifiable Granularity Approach
    IEEE Transactions on Cybernetics, 2018
    Co-Authors: Dan Wang, Witold Pedrycz
    Abstract:

    The design of information granules assumes a central position in the discipline of granular computing and its applications. The principle of justifiable granularity offers a conceptually and algorithmically attractive way of designing information granules on the basis of some experimental evidence (especially evidence present in the form of numeric data). This paper builds upon the existing principle and presents its significant generalization, referred to here as an adaptive principle of justifiable information granularity. The method supports granular data aggregation by producing an optimal information granule (with optimality expressed in terms of the criteria of coverage and specificity commonly used when characterizing the quality of information granules). The flexibility of the method stems from the introduction of an adaptive weighting scheme for the data, leading to a vector of weights used in the construction of the optimal information granule. A detailed design procedure is provided along with the required optimization vehicle (realized with the aid of population-based optimization techniques such as particle swarm optimization and differential evolution). Two application areas in which the principle is of direct use are the prediction of time series and the prediction of spatial data. In both cases, it is advocated that the results formed by the principle are reflective of the precision (quality) of the prediction process.
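
    A sketch of the adaptive idea under stated assumptions: the data weights come from one exponential-decay parameter lam (newer samples count more, matching the time-series use case), and the interval bounds together with lam are optimized by SciPy's differential evolution, standing in for the paper's PSO/DE machinery.

    import numpy as np
    from scipy.optimize import differential_evolution

    data = np.random.default_rng(2).normal(size=300)  # stand-in time series, index = time
    lo, hi = float(data.min()), float(data.max())

    def neg_quality(params):
        """Negative (weighted coverage) * specificity of the interval [a, b]."""
        a, b, lam = params
        if a >= b:
            return 1.0  # infeasible interval
        w = np.exp(-lam * np.arange(len(data))[::-1])  # newer samples get larger weights
        w = w / w.sum()
        cov = float(w[(data >= a) & (data <= b)].sum())
        spec = max(0.0, 1.0 - (b - a) / (hi - lo))
        return -cov * spec

    result = differential_evolution(neg_quality, [(lo, hi), (lo, hi), (0.0, 0.1)], seed=3)
    print(result.x)  # optimized bounds a, b and decay lam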

  • Granular Data Description: Designing Ellipsoidal Information Granules
    IEEE Transactions on Cybernetics, 2016
    Co-Authors: Xiubin Zhu, Witold Pedrycz
    Abstract:

    Granular computing (GrC) has emerged as a unified conceptual and processing framework. Information granules are fundamental constructs that permeate the concepts and models of GrC. This paper is concerned with the design of a collection of meaningful, easily interpretable ellipsoidal information granules with the use of the principle of justifiable granularity, taking into consideration the reconstruction abilities of the designed information granules. The principle of justifiable granularity supports the design of information granules based on numeric or granular evidence, and aims to achieve a compromise between the justifiability and specificity of the information granules to be constructed. A two-stage development strategy behind the construction of justifiable information granules is considered. First, a collection of numeric prototypes is determined with the use of fuzzy clustering. Second, the lengths of the semi-axes of the ellipsoidal information granules to be formed around these prototypes are optimized. Two optimization criteria are introduced and studied. Experimental studies involving a synthetic data set and data sets from the machine-learning repository are reported.
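
    A minimal sketch of the second stage under simplifying assumptions: a single axis-aligned ellipsoid around one prototype, with all semi-axes scaled by a common factor r chosen to balance coverage against a crude specificity penalty. The paper optimizes the semi-axes individually and studies two criteria; the names here are invented.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(400, 2)) * np.array([2.0, 0.5])  # anisotropic data cloud
    proto = X.mean(axis=0)  # stand-in for a fuzzy-clustering prototype
    base = X.std(axis=0)    # base semi-axis lengths

    def quality(r):
        """Coverage inside the ellipsoid times a simple specificity penalty."""
        inside = (((X - proto) / (r * base)) ** 2).sum(axis=1) <= 1.0
        coverage = inside.mean()
        specificity = max(0.0, 1.0 - r / 4.0)  # shrinks as the ellipsoid grows
        return coverage * specificity

    r_best = max(np.linspace(0.1, 4.0, 100), key=quality)
    print(r_best, quality(r_best))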

Torben Bach Pedersen - One of the best experts on this subject based on the ideXlab platform.

  • Using a Time Granularity Table for Gradual Granular Data Aggregation
    Fundamenta Informaticae, 2014
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    The majority of today's systems increasingly require sophisticated data management, as they need to store and query large amounts of data for analysis and reporting purposes. In order to keep more “detailed” data available for longer periods, “old” data has to be reduced gradually to save space and improve query performance, especially on resource-constrained systems with limited storage and query processing capabilities. A number of data reduction solutions have been developed; however, an effective solution based specifically on gradual data reduction has been missing. This paper presents an effective solution for data reduction based on gradual granular data aggregation. With the gradual granular data aggregation mechanism, older data can be made coarse-grained while the newest data is kept fine-grained: for instance, when data is 3 months old it is aggregated from the 1-second level to the 1-minute level, when it is 6 months old from the 1-minute level to the 2-minute level, and so on. The proposed solution introduces a time-granularity-based data structure, namely a relational time granularity table, that enables long-term storage of old data by maintaining it at different levels of granularity, and effective query processing owing to the reduction in data volume. In addition, the paper describes the implementation strategy, derived from a farming case study, using standard database technologies.
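
    A hedged pandas sketch of the gradual aggregation mechanism just described; a DataFrame stands in for the relational time granularity table, and the column name, age thresholds, and the mean aggregate are illustrative assumptions rather than the paper's schema.

    import pandas as pd

    def gradually_aggregate(df: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
        """df: 1-second-level readings with a DatetimeIndex and a 'value' column."""
        oldest = df[df.index < now - pd.DateOffset(months=6)]
        middle = df[(df.index >= now - pd.DateOffset(months=6))
                    & (df.index < now - pd.DateOffset(months=3))]
        recent = df[df.index >= now - pd.DateOffset(months=3)]
        return pd.concat([
            oldest.resample("2min").mean(),  # older than 6 months: 2-minute level
            middle.resample("1min").mean(),  # 3 to 6 months old: 1-minute level
            recent,                          # newer than 3 months: keep 1-second level
        ])

    idx = pd.date_range("2013-01-01", periods=10_000, freq="s")
    df = pd.DataFrame({"value": range(10_000)}, index=idx)
    print(len(gradually_aggregate(df, pd.Timestamp("2014-01-01"))))  # far fewer rows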

  • A Rule-Based Tool for Gradual Granular Data Aggregation
    Data Warehousing and OLAP, 2011
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    In order to keep more detailed data available for longer periods, old data has to be reduced gradually to save space and improve query performance, especially on resource-constrained systems with limited storage and query processing capabilities. In this regard, some hand-coded data aggregation solutions have been developed; however, their actual usage has been limited because hand-coded solutions have proven too complex to maintain. Maintenance needs to occur as requirements change frequently, and existing data aggregation techniques lack flexibility with regard to efficient requirements-change management. This paper presents an effective rule-based tool for data reduction based on gradual granular data aggregation. With the proposed solution, data can be maintained at different levels of granularity. The solution is based on high-level data aggregation rules, from which data aggregation code can be auto-generated. The solution is effective, easy to use, and easy to maintain. In addition, the paper demonstrates the use of the proposed tool in a farming case study using standard database technologies. The results show the productivity of the tool-based solution in terms of initial development time, maintenance time, and alteration time compared to a hand-coded solution.
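
    The rule-based idea can be sketched as high-level aggregation rules captured as data, from which SQL is auto-generated. The rule format, table and column names, and the PostgreSQL-flavored SQL are invented for this example; the paper's tool defines its own rule language, and a real run would also delete the fine-grained rows it replaces.

    AGG_RULES = [  # "when data is at least this old, coarsen it to this step"
        {"older_than_months": 3, "step_seconds": 60},   # 1-second to 1-minute level
        {"older_than_months": 6, "step_seconds": 120},  # 1-minute to 2-minute level
    ]

    def generate_sql(rule, table="readings"):
        step = rule["step_seconds"]
        months = rule["older_than_months"]
        return (
            f"INSERT INTO {table}_agg (ts, value)\n"
            f"SELECT to_timestamp(floor(extract(epoch FROM ts) / {step}) * {step}),\n"
            f"       AVG(value)\n"
            f"FROM {table}\n"
            f"WHERE ts < now() - INTERVAL '{months} months'\n"
            f"GROUP BY 1;"
        )

    for rule in AGG_RULES:
        print(generate_sql(rule), end="\n\n")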

  • DOLAP - A rule-based tool for gradual Granular Data aggregation
    Proceedings of the ACM 14th International Workshop on Data Warehousing and OLAP - DOLAP '11, 2011
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    In order to keep more detailed data available for longer periods, old data has to be reduced gradually to save space and improve query performance, especially on resource-constrained systems with limited storage and query processing capabilities. In this regard, some hand-coded data aggregation solutions have been developed; however, their actual usage has been limited because hand-coded solutions have proven too complex to maintain. Maintenance needs to occur as requirements change frequently, and existing data aggregation techniques lack flexibility with regard to efficient requirements-change management. This paper presents an effective rule-based tool for data reduction based on gradual granular data aggregation. With the proposed solution, data can be maintained at different levels of granularity. The solution is based on high-level data aggregation rules, from which data aggregation code can be auto-generated. The solution is effective, easy to use, and easy to maintain. In addition, the paper demonstrates the use of the proposed tool in a farming case study using standard database technologies. The results show the productivity of the tool-based solution in terms of initial development time, maintenance time, and alteration time compared to a hand-coded solution.

  • Schema Design Alternatives for Multi-Granular Data Warehousing
    Database and Expert Systems Applications, 2010
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    Data warehousing is widely used in industry for reporting and analysis of huge volumes of data at different levels of detail. In general, data warehouses use standard dimensional schema designs to organize their data. However, current data warehousing schema designs fall short in their ability to model the multi-granular data found in various real-world application domains. For example, modern farm equipment in a field produces massive amounts of data at different levels of granularity that have to be stored and queried. A study of the commonly used data warehousing schemas exposes the limitation that the schema designs are intended to store data at a single level of granularity. This paper, on the other hand, presents several extended dimensional data warehousing schema design alternatives to store both detailed and aggregated data at different levels of granularity. The paper presents three solutions to design the time dimension tables and four solutions to design the fact tables. Moreover, each of these solutions is evaluated in different combinations of the time dimension and fact tables based on a real-world farming case study.
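
    To make the design space concrete, here is a hedged sketch of two directions the abstract hints at: a shared time dimension tagged with an explicit granularity level, versus one fact table per granularity. All table and column names are invented; the paper evaluates three time-dimension designs and four fact-table designs.

    # Alternative (a): one time dimension; each row carries its granularity level.
    TIME_DIM_WITH_LEVEL = """
    CREATE TABLE time_dim (
        time_id BIGINT PRIMARY KEY,
        ts      TIMESTAMP NOT NULL,
        level   VARCHAR(16) NOT NULL  -- 'second', 'minute', '2-minute', ...
    );"""

    # Alternative (b): one fact table per granularity, all sharing the dimension.
    FACT_PER_LEVEL = """
    CREATE TABLE fact_second (time_id BIGINT REFERENCES time_dim, value DOUBLE PRECISION);
    CREATE TABLE fact_minute (time_id BIGINT REFERENCES time_dim, value DOUBLE PRECISION);"""

    print(TIME_DIM_WITH_LEVEL)
    print(FACT_PER_LEVEL)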

  • ADBIS - Using a time Granularity table for gradual Granular Data aggregation
    Advances in Databases and Information Systems, 2010
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    The majority of today's systems increasingly require sophisticated data management, as they need to store and query large amounts of data for analysis and reporting purposes. In order to keep more "detailed" data available for longer periods, "old" data has to be reduced gradually to save space and improve query performance, especially on resource-constrained systems with limited storage and query processing capabilities. A number of data reduction solutions have been developed; however, an effective solution based specifically on gradual data reduction has been missing. This paper presents an effective solution for data reduction based on gradual granular data aggregation. With the gradual granular data aggregation mechanism, older data can be made coarse-grained while the newest data is kept fine-grained: for instance, when data is 3 months old it is aggregated from the 1-second level to the 1-minute level, when it is 6 months old from the 1-minute level to the 2-minute level, and so on. The proposed solution introduces a time-granularity-based data structure, namely a relational time granularity table, that enables long-term storage of old data by maintaining it at different levels of granularity, and effective query processing owing to the reduction in data volume. In addition, the paper describes the implementation strategy, derived from a farming case study, using standard technologies.

Prabir Kumar Biswas - One of the best experts on this subject based on the ideXlab platform.

  • A Granular Reflex Fuzzy Min–Max Neural Network for Classification
    IEEE Transactions on Neural Networks, 2009
    Co-Authors: Abhijeet V. Nandedkar, Prabir Kumar Biswas
    Abstract:

    Granular data classification and clustering is an emerging and important issue in the field of pattern recognition. Conventionally, computing is thought of as the manipulation of numbers or symbols. However, human recognition capabilities are based on the ability to process nonnumeric clumps of information (information granules) in addition to individual numeric values. This paper proposes a granular neural network (GNN) called the granular reflex fuzzy min-max neural network (GrRFMN), which can learn and classify granular data. GrRFMN uses hyperbox fuzzy sets to represent granular data. Its architecture includes a reflex mechanism, inspired by the human brain, to handle class overlaps. The network can be trained online using granular or point data. The neuron activation functions in GrRFMN are designed to tackle data of different granularity (size). The paper also addresses the issue of granulating the training data and learning from the granules; it is observed that such preprocessing of the data can improve the performance of a classifier. Experimental results on real data sets show that the proposed GrRFMN can classify granules of different granularity more accurately. Results are compared with the general fuzzy min-max neural network (GFMN) proposed by Gabrys and Bargiela and with some classical methods.
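
    The hyperbox construct is easy to illustrate: a Simpson-style membership function returns 1 inside the box and decays outside it. GrRFMN's actual activation functions are adapted to granules of different sizes and add reflex (compensatory) neurons for overlaps, so the sketch below, with invented names, is only the starting point.

    import numpy as np

    def hyperbox_membership(x, v, w, gamma=4.0):
        """Membership of point x in the hyperbox with min corner v and max corner w."""
        below = np.maximum(0.0, v - x)  # per-dimension shortfall under the min corner
        above = np.maximum(0.0, x - w)  # per-dimension excess over the max corner
        return float(np.mean(np.maximum(0.0, 1.0 - gamma * (below + above))))

    v, w = np.array([0.2, 0.2]), np.array([0.4, 0.5])
    print(hyperbox_membership(np.array([0.3, 0.3]), v, w))  # inside the box: 1.0
    print(hyperbox_membership(np.array([0.6, 0.3]), v, w))  # outside: 0.6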

Abhijeet V. Nandedkar - One of the best experts on this subject based on the ideXlab platform.

  • A Granular Reflex Fuzzy Min–Max Neural Network for Classification
    IEEE Transactions on Neural Networks, 2009
    Co-Authors: Abhijeet V. Nandedkar, Prabir Kumar Biswas
    Abstract:

    Granular data classification and clustering is an emerging and important issue in the field of pattern recognition. Conventionally, computing is thought of as the manipulation of numbers or symbols. However, human recognition capabilities are based on the ability to process nonnumeric clumps of information (information granules) in addition to individual numeric values. This paper proposes a granular neural network (GNN) called the granular reflex fuzzy min-max neural network (GrRFMN), which can learn and classify granular data. GrRFMN uses hyperbox fuzzy sets to represent granular data. Its architecture includes a reflex mechanism, inspired by the human brain, to handle class overlaps. The network can be trained online using granular or point data. The neuron activation functions in GrRFMN are designed to tackle data of different granularity (size). The paper also addresses the issue of granulating the training data and learning from the granules; it is observed that such preprocessing of the data can improve the performance of a classifier. Experimental results on real data sets show that the proposed GrRFMN can classify granules of different granularity more accurately. Results are compared with the general fuzzy min-max neural network (GFMN) proposed by Gabrys and Bargiela and with some classical methods.

  • ICPR (2) - A Reflex Fuzzy Min Max Neural Network for Granular Data Classification
    18th International Conference on Pattern Recognition (ICPR'06), 2006
    Co-Authors: Abhijeet V. Nandedkar, Pradipta Biswas
    Abstract:

    Granular data classification and clustering is an emerging and important issue in the field of pattern recognition. This paper proposes a granular neural network, called the "reflex fuzzy min-max neural network," for classification. A reflex mechanism inspired by the human brain is exploited to handle class overlaps. The network can be trained online using granular or point data. The proposed neuron activation functions are designed to tackle data of different granularity (size). Experimental results on real datasets show that the proposed algorithm can classify granules of different granularity more accurately than the general fuzzy min-max neural network proposed by Gabrys and Bargiela.

Nadeem Iftikhar - One of the best experts on this subject based on the ideXlab platform.

  • Using a Time Granularity Table for Gradual Granular Data Aggregation
    Fundamenta Informaticae, 2014
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    The majority of today's systems increasingly require sophisticated data management, as they need to store and query large amounts of data for analysis and reporting purposes. In order to keep more “detailed” data available for longer periods, “old” data has to be reduced gradually to save space and improve query performance, especially on resource-constrained systems with limited storage and query processing capabilities. A number of data reduction solutions have been developed; however, an effective solution based specifically on gradual data reduction has been missing. This paper presents an effective solution for data reduction based on gradual granular data aggregation. With the gradual granular data aggregation mechanism, older data can be made coarse-grained while the newest data is kept fine-grained: for instance, when data is 3 months old it is aggregated from the 1-second level to the 1-minute level, when it is 6 months old from the 1-minute level to the 2-minute level, and so on. The proposed solution introduces a time-granularity-based data structure, namely a relational time granularity table, that enables long-term storage of old data by maintaining it at different levels of granularity, and effective query processing owing to the reduction in data volume. In addition, the paper describes the implementation strategy, derived from a farming case study, using standard database technologies.

  • A Rule-Based Tool for Gradual Granular Data Aggregation
    Data Warehousing and OLAP, 2011
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    In order to keep more detailed data available for longer periods, old data has to be reduced gradually to save space and improve query performance, especially on resource-constrained systems with limited storage and query processing capabilities. In this regard, some hand-coded data aggregation solutions have been developed; however, their actual usage has been limited because hand-coded solutions have proven too complex to maintain. Maintenance needs to occur as requirements change frequently, and existing data aggregation techniques lack flexibility with regard to efficient requirements-change management. This paper presents an effective rule-based tool for data reduction based on gradual granular data aggregation. With the proposed solution, data can be maintained at different levels of granularity. The solution is based on high-level data aggregation rules, from which data aggregation code can be auto-generated. The solution is effective, easy to use, and easy to maintain. In addition, the paper demonstrates the use of the proposed tool in a farming case study using standard database technologies. The results show the productivity of the tool-based solution in terms of initial development time, maintenance time, and alteration time compared to a hand-coded solution.

  • DOLAP - A rule-based tool for gradual Granular Data aggregation
    Proceedings of the ACM 14th International Workshop on Data Warehousing and OLAP - DOLAP '11, 2011
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    In order to keep more detailed data available for longer periods, old data has to be reduced gradually to save space and improve query performance, especially on resource-constrained systems with limited storage and query processing capabilities. In this regard, some hand-coded data aggregation solutions have been developed; however, their actual usage has been limited because hand-coded solutions have proven too complex to maintain. Maintenance needs to occur as requirements change frequently, and existing data aggregation techniques lack flexibility with regard to efficient requirements-change management. This paper presents an effective rule-based tool for data reduction based on gradual granular data aggregation. With the proposed solution, data can be maintained at different levels of granularity. The solution is based on high-level data aggregation rules, from which data aggregation code can be auto-generated. The solution is effective, easy to use, and easy to maintain. In addition, the paper demonstrates the use of the proposed tool in a farming case study using standard database technologies. The results show the productivity of the tool-based solution in terms of initial development time, maintenance time, and alteration time compared to a hand-coded solution.

  • Schema Design Alternatives for Multi-Granular Data Warehousing
    Database and Expert Systems Applications, 2010
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    Data warehousing is widely used in industry for reporting and analysis of huge volumes of data at different levels of detail. In general, data warehouses use standard dimensional schema designs to organize their data. However, current data warehousing schema designs fall short in their ability to model the multi-granular data found in various real-world application domains. For example, modern farm equipment in a field produces massive amounts of data at different levels of granularity that have to be stored and queried. A study of the commonly used data warehousing schemas exposes the limitation that the schema designs are intended to store data at a single level of granularity. This paper, on the other hand, presents several extended dimensional data warehousing schema design alternatives to store both detailed and aggregated data at different levels of granularity. The paper presents three solutions to design the time dimension tables and four solutions to design the fact tables. Moreover, each of these solutions is evaluated in different combinations of the time dimension and fact tables based on a real-world farming case study.

  • ADBIS - Using a time Granularity table for gradual Granular Data aggregation
    Advances in Databases and Information Systems, 2010
    Co-Authors: Nadeem Iftikhar, Torben Bach Pedersen
    Abstract:

    The majority of today's systems increasingly require sophisticated data management, as they need to store and query large amounts of data for analysis and reporting purposes. In order to keep more "detailed" data available for longer periods, "old" data has to be reduced gradually to save space and improve query performance, especially on resource-constrained systems with limited storage and query processing capabilities. A number of data reduction solutions have been developed; however, an effective solution based specifically on gradual data reduction has been missing. This paper presents an effective solution for data reduction based on gradual granular data aggregation. With the gradual granular data aggregation mechanism, older data can be made coarse-grained while the newest data is kept fine-grained: for instance, when data is 3 months old it is aggregated from the 1-second level to the 1-minute level, when it is 6 months old from the 1-minute level to the 2-minute level, and so on. The proposed solution introduces a time-granularity-based data structure, namely a relational time granularity table, that enables long-term storage of old data by maintaining it at different levels of granularity, and effective query processing owing to the reduction in data volume. In addition, the paper describes the implementation strategy, derived from a farming case study, using standard technologies.