Incremental Construction

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 14,040 Experts worldwide ranked by the ideXlab platform

Howie Choset - One of the best experts on this subject based on the ideXlab platform.

  • Sensor-Based Coverage of Unknown Environments: Incremental Construction of Morse Decompositions
    The International Journal of Robotics Research, 2002
    Co-Authors: Ercan U Acar, Howie Choset
    Abstract:

    The goal of coverage path planning is to determine a path that passes a detector over all points in an environment. This work prescribes a provably complete coverage path planner for robots in unknown spaces. We achieve coverage using Morse decompositions, which are exact cellular decompositions whose cells are defined in terms of critical points of Morse functions. Generically, two critical points define a cell. We encode the topology of the Morse decomposition using a graph that has nodes corresponding to the critical points and edges representing the cells defined by pairs of critical points. The robot simultaneously covers the space while incrementally constructing this graph. To achieve this, the robot must sense all the critical points. Therefore, we first introduce a critical point sensing method that uses range sensors. Then we present a provably complete algorithm which guarantees that the robot will encounter all the critical points, thereby constructing the full graph, i.e., achieving complete coverage.
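As a concrete illustration of the graph encoding described in this abstract, the sketch below is a minimal, hypothetical data structure (not the authors' implementation): nodes are sensed critical points, each edge records the cell bounded by a pair of critical points, and both are added incrementally as coverage proceeds. The labels "c1"–"c3" are placeholders.

```python
class DecompositionGraph:
    """Minimal sketch of an incrementally built Morse-decomposition graph."""

    def __init__(self):
        self.nodes = set()   # sensed critical points
        self.edges = set()   # cells, each a frozenset of two critical points

    def sense_critical_point(self, cp):
        # A newly sensed critical point becomes a node of the graph.
        self.nodes.add(cp)

    def add_cell(self, cp_a, cp_b):
        # Generically, a cell is bounded by exactly two critical points.
        self.nodes.update((cp_a, cp_b))
        self.edges.add(frozenset((cp_a, cp_b)))

    def is_fully_explored(self, expected_cells):
        # Coverage is complete once every expected cell appears as an edge.
        return all(frozenset(c) in self.edges for c in expected_cells)


g = DecompositionGraph()
g.sense_critical_point("c1")
g.add_cell("c1", "c2")   # the robot finishes covering the cell between c1 and c2
g.add_cell("c2", "c3")
print(g.is_fully_explored([("c1", "c2"), ("c2", "c3")]))  # True
```

The point of the sketch is only the bookkeeping: complete coverage corresponds to the graph containing every cell-defining edge.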

  • Sensor-Based Exploration: Incremental Construction of the Hierarchical Generalized Voronoi Graph
    The International Journal of Robotics Research, 2000
    Co-Authors: Howie Choset, Sean Walker, Kunnayut Eiamsaard, Joel W Burdick
    Abstract:

    This paper prescribes an incremental procedure to construct roadmaps of unknown environments. Recall that a roadmap is a geometric structure that a robot uses to plan a path between two points in an environment. If the robot knows the roadmap, then it knows the environment. Likewise, if the robot constructs the roadmap, then it has effectively explored the environment. This paper focuses on the hierarchical generalized Voronoi graph (HGVG), detailed in the companion paper in this issue. The incremental construction procedure of the HGVG requires only local distance sensor measurements, and therefore the method can be used as a basis for sensor-based planning algorithms. Simulations and experiments using a mobile robot with ultrasonic sensors verify this approach.

  • Sensor-Based Planning II: Incremental Construction of the Generalized Voronoi Graph
    International Conference on Robotics and Automation, 1995
    Co-Authors: Howie Choset, Joel W Burdick
    Abstract:

    This paper prescribes an incremental procedure to construct the generalized Voronoi graph (GVG) and the hierarchical generalized Voronoi graph (HGVG), detailed in the companion paper. The procedure requires only local distance sensor measurements, and therefore the method can be used as a basis for sensor-based planning algorithms.
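To make the GVG idea concrete: it is the set of points equidistant from the two closest obstacles, and incremental construction amounts to tracing that equidistant set from local distance measurements. The sketch below is not the papers' algorithm; it simulates distance readings for two hypothetical point obstacles and locates one equidistant point per "slice" by bisection, so tracing successive slices builds up the edge.

```python
import math

def dist(p, q):
    # Stand-in for a local distance sensor reading to a point obstacle.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def edge_point(y, obs_a, obs_b, lo=-10.0, hi=10.0):
    """Bisect along x for the point at height y equidistant from both obstacles."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if dist((mid, y), obs_a) < dist((mid, y), obs_b):
            lo = mid   # still closer to obstacle A: move toward B
        else:
            hi = mid
    return ((lo + hi) / 2.0, y)

# Incrementally trace the Voronoi edge between obstacles at (0, 0) and (2, 0);
# every traced point should lie on the bisector x = 1.
edge = [edge_point(y, (0.0, 0.0), (2.0, 0.0)) for y in (0.5, 1.0, 1.5)]
print([round(x, 6) for x, _ in edge])  # [1.0, 1.0, 1.0]
```

A real sensor-based tracer would step along the edge using range readings rather than known obstacle positions, but the equidistance criterion being enforced is the same.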

Geoffrey I Webb - One of the best experts on this subject based on the ideXlab platform.

  • An Incremental Construction of Deep Neuro-Fuzzy System for Continual Learning of Nonstationary Data Streams
    IEEE Transactions on Fuzzy Systems, 2020
    Co-Authors: Mahardhika Pratama, Witold Pedrycz, Geoffrey I Webb
    Abstract:

    Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than that of deep structures. This article proposes a novel self-organizing deep FNN, namely the deep evolving fuzzy neural network (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method, which not only detects the covariate drift, i.e., variations of the input space, but also accurately identifies the real drift, i.e., dynamic changes of both feature space and target space. The DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely the generic classifier, drives the hidden layer. It is equipped with an automatic feature selection method, which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent the uncontrollable growth of the dimensionality of the input space due to the nature of the feature augmentation approach in building a deep network structure. The DEVFNN works in a samplewise fashion and is suitable for data stream applications. The efficacy of the DEVFNN has been thoroughly evaluated using seven datasets with nonstationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, where the DEVFNN demonstrates improved classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.
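The "deepen on drift" idea can be sketched independently of DEVFNN's internals. The toy below is not the authors' detector: it uses a simple mean-shift test on a stream of prediction errors, with a hypothetical window size and threshold, and stacks one additional layer whenever the recent error mean jumps.

```python
import statistics

class DepthController:
    """Toy drift-driven depth control: deepen the stack when errors shift."""

    def __init__(self, window=20, threshold=0.3):
        self.window = window        # samples per comparison window (assumption)
        self.threshold = threshold  # mean-shift trigger level (assumption)
        self.errors = []
        self.depth = 1              # number of stacked layers

    def observe(self, error):
        # Feed one prediction error; compare the two most recent windows.
        self.errors.append(error)
        if len(self.errors) >= 2 * self.window:
            old = statistics.mean(self.errors[-2 * self.window:-self.window])
            new = statistics.mean(self.errors[-self.window:])
            if new - old > self.threshold:  # drift: errors jumped upward
                self.depth += 1             # stack an additional layer
                self.errors.clear()         # restart monitoring for new layer


ctl = DepthController()
for e in [0.1] * 20 + [0.9] * 20:   # abrupt drift in the error stream
    ctl.observe(e)
print(ctl.depth)  # 2
```

In DEVFNN the trigger distinguishes covariate drift from real drift and the stacked layer is a full fuzzy classifier; here only the control-flow skeleton is shown.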

  • An Incremental Construction of Deep Neuro-Fuzzy System for Continual Learning of Non-Stationary Data Streams
    arXiv: Artificial Intelligence, 2018
    Co-Authors: Mahardhika Pratama, Witold Pedrycz, Geoffrey I Webb
    Abstract:

    Existing FNNs are mostly developed under a shallow network configuration having lower generalization power than that of deep structures. This paper proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, i.e., variations of the input space, but also accurately identifies the real drift, i.e., dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely gClass, drives the hidden layer. It is equipped with an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of the dimensionality of the input space due to the nature of the feature augmentation approach in building a deep network structure. DEVFNN works in a sample-wise fashion and is suitable for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using seven datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, where DEVFNN demonstrates improved classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.

Joel W Burdick - One of the best experts on this subject based on the ideXlab platform.

  • Sensor-Based Exploration: Incremental Construction of the Hierarchical Generalized Voronoi Graph
    The International Journal of Robotics Research, 2000
    Co-Authors: Howie Choset, Sean Walker, Kunnayut Eiamsaard, Joel W Burdick
    Abstract:

    This paper prescribes an incremental procedure to construct roadmaps of unknown environments. Recall that a roadmap is a geometric structure that a robot uses to plan a path between two points in an environment. If the robot knows the roadmap, then it knows the environment. Likewise, if the robot constructs the roadmap, then it has effectively explored the environment. This paper focuses on the hierarchical generalized Voronoi graph (HGVG), detailed in the companion paper in this issue. The incremental construction procedure of the HGVG requires only local distance sensor measurements, and therefore the method can be used as a basis for sensor-based planning algorithms. Simulations and experiments using a mobile robot with ultrasonic sensors verify this approach.

  • Sensor-Based Planning II: Incremental Construction of the Generalized Voronoi Graph
    International Conference on Robotics and Automation, 1995
    Co-Authors: Howie Choset, Joel W Burdick
    Abstract:

    This paper prescribes an incremental procedure to construct the generalized Voronoi graph (GVG) and the hierarchical generalized Voronoi graph (HGVG), detailed in the companion paper. The procedure requires only local distance sensor measurements, and therefore the method can be used as a basis for sensor-based planning algorithms.

Mahardhika Pratama - One of the best experts on this subject based on the ideXlab platform.

  • An Incremental Construction of Deep Neuro-Fuzzy System for Continual Learning of Nonstationary Data Streams
    IEEE Transactions on Fuzzy Systems, 2020
    Co-Authors: Mahardhika Pratama, Witold Pedrycz, Geoffrey I Webb
    Abstract:

    Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than that of deep structures. This article proposes a novel self-organizing deep FNN, namely the deep evolving fuzzy neural network (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method, which not only detects the covariate drift, i.e., variations of the input space, but also accurately identifies the real drift, i.e., dynamic changes of both feature space and target space. The DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely the generic classifier, drives the hidden layer. It is equipped with an automatic feature selection method, which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent the uncontrollable growth of the dimensionality of the input space due to the nature of the feature augmentation approach in building a deep network structure. The DEVFNN works in a samplewise fashion and is suitable for data stream applications. The efficacy of the DEVFNN has been thoroughly evaluated using seven datasets with nonstationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, where the DEVFNN demonstrates improved classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.

  • PALM: An Incremental Construction of Hyperplanes for Data Stream Regression
    IEEE Transactions on Fuzzy Systems, 2019
    Co-Authors: Meftahul Ferdaus, Mahardhika Pratama, Sreenatha G Anavatti, Matthew Garratt
    Abstract:

    Data streams have been the underlying challenge in the age of big data because they call for real-time data processing in the absence of a retraining process and/or an iterative learning approach. In the fuzzy system community, data streams are handled by the algorithmic development of self-adaptive neuro-fuzzy systems (SANFSs), characterized by a single-pass learning mode and an open structure property that enables effective handling of the fast and rapidly changing nature of data streams. The underlying bottleneck of SANFSs lies in their design principle, which involves a high number of free parameters (rule premise and rule consequent) to be adapted in the training process. This figure can even double in the case of a type-2 fuzzy system. In this paper, a novel SANFS, namely the parsimonious learning machine (PALM), is proposed. PALM features the utilization of a new type of fuzzy rule based on the concept of hyperplane clustering, which significantly reduces the number of network parameters because it has no rule premise parameters. PALM is proposed in both type-1 and type-2 fuzzy systems, both of which characterize a fully dynamic rule-based system; that is, it is capable of automatically generating, merging, and tuning the hyperplane-based fuzzy rules in a single-pass manner. Moreover, an extension of PALM, namely recurrent PALM, is proposed, adopting the concept of the teacher-forcing mechanism from the deep learning literature. The efficacy of PALM has been evaluated through a numerical study with six real-world and synthetic data streams from public databases and our own real-world project of autonomous vehicles. The proposed model showcases significant improvements in terms of computational complexity and number of required parameters against several renowned SANFSs, while attaining comparable and often better predictive accuracy.
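The key structural claim, that a hyperplane-based rule needs no premise parameters, can be illustrated with a toy in the spirit of PALM (not the authors' formulation): each rule is just a hyperplane consequent y = w·x + b, and a sample's firing strength is derived from its distance to that hyperplane. The kernel width `gamma` is an assumption for demonstration.

```python
import math

class HyperplaneRule:
    """Toy hyperplane-based fuzzy rule: the consequent IS the rule."""

    def __init__(self, w, b):
        self.w, self.b = w, b   # consequent parameters; no premise parameters

    def predict(self, x):
        # Consequent output: y = w . x + b
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def firing(self, x, y, gamma=1.0):
        # Membership decays with the point's distance to the hyperplane
        # in joint (x, y) space, so no separate premise clusters are needed.
        d = abs(self.predict(x) - y) / math.sqrt(1.0 + sum(wi * wi for wi in self.w))
        return math.exp(-gamma * d * d)


r1 = HyperplaneRule([2.0], 0.0)   # models y = 2x
r2 = HyperplaneRule([0.0], 5.0)   # models y = 5
print(r1.predict([3.0]))              # 6.0
print(round(r1.firing([3.0], 6.0), 3))  # 1.0  (point lies on the hyperplane)
```

Since the membership is computed from the same parameters as the consequent, the per-rule parameter count is roughly halved relative to a premise-plus-consequent SANFS rule, which is the reduction the abstract refers to.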

  • An Incremental Construction of Deep Neuro-Fuzzy System for Continual Learning of Non-Stationary Data Streams
    arXiv: Artificial Intelligence, 2018
    Co-Authors: Mahardhika Pratama, Witold Pedrycz, Geoffrey I Webb
    Abstract:

    Existing FNNs are mostly developed under a shallow network configuration having lower generalization power than that of deep structures. This paper proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, i.e., variations of the input space, but also accurately identifies the real drift, i.e., dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely gClass, drives the hidden layer. It is equipped with an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of the dimensionality of the input space due to the nature of the feature augmentation approach in building a deep network structure. DEVFNN works in a sample-wise fashion and is suitable for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using seven datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, where DEVFNN demonstrates improved classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.

  • PALM: An Incremental Construction of Hyperplanes for Data Stream Regression
    arXiv: Neural and Evolutionary Computing, 2018
    Co-Authors: Meftahul Ferdaus, Mahardhika Pratama, Sreenatha G Anavatti, Matthew Garratt
    Abstract:

    Data streams have been the underlying challenge in the age of big data because they call for real-time data processing in the absence of a retraining process and/or an iterative learning approach. In the realm of the fuzzy system community, data streams are handled by the algorithmic development of self-adaptive neuro-fuzzy systems (SANFSs), characterized by a single-pass learning mode and an open structure property which enables effective handling of the fast and rapidly changing nature of data streams. The underlying bottleneck of SANFSs lies in their design principle, which involves a high number of free parameters (rule premise and rule consequent) to be adapted in the training process. This figure can even double in the case of a type-2 fuzzy system. In this work, a novel SANFS, namely the parsimonious learning machine (PALM), is proposed. PALM features the utilization of a new type of fuzzy rule based on the concept of hyperplane clustering, which significantly reduces the number of network parameters because it has no rule premise parameters. PALM is proposed in both type-1 and type-2 fuzzy systems, both of which characterize a fully dynamic rule-based system; that is, it is capable of automatically generating, merging, and tuning the hyperplane-based fuzzy rules in a single-pass manner. Moreover, an extension of PALM, namely recurrent PALM (rPALM), is proposed, adopting the concept of the teacher-forcing mechanism from the deep learning literature. The efficacy of PALM has been evaluated through a numerical study with six real-world and synthetic data streams from public databases and our own real-world project of autonomous vehicles. The proposed model showcases significant improvements in terms of computational complexity and number of required parameters against several renowned SANFSs, while attaining comparable and often better predictive accuracy.

Vasant Honavar - One of the best experts on this subject based on the ideXlab platform.

  • Constructive Neural-Network Learning Algorithms for Pattern Classification
    IEEE Transactions on Neural Networks, 2000
    Co-Authors: Rajesh Parekh, Jihoon Yang, Vasant Honavar
    Abstract:

    Constructive learning algorithms offer an attractive approach for the incremental construction of near-minimal neural-network architectures for pattern classification. They help overcome the need for ad hoc and often inappropriate choices of network topology in algorithms that search for suitable weights in a priori fixed network architectures. Several such algorithms are proposed in the literature and shown to converge to zero classification errors (under certain assumptions) on tasks that involve learning a binary to binary mapping (i.e., classification problems involving binary-valued input attributes and two output categories). We present two constructive learning algorithms, MPyramid-real and MTiling-real, that extend the pyramid and tiling algorithms, respectively, for learning real to M-ary mappings (i.e., classification problems involving real-valued input attributes and multiple output classes). We prove the convergence of these algorithms and empirically demonstrate their applicability to practical pattern classification problems. Additionally, we show how the incorporation of a local pruning step can eliminate several redundant neurons from MTiling-real networks.
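The constructive principle, stripped to its essentials, is a loop that keeps adding units until the training error reaches zero. The toy below is not MPyramid-real or MTiling-real; it uses hypothetical prototype units, each memorizing one still-misclassified sample, which makes convergence on a finite, non-contradictory training set immediate while preserving the shape of the algorithm.

```python
def classify(units, x, default=0):
    """Return the label of the first unit whose prototype matches x exactly."""
    for proto, label in units:
        if proto == x:
            return label
    return default

def construct(samples):
    """Grow the network until every training sample is classified correctly."""
    units = []
    while True:
        wrong = [(x, y) for x, y in samples if classify(units, x) != y]
        if not wrong:
            return units            # zero training error: stop growing
        units.append(wrong[0])      # add one unit that fixes a misclassification


# XOR is not linearly separable, so a fixed single unit cannot solve it,
# but the constructive loop converges by adding units as needed.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = construct(xor)
print(len(net), all(classify(net, x) == y for x, y in xor))  # 2 True
```

Real constructive algorithms add trained threshold units rather than prototypes, and their contribution is proving that each addition strictly reduces the error, which is what guarantees termination.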

  • Constructive Neural Network Learning Algorithms for Multi-Category Real-Valued Pattern Classification
    1997
    Co-Authors: Rajesh Parekh, Jihoon Yang, Vasant Honavar
    Abstract:

    Constructive learning algorithms offer an attractive approach for incremental construction of potentially near-minimal neural network architectures for pattern classification tasks. These algorithms help overcome the need for ad hoc and often inappropriate choice of network topology in the use of algorithms that search for a suitable weight setting in an a priori fixed network architecture. Several such algorithms proposed in the literature have been shown to converge to zero classification errors (under certain assumptions) on finite, non-contradictory training sets in two-category classification tasks. The convergence proofs for each of these algorithms (with the exception of the Upstart and Perceptron Cascade) rely on the assumption that the pattern attributes are either binary or bipolar valued. This paper explores multi-category extensions of several constructive neural network learning algorithms for classification tasks where the input patterns may take on real-valued attributes. In each case, we establish the convergence to zero classification errors on a multi-category classification task. Results of experiments with non-linearly separable multi-category datasets demonstrate the feasibility of this approach and suggest several interesting directions for future research. This research was partially supported by the National Science Foundation grants IRI-9409580 and IRI-9643299 to Vasant Honavar.

  • Constructive Neural Network Learning Algorithms for Multi-Category Pattern Classification
    1995
    Co-Authors: Rajesh Parekh, Jihoon Yang, Vasant Honavar
    Abstract:

    Constructive learning algorithms offer an approach for incremental construction of potentially near-minimal neural network architectures for pattern classification tasks. Such algorithms help overcome the need for ad hoc and often inappropriate choice of network topology in the use of algorithms that search for a suitable weight setting in an otherwise a priori fixed network architecture. Several such algorithms proposed in the literature have been shown to converge to zero classification errors (under certain assumptions) on a finite, non-contradictory training set in a two-category classification problem. This paper explores multi-category extensions of several constructive neural network learning algorithms for pattern classification. In each case, we establish the convergence to zero classification errors on a multi-category classification task (under certain assumptions). Results of experiments with non-linearly separable multi-category data sets demonstrate the feasibility of this approach to multi-category pattern classification and also suggest several interesting directions for future research. This research was partially supported by the National Science Foundation grant IRI-9409580 to Vasant Honavar.