Extreme Learning Machine

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 21,588 experts worldwide, ranked by the ideXlab platform

Nan Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Semi-supervised Extreme Learning Machine with wavelet kernel
    International Journal of Collaborative Intelligence, 2020
    Co-Authors: Nan Zhang
    Abstract:

    Extreme Learning Machine (ELM) is not only an effective classifier in supervised learning, but can also be applied to unsupervised and semi-supervised learning. The model structure of the semi-supervised Extreme Learning Machine (SS-ELM) is the same as that of ELM; the difference between them lies in the cost function. In this paper, we introduce a kernel function into SS-ELM and propose the semi-supervised Extreme Learning Machine with kernel (SS-KELM). Wavelet analysis has the characteristics of multivariate interpolation and sparse change, and the wavelet kernel is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Therefore, we propose the semi-supervised Extreme Learning Machine with wavelet kernel (SS-WKELM), based on the wavelet kernel function and SS-ELM. The experimental results show the feasibility and validity of SS-WKELM in classification.
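
    For readers unfamiliar with the base algorithm these abstracts build on, a minimal sketch of ELM training may help: input weights and biases are drawn at random and never tuned, and only the output weights are solved for in closed form via the Moore-Penrose pseudoinverse. The function names and the choice of a sigmoid activation here are illustrative, not taken from any of the papers above.

    ```python
    import numpy as np

    def elm_train(X, T, n_hidden=50, rng=None):
        """Basic ELM: random hidden layer, closed-form output weights."""
        rng = np.random.default_rng(rng)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never tuned
        b = rng.standard_normal(n_hidden)                # random hidden biases
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden-layer output matrix
        beta = np.linalg.pinv(H) @ T                     # Moore-Penrose least-squares solution
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta
    ```

    The semi-supervised and kernel variants discussed above change the objective solved for `beta` (adding a graph Laplacian term or replacing `H` with a kernel matrix), but keep this same overall structure.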

  • Unsupervised and semi-supervised Extreme Learning Machine with wavelet kernel for high dimensional data
    Memetic Computing, 2016
    Co-Authors: Nan Zhang, Shifei Ding
    Abstract:

    Extreme Learning Machine (ELM) is not only an effective classifier in supervised learning, but can also be applied to unsupervised and semi-supervised learning. The model structures of the unsupervised Extreme Learning Machine (US-ELM) and the semi-supervised Extreme Learning Machine (SS-ELM) are the same as that of ELM; the difference between them lies in the cost function. We introduce a kernel function into US-ELM and propose the unsupervised Extreme Learning Machine with kernel (US-KELM); the semi-supervised counterpart, SS-KELM, has been proposed previously. Wavelet analysis has the characteristics of multivariate interpolation and sparse change, and wavelet kernel functions have been widely used in support vector machines. Therefore, to combine the wavelet kernel function with US-ELM and SS-ELM, the unsupervised Extreme Learning Machine with wavelet kernel function (US-WKELM) and the semi-supervised Extreme Learning Machine with wavelet kernel function (SS-WKELM) are proposed in this paper. The experimental results show the feasibility and validity of US-WKELM and SS-WKELM in clustering and classification.
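
    A commonly used translation-invariant wavelet kernel of the kind referenced above takes a product of a mother wavelet over input dimensions. The sketch below assumes the Morlet mother wavelet h(u) = cos(1.75u)·exp(-u²/2), a standard choice in the SVM wavelet-kernel literature; the dilation parameter `a` and the specific mother wavelet used in these papers may differ.

    ```python
    import numpy as np

    def wavelet_kernel(x, z, a=1.0):
        """Translation-invariant wavelet kernel from the Morlet mother wavelet."""
        u = (x - z) / a
        # product over dimensions of h(u) = cos(1.75 u) * exp(-u^2 / 2)
        return float(np.prod(np.cos(1.75 * u) * np.exp(-(u ** 2) / 2.0)))

    def gram_matrix(X, a=1.0):
        """Kernel (Gram) matrix over a sample matrix X, one row per sample."""
        n = X.shape[0]
        K = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                K[i, j] = wavelet_kernel(X[i], X[j], a)
        return K
    ```

    In kernelized US-ELM/SS-ELM, this Gram matrix replaces the product of the random hidden-layer matrix with itself in the closed-form solve.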

  • Denoising Laplacian multi-layer Extreme Learning Machine
    Neurocomputing, 2016
    Co-Authors: Nan Zhang, Shifei Ding, Zhongzhi Shi
    Abstract:

    Most semi-supervised learning algorithms based on the manifold regularization framework are shallow learning algorithms, such as the semi-supervised ELM (SS-ELM) and the Laplacian smooth twin support vector machine (Lap-STSVM). The Multi-layer Extreme Learning Machine (ML-ELM) stacks the Extreme Learning Machine based auto-encoder (ELM-AE) to create a multi-layer neural network. ML-ELM not only approximates complicated functions but also achieves fast training times. However, the target outputs of ELM-AE are identical to its inputs, which cannot guarantee the effectiveness of the learned feature representations. We put forward the Extreme Learning Machine based denoising auto-encoder (ELM-DAE), which introduces a local denoising criterion into ELM-AE and is used as the basic component of Denoising ML-ELM. Resembling ML-ELM, Denoising ML-ELM stacks ELM-DAE to create a deep network. We then introduce manifold regularization into the model of Denoising ML-ELM and propose the denoising Laplacian ML-ELM (Denoising Lap-ML-ELM). Denoising Lap-ML-ELM is more efficient than SS-ELM in classification without requiring excessive training time. Experimental results show that Denoising ML-ELM and Denoising Lap-ML-ELM are effective learning algorithms.
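
    The ELM-DAE idea described above can be sketched as follows: corrupt the inputs with noise, compute the random hidden layer on the corrupted inputs, and solve the output weights so that the *clean* inputs are reconstructed. This is a hedged illustration; the corruption scheme (here simple additive Gaussian noise) and the tanh activation are assumptions, and the paper's exact local denoising criterion may differ.

    ```python
    import numpy as np

    def elm_dae(X, n_hidden, noise=0.1, rng=None):
        """ELM-based denoising auto-encoder: corrupt inputs, reconstruct clean ones."""
        rng = np.random.default_rng(rng)
        X_noisy = X + noise * rng.standard_normal(X.shape)  # corrupted copy of the inputs
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X_noisy @ W + b)
        beta = np.linalg.pinv(H) @ X       # targets are the *clean* inputs
        return W, b, beta

    def dae_features(X, W, b, beta):
        # following the ML-ELM convention, the learned output weights
        # define the feature mapping used by the next stacked layer
        return np.tanh(X @ beta.T)
    ```

    Stacking several such layers, each trained on the previous layer's features, yields the Denoising ML-ELM structure the abstract describes.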

  • Incremental Extreme Learning Machine based on deep feature embedded
    International Journal of Machine Learning and Cybernetics, 2015
    Co-Authors: Jian Zhang, Shifei Ding, Nan Zhang
    Abstract:

    The Extreme Learning Machine (ELM) algorithm is used to train single-hidden-layer feedforward neural networks, while the Deep Belief Network (DBN) is based on the Restricted Boltzmann Machine (RBM). The conventional DBN algorithm has some shortcomings: the Contrastive Divergence (CD) algorithm is not an ideal approximation to maximum likelihood estimation, and badly chosen parameters in the RBM algorithm produce a poor initialization of the DBN model, leading to longer training time and lower classification accuracy. To solve these problems, we summarize the features of Extreme Learning Machines and deep belief networks, and then propose the Incremental Extreme Learning Machine based on Deep Feature Embedded algorithm, which combines the deep feature-extraction ability of deep learning networks with the feature-mapping ability of the Extreme Learning Machine. Firstly, we introduce manifold regularization into our model to attenuate the complexity of the probability distribution. Secondly, we introduce the semi-restricted Boltzmann Machine (SRBM) into our algorithm and build a deep belief network based on SRBM. Thirdly, we introduce the idea of incremental feature mapping from ELM into the classifier of the DBN model. Finally, we show the validity of the algorithm by experiments.

  • Unsupervised Extreme Learning Machine with representational features
    International Journal of Machine Learning and Cybernetics, 2015
    Co-Authors: Shifei Ding, Nan Zhang, Jian Zhang, Xinzheng Xu
    Abstract:

    Extreme Learning Machine (ELM) is not only an effective classifier but also a useful clustering tool. The unsupervised Extreme Learning Machine (US-ELM) gives favorable performance compared to state-of-the-art clustering algorithms. The Extreme Learning Machine as an auto-encoder (ELM-AE) can obtain principal components that represent the original samples. The proposed unsupervised Extreme Learning Machine based on embedded features of ELM-AE (US-EF-ELM) algorithm applies ELM-AE to US-ELM. US-EF-ELM regards the embedded features of ELM-AE as the outputs of the US-ELM hidden layer, and uses US-ELM to obtain the embedded matrix of US-ELM. US-EF-ELM can handle multi-cluster clustering. The learning capability and computational efficiency of US-EF-ELM are the same as those of US-ELM. In experiments on UCI data sets, we compared the US-EF-ELM k-means algorithm with the k-means algorithm, the spectral clustering algorithm, and the US-ELM k-means algorithm in accuracy and efficiency.

Shifei Ding - One of the best experts on this subject based on the ideXlab platform.

  • Unsupervised and semi-supervised Extreme Learning Machine with wavelet kernel for high dimensional data
    Memetic Computing, 2016
    Co-Authors: Nan Zhang, Shifei Ding
    Abstract:

    Extreme Learning Machine (ELM) is not only an effective classifier in supervised learning, but can also be applied to unsupervised and semi-supervised learning. The model structures of the unsupervised Extreme Learning Machine (US-ELM) and the semi-supervised Extreme Learning Machine (SS-ELM) are the same as that of ELM; the difference between them lies in the cost function. We introduce a kernel function into US-ELM and propose the unsupervised Extreme Learning Machine with kernel (US-KELM); the semi-supervised counterpart, SS-KELM, has been proposed previously. Wavelet analysis has the characteristics of multivariate interpolation and sparse change, and wavelet kernel functions have been widely used in support vector machines. Therefore, to combine the wavelet kernel function with US-ELM and SS-ELM, the unsupervised Extreme Learning Machine with wavelet kernel function (US-WKELM) and the semi-supervised Extreme Learning Machine with wavelet kernel function (SS-WKELM) are proposed in this paper. The experimental results show the feasibility and validity of US-WKELM and SS-WKELM in clustering and classification.

  • Denoising Laplacian multi-layer Extreme Learning Machine
    Neurocomputing, 2016
    Co-Authors: Nan Zhang, Shifei Ding, Zhongzhi Shi
    Abstract:

    Most semi-supervised learning algorithms based on the manifold regularization framework are shallow learning algorithms, such as the semi-supervised ELM (SS-ELM) and the Laplacian smooth twin support vector machine (Lap-STSVM). The Multi-layer Extreme Learning Machine (ML-ELM) stacks the Extreme Learning Machine based auto-encoder (ELM-AE) to create a multi-layer neural network. ML-ELM not only approximates complicated functions but also achieves fast training times. However, the target outputs of ELM-AE are identical to its inputs, which cannot guarantee the effectiveness of the learned feature representations. We put forward the Extreme Learning Machine based denoising auto-encoder (ELM-DAE), which introduces a local denoising criterion into ELM-AE and is used as the basic component of Denoising ML-ELM. Resembling ML-ELM, Denoising ML-ELM stacks ELM-DAE to create a deep network. We then introduce manifold regularization into the model of Denoising ML-ELM and propose the denoising Laplacian ML-ELM (Denoising Lap-ML-ELM). Denoising Lap-ML-ELM is more efficient than SS-ELM in classification without requiring excessive training time. Experimental results show that Denoising ML-ELM and Denoising Lap-ML-ELM are effective learning algorithms.

  • Incremental Extreme Learning Machine based on deep feature embedded
    International Journal of Machine Learning and Cybernetics, 2015
    Co-Authors: Jian Zhang, Shifei Ding, Nan Zhang
    Abstract:

    The Extreme Learning Machine (ELM) algorithm is used to train single-hidden-layer feedforward neural networks, while the Deep Belief Network (DBN) is based on the Restricted Boltzmann Machine (RBM). The conventional DBN algorithm has some shortcomings: the Contrastive Divergence (CD) algorithm is not an ideal approximation to maximum likelihood estimation, and badly chosen parameters in the RBM algorithm produce a poor initialization of the DBN model, leading to longer training time and lower classification accuracy. To solve these problems, we summarize the features of Extreme Learning Machines and deep belief networks, and then propose the Incremental Extreme Learning Machine based on Deep Feature Embedded algorithm, which combines the deep feature-extraction ability of deep learning networks with the feature-mapping ability of the Extreme Learning Machine. Firstly, we introduce manifold regularization into our model to attenuate the complexity of the probability distribution. Secondly, we introduce the semi-restricted Boltzmann Machine (SRBM) into our algorithm and build a deep belief network based on SRBM. Thirdly, we introduce the idea of incremental feature mapping from ELM into the classifier of the DBN model. Finally, we show the validity of the algorithm by experiments.

  • A wavelet Extreme Learning Machine
    Neural Computing and Applications, 2015
    Co-Authors: Shifei Ding, Xinzheng Xu, Jian Zhang, Yanan Zhang
    Abstract:

    Extreme Learning Machine (ELM) has been widely used in various fields to overcome the low training speed of conventional neural networks. The kernel Extreme Learning Machine (KELM) introduces the kernel method into the ELM model, making it applicable in statistical machine learning. However, if the number of samples is too small, unbalanced samples may fail to reflect the statistical characteristics of the input data, so the learning ability of the model is affected. At the same time, the kernel functions used in KELM are conventional ones, so the selection of the kernel function can still be optimized. Based on the problems above, we introduce a weighting method into KELM to deal with unbalanced samples. Wavelet kernel functions have been widely used in support vector machines and obtain good classification performance. Therefore, to combine wavelet analysis with KELM, we introduce wavelet kernel functions into the KELM model, using a mixed kernel of a wavelet kernel and a sigmoid kernel, and introduce the weighting method into the KELM model to balance the sample distribution; we then propose the weighted wavelet mixed-kernel Extreme Learning Machine. The experimental results show that this method can effectively improve classification ability with better generalization. At the same time, the wavelet kernel functions perform very well compared with the conventional kernel functions in the KELM model.
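
    The mixed kernel and sample weighting described above can be sketched as below. The Morlet mother wavelet, the convex mixing weight `lam`, and the diagonal-weighting solve (in the spirit of weighted regularized kernel ELM) are illustrative assumptions; the paper's exact formulation may differ.

    ```python
    import numpy as np

    def mixed_kernel(x, z, lam=0.5, a=1.0, s=0.5, c=0.0):
        """Convex mix of a wavelet kernel and a sigmoid kernel (lam is a tunable weight)."""
        u = (x - z) / a
        k_wavelet = np.prod(np.cos(1.75 * u) * np.exp(-(u ** 2) / 2.0))
        k_sigmoid = np.tanh(s * np.dot(x, z) + c)
        return lam * k_wavelet + (1.0 - lam) * k_sigmoid

    def weighted_kelm_fit(K, T, sample_w, C=10.0):
        """One plausible weighted kernel-ELM solve: alpha = (I/C + diag(w) K)^-1 diag(w) T.

        sample_w would typically up-weight minority-class samples
        (e.g. inverse class frequency) to balance the distribution.
        """
        n = K.shape[0]
        Wd = np.diag(sample_w)
        return np.linalg.solve(np.eye(n) / C + Wd @ K, Wd @ T)
    ```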

  • Unsupervised Extreme Learning Machine with representational features
    International Journal of Machine Learning and Cybernetics, 2015
    Co-Authors: Shifei Ding, Nan Zhang, Jian Zhang, Xinzheng Xu
    Abstract:

    Extreme Learning Machine (ELM) is not only an effective classifier but also a useful clustering tool. The unsupervised Extreme Learning Machine (US-ELM) gives favorable performance compared to state-of-the-art clustering algorithms. The Extreme Learning Machine as an auto-encoder (ELM-AE) can obtain principal components that represent the original samples. The proposed unsupervised Extreme Learning Machine based on embedded features of ELM-AE (US-EF-ELM) algorithm applies ELM-AE to US-ELM. US-EF-ELM regards the embedded features of ELM-AE as the outputs of the US-ELM hidden layer, and uses US-ELM to obtain the embedded matrix of US-ELM. US-EF-ELM can handle multi-cluster clustering. The learning capability and computational efficiency of US-EF-ELM are the same as those of US-ELM. In experiments on UCI data sets, we compared the US-EF-ELM k-means algorithm with the k-means algorithm, the spectral clustering algorithm, and the US-ELM k-means algorithm in accuracy and efficiency.

Guang-bin Huang - One of the best experts on this subject based on the ideXlab platform.

  • Computational Intelligence - On-Line Sequential Extreme Learning Machine
    2020
    Co-Authors: Guang-bin Huang, Nanying Liang, Haijun Rong, P Saratchandran, N Sundararajan
    Abstract:

    The primitive Extreme Learning Machine (ELM) [1, 2, 3] with additive neurons and RBF kernels was implemented in batch mode. In this paper, its sequential modification based on the recursive least-squares (RLS) algorithm, referred to as the Online Sequential Extreme Learning Machine (OS-ELM), is introduced. Based on OS-ELM, the Online Sequential Fuzzy Extreme Learning Machine (Fuzzy-ELM) is also introduced to implement zero-order and first-order TSK models. The performance of OS-ELM and Fuzzy-ELM is evaluated and compared with other popular sequential learning algorithms, and experimental results on several real benchmark regression problems show that the proposed Online Sequential Extreme Learning Machine (OS-ELM) produces better generalization performance at a very fast learning speed.
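
    The RLS-based sequential update at the heart of OS-ELM can be sketched as follows: initialize from a first batch large enough to make the hidden-output matrix full column rank, then fold in each new chunk with the standard recursive least-squares recursion. Function names and the sigmoid hidden layer are illustrative.

    ```python
    import numpy as np

    def sigmoid_h(X, W, b):
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))

    def os_elm_init(X0, T0, W, b):
        """Initialization phase: requires rank(H0) == number of hidden neurons."""
        H0 = sigmoid_h(X0, W, b)
        P = np.linalg.inv(H0.T @ H0)
        beta = P @ H0.T @ T0
        return P, beta

    def os_elm_update(P, beta, Xk, Tk, W, b):
        """Sequential phase: fold in one chunk (any size, including one sample)."""
        H = sigmoid_h(Xk, W, b)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
        P = P - P @ H.T @ K @ H @ P                  # RLS covariance update
        beta = beta + P @ H.T @ (Tk - H @ beta)      # correct beta with the residual
        return P, beta
    ```

    In exact arithmetic, running the updates over all chunks reproduces the batch least-squares solution, which is what makes OS-ELM equivalent to batch ELM on the data seen so far.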

  • Extreme Learning Machine for Clustering
    Proceedings of ELM-2014 Volume 1, 2020
    Co-Authors: Chamara Kasun Liyanaarachchi Lekamalage, Yan Yang, Guang-bin Huang
    Abstract:

    Extreme Learning Machine (ELM) was originally introduced for regression and classification. This paper extends ELM to clustering using the Extreme Learning Machine Auto-Encoder (ELM-AE), which learns key features of the input data. The embedding created by multiplying the input data with the output weights of ELM-AE is shown to produce better clustering results than clustering in the original data space. Furthermore, ELM-AE is used to find the starting cluster points for k-means clustering, which produces better results than randomly assigning the cluster start points. The experimental results show that the proposed clustering algorithm, Extreme Learning Machine Auto-Encoder Clustering (ELM-AEC), is better than k-means clustering and is competitive with the Unsupervised Extreme Learning Machine (US-ELM).
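
    The embedding step the abstract describes (multiplying the input data by the ELM-AE output weights) can be sketched together with a minimal Lloyd-style k-means run on the embedded space. This is a hedged illustration, not the paper's implementation: the tanh activation, hidden-layer size, and plain k-means loop are assumptions.

    ```python
    import numpy as np

    def elm_ae_embedding(X, n_hidden, rng=None):
        """ELM-AE: output weights beta reconstruct X; embed via X @ beta.T."""
        rng = np.random.default_rng(rng)
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)
        beta = np.linalg.pinv(H) @ X       # auto-encoder targets are X itself
        return X @ beta.T                  # multiply input data by the output weights

    def kmeans(E, k, iters=50, rng=None):
        """Minimal Lloyd's k-means on the embedded data E."""
        rng = np.random.default_rng(rng)
        centers = E[rng.choice(len(E), size=k, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(E[:, None] - centers[None], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = E[labels == j].mean(axis=0)
        return labels
    ```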

  • Multi layer multi objective Extreme Learning Machine
    2017 IEEE International Conference on Image Processing (ICIP), 2017
    Co-Authors: Chamara Kasun Liyanaarachchi Lekamalage, Guang-bin Huang, Kang Song, Ken Liang
    Abstract:

    Fully connected multi-layer neural networks such as Deep Boltzmann Machines (DBM) perform better than fully connected single-layer neural networks in image classification tasks and have a smaller number of hidden-layer neurons than Extreme Learning Machine (ELM) based fully connected multi-layer neural networks such as Multi-Layer ELM (ML-ELM) and Hierarchical ELM (H-ELM). However, ML-ELM and H-ELM have a smaller training time than DBM. This paper introduces a fully connected multi-layer neural network, referred to as the Multi-Layer Multi-Objective Extreme Learning Machine (MLMO-ELM), which uses a multi-objective formulation to pass the label and non-linear information in order to learn a network model with a similar number of hidden-layer parameters to DBM and a smaller training time than DBM. The experimental results show that MLMO-ELM outperforms DBM, ML-ELM and H-ELM on the OCR and NORB datasets.

  • Voting based online sequential Extreme Learning Machine for multi-class classification
    Proceedings - IEEE International Symposium on Circuits and Systems, 2013
    Co-Authors: Jiuwen Cao, Zhiping Lin, Guang-bin Huang
    Abstract:

    In this paper, we propose a voting-based online sequential Extreme Learning Machine (VOS-ELM) for single-hidden-layer feedforward networks (SLFNs) to perform online sequential multi-class classification. Utilizing the recent voting-based Extreme Learning Machine (V-ELM) and the online sequential Extreme Learning Machine (OS-ELM), the newly developed VOS-ELM is able to classify online sequences by learning data one-by-one or chunk-by-chunk with fixed or varying chunk size, and to reach a higher classification accuracy than the original OS-ELM. Simulations on several real-world classification datasets show that VOS-ELM outperforms OS-ELM as well as several state-of-the-art online sequential algorithms. © 2013 IEEE.

  • Voting based Extreme Learning Machine
    Information Sciences, 2012
    Co-Authors: Guang-bin Huang
    Abstract:

    This paper proposes an improved learning algorithm for classification, referred to as the voting-based Extreme Learning Machine. The proposed method incorporates the voting method into the popular Extreme Learning Machine (ELM) for classification applications. Simulations on many real-world classification datasets demonstrate that this algorithm generally outperforms the original ELM algorithm as well as several recent classification algorithms.
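
    The voting idea is simple to sketch: train several ELMs independently (each with its own random hidden layer) and predict by majority vote over their class decisions. The ensemble size, tanh activation, and function names below are illustrative assumptions, not the paper's exact configuration.

    ```python
    import numpy as np

    def train_one_elm(X, y, n_classes, n_hidden, rng):
        """Train one independent ELM on one-hot targets."""
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)
        T = np.eye(n_classes)[y]                 # one-hot encode the labels
        beta = np.linalg.pinv(H) @ T
        return W, b, beta

    def v_elm_predict(X, models):
        """Majority vote over the class predictions of several independent ELMs."""
        votes = np.stack([np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
                          for W, b, beta in models], axis=1)
        return np.array([np.bincount(v).argmax() for v in votes])
    ```

    An odd ensemble size avoids most two-way ties in binary problems; ties that do occur fall to the lowest class index under `bincount(...).argmax()`.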

N Sundararajan - One of the best experts on this subject based on the ideXlab platform.

  • Computational Intelligence - On-Line Sequential Extreme Learning Machine
    2020
    Co-Authors: Guang-bin Huang, Nanying Liang, Haijun Rong, P Saratchandran, N Sundararajan
    Abstract:

    The primitive Extreme Learning Machine (ELM) [1, 2, 3] with additive neurons and RBF kernels was implemented in batch mode. In this paper, its sequential modification based on the recursive least-squares (RLS) algorithm, referred to as the Online Sequential Extreme Learning Machine (OS-ELM), is introduced. Based on OS-ELM, the Online Sequential Fuzzy Extreme Learning Machine (Fuzzy-ELM) is also introduced to implement zero-order and first-order TSK models. The performance of OS-ELM and Fuzzy-ELM is evaluated and compared with other popular sequential learning algorithms, and experimental results on several real benchmark regression problems show that the proposed Online Sequential Extreme Learning Machine (OS-ELM) produces better generalization performance at a very fast learning speed.

  • Fully complex Extreme Learning Machine
    Neurocomputing, 2005
    Co-Authors: Mingbin Li, Guang-bin Huang, P Saratchandran, N Sundararajan
    Abstract:

    Recently, Huang et al. proposed a new learning algorithm for feedforward neural networks, the Extreme Learning Machine (ELM), which gives better performance than traditional tuning-based learning methods in terms of generalization and learning speed. In this paper, we first extend the ELM algorithm from the real domain to the complex domain, and then apply the fully complex Extreme Learning Machine (C-ELM) to nonlinear channel equalization applications. The simulation results show that the C-ELM equalizer significantly outperforms other neural network equalizers such as the complex minimal resource allocation network (CMRAN), complex radial basis function (CRBF) network and complex backpropagation (CBP) equalizers. C-ELM achieves a much lower symbol error rate (SER) and has a faster learning speed.
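
    Extending ELM to the complex domain is mechanically straightforward in NumPy, since `linalg.pinv` uses the conjugate transpose for complex matrices. The sketch below uses complex tanh as the fully complex activation with small random weights to keep arguments away from its poles near ±iπ/2; the scaling and activation choice are assumptions for illustration, not the paper's exact setup.

    ```python
    import numpy as np

    def c_elm_train(X, T, n_hidden=30, rng=None):
        """Fully complex ELM: complex random hidden layer, complex pseudoinverse solve."""
        rng = np.random.default_rng(rng)
        # small complex random input weights and biases (scaling avoids tanh poles)
        W = 0.1 * (rng.standard_normal((X.shape[1], n_hidden))
                   + 1j * rng.standard_normal((X.shape[1], n_hidden)))
        b = 0.1 * (rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden))
        H = np.tanh(X @ W + b)             # complex tanh as the fully complex activation
        beta = np.linalg.pinv(H) @ T       # pinv takes the conjugate transpose for complex H
        return W, b, beta
    ```

    In a channel-equalization setting, `X` would hold windows of received complex symbols and `T` the transmitted symbols to recover.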

  • On-line sequential Extreme Learning Machine
    Computational Intelligence, 2005
    Co-Authors: Guang-bin Huang, Nanying Liang, Haijun Rong, P Saratchandran, N Sundararajan
    Abstract:

    The primitive Extreme Learning Machine (ELM) [1, 2, 3] with additive neurons and RBF kernels was implemented in batch mode. In this paper, its sequential modification based on the recursive least-squares (RLS) algorithm, referred to as the Online Sequential Extreme Learning Machine (OS-ELM), is introduced. Based on OS-ELM, the Online Sequential Fuzzy Extreme Learning Machine (Fuzzy-ELM) is also introduced to implement zero-order and first-order TSK models. The performance of OS-ELM and Fuzzy-ELM is evaluated and compared with other popular sequential learning algorithms, and experimental results on several real benchmark regression problems show that the proposed Online Sequential Extreme Learning Machine (OS-ELM) produces better generalization performance at a very fast learning speed.

Yuhai Zhao - One of the best experts on this subject based on the ideXlab platform.

  • ELM∗: distributed Extreme Learning Machine with MapReduce
    World Wide Web, 2014
    Co-Authors: Junchang Xin, Zhiqiong Wang, Linlin Ding, Guoren Wang, Chen Chen, Yuhai Zhao
    Abstract:

    Extreme Learning Machine (ELM) has been widely used in many fields such as text classification, image recognition and bioinformatics, as it provides good generalization performance at an extremely fast learning speed. However, as the data volume in real-world applications becomes larger and larger, the traditional centralized ELM cannot learn such massive data efficiently. Therefore, in this paper, we propose a novel distributed Extreme Learning Machine based on the MapReduce framework, named ELM*, which overcomes the weakness of traditional ELM in learning from huge datasets. Firstly, after adequately analyzing the properties of traditional ELM, it can be found that the most expensive part of computing the Moore-Penrose generalized inverse in the output weight calculation is the matrix multiplication operator. Then, as the matrix multiplication operator is decomposable, a distributed Extreme Learning Machine (ELM*) based on the MapReduce framework can be developed, which first calculates the matrix multiplication in parallel with MapReduce, and then calculates the corresponding output weight vector with centralized computing. In this way, massive data can be learned effectively. Finally, we conduct extensive experiments on synthetic data to verify the effectiveness and efficiency of the proposed ELM* in learning massive data under various experimental settings.
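
    The decomposability the abstract relies on is that HᵀH and HᵀT are sums over data partitions: HᵀH = Σₖ HₖᵀHₖ and HᵀT = Σₖ HₖᵀTₖ, so each mapper can emit its partial products and a single reducer can sum them and solve the small L×L system centrally. The sketch below simulates this with plain Python functions; the ridge term `reg` is an illustrative numerical-stability assumption, not part of the paper's formulation.

    ```python
    import numpy as np

    def map_partition(Hk, Tk):
        """Mapper: emit the partial products for one data split."""
        return Hk.T @ Hk, Hk.T @ Tk

    def reduce_and_solve(partials, reg=1e-6):
        """Reducer: sum partials and solve the small L x L system centrally."""
        U = sum(p[0] for p in partials)       # accumulates H^T H
        V = sum(p[1] for p in partials)       # accumulates H^T T
        L = U.shape[0]
        # beta = (H^T H + reg*I)^(-1) H^T T, computed without ever forming full H
        return np.linalg.solve(U + reg * np.eye(L), V)
    ```

    Only L×L and L×c matrices cross the network, where L is the number of hidden neurons, so the communication cost is independent of the number of samples.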

  • ELM*: distributed Extreme Learning Machine with MapReduce
    World Wide Web, 2013
    Co-Authors: Zhiqiong Wang, Linlin Ding, Guoren Wang, Chen Chen, Yuhai Zhao
    Abstract:

    Extreme Learning Machine (ELM) has been widely used in many fields such as text classification, image recognition and bioinformatics, as it provides good generalization performance at an extremely fast learning speed. However, as the data volume in real-world applications becomes larger and larger, the traditional centralized ELM cannot learn such massive data efficiently. Therefore, in this paper, we propose a novel distributed Extreme Learning Machine based on the MapReduce framework, named ELM*, which overcomes the weakness of traditional ELM in learning from huge datasets. Firstly, after adequately analyzing the properties of traditional ELM, it can be found that the most expensive part of computing the Moore-Penrose generalized inverse in the output weight calculation is the matrix multiplication operator. Then, as the matrix multiplication operator is decomposable, a distributed Extreme Learning Machine (ELM*) based on the MapReduce framework can be developed, which first calculates the matrix multiplication in parallel with MapReduce, and then calculates the corresponding output weight vector with centralized computing. In this way, massive data can be learned effectively. Finally, we conduct extensive experiments on synthetic data to verify the effectiveness and efficiency of the proposed ELM* in learning massive data under various experimental settings.