Matrix Multiplication


The experts below are selected from a list of 57 experts worldwide, ranked by the ideXlab platform.

Chao Wang - One of the best experts on this subject based on the ideXlab platform.

  • Distributed extreme learning machine with kernels based on MapReduce
    Neurocomputing, 2015
    Co-Authors: Xin Bi, Xiangguo Zhao, Guoren Wang, Pan Zhang, Chao Wang
    Abstract:

    Extreme Learning Machine (ELM) has shown good generalization performance and extremely fast learning speed in many learning applications. Recently, it has been shown that, from the optimization point of view, ELM outperforms Support Vector Machine (SVM) while imposing fewer constraints. ELM provides unified learning schemes with a wide range of feature mappings. Among these unified algorithms, ELM with kernels applies kernels instead of random feature mappings. However, with the exponentially increasing volume of training data in massive learning applications, centralized ELM with kernels suffers from the heavy memory consumption of large matrix operations. Moreover, due to their high communication cost, some of these matrix operations cannot be directly implemented on shared-nothing distributed computing models such as MapReduce. This paper proposes a distributed solution named Distributed Kernelized ELM (DK-ELM), which implements ELM with kernels on MapReduce. Distributed kernel matrix calculation and matrix-vector multiplication are applied to parallelize the computation of DK-ELM. Extensive experiments on massive datasets verify both the scalability and training performance of DK-ELM. Experimental results show that DK-ELM scales well for massive learning applications.
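The abstract above centers on distributing matrix-vector multiplication over a shared-nothing map/reduce model. The paper's actual MapReduce implementation is not reproduced here; the following is a minimal single-process Python sketch of the pattern (function names and data layout are hypothetical), in which mappers hold rows of the matrix, the vector is broadcast to every mapper, and the reducer sums partial products keyed by row index:

```python
from collections import defaultdict

def map_phase(matrix_rows, vector):
    """Map step: each mapper holds (row_index, row) pairs plus the
    broadcast vector, and emits (row_index, partial dot product)."""
    for i, row in matrix_rows:
        yield i, sum(a * x for a, x in zip(row, vector))

def reduce_phase(pairs):
    """Reduce step: sum partial results per row index. The sum is
    trivial here because each row is mapped once; with column-
    partitioned blocks, several partials per row must be combined."""
    totals = defaultdict(float)
    for i, partial in pairs:
        totals[i] += partial
    return [totals[i] for i in sorted(totals)]

# Toy example: A (3x3) times x, stored as (row_index, row) records.
A = [(0, [1.0, 2.0, 0.0]),
     (1, [0.0, 1.0, 3.0]),
     (2, [4.0, 0.0, 1.0])]
x = [1.0, 1.0, 1.0]
print(reduce_phase(map_phase(A, x)))  # -> [3.0, 4.0, 5.0]
```

In a real MapReduce job the row records would live in input splits on a distributed file system, and the reduce step is where partial sums from column-partitioned mappers get combined; the toy reducer keeps that aggregation explicit to show where it happens.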

Xin Bi - One of the best experts on this subject based on the ideXlab platform.

  • Distributed extreme learning machine with kernels based on MapReduce
    Neurocomputing, 2015 (same publication as listed under Chao Wang above)

Xiangguo Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Distributed extreme learning machine with kernels based on MapReduce
    Neurocomputing, 2015 (same publication as listed under Chao Wang above)

Guoren Wang - One of the best experts on this subject based on the ideXlab platform.

  • Distributed extreme learning machine with kernels based on MapReduce
    Neurocomputing, 2015 (same publication as listed under Chao Wang above)

Pan Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Distributed extreme learning machine with kernels based on MapReduce
    Neurocomputing, 2015 (same publication as listed under Chao Wang above)