Functional Neural Network


The experts below are selected from a list of 213 experts worldwide, ranked by the ideXlab platform.

Yiguang Liu - One of the best experts on this subject based on the ideXlab platform.

  • Fundamental study: A concise Functional Neural Network computing the largest modulus eigenvalues and their corresponding eigenvectors of a real skew matrix
    Theoretical Computer Science, 2006
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    Quick extraction of the largest-modulus eigenvalues of a real antisymmetric matrix is important for some engineering applications. Because a neural network runs in an inherently concurrent and asynchronous manner, using one for this calculation can achieve high speed. This paper introduces a concise Functional Neural Network (FNN), which can be equivalently transformed into a complex differential equation, to perform this computation. After obtaining the analytic solution of the equation, the convergence behavior of the FNN is discussed. Simulation results indicate that, from general complex initial values, the network converges to the complex eigenvector corresponding to the eigenvalue whose imaginary part is positive and whose modulus is the largest among all eigenvalues. Compared with other neural networks designed for similar aims, this network is applicable to real skew matrices.
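
    A minimal numerical sketch consistent with this behavior, assuming the FNN's complex differential equation acts like the flow dz/dt = -iAz (the abstract does not give the exact dynamics): for real skew A, the matrix -iA is Hermitian with eigenvalues ±βk, so Euler integration with renormalization drives z toward the eigenvector of A whose eigenvalue is +iβmax. The function name and all parameters below are illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    def skew_dominant_eigenpair(A, steps=5000, h=0.01, seed=None):
        """Integrate the assumed flow dz/dt = -1j*A*z with renormalization."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        z = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # general complex start
        for _ in range(steps):
            z = z + h * (-1j) * (A @ z)   # Euler step; -1j*A is Hermitian for skew A
            z = z / np.linalg.norm(z)     # renormalize to keep the state bounded
        lam = z.conj() @ (A @ z)          # Rayleigh quotient of the unit vector
        return lam, z

    # Example: block-diagonal skew matrix with eigenvalues +/-3i and +/-1i.
    A = np.zeros((4, 4))
    A[0, 1], A[1, 0] = 3.0, -3.0
    A[2, 3], A[3, 2] = 1.0, -1.0
    lam, v = skew_dominant_eigenpair(A)
    print(lam)                              # approaches 3i
    print(np.linalg.norm(A @ v - lam * v))  # small residual
    ```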

  • A Concise Functional Neural Network for Computing the Extremum Eigenpairs of Real Symmetric Matrices
    Lecture Notes in Computer Science, 2006
    Co-Authors: Yiguang Liu, Zhisheng You
    Abstract:

    Quick extraction of the extremum eigenpairs of a real symmetric matrix is very important in engineering. Neural networks perform this operation in parallel and can therefore achieve high performance. This paper proposes a very concise Functional Neural Network (FNN) that computes the largest (or smallest) eigenvalue and one corresponding eigenvector. After transforming the FNN into a differential equation and obtaining its analytic solution, the convergence properties are completely analyzed. Based on this FNN, a method is designed that computes the extremum eigenpairs whether the matrix is indefinite, positive definite, or negative definite. Finally, three examples demonstrate the method's validity. Compared with other networks used in the same field, the proposed FNN is simple and concise, and therefore easy to realize.
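
    A hedged sketch of dynamics matching this description, assuming the FNN's differential equation acts like the normalized flow dx/dt = ±Ax: for symmetric A this amplifies the component along the algebraically largest (for +A) or smallest (for -A) eigenvalue, with no definiteness requirement. Names, step sizes, and the example matrix are assumptions.

    ```python
    import numpy as np

    def extremum_eigenpair(A, largest=True, steps=20000, h=1e-3, seed=None):
        """Euler-integrate the assumed flow dx/dt = +/-A x with renormalization."""
        rng = np.random.default_rng(seed)
        sign = 1.0 if largest else -1.0
        x = rng.standard_normal(A.shape[0])   # general nonzero start
        for _ in range(steps):
            x = x + h * sign * (A @ x)        # Euler step of the flow
            x = x / np.linalg.norm(x)         # stay on the unit sphere
        return x @ (A @ x), x                 # Rayleigh quotient, eigenvector

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, -3.0, 1.0],
                  [0.0, 1.0, 1.0]])           # indefinite symmetric example
    print(extremum_eigenpair(A, largest=True)[0])    # ~ largest eigenvalue
    print(extremum_eigenpair(A, largest=False)[0])   # ~ smallest eigenvalue
    ```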

  • ISNN (1) - A concise Functional Neural Network for computing the extremum eigenpairs of real symmetric matrices
    Advances in Neural Networks - ISNN 2006, 2006
    Co-Authors: Yiguang Liu, Zhisheng You
    Abstract:

    Quick extraction of the extremum eigenpairs of a real symmetric matrix is very important in engineering. Neural networks perform this operation in parallel and can therefore achieve high performance. This paper proposes a very concise Functional Neural Network (FNN) that computes the largest (or smallest) eigenvalue and one corresponding eigenvector. After transforming the FNN into a differential equation and obtaining its analytic solution, the convergence properties are completely analyzed. Based on this FNN, a method is designed that computes the extremum eigenpairs whether the matrix is indefinite, positive definite, or negative definite. Finally, three examples demonstrate the method's validity. Compared with other networks used in the same field, the proposed FNN is simple and concise, and therefore easy to realize.

  • A Functional Neural Network computing some eigenvalues and eigenvectors of a special real matrix
    Neural Networks: the official journal of the International Neural Network Society, 2005
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    How to quickly compute eigenvalues and eigenvectors of a matrix, especially a general real matrix, is a significant problem in engineering. Since a neural network runs in an asynchronous, concurrent manner and can achieve high speed, this paper designs a concise Functional Neural Network (FNN) to extract some eigenvalues and eigenvectors of a special real matrix. After equivalently transforming the FNN into a complex differential equation and obtaining the analytic solution, the convergence properties of the FNN are analyzed. If the eigenvalue whose imaginary part is nonzero and the largest among all eigenvalues is unique, the FNN converges, from a general nonzero initial vector, to the eigenvector corresponding to this eigenvalue. If all eigenvalues are real, or more than one eigenvalue attains the largest imaginary part, the FNN converges to the zero point or falls into a cycle. Compared with other neural networks designed for the same domain, the restriction on the matrix is very slack. Finally, three examples illustrate the performance of the FNN.
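
    Under the same assumed flow dz/dt = -iAz, a component along an eigenvalue α + iβ grows like e^(βt), which matches the cases above: a unique largest imaginary part dominates, while a purely real spectrum gives no growth separation. A short illustrative script, with all values assumed:

    ```python
    import numpy as np

    # A has eigenvalues +/-2i and 1; the largest imaginary part (2) is unique.
    A = np.array([[0.0, 2.0, 0.0],
                  [-2.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    rng = np.random.default_rng(0)
    z = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # general nonzero start
    h = 0.01
    for _ in range(4000):
        z += h * (-1j) * (A @ z)          # Euler step of the assumed flow
        z /= np.linalg.norm(z)
    print(z.conj() @ (A @ z))             # Rayleigh quotient approaches 2i
    ```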

  • Letter: A Functional Neural Network for computing the largest modulus eigenvalues and their corresponding eigenvectors of an anti-symmetric matrix
    Neurocomputing, 2005
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    Efficient computation of the largest-modulus eigenvalues of a real anti-symmetric matrix is a very important problem in engineering. A neural network performs these operations asynchronously and can achieve high performance. This paper proposes a Functional Neural Network (FNN) that can be transformed into a complex differential equation to do this work. First, the analytic solution of the equation is obtained, and then the convergence properties of the FNN are analyzed. Simulation results indicate that, from general complex initial values, the network converges to the complex eigenvector corresponding to the eigenvalue whose imaginary part is positive and whose modulus is the largest among all eigenvalues. Compared with other neural networks used for computing eigenvalues and eigenvectors, this network is well suited to real anti-symmetric matrices.

Zhisheng You - One of the best experts on this subject based on the ideXlab platform.

  • Fundamental study: A concise Functional Neural Network computing the largest modulus eigenvalues and their corresponding eigenvectors of a real skew matrix
    Theoretical Computer Science, 2006
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    Quick extraction of the largest-modulus eigenvalues of a real antisymmetric matrix is important for some engineering applications. Because a neural network runs in an inherently concurrent and asynchronous manner, using one for this calculation can achieve high speed. This paper introduces a concise Functional Neural Network (FNN), which can be equivalently transformed into a complex differential equation, to perform this computation. After obtaining the analytic solution of the equation, the convergence behavior of the FNN is discussed. Simulation results indicate that, from general complex initial values, the network converges to the complex eigenvector corresponding to the eigenvalue whose imaginary part is positive and whose modulus is the largest among all eigenvalues. Compared with other neural networks designed for similar aims, this network is applicable to real skew matrices.

  • A Concise Functional Neural Network for Computing the Extremum Eigenpairs of Real Symmetric Matrices
    Lecture Notes in Computer Science, 2006
    Co-Authors: Yiguang Liu, Zhisheng You
    Abstract:

    Quick extraction of the extremum eigenpairs of a real symmetric matrix is very important in engineering. Neural networks perform this operation in parallel and can therefore achieve high performance. This paper proposes a very concise Functional Neural Network (FNN) that computes the largest (or smallest) eigenvalue and one corresponding eigenvector. After transforming the FNN into a differential equation and obtaining its analytic solution, the convergence properties are completely analyzed. Based on this FNN, a method is designed that computes the extremum eigenpairs whether the matrix is indefinite, positive definite, or negative definite. Finally, three examples demonstrate the method's validity. Compared with other networks used in the same field, the proposed FNN is simple and concise, and therefore easy to realize.

  • ISNN (1) - A concise Functional Neural Network for computing the extremum eigenpairs of real symmetric matrices
    Advances in Neural Networks - ISNN 2006, 2006
    Co-Authors: Yiguang Liu, Zhisheng You
    Abstract:

    Quick extraction of the extremum eigenpairs of a real symmetric matrix is very important in engineering. Neural networks perform this operation in parallel and can therefore achieve high performance. This paper proposes a very concise Functional Neural Network (FNN) that computes the largest (or smallest) eigenvalue and one corresponding eigenvector. After transforming the FNN into a differential equation and obtaining its analytic solution, the convergence properties are completely analyzed. Based on this FNN, a method is designed that computes the extremum eigenpairs whether the matrix is indefinite, positive definite, or negative definite. Finally, three examples demonstrate the method's validity. Compared with other networks used in the same field, the proposed FNN is simple and concise, and therefore easy to realize.

  • A Functional Neural Network computing some eigenvalues and eigenvectors of a special real matrix
    Neural Networks: the official journal of the International Neural Network Society, 2005
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    How to quickly compute eigenvalues and eigenvectors of a matrix, especially a general real matrix, is a significant problem in engineering. Since a neural network runs in an asynchronous, concurrent manner and can achieve high speed, this paper designs a concise Functional Neural Network (FNN) to extract some eigenvalues and eigenvectors of a special real matrix. After equivalently transforming the FNN into a complex differential equation and obtaining the analytic solution, the convergence properties of the FNN are analyzed. If the eigenvalue whose imaginary part is nonzero and the largest among all eigenvalues is unique, the FNN converges, from a general nonzero initial vector, to the eigenvector corresponding to this eigenvalue. If all eigenvalues are real, or more than one eigenvalue attains the largest imaginary part, the FNN converges to the zero point or falls into a cycle. Compared with other neural networks designed for the same domain, the restriction on the matrix is very slack. Finally, three examples illustrate the performance of the FNN.

  • Letter: A Functional Neural Network for computing the largest modulus eigenvalues and their corresponding eigenvectors of an anti-symmetric matrix
    Neurocomputing, 2005
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    Efficient computation of the largest-modulus eigenvalues of a real anti-symmetric matrix is a very important problem in engineering. A neural network performs these operations asynchronously and can achieve high performance. This paper proposes a Functional Neural Network (FNN) that can be transformed into a complex differential equation to do this work. First, the analytic solution of the equation is obtained, and then the convergence properties of the FNN are analyzed. Simulation results indicate that, from general complex initial values, the network converges to the complex eigenvector corresponding to the eigenvalue whose imaginary part is positive and whose modulus is the largest among all eigenvalues. Compared with other neural networks used for computing eigenvalues and eigenvectors, this network is well suited to real anti-symmetric matrices.

Liping Cao - One of the best experts on this subject based on the ideXlab platform.

  • Fundamental study: A concise Functional Neural Network computing the largest modulus eigenvalues and their corresponding eigenvectors of a real skew matrix
    Theoretical Computer Science, 2006
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    Quick extraction of the largest-modulus eigenvalues of a real antisymmetric matrix is important for some engineering applications. Because a neural network runs in an inherently concurrent and asynchronous manner, using one for this calculation can achieve high speed. This paper introduces a concise Functional Neural Network (FNN), which can be equivalently transformed into a complex differential equation, to perform this computation. After obtaining the analytic solution of the equation, the convergence behavior of the FNN is discussed. Simulation results indicate that, from general complex initial values, the network converges to the complex eigenvector corresponding to the eigenvalue whose imaginary part is positive and whose modulus is the largest among all eigenvalues. Compared with other neural networks designed for similar aims, this network is applicable to real skew matrices.

  • A Functional Neural Network computing some eigenvalues and eigenvectors of a special real matrix
    Neural Networks: the official journal of the International Neural Network Society, 2005
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    How to quickly compute eigenvalues and eigenvectors of a matrix, especially a general real matrix, is a significant problem in engineering. Since a neural network runs in an asynchronous, concurrent manner and can achieve high speed, this paper designs a concise Functional Neural Network (FNN) to extract some eigenvalues and eigenvectors of a special real matrix. After equivalently transforming the FNN into a complex differential equation and obtaining the analytic solution, the convergence properties of the FNN are analyzed. If the eigenvalue whose imaginary part is nonzero and the largest among all eigenvalues is unique, the FNN converges, from a general nonzero initial vector, to the eigenvector corresponding to this eigenvalue. If all eigenvalues are real, or more than one eigenvalue attains the largest imaginary part, the FNN converges to the zero point or falls into a cycle. Compared with other neural networks designed for the same domain, the restriction on the matrix is very slack. Finally, three examples illustrate the performance of the FNN.

  • Letter: A Functional Neural Network for computing the largest modulus eigenvalues and their corresponding eigenvectors of an anti-symmetric matrix
    Neurocomputing, 2005
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    Efficient computation of the largest-modulus eigenvalues of a real anti-symmetric matrix is a very important problem in engineering. A neural network performs these operations asynchronously and can achieve high performance. This paper proposes a Functional Neural Network (FNN) that can be transformed into a complex differential equation to do this work. First, the analytic solution of the equation is obtained, and then the convergence properties of the FNN are analyzed. Simulation results indicate that, from general complex initial values, the network converges to the complex eigenvector corresponding to the eigenvalue whose imaginary part is positive and whose modulus is the largest among all eigenvalues. Compared with other neural networks used for computing eigenvalues and eigenvectors, this network is well suited to real anti-symmetric matrices.

  • Letter: A simple Functional Neural Network for computing the largest and smallest eigenvalues and corresponding eigenvectors of a real symmetric matrix
    Neurocomputing, 2005
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    Efficient computation of the largest and smallest eigenvalues of a real symmetric matrix is a very important problem in engineering. Neural networks perform these operations asynchronously and can achieve high performance. This paper proposes a concise Functional Neural Network (FNN), expressed as a differential equation, together with computing steps for this task. First, the analytic solution of the equation is obtained, and then the convergence properties of the FNN are fully established. Finally, the computing steps are designed in detail. The proposed method computes the smallest and largest eigenvalues whether the matrix is indefinite, positive definite, or negative definite. Compared with other neural-network-based methods, this FNN is simple and concise, and therefore easy to realize.

  • A Concise Functional Neural Network Computing the Largest (Smallest) Eigenvalue and one Corresponding Eigenvector of a Real Symmetric Matrix
    2005 International Conference on Neural Networks and Brain
    Co-Authors: Yiguang Liu, Zhisheng You, Liping Cao
    Abstract:

    Quick extraction of eigenpairs of a real symmetric matrix is very important in engineering. Neural networks perform this operation in parallel and can achieve high performance. This paper therefore proposes a very concise Functional Neural Network (FNN) that computes the largest (or smallest) eigenvalue and one of its eigenvectors. When the FNN is converted into a differential equation, the componentwise analytic solution of the equation is obtained, from which the convergence properties are fully analyzed. On the basis of this FNN, a method is designed that computes the largest (or smallest) eigenvalue and one of its eigenvectors whether the matrix is indefinite, positive definite, or negative definite. Finally, three examples show the validity of the method. Compared with other neural networks designed for the same aim, the proposed FNN is simple and concise, and therefore easy to realize.

Seunghwan Kim - One of the best experts on this subject based on the ideXlab platform.

  • Hierarchical Modularity of the Functional Neural Network Organized by Spike Timing Dependent Synaptic Plasticity
    International Journal of Modern Physics B, 2007
    Co-Authors: Chang-woo Shin, Seunghwan Kim
    Abstract:

    We study the emergent functional neural network organized by synaptic reorganization under spike-timing-dependent synaptic plasticity (STDP). We show that the small-world, scale-free functional structures organized by STDP exhibit hierarchical modularity when the synaptic inputs are balanced.

  • Emergent Functional Neural Networks organized by spike timing dependent synaptic plasticity
    BMC Neuroscience, 2007
    Co-Authors: Chang-woo Shin, Seunghwan Kim
    Abstract:

    The synchronization of neural activities plays a very important role in information processing in the brain. Recent studies of complex systems have shown that the synchronization of oscillators, including neuronal ones, is faster, stronger, and more efficient in small-world networks than in regular or random networks, and many studies assume that the brain may exploit small-world and scale-free network structure. The collective dynamical response and the functional neural network structure depend on each other through synaptic plasticity, and this feedback process is believed to be closely linked to the mechanisms of learning and memory in the brain. Recent experiments have shown that in various brain regions, such as the hippocampus and the neocortex, both the sign and the magnitude of synaptic modification depend on the precise temporal relation between the spike timings of two neurons, a rule called spike-timing-dependent synaptic plasticity (STDP). Here, we study the emergent functional neural networks organized by STDP. We show that STDP can lead a neural oscillator network into a functional structure that has both small-world behavior and scale-free properties with hierarchical modularity. The STDP network has a small average shortest path length between neurons and a high clustering coefficient. The degree distribution and the degree-dependent clustering coefficient follow power-law decays. We also show that the balance between the maximal excitatory and inhibitory synaptic inputs is critical to the formation of this nontrivial functional structure, which is found to lie in a self-organized critical state.
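
    The graph measures cited here can be computed on a thresholded connectivity matrix; in the sketch below the random weight matrix and the threshold are purely illustrative stand-ins for the STDP-organized functional network, not the paper's data.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n = 200
    W = rng.random((n, n))                     # stand-in synaptic weight matrix
    W = np.triu(W, 1) + np.triu(W, 1).T        # symmetrize, zero diagonal
    G = nx.from_numpy_array((W > 0.9).astype(int))

    print(nx.average_clustering(G))            # clustering coefficient
    print(nx.average_shortest_path_length(G))  # average shortest path length
    print(nx.degree_histogram(G))              # degree distribution; a scale-free
                                               # network shows a power-law tail
    ```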

Ponnuthurai Nagaratnam Suganthan - One of the best experts on this subject based on the ideXlab platform.

  • Enhancing multi-class classification of random forest using random vector Functional Neural Network and oblique decision surfaces
    International Joint Conference on Neural Networks (IJCNN), 2018
    Co-Authors: Rakesh Katuwal, Ponnuthurai Nagaratnam Suganthan
    Abstract:

    Both neural networks and decision trees are popular machine learning methods, widely used to solve problems from diverse domains, and both are commonly used as base classifiers in ensemble frameworks. In this paper, we first present a new variant of oblique decision tree based on a linear classifier, and then construct an ensemble classifier that fuses a fast neural network, the random vector functional link (RVFL) network, with oblique decision trees. The RVFL network has an elegant closed-form solution with extremely short training time. The network partitions each training bag (obtained via bagging) at the root level into C subsets, where C is the number of classes in the dataset, and C oblique decision trees are then trained on these partitions. The proposed method provides rich insight into the data by grouping the confusing or hard-to-classify samples of each class, and thus makes it possible to apply fine-grained classification rules to the data. The ensemble classifier is evaluated on several multi-class datasets, where it demonstrates superior performance compared with other state-of-the-art classifiers.
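
    A minimal sketch of the RVFL building block the abstract relies on: a fixed random hidden layer plus a direct input link, with output weights obtained in closed form by ridge regression. The sigmoid activation, the regularization value, and the function names are assumptions for illustration.

    ```python
    import numpy as np

    def rvfl_fit(X, Y, n_hidden=100, ridge=1e-3, seed=None):
        """Fit RVFL output weights in closed form; Y is one-hot (n x C)."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random weights
        b = rng.standard_normal(n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # random nonlinear features
        D = np.hstack([X, H])                             # direct link + features
        beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
        return W, b, beta

    def rvfl_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return np.argmax(np.hstack([X, H]) @ beta, axis=1)
    ```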

  • IJCNN - Enhancing Multi-Class Classification of Random Forest using Random Vector Functional Neural Network and Oblique Decision Surfaces
    2018 International Joint Conference on Neural Networks (IJCNN), 2018
    Co-Authors: Rakesh Katuwal, Ponnuthurai Nagaratnam Suganthan
    Abstract:

    Both neural networks and decision trees are popular machine learning methods, widely used to solve problems from diverse domains, and both are commonly used as base classifiers in ensemble frameworks. In this paper, we first present a new variant of oblique decision tree based on a linear classifier, and then construct an ensemble classifier that fuses a fast neural network, the random vector functional link (RVFL) network, with oblique decision trees. The RVFL network has an elegant closed-form solution with extremely short training time. The network partitions each training bag (obtained via bagging) at the root level into C subsets, where C is the number of classes in the dataset, and C oblique decision trees are then trained on these partitions. The proposed method provides rich insight into the data by grouping the confusing or hard-to-classify samples of each class, and thus makes it possible to apply fine-grained classification rules to the data. The ensemble classifier is evaluated on several multi-class datasets, where it demonstrates superior performance compared with other state-of-the-art classifiers.