Solvability Condition


The Experts below are selected from a list of 309 Experts worldwide, ranked by the ideXlab platform.

M L De Andrade Netto - One of the best experts on this subject based on the ideXlab platform.

  • Projection pursuit and the Solvability Condition applied to constructive learning
    Proceedings of International Conference on Neural Networks (ICNN'97), 1997
    Co-Authors: F J Von Zuben, M L De Andrade Netto
    Abstract:

    Single-hidden-layer neural networks with supervised learning have been successfully applied to approximate unknown functions defined in compact functional spaces. The more advanced results also give rates of convergence, stipulating how many hidden neurons with a given activation function should be used to achieve a specific order of approximation. However, independently of the activation function employed, these connectionist models for function approximation suffer from a severe limitation: all hidden neurons use the same activation function. If the activation function of each hidden neuron is optimally defined for every approximation problem, then better rates of convergence will be achieved. This is exactly the purpose of constructive learning using projection pursuit techniques. Since the training process operates on the hidden neurons individually, a pertinent activation function employing automatic smoothing splines can be iteratively developed for each neuron as a function of the learning set. We apply projection pursuit in association with the optimization of the Solvability Condition, giving rise to a more efficient and accurate computational learning algorithm.

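    The entry above describes constructive learning in which each hidden neuron gets its own projection direction and its own spline activation fitted to the current residual. The sketch below illustrates that idea with a generic projection-pursuit regression step; the random direction search, the use of scipy.interpolate.UnivariateSpline as the "automatic smoothing spline", and all parameter choices are illustrative assumptions, not the authors' exact algorithm.

    ```python
    # Generic projection-pursuit constructive learning sketch (assumption:
    # continuous features, so projected samples have no ties).
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def fit_ppr_units(X, y, n_units=5, n_dir_samples=200, seed=0):
        rng = np.random.default_rng(seed)
        residual = y.astype(float).copy()
        units = []                              # (direction, spline) pairs
        for _ in range(n_units):
            # Crude direction search: sample random unit vectors and keep
            # the one whose 1-D spline fit explains the residual best.
            best = (None, None, np.inf)
            for _ in range(n_dir_samples):
                w = rng.normal(size=X.shape[1])
                w /= np.linalg.norm(w)
                z = X @ w
                order = np.argsort(z)
                g = UnivariateSpline(z[order], residual[order], k=3)
                err = np.mean((residual - g(z)) ** 2)
                if err < best[2]:
                    best = (w, g, err)
            w, g, _ = best
            units.append((w, g))
            residual -= g(X @ w)                # peel off what this unit explains
        return units

    def predict(units, X):
        return sum(g(X @ w) for w, g in units)
    ```

    Each added unit thus has an activation shaped by the learning set itself, which is the point the abstract makes about per-neuron activation functions.
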
  • Unit-growing learning optimizing the Solvability Condition for model-free regression
    Proceedings of ICNN'95 - International Conference on Neural Networks, 1995
    Co-Authors: F J Von Zuben, M L De Andrade Netto
    Abstract:

    The universal approximation capability exhibited by one-hidden-layer neural networks is explored to produce a supervised unit-growing learning procedure for model-free nonlinear regression. The development is based on the Solvability Condition, which attests that the ability to learn a specific learning set increases with the number of nodes in the hidden layer. Since the training process operates on the hidden nodes individually, a pertinent activation function can be iteratively developed for each node as a function of the learning set. The optimization of the Solvability Condition gives rise to neural networks of minimum dimension, an important step toward improving generalization.

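    A minimal numerical sketch of the unit-growing idea follows, assuming the Solvability Condition can be read as the hidden-layer output matrix keeping full column rank, so that output weights reproducing the learning set exist. Random tanh units stand in for the per-node activation functions developed in the paper; the rank test, tolerance, and random parameterisation are assumptions for illustration.

    ```python
    # Unit-growing regression sketch: add hidden units one at a time and
    # keep a unit only if it enlarges the column space of the hidden-layer
    # output matrix H (a crude solvability proxy).
    import numpy as np

    def grow_network(X, y, max_units=50, tol=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        H = np.empty((n, 0))                    # hidden-layer output matrix
        params, a = [], np.zeros(0)             # (w, b) per node, output weights
        for _ in range(max_units):
            w, b = rng.normal(size=d), rng.normal()
            h = np.tanh(X @ w + b)[:, None]
            if np.linalg.matrix_rank(np.hstack([H, h])) <= H.shape[1]:
                continue                        # node adds nothing to the span
            H = np.hstack([H, h])
            params.append((w, b))
            a, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
            if np.mean((H @ a - y) ** 2) < tol:
                break                           # learning set fit: stop growing
        return params, a

    def predict(params, a, X):
        H = np.column_stack([np.tanh(X @ w + b) for w, b in params])
        return H @ a
    ```
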

F J Von Zuben - One of the best experts on this subject based on the ideXlab platform.

  • Projection pursuit and the Solvability Condition applied to constructive learning
    Proceedings of International Conference on Neural Networks (ICNN'97), 1997
    Co-Authors: F J Von Zuben, M L De Andrade Netto
    Abstract:

    Single-hidden-layer neural networks with supervised learning have been successfully applied to approximate unknown functions defined in compact functional spaces. The more advanced results also give rates of convergence, stipulating how many hidden neurons with a given activation function should be used to achieve a specific order of approximation. However, independently of the activation function employed, these connectionist models for function approximation suffer from a severe limitation: all hidden neurons use the same activation function. If the activation function of each hidden neuron is optimally defined for every approximation problem, then better rates of convergence will be achieved. This is exactly the purpose of constructive learning using projection pursuit techniques. Since the training process operates on the hidden neurons individually, a pertinent activation function employing automatic smoothing splines can be iteratively developed for each neuron as a function of the learning set. We apply projection pursuit in association with the optimization of the Solvability Condition, giving rise to a more efficient and accurate computational learning algorithm.

  • Unit-growing learning optimizing the Solvability Condition for model-free regression
    Proceedings of ICNN'95 - International Conference on Neural Networks, 1995
    Co-Authors: F J Von Zuben, M L De Andrade Netto
    Abstract:

    The universal approximation capability exhibited by one-hidden-layer neural networks is explored to produce a supervised unit-growing learning procedure for model-free nonlinear regression. The development is based on the Solvability Condition, which attests that the ability to learn a specific learning set increases with the number of nodes in the hidden layer. Since the training process operates on the hidden nodes individually, a pertinent activation function can be iteratively developed for each node as a function of the learning set. The optimization of the Solvability Condition gives rise to neural networks of minimum dimension, an important step toward improving generalization.

Francesco Bullo - One of the best experts on this subject based on the ideXlab platform.

  • A Solvability Condition for reactive power flow
    54th IEEE Conference on Decision and Control (CDC), 2015
    Co-Authors: John W. Simpson-Porco, Florian Dorfler, Francesco Bullo
    Abstract:

    A central question in the analysis and operation of power networks is the feasibility of a unique high-voltage solution to the power flow equations satisfying operational constraints. For planning, monitoring, and contingency analysis in transmission networks, the high-voltage solution of these nonlinear equations can be constructed only numerically or roughly approximated using a linear DC power flow. In this work we analytically study the Solvability of the nonlinear decoupled reactive power flow equations, and present a Solvability Condition relating the existence of a unique high-voltage solution to the spatial distribution of loading and the effective impedances between load buses. We validate the accuracy and applicability of our results through standard power network test cases.

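    As a rough numerical illustration of a condition of this type, the snippet below builds the decoupled (lossless, fixed-angle) reactive power flow model for a hypothetical three-bus network, computes the open-circuit load voltages, and evaluates a loading-versus-stiffness ratio Delta, with Delta < 1 read as "a unique high-voltage solution is expected". The network data, the 1/4 factor in the stiffness matrix, and the infinity-norm metric follow the general structure described in the abstract and are assumptions, not the paper's exact statement.

    ```python
    import numpy as np

    # Hypothetical 3-bus network: bus 0 is a source fixed at 1.0 p.u.,
    # buses 1-2 are loads; B is a Laplacian-style matrix built from the
    # line susceptances 1/x_ij.
    b01, b12, b02 = 10.0, 5.0, 8.0
    B = np.array([[ b01 + b02, -b01,       -b02      ],
                  [-b01,        b01 + b12, -b12      ],
                  [-b02,       -b12,        b12 + b02]])
    source, loads = [0], [1, 2]
    V_S = np.array([1.0])
    q_demand = np.array([1.5, 2.0])      # inductive reactive demand (p.u.)

    B_LL = B[np.ix_(loads, loads)]
    B_LS = B[np.ix_(loads, source)]

    # Open-circuit load voltages (zero reactive demand).
    V_oc = -np.linalg.solve(B_LL, B_LS @ V_S)

    # Stiffness matrix and loading ratio (assumed form of the condition).
    Q_crit = 0.25 * np.diag(V_oc) @ B_LL @ np.diag(V_oc)
    Delta = np.linalg.norm(np.linalg.solve(Q_crit, q_demand), ord=np.inf)
    print(f"V_oc = {V_oc}, Delta = {Delta:.3f}, "
          f"high-voltage solution expected: {Delta < 1}")
    ```

    For a single source and a single load this ratio reduces to comparing the reactive demand against one quarter of the line susceptance times the squared open-circuit voltage, which matches the textbook two-bus loadability limit.
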

Zhenwei Liu - One of the best experts on this subject based on the ideXlab platform.

  • Solvability Condition for synchronization of discrete-time multi-agent systems and design
    American Control Conference (ACC), 2017
    Co-Authors: Anton A Stoorvogel, Ali Saberi, Meirong Zhang, Zhenwei Liu
    Abstract:

    This paper provides Solvability Conditions for state synchronization of homogeneous discrete-time multi-agent systems (MAS) with a directed and weighted communication network under full-state coupling. We assume that only a lower bound on the second eigenvalue of the Laplacian matrix associated with the communication network is known; apart from that, the weighted, directed graph is completely arbitrary. Our Solvability Conditions reveal that the synchronization problem is solvable for any nonzero lower bound if and only if the agents are at most weakly unstable (i.e., all agent eigenvalues lie in the closed unit disc). For a given lower bound, however, synchronization can also be achieved for a class of unstable agents. We provide a protocol design for at most weakly unstable agents based on either a direct eigenstructure assignment method or a standard H2 discrete-time algebraic Riccati equation (DARE), as well as a protocol design for strictly unstable agents based on the standard H2 DARE.

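    The sketch below illustrates the two ingredients described above: the eigenvalue test for at most weakly unstable agents and a protocol gain obtained from a standard H2-type discrete-time algebraic Riccati equation. The particular Riccati weights (identity Q and R), the double-integrator example agent, and the coupling form in the final comment are illustrative assumptions, not the paper's exact design.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    def weakly_unstable(A, tol=1e-9):
        """Solvability check: all agent eigenvalues in the closed unit disc."""
        return np.max(np.abs(np.linalg.eigvals(A))) <= 1.0 + tol

    def dare_protocol_gain(A, B):
        """Static feedback gain from a standard H2-type discrete-time ARE."""
        P = solve_discrete_are(A, B, np.eye(A.shape[0]), np.eye(B.shape[1]))
        return np.linalg.solve(B.T @ P @ B + np.eye(B.shape[1]), B.T @ P @ A)

    # Example agent: a discrete-time double integrator, whose eigenvalues
    # sit on the unit circle, hence at most weakly unstable.
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.array([[0.0], [1.0]])
    print(weakly_unstable(A))   # True: solvable for any nonzero lower bound
    K = dare_protocol_gain(A, B)
    # Full-state-coupling protocol for agent i over a weighted digraph with
    # entries a_ij (hypothetical network data):
    #   u_i = -K @ sum_j(a_ij * (x_i - x_j))
    ```
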

Anton A Stoorvogel - One of the best experts on this subject based on the ideXlab platform.

  • Solvability Condition for synchronization of discrete-time multi-agent systems and design
    American Control Conference (ACC), 2017
    Co-Authors: Anton A Stoorvogel, Ali Saberi, Meirong Zhang, Zhenwei Liu
    Abstract:

    This paper provides Solvability Conditions for state synchronization of homogeneous discrete-time multi-agent systems (MAS) with a directed and weighted communication network under full-state coupling. We assume that only a lower bound on the second eigenvalue of the Laplacian matrix associated with the communication network is known; apart from that, the weighted, directed graph is completely arbitrary. Our Solvability Conditions reveal that the synchronization problem is solvable for any nonzero lower bound if and only if the agents are at most weakly unstable (i.e., all agent eigenvalues lie in the closed unit disc). For a given lower bound, however, synchronization can also be achieved for a class of unstable agents. We provide a protocol design for at most weakly unstable agents based on either a direct eigenstructure assignment method or a standard H2 discrete-time algebraic Riccati equation (DARE), as well as a protocol design for strictly unstable agents based on the standard H2 DARE.
