The experts below are selected from a list of 574,245 experts worldwide ranked by the ideXlab platform.
Jun Wang - One of the best experts on this subject based on the ideXlab platform.
-
A One-Layer Recurrent Neural Network for Constrained Nonsmooth Optimization
Systems, Man and Cybernetics, 2011
Co-Authors: Jun Wang
Abstract: This paper presents a novel one-layer recurrent neural network, modeled by means of a differential inclusion, for solving nonsmooth optimization problems; the number of neurons in the proposed network equals the number of decision variables. Compared with existing neural networks for nonsmooth optimization, the global convexity condition on the objective functions and constraints is relaxed, which allows both to be nonconvex. It is proven that the state variables of the proposed network converge to optimal solutions provided a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed network.
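As a rough illustration of this class of models, the sketch below simulates a generic projection-type recurrent network for constrained nonsmooth minimization by forward Euler integration. It is not the paper's differential-inclusion model: the objective f(x) = |x1| + |x2|, the box constraint, the gain, and the step size are all assumptions chosen for illustration.

```python
import numpy as np

# Projection-type recurrent network dx/dt = -x + P(x - a*g(x)),
# where g(x) is a subgradient of the nonsmooth objective and P is the
# Euclidean projection onto a feasible box. All data are illustrative.

def subgrad(x):
    # A valid subgradient of f(x) = |x[0]| + |x[1]| (0 chosen at the kink).
    return np.sign(x)

def project(x, lo=-1.0, hi=2.0):
    # Euclidean projection onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

x = np.array([1.5, -0.8])      # initial neuron state
a, dt = 0.5, 0.01              # gain and Euler step (assumed values)
for _ in range(5000):
    x = x + dt * (-x + project(x - a * subgrad(x)))

print(x)  # approaches the constrained minimizer (the origin here)
```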
-
A Recurrent Neural Network for Solving Sylvester Equation with Time-Varying Coefficients
IEEE Transactions on Neural Networks, 2002
Co-Authors: Yunong Zhang, Danchi Jiang, Jun Wang
Abstract: This paper presents a recurrent neural network for solving the Sylvester equation with time-varying coefficient matrices. The recurrent neural network with implicit dynamics is deliberately developed in such a way that its trajectory is guaranteed to converge exponentially to the time-varying solution of a given Sylvester equation. Theoretical results of convergence and sensitivity analysis are presented to show the desirable properties of the recurrent neural network. Simulation results of time-varying matrix inversion and online nonlinear output regulation via pole assignment, for the ball-and-beam system and the inverted pendulum on a cart, are also included to demonstrate the effectiveness and performance of the proposed neural network.
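For intuition about how such implicit dynamics can be designed, the sketch below follows the widely used Zhang-type recipe: define the error E(t) = A(t)X - XB(t) + C(t), impose dE/dt = -gamma*E so that E decays exponentially, and solve the resulting implicit equation for dX/dt at each time step via vectorization. The sign convention, coefficient matrices, gain, and step size are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Zhang-type network for the time-varying Sylvester equation
# A(t) X - X B(t) + C(t) = 0 (sign convention assumed). Shifting A and B
# keeps their spectra disjoint, so the implicit dynamics stay solvable.
def A(t): return np.array([[np.sin(t), 1.0], [0.0, np.cos(t)]]) + 3*np.eye(2)
def B(t): return np.array([[np.cos(t), 0.0], [1.0, np.sin(t)]]) - 3*np.eye(2)
def C(t): return np.array([[1.0, t], [0.0, 1.0]])

def dot(f, t, h=1e-6):
    # Numerical time derivative of a matrix-valued function.
    return (f(t + h) - f(t - h)) / (2*h)

gamma, dt, n = 10.0, 1e-3, 2
X, I = np.zeros((n, n)), np.eye(n)
for k in range(5000):
    t = k * dt
    E = A(t) @ X - X @ B(t) + C(t)
    rhs = -gamma * E - dot(A, t) @ X + X @ dot(B, t) - dot(C, t)
    # (I (x) A - B^T (x) I) vec(Xdot) = vec(rhs), column-stacked vec.
    M = np.kron(I, A(t)) - np.kron(B(t).T, I)
    Xdot = np.linalg.solve(M, rhs.reshape(-1, order='F')).reshape(n, n, order='F')
    X = X + dt * Xdot

print(np.linalg.norm(A(t) @ X - X @ B(t) + C(t)))  # residual should be small
```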
-
A Multilayer Recurrent Neural Network for Solving Continuous-Time Algebraic Riccati Equations
Neural Networks, 1998
Co-Authors: Jun Wang, Guang Wu
Abstract: A multilayer recurrent neural network is proposed for solving continuous-time algebraic matrix Riccati equations in real time. The proposed recurrent neural network consists of four bidirectionally connected layers, each consisting of an array of neurons. The network is shown to be capable of solving algebraic Riccati equations and synthesizing linear-quadratic control systems in real time. Analytical results on the stability of the recurrent neural network and the solvability of algebraic Riccati equations by means of the network are discussed. The operating characteristics of the recurrent neural network are also demonstrated through three illustrative examples.
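As a minimal illustration of solving a Riccati equation by continuous-time dynamics, the sketch below integrates a gradient flow that minimizes the squared Frobenius norm of the ARE residual F(P) = A'P + PA - PSP + Q with S = B R^{-1} B', and checks the result against SciPy. This is a generic residual-minimizing flow, not the paper's four-layer architecture; the plant matrices, gain, and step size are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Gradient flow dP/dt = -eta * grad on E(P) = 0.5*||F(P)||_F^2, where for
# symmetric P the gradient is grad = A F + F A' - S P F - F P S.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
S = B @ np.linalg.inv(R) @ B.T

P = np.zeros((2, 2))
eta, dt = 1.0, 1e-3
for _ in range(100000):
    F = A.T @ P + P @ A - P @ S @ P + Q      # ARE residual
    grad = A @ F + F @ A.T - S @ P @ F - F @ P @ S
    P = P - eta * dt * grad
    P = 0.5 * (P + P.T)                      # keep the iterate symmetric

# For this stable, observable example the flow settles at the
# stabilizing solution, so the distance below should be near zero.
print(np.linalg.norm(P - solve_continuous_are(A, B, Q, R)))
```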
Kevin Warwick - One of the best experts on this subject based on the ideXlab platform.
-
Dynamic Recurrent Neural Network for System Identification and Control
IEE Proceedings - Control Theory and Applications, 1995
Co-Authors: A Delgado, C Kambhampati, Kevin Warwick
Abstract: A dynamic recurrent neural network (DRNN), which can be viewed as a generalisation of the Hopfield neural network, is proposed to identify and control a class of control-affine systems. In this approach, the identified network is used in the context of differential-geometric control to synthesise a state feedback that cancels the nonlinear terms of the plant, yielding a linear plant which can then be controlled using a standard PID controller.
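The cancellation step can be shown in a few lines. In the sketch below the "identified" model is a one-state Hopfield-like DRNN, xdot = -a*x + W*tanh(x) + b*u, and for simplicity the plant is taken to be identical to it (perfect identification assumed); the feedback u = (v - fhat(x))/b then cancels the nonlinearity so an outer proportional loop sees a linear plant. All constants are invented for illustration.

```python
import numpy as np

a, W, b = 1.0, np.array([[0.5]]), np.array([1.0])

def fhat(x):
    # Identified DRNN drift: -a*x + W*tanh(x).
    return -a * x + W @ np.tanh(x)

dt, x, ref = 1e-3, np.array([2.0]), 0.0
kp = 5.0                                   # simple proportional outer loop
for _ in range(5000):
    v = kp * (ref - x)                     # desired linear dynamics xdot = v
    u = (v - fhat(x)) / b                  # cancel the identified nonlinearity
    x = x + dt * (fhat(x) + b * u)         # plant == identified model here
print(x)                                   # state driven to the reference (0)
```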
Huazhong Yang - One of the best experts on this subject based on the ideXlab platform.
-
Large-Scale Recurrent Neural Network on GPU
International Joint Conference on Neural Networks, 2014
Co-Authors: Boxun Li, Jiayi Duan, Erjin Zhou, Jiaxing Zhang, Bo Huang, Ningyi Xu, Yu Wang, Huazhong Yang
Abstract: Large-scale artificial neural networks (ANNs) have been widely used in data processing applications. The recurrent neural network (RNN) is a special type of neural network equipped with additional recurrent connections. This unique architecture enables a recurrent neural network to remember past processed information and makes it an expressive model for nonlinear sequence processing tasks. However, its high computational complexity makes a recurrent neural network difficult to train effectively and has significantly limited research on recurrent neural networks over the last 20 years. In recent years, the use of graphics processing units (GPUs) has become a significant advance for speeding up the training of large-scale neural networks by exploiting their massive parallelism. In this paper, we propose an efficient GPU implementation of the large-scale recurrent neural network and demonstrate the power of scaling up the recurrent neural network with GPUs. We first explore the potential parallelism of the recurrent neural network and propose a fine-grained two-stage pipeline implementation. Experimental results show that the proposed GPU implementation achieves a 2x to 11x speedup over a basic CPU implementation using the Intel Math Kernel Library. We then use the proposed GPU implementation to scale up the recurrent neural network and improve its performance. Experimental results on the Microsoft Research Sentence Completion Challenge demonstrate that a large-scale recurrent network without a class layer is able to beat a traditional class-based modest-size recurrent network, achieving an accuracy of 47%, the best result reported for a single recurrent neural network on the same dataset.
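The source of the exploitable parallelism can be seen in a few lines of NumPy: the input-to-hidden products for all timesteps are independent and can be batched into one large matrix product (GPU-friendly), while the hidden-to-hidden recurrence remains sequential. This conveys only the spirit of the paper's fine-grained two-stage pipeline, not its implementation; all sizes below are assumptions.

```python
import numpy as np

T, n_in, n_h = 100, 64, 256
rng = np.random.default_rng(0)
X = rng.standard_normal((T, n_in))
W_xh = rng.standard_normal((n_in, n_h)) * 0.01
W_hh = rng.standard_normal((n_h, n_h)) * 0.01

# Stage 1: one batched matrix product over all timesteps at once.
U = X @ W_xh                      # shape (T, n_h)

# Stage 2: the inherently sequential recurrence
# h_t = tanh(U_t + h_{t-1} W_hh).
h = np.zeros(n_h)
H = np.empty((T, n_h))
for t in range(T):
    h = np.tanh(U[t] + h @ W_hh)
    H[t] = h
print(H.shape)
```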
Kazuyuki Aihara - One of the best experts on this subject based on the ideXlab platform.
-
Chaotic Simulated Annealing by a Neural Network Model with Transient Chaos
Neural Networks, 1995 (also posted as an arXiv: Chaotic Dynamics preprint, 1997)
Co-Authors: Luonan Chen, Kazuyuki Aihara
Abstract: We propose a neural network model with transient chaos, or a transiently chaotic neural network (TCNN), as an approximation method for combinatorial optimization problems, by introducing transiently chaotic dynamics into neural networks. Unlike conventional neural networks, which have only point attractors, the proposed neural network has richer and more flexible dynamics, so it can be expected to have a higher ability to search for globally optimal or near-optimal solutions. A significant property of this model is that the chaotic neurodynamics is temporarily generated for searching and self-organizing, and eventually vanishes with the autonomous decrease of a bifurcation parameter corresponding to the "temperature" in the usual annealing process. The neural network therefore gradually approaches, through the transient chaos, a dynamical structure similar to conventional models such as the Hopfield neural network, which converges to a stable equilibrium point. Since the optimization process of the transiently chaotic neural network resembles simulated annealing, not in a stochastic way but in a deterministically chaotic way, the new method is regarded as chaotic simulated annealing (CSA). Fundamental characteristics of the transiently chaotic neurodynamics are numerically investigated with examples of a single-neuron model and the Travelling Salesman Problem (TSP). Moreover, a maintenance scheduling problem for generators in a practical power system is also analysed to verify the practical efficiency of this new method.
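The single-neuron behaviour described above can be reproduced with a simple discrete-time sketch of a transiently chaotic neuron: a steep sigmoid output, a self-feedback term z(t)(x(t) - I0) that induces chaos while z is large, and an exponentially decaying z that plays the role of the annealing temperature. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

#   x(t)   = 1 / (1 + exp(-y(t)/eps))          # neuron output
#   y(t+1) = k*y(t) - z(t)*(x(t) - I0) + a     # internal state
#   z(t+1) = (1 - beta)*z(t)                   # decaying self-feedback
# While z is large the map is chaotic (search phase); as z decays the
# dynamics settle toward a fixed point, as in a Hopfield network.

k, eps, I0, a = 0.9, 0.004, 0.65, 0.0
beta, z, y = 0.003, 0.08, 0.1
xs = []
for t in range(2000):
    x = 1.0 / (1.0 + np.exp(-y / eps))
    y = k * y - z * (x - I0) + a
    z = (1.0 - beta) * z
    xs.append(x)
print(xs[:5], xs[-1])  # erratic early outputs, then convergence
```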
Alex Nugent - One of the best experts on this subject based on the ideXlab platform.
-
Adaptive Neural Network Utilizing Nanotechnology-Based Components
2008
Co-Authors: Alex Nugent
Abstract: Methods and systems for modifying at least one synapse of a physical/electromechanical neural network. A physical/electromechanical neural network implemented as an adaptive neural network can be provided, which includes one or more neurons and one or more synapses thereof, wherein the neurons and synapses are formed from a plurality of nanoparticles disposed within a dielectric solution in association with one or more pre-synaptic electrodes, one or more post-synaptic electrodes, and an applied electric field. At least one pulse can be generated from one or more of the neurons to one or more of the pre-synaptic electrodes of a succeeding neuron and to one or more post-synaptic electrodes of one or more of the neurons of the physical/electromechanical neural network, thereby strengthening at least one nanoparticle of the plurality of nanoparticles disposed within the dielectric solution, and at least one synapse thereof.
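The patent describes a physical mechanism, but its plasticity rule can be paraphrased as a toy software analogy: coincident pre- and post-synaptic pulses strengthen a connection (field-driven nanoparticle alignment), while connections otherwise relax slowly. Everything in the sketch below, including the constants and pulse probabilities, is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
w = 0.1                            # connection strength
gain, decay = 0.05, 0.001
for _ in range(1000):
    pre = rng.random() < 0.3       # pre-synaptic pulse present?
    post = rng.random() < 0.3      # post-synaptic pulse present?
    if pre and post:
        w += gain * (1.0 - w)      # coincident pulses strengthen (saturating)
    else:
        w -= decay * w             # otherwise the connection slowly weakens
print(w)
```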
-
Multilayer Training in a Physical Neural Network Formed Utilizing Nanotechnology
2008
Co-Authors: Alex Nugent
Abstract: A method and system for training a connection network located between neuron layers within a multi-layer physical neural network. A multi-layer physical neural network can be formed having a plurality of inputs and a plurality of outputs thereof, wherein the multi-layer physical neural network comprises a plurality of layers and each layer comprises one or more connection networks and associated neurons. Thereafter, a training wave can be initiated across the connection networks associated with an initial layer of the multi-layer physical neural network, which thereafter propagates through the succeeding connection networks of succeeding layers by successively closing and opening switches associated with each layer. One or more feedback signals thereof can be automatically provided to strengthen or weaken the nanoconnections associated with each connection network.
-
Solution-Based Apparatus of an Artificial Neural Network Formed Utilizing Nanotechnology
2005
Co-Authors: Alex Nugent
Abstract: An apparatus for maintaining components in a neural network formed utilizing nanotechnology is described herein. A connection gap can be formed between two terminals. A solution with a melting point at approximately room temperature can be provided, wherein the solution is maintained in the connection gap and comprises a plurality of nanoparticles forming nanoconnections thereof having connection strengths thereof, and wherein the solution and the connection gap are adapted for use with a neural network formed utilizing nanotechnology, such that when power is removed from the neural network, the solution freezes, thereby locking the connection strengths into place.
-
Pattern Recognition Utilizing a Nanotechnology-Based Neural Network
2005
Co-Authors: Alex Nugent
Abstract: A pattern recognition system comprising a neural network formed utilizing nanotechnology and a pattern input unit that communicates with the neural network, wherein the neural network processes data input via the pattern input unit in order to recognize data patterns thereof. Such a pattern recognition system can be implemented in the context of a speech recognition system and/or other pattern recognition systems, such as visual and/or imaging recognition systems.
-
Physical Neural Network Liquid State Machine Utilizing Nanotechnology
2003
Co-Authors: Alex Nugent
Abstract: A physical neural network is disclosed which comprises a liquid state machine. The physical neural network is configured from molecular connections located within a dielectric solvent between pre-synaptic and post-synaptic electrodes thereof, such that the molecular connections are strengthened or weakened according to the application of an electric field, or a frequency thereof, to provide physical neural network connections thereof. A supervised learning mechanism is associated with the liquid state machine, whereby the connection strengths of the molecular connections are determined by pre-synaptic and post-synaptic activity respectively associated with the pre-synaptic and post-synaptic electrodes, wherein the liquid state machine comprises a dynamic fading-memory mechanism.
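In software terms, a liquid state machine replaces the patent's physical "liquid" with a fixed random recurrent reservoir that has fading memory; only a simple linear readout is trained, here by least squares on a short-memory task. The sketch below is a generic echo-state-style illustration of that idea, with all sizes and the task chosen as assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_res = 500, 100
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1: fading memory
w_in = rng.standard_normal(n_res)

u = rng.standard_normal(T)                  # input stream
target = np.roll(u, 3)                      # task: recall the input 3 steps ago

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])        # fixed, untrained reservoir
    states[t] = x

# Train only the linear readout by least squares (skip the warm-up).
w_out, *_ = np.linalg.lstsq(states[10:], target[10:], rcond=None)
pred = states[10:] @ w_out
print(np.corrcoef(pred, target[10:])[0, 1])  # close to 1 on this memory task
```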