Neural Network - Explore the Science & Experts | ideXlab


Neural Network

The Experts below are selected from a list of 574,245 Experts worldwide, ranked by the ideXlab platform.


Jun Wang – One of the best experts on this subject based on the ideXlab platform.

  • a one layer recurrent Neural Network for constrained nonsmooth optimization
    IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2011
    Co-Authors: Jun Wang

    This paper presents a novel one-layer recurrent Neural Network, modeled by a differential inclusion, for solving nonsmooth optimization problems; the number of neurons in the proposed Neural Network equals the number of decision variables of the optimization problem. Compared with existing Neural Networks for nonsmooth optimization, the global convexity condition on the objective functions and constraints is relaxed, allowing both to be nonconvex. It is proven that the state variables of the proposed Neural Network converge to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed Neural Network.
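As a rough illustration of this class of neurodynamic solvers (not the paper's exact differential-inclusion model), the sketch below simulates a projected subgradient flow for a small nonsmooth problem with Euler steps; the objective, box constraint, and all parameter values are illustrative assumptions:

```python
import numpy as np

def subgrad(x):
    # A subgradient of the nonsmooth objective f(x) = |x[0] - 1| + |x[1] + 2|
    return np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 2.0)])

def project(x, lo=-5.0, hi=5.0):
    # Projection onto the feasible box [lo, hi]^2
    return np.clip(x, lo, hi)

def solve(x0, sigma=2.0, dt=0.01, steps=5000):
    # Euler simulation of the projected subgradient flow
    #   dx/dt in -sigma * df(x), with x(t) kept inside the feasible box;
    # sigma plays the role of the single design parameter in the abstract.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project(x - dt * sigma * subgrad(x))
    return x

x_star = solve([4.0, 4.0])
print(x_star)  # lands in a small neighborhood of the minimizer (1, -2)
```

The state dimension equals the number of decision variables (two here), mirroring the one-neuron-per-variable structure claimed in the abstract.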

  • a recurrent Neural Network for solving sylvester equation with time varying coefficients
    IEEE Transactions on Neural Networks, 2002
    Co-Authors: Yunong Zhang, Danchi Jiang, Jun Wang

    This paper presents a recurrent Neural Network for solving the Sylvester equation with time-varying coefficient matrices. The recurrent Neural Network with implicit dynamics is deliberately designed so that its trajectory is guaranteed to converge exponentially to the time-varying solution of a given Sylvester equation. Theoretical convergence and sensitivity analyses are presented to show the desirable properties of the recurrent Neural Network. Simulation results on time-varying matrix inversion and on online nonlinear output regulation via pole assignment for the ball-and-beam system and the inverted pendulum on a cart are also included to demonstrate the effectiveness and performance of the proposed Neural Network.
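A minimal sketch of the underlying idea, using a simplified gradient-flow network on a constant-coefficient Sylvester equation A X + X B = C as a stand-in for the paper's time-varying implicit dynamics; the matrices and gains below are illustrative assumptions:

```python
import numpy as np

# Constant-coefficient stand-in problem A X + X B = C with a known solution.
A = np.array([[2.0, 0.0], [1.0, 3.0]])
B = np.array([[1.0, 0.5], [0.0, 2.0]])
X_true = np.array([[1.0, -1.0], [0.5, 2.0]])
C = A @ X_true + X_true @ B  # build a consistent right-hand side

def simulate(gamma=5.0, dt=0.001, steps=20000):
    # Euler simulation of the gradient-descent dynamics that drive the
    # residual E(X) = A X + X B - C toward zero:
    #   dX/dt = -gamma * (A' E + E B')
    X = np.zeros_like(C)
    for _ in range(steps):
        E = A @ X + X @ B - C
        X = X - dt * gamma * (A.T @ E + E @ B.T)
    return X

X = simulate()
print(np.linalg.norm(A @ X + X @ B - C))  # residual driven near zero
```

Because the eigenvalue sums of A and B are all positive here, the Sylvester operator is nonsingular and the flow converges to the unique solution; the paper's design additionally tracks time-varying coefficients exponentially.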

  • a multilayer recurrent Neural Network for solving continuous time algebraic riccati equations
    Neural Networks, 1998
    Co-Authors: Jun Wang, Guang Wu

    A multilayer recurrent Neural Network is proposed for solving continuous-time algebraic matrix Riccati equations in real time. The proposed recurrent Neural Network consists of four bidirectionally connected layers. Each layer consists of an array of neurons. The proposed recurrent Neural Network is shown to be capable of solving algebraic Riccati equations and synthesizing linear-quadratic control systems in real time. Analytical results on stability of the recurrent Neural Network and solvability of algebraic Riccati equations by use of the recurrent Neural Network are discussed. The operating characteristics of the recurrent Neural Network are also demonstrated through three illustrative examples.
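As a hedged sketch (not the paper's four-layer architecture), the dynamics below descend the squared Frobenius norm of the Riccati residual F(P) = AᵀP + PA − PSP + Q, where S = BBᵀ; the system matrices and gains are illustrative assumptions:

```python
import numpy as np

# Illustrative stable system; S = B B^T with B = I, and Q = I.
A = np.diag([-1.0, -2.0])
S = np.eye(2)
Q = np.eye(2)

def residual(P):
    # Continuous-time algebraic Riccati residual F(P) = A'P + PA - PSP + Q
    return A.T @ P + P @ A - P @ S @ P + Q

def simulate(gamma=1.0, dt=0.01, steps=10000):
    # Euler simulation of the gradient flow on (1/2)||F(P)||_F^2:
    #   dP/dt = -gamma * [ (A - SP) F + F (A - SP)' ]
    P = np.zeros((2, 2))
    for _ in range(steps):
        F = residual(P)
        Acl = A - S @ P              # closed-loop matrix A - S P
        grad = Acl @ F + F @ Acl.T   # gradient of (1/2)||F||_F^2
        P = P - dt * gamma * grad
    return P

P = simulate()
print(np.linalg.norm(residual(P)))  # residual driven near zero
```

For this diagonal example the flow converges to the positive definite solution P = diag(√2 − 1, √5 − 2), from which a linear-quadratic state feedback u = −BᵀPx could be synthesised.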

Kevin Warwick – One of the best experts on this subject based on the ideXlab platform.

  • dynamic recurrent Neural Network for system identification and control
    IEE Proceedings – Control Theory and Applications, 1995
    Co-Authors: A Delgado, C Kambhampati, Kevin Warwick

    A dynamic recurrent Neural Network (DRNN), which can be viewed as a generalisation of the Hopfield Neural Network, is proposed to identify and control a class of control-affine systems. In this approach, the identified Network is used within a differential-geometric control framework to synthesise a state feedback that cancels the nonlinear terms of the plant, yielding a linear plant that can then be controlled using a standard PID controller.
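The cancellation idea can be sketched on a scalar control-affine plant x' = f(x) + g(x)u: with an identified model of f and g (here simple hand-picked stand-ins for the DRNN's learned terms), the feedback u = (v − f(x))/g(x) cancels the nonlinearity, leaving x' = v for a linear outer-loop controller. All functions and gains below are illustrative assumptions:

```python
import numpy as np

def f(x):
    # Stand-in for the DRNN-identified drift term of the plant
    return -x + 0.5 * np.sin(x)

def g(x):
    # Stand-in for the identified input gain (assumed bounded away from zero)
    return 1.0 + 0.1 * np.cos(x)

def simulate(x0=2.0, x_ref=0.5, k=3.0, dt=0.001, steps=5000):
    x = x0
    for _ in range(steps):
        v = -k * (x - x_ref)            # linear control on the linearised plant
        u = (v - f(x)) / g(x)           # cancel the nonlinear terms
        x = x + dt * (f(x) + g(x) * u)  # plant step; exactly x' = v here,
        # since the model matches the plant and cancellation is exact
    return x

print(simulate())  # state settles at the reference x_ref = 0.5
```

A proportional outer loop stands in for the PID controller of the abstract; in practice model mismatch between the DRNN and the plant makes the cancellation only approximate.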

Huazhong Yang – One of the best experts on this subject based on the ideXlab platform.

  • large scale recurrent Neural Network on gpu
    International Joint Conference on Neural Networks, 2014
    Co-Authors: Boxun Li, Erjin Zhou, Bo Huang, Jiayi Duan, Yu Wang, Ningyi Xu, Jiaxing Zhang, Huazhong Yang

    Large-scale artificial Neural Networks (ANNs) have been widely used in data processing applications. The recurrent Neural Network (RNN) is a special type of Neural Network equipped with additional recurrent connections. This unique architecture enables the recurrent Neural Network to remember past processed information and makes it an expressive model for nonlinear sequence processing tasks. However, its high computational complexity makes the recurrent Neural Network difficult to train effectively, which significantly limited research on it over the last 20 years. In recent years, graphics processing units (GPUs) have become a significant means of speeding up the training of large-scale Neural Networks by exploiting their massive parallelism. In this paper, we propose an efficient GPU implementation of the large-scale recurrent Neural Network and demonstrate the power of scaling up the recurrent Neural Network with GPUs. We first explore the potential parallelism of the recurrent Neural Network and propose a fine-grained two-stage pipeline implementation. Experimental results show that the proposed GPU implementation achieves a 2x to 11x speedup over a basic CPU implementation using the Intel Math Kernel Library. We then use the proposed GPU implementation to scale up the recurrent Neural Network and improve its performance. Results on the Microsoft Research Sentence Completion Challenge demonstrate that a large-scale recurrent Network without a class layer is able to beat a traditional class-based modest-size recurrent Network and achieve an accuracy of 47%, the best result achieved by a single recurrent Neural Network on this dataset.
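A NumPy sketch of a vanilla RNN forward pass illustrates why the recurrence is the bottleneck the paper's pipeline targets: the input projection can be computed for all timesteps in one batched matmul, while the hidden-state chain must run sequentially. Shapes and weights here are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, B, D, H = 20, 8, 16, 32           # timesteps, batch, input dim, hidden dim
X = rng.standard_normal((T, B, D))   # input sequence
U = rng.standard_normal((D, H)) * 0.1  # input-to-hidden weights
W = rng.standard_normal((H, H)) * 0.1  # recurrent hidden-to-hidden weights

def rnn_forward(X, U, W):
    # Stage 1: input projections for all timesteps at once (parallel-friendly)
    inp = X @ U
    # Stage 2: the recurrence h_t = tanh(h_{t-1} W + U x_t) is sequential
    h = np.zeros((X.shape[1], W.shape[0]))
    outs = []
    for t in range(X.shape[0]):
        h = np.tanh(h @ W + inp[t])
        outs.append(h)
    return np.stack(outs)

H_seq = rnn_forward(X, U, W)
print(H_seq.shape)  # (20, 8, 32)
```

On a GPU, stage 1 maps directly onto large GEMMs, while stage 2 is where a fine-grained pipeline (overlapping work across the sequential chain) pays off, as the abstract describes.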