Neural Net

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 30534 Experts worldwide ranked by ideXlab platform

Frank L. Lewis - One of the best experts on this subject based on the ideXlab platform.

  • Discrete-time Neural Net controller for a class of nonlinear dynamical systems
    IEEE Transactions on Automatic Control, 1996
    Co-Authors: S Jagannathan, Frank L. Lewis
    Abstract:

    A family of two-layer discrete-time Neural Net (NN) controllers is presented for the control of a class of mnth-order MIMO dynamical systems. No initial learning phase is needed, so the control action is immediate; in other words, the Neural Network (NN) controller exhibits a learning-while-functioning feature instead of a learning-then-control feature. A two-layer NN is used which is linear in the tunable weights. The structure of the Neural Net controller is derived using a filtered error approach. It is shown that delta-rule-based tuning, when employed for closed-loop control, can yield unbounded NN weights if: 1) the Net cannot exactly reconstruct a certain required function, or 2) there are bounded unknown disturbances acting on the dynamical system. Certainty equivalence is not used, overcoming a major problem in discrete-time adaptive control. New online tuning algorithms for discrete-time systems are derived which are similar to ε-modification for continuous-time systems; they include a modification to the learning-rate parameter and a correction term to the standard delta rule.
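The modified delta rule described above can be sketched numerically. This is a minimal scalar illustration, assuming a constant regressor, a constant disturbance-induced error, and invented gains; it is not the paper's actual update law.

```python
# Sketch of the delta rule with an epsilon-modification correction
# term. The constant regressor, the constant disturbance-induced
# error, and all gains are illustrative assumptions, not the paper's
# actual signals or update law.

def delta_rule_step(w, phi, e, alpha=0.1):
    """Standard delta rule: w <- w + alpha*phi*e."""
    return w + alpha * phi * e

def eps_mod_step(w, phi, e, alpha=0.1, eps=0.05):
    """Delta rule plus the leak -eps*|e|*w, which bounds the weight
    when a bounded disturbance keeps the error from vanishing."""
    return w + alpha * phi * e - eps * abs(e) * w

w_plain = w_mod = 0.0
for _ in range(2000):
    phi, e = 1.0, 0.5   # regressor and residual error (held constant here)
    w_plain = delta_rule_step(w_plain, phi, e)
    w_mod = eps_mod_step(w_mod, phi, e)

print(round(w_plain, 1), round(w_mod, 3))  # plain weight drifts; leaky one settles
```

The plain update accumulates indefinitely under the same bounded error, which is exactly the parameter-drift failure the abstract describes; the correction term stops the drift at a bounded value.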

  • Multilayer Neural-Net robot controller with guaranteed tracking performance
    IEEE Transactions on Neural Networks, 1996
    Co-Authors: Frank L. Lewis, Aydin Yeşildirek, Kai Liu
    Abstract:

    A multilayer Neural-Net (NN) controller for a general serial-link rigid robot arm is developed. The structure of the NN controller is derived using a filtered error/passivity approach. No off-line learning phase is needed for the proposed NN controller and the weights are easily initialized. The nonlinear nature of the NN, plus NN functional reconstruction inaccuracies and robot disturbances, means that the standard delta rule with backpropagation tuning does not suffice for closed-loop dynamic control. Novel online weight tuning algorithms, including correction terms to the delta rule plus an added robust signal, guarantee bounded tracking errors as well as bounded NN weights. Specific bounds are determined, and the tracking error bound can be made arbitrarily small by increasing a certain feedback gain. The correction terms involve a second-order forward-propagated wave in the backpropagation Network. New NN properties including the notions of a passive NN, a dissipative NN, and a robust NN are introduced.
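The filtered-error control structure and correction-term tuning described above can be sketched on a one-link arm. Everything below (the plant, the single linear regressor standing in for the NN, and every gain) is an illustrative assumption, not the paper's controller or proofs.

```python
import math

# One-link arm sketch of a filtered-error NN-style controller.
# Plant: q'' = u - c*q' with unknown damping c. A single linear
# "weight times regressor" term stands in for the NN, and the
# e-modification-style leak -kappa*|r|*w keeps the weight bounded.
# All gains and signals are illustrative assumptions.

c_true = 2.0           # unknown plant damping to be learned
Kv, lam = 20.0, 5.0    # outer tracking-loop gains
F, kappa = 10.0, 0.1   # adaptation gain and correction-term leak
dt, T = 1e-3, 10.0

q = qdot = 0.0
w = 0.0                # NN weight, conveniently initialized at zero
for k in range(int(T / dt)):
    t = k * dt
    qd, qd_dot = math.sin(t), math.cos(t)   # desired trajectory
    e = qd - q
    r = (qd_dot - qdot) + lam * e           # filtered tracking error
    sigma = qdot                            # regressor ("basis function")
    u = w * sigma + Kv * r                  # NN term + outer tracking loop
    # weight tuning: gradient-style term plus correction -kappa*|r|*w
    w += dt * (F * sigma * r - kappa * F * abs(r) * w)
    qdot += dt * (u - c_true * qdot)        # Euler-integrate the plant
    q += dt * qdot

print(round(abs(e), 3), round(w, 2))  # small tracking error, bounded weight
```

The weight starts at zero, matching the abstract's note that the weights are easily initialized, and the leak keeps it bounded even when the regressor cannot reconstruct the dynamics exactly.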

  • Multilayer discrete-time Neural Net controller with guaranteed performance
    IEEE Transactions on Neural Networks, 1996
    Co-Authors: S Jagannathan, Frank L. Lewis
    Abstract:

    A family of novel multilayer discrete-time Neural-Net (NN) controllers is presented for the control of a class of multi-input multi-output (MIMO) dynamical systems. The Neural Net controller includes modified delta-rule weight tuning and exhibits a learning-while-functioning feature. The structure of the NN controller is derived using a filtered error/passivity approach. Linearity in the parameters is not required and certainty equivalence is not used; this overcomes several limitations of standard adaptive control. The notion of persistency of excitation (PE) for multilayer NN is defined and explored. New improved online tuning algorithms for discrete-time systems are derived which are similar to σ- or ε-modification for continuous-time systems; they include a modification to the learning-rate parameter plus a correction term. These algorithms guarantee tracking as well as bounded NN weights in nonideal situations, so that PE is not needed. An extension of these novel weight-tuning updates to NN with an arbitrary number of hidden layers is discussed. The notions of discrete-time passive NN, dissipative NN, and robust NN are introduced. The NN makes the closed-loop system passive.
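A σ-modification update can be sketched as follows; the gains and the zero-excitation scenario are illustrative assumptions. Unlike an error-weighted leak, the σ term acts even when the error is zero, which is why bounded weights do not depend on persistency of excitation.

```python
# Sketch of a sigma-modification weight update. The -sig*w leak acts
# even when the error is zero, so the weight stays bounded (and
# decays) with no persistency of excitation. Gains and the
# zero-excitation run are illustrative assumptions.

def sigma_mod_step(w, phi, e, alpha=0.1, sig=0.01):
    """Delta rule plus a constant leak toward zero."""
    return w + alpha * phi * e - sig * w

w = 5.0                                      # weight left at some drifted value
for _ in range(1000):
    w = sigma_mod_step(w, phi=0.0, e=0.0)    # no excitation at all
print(w)                                     # decays toward zero
```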

  • Neural Net robot controller with guaranteed tracking performance
    IEEE Transactions on Neural Networks, 1995
    Co-Authors: Frank L. Lewis, Aydin Yeşildirek
    Abstract:

    A Neural Net (NN) controller for a general serial-link robot arm is developed. The NN has two layers so that linearity in the parameters holds, but the "Net functional reconstruction error" and robot disturbance input are taken as nonzero. The structure of the NN controller is derived using a filtered error/passivity approach, leading to new NN passivity properties. Online weight tuning algorithms including a correction term to backpropagation, plus an added robustifying signal, guarantee tracking as well as bounded NN weights. The NN controller structure has an outer tracking loop so that the NN weights are conveniently initialized at zero, with learning occurring online in real time. It is shown that standard backpropagation, when used for real-time closed-loop control, can yield unbounded NN weights if (1) the Net cannot exactly reconstruct a certain required control function or (2) there are bounded unknown disturbances in the robot dynamics. The role of persistency of excitation is explored.

  • Neural Net robot controller with guaranteed tracking performance
    International Symposium on Intelligent Control, 1993
    Co-Authors: Frank L. Lewis, Aydin Yeşildirek
    Abstract:

    A Neural Net (NN) controller for a general serial-link robot arm is developed. The NN has two layers so that linearity in the parameters holds, but the "Net functional reconstruction error" is taken as nonzero. The structure of the NN controller is derived using a filtered error/passivity approach. It is shown that standard backpropagation, when used for real-time closed-loop control, can yield unbounded NN weights if (1) the Net cannot exactly reconstruct a certain required control function, or (2) there are bounded unknown disturbances in the robot dynamics. An online weight tuning algorithm including a correction term to backpropagation guarantees tracking as well as bounded weights. The notions of a passive NN and a robust NN are introduced.

Kurt Keutzer - One of the best experts on this subject based on the ideXlab platform.

  • Invited: Co-Design of Deep Neural Nets and Neural Net Accelerators for Embedded Vision Applications
    2018 55th ACM ESDA IEEE Design Automation Conference (DAC), 2018
    Co-Authors: Kiseok Kwon, Alon Amid, Amir Gholami, Krste Asanovic, Bichen Wu, Kurt Keutzer
    Abstract:

    Deep Learning is arguably the most rapidly evolving research area in recent years. As a result it is not surprising that the design of state-of-the-art deep Neural Net models proceeds without much consideration of the latest hardware targets, and the design of Neural Net accelerators proceeds without much consideration of the characteristics of the latest deep Neural Net models. Nevertheless, in this paper we show that there are significant improvements available if deep Neural Net models and Neural Net accelerators are co-designed.

  • Co-Design of Deep Neural Nets and Neural Net Accelerators for Embedded Vision Applications
    arXiv: Distributed Parallel and Cluster Computing, 2018
    Co-Authors: Kiseok Kwon, Alon Amid, Amir Gholami, Krste Asanovic, Kurt Keutzer
    Abstract:

    Deep Learning is arguably the most rapidly evolving research area in recent years. As a result it is not surprising that the design of state-of-the-art deep Neural Net models proceeds without much consideration of the latest hardware targets, and the design of Neural Net accelerators proceeds without much consideration of the characteristics of the latest deep Neural Net models. Nevertheless, in this paper we show that there are significant improvements available if deep Neural Net models and Neural Net accelerators are co-designed.

  • Keynote: Small Neural Nets Are Beautiful: Enabling Embedded Systems with Small Deep-Neural-Network Architectures
    International Conference on Hardware Software Codesign and System Synthesis, 2017
    Co-Authors: Forrest Iandola, Kurt Keutzer
    Abstract:

    Over the last five years Deep Neural Nets have offered more accurate solutions to many problems in speech recognition and computer vision, and these solutions have surpassed a threshold of acceptability for many applications. As a result, Deep Neural Networks have supplanted other approaches to solving problems in these areas, and enabled many new applications. While the design of Deep Neural Nets is still something of an art form, in our work we have found basic principles of design space exploration used to develop embedded microprocessor architectures to be highly applicable to the design of Deep Neural Net architectures. In particular, we have used these design principles to create a novel Deep Neural Net called SqueezeNet that requires only 480KB of storage for its model parameters. We have further integrated all these experiences to develop something of a playbook for creating small Deep Neural Nets for embedded systems.
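The storage savings behind a small architecture like SqueezeNet come largely from its "fire" module, which squeezes the channel count with 1x1 filters before applying 3x3 filters. The arithmetic can be checked directly; the channel counts below are a plausible configuration chosen for illustration.

```python
# Weight counting for a SqueezeNet-style "fire" module (a squeeze 1x1
# layer feeding parallel expand 1x1 and 3x3 layers) versus a plain
# 3x3 convolution. Channel counts are chosen for illustration.

def conv_params(in_ch, out_ch, k):
    """Weights in a k x k convolution; biases omitted for clarity."""
    return in_ch * out_ch * k * k

def fire_params(in_ch, squeeze_ch, expand_ch):
    """Squeeze layer, then parallel 1x1 and 3x3 expand layers."""
    return (conv_params(in_ch, squeeze_ch, 1)
            + conv_params(squeeze_ch, expand_ch, 1)
            + conv_params(squeeze_ch, expand_ch, 3))

plain = conv_params(128, 128, 3)   # 128*128*9 = 147456 weights
fire = fire_params(128, 16, 64)    # 2048 + 1024 + 9216 = 12288 weights
print(plain, fire, plain // fire)  # the fire module is ~12x smaller
```

Here the fire module produces the same 128 output channels (64 + 64 from the two expand layers) as the plain convolution, with roughly a twelfth of the weights.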

  • Keynote: Small Neural Nets Are Beautiful: Enabling Embedded Systems with Small Deep-Neural-Network Architectures
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Forrest Iandola, Kurt Keutzer
    Abstract:

    Over the last five years Deep Neural Nets have offered more accurate solutions to many problems in speech recognition and computer vision, and these solutions have surpassed a threshold of acceptability for many applications. As a result, Deep Neural Networks have supplanted other approaches to solving problems in these areas, and enabled many new applications. While the design of Deep Neural Nets is still something of an art form, in our work we have found basic principles of design space exploration used to develop embedded microprocessor architectures to be highly applicable to the design of Deep Neural Net architectures. In particular, we have used these design principles to create a novel Deep Neural Net called SqueezeNet that requires as little as 480KB of storage for its model parameters. We have further integrated all these experiences to develop something of a playbook for creating small Deep Neural Nets for embedded systems.

S Jagannathan - One of the best experts on this subject based on the ideXlab platform.

  • Discrete-time Neural Net controller for a class of nonlinear dynamical systems
    IEEE Transactions on Automatic Control, 1996
    Co-Authors: S Jagannathan, Frank L. Lewis
    Abstract:

    A family of two-layer discrete-time Neural Net (NN) controllers is presented for the control of a class of mnth-order MIMO dynamical systems. No initial learning phase is needed, so the control action is immediate; in other words, the Neural Network (NN) controller exhibits a learning-while-functioning feature instead of a learning-then-control feature. A two-layer NN is used which is linear in the tunable weights. The structure of the Neural Net controller is derived using a filtered error approach. It is shown that delta-rule-based tuning, when employed for closed-loop control, can yield unbounded NN weights if: 1) the Net cannot exactly reconstruct a certain required function, or 2) there are bounded unknown disturbances acting on the dynamical system. Certainty equivalence is not used, overcoming a major problem in discrete-time adaptive control. New online tuning algorithms for discrete-time systems are derived which are similar to ε-modification for continuous-time systems; they include a modification to the learning-rate parameter and a correction term to the standard delta rule.

  • Multilayer discrete-time Neural Net controller with guaranteed performance
    IEEE Transactions on Neural Networks, 1996
    Co-Authors: S Jagannathan, Frank L. Lewis
    Abstract:

    A family of novel multilayer discrete-time Neural-Net (NN) controllers is presented for the control of a class of multi-input multi-output (MIMO) dynamical systems. The Neural Net controller includes modified delta-rule weight tuning and exhibits a learning-while-functioning feature. The structure of the NN controller is derived using a filtered error/passivity approach. Linearity in the parameters is not required and certainty equivalence is not used; this overcomes several limitations of standard adaptive control. The notion of persistency of excitation (PE) for multilayer NN is defined and explored. New improved online tuning algorithms for discrete-time systems are derived which are similar to σ- or ε-modification for continuous-time systems; they include a modification to the learning-rate parameter plus a correction term. These algorithms guarantee tracking as well as bounded NN weights in nonideal situations, so that PE is not needed. An extension of these novel weight-tuning updates to NN with an arbitrary number of hidden layers is discussed. The notions of discrete-time passive NN, dissipative NN, and robust NN are introduced. The NN makes the closed-loop system passive.

Forrest Iandola - One of the best experts on this subject based on the ideXlab platform.

  • Keynote: Small Neural Nets Are Beautiful: Enabling Embedded Systems with Small Deep-Neural-Network Architectures
    International Conference on Hardware Software Codesign and System Synthesis, 2017
    Co-Authors: Forrest Iandola, Kurt Keutzer
    Abstract:

    Over the last five years Deep Neural Nets have offered more accurate solutions to many problems in speech recognition and computer vision, and these solutions have surpassed a threshold of acceptability for many applications. As a result, Deep Neural Networks have supplanted other approaches to solving problems in these areas, and enabled many new applications. While the design of Deep Neural Nets is still something of an art form, in our work we have found basic principles of design space exploration used to develop embedded microprocessor architectures to be highly applicable to the design of Deep Neural Net architectures. In particular, we have used these design principles to create a novel Deep Neural Net called SqueezeNet that requires only 480KB of storage for its model parameters. We have further integrated all these experiences to develop something of a playbook for creating small Deep Neural Nets for embedded systems.

  • Keynote: Small Neural Nets Are Beautiful: Enabling Embedded Systems with Small Deep-Neural-Network Architectures
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Forrest Iandola, Kurt Keutzer
    Abstract:

    Over the last five years Deep Neural Nets have offered more accurate solutions to many problems in speech recognition and computer vision, and these solutions have surpassed a threshold of acceptability for many applications. As a result, Deep Neural Networks have supplanted other approaches to solving problems in these areas, and enabled many new applications. While the design of Deep Neural Nets is still something of an art form, in our work we have found basic principles of design space exploration used to develop embedded microprocessor architectures to be highly applicable to the design of Deep Neural Net architectures. In particular, we have used these design principles to create a novel Deep Neural Net called SqueezeNet that requires as little as 480KB of storage for its model parameters. We have further integrated all these experiences to develop something of a playbook for creating small Deep Neural Nets for embedded systems.

Lan Chau Lee - One of the best experts on this subject based on the ideXlab platform.

  • Neural Net Classification of X-ray Pistachio Nut Data
    LWT - Food Science and Technology, 1998
    Co-Authors: David A. Casasent, Michael A. Sipe, Thomas F. Schatzki, Pamela M. Keagy, Lan Chau Lee
    Abstract:

    Classification results for agricultural products are presented using a new Neural Network. This Neural Network inherently produces higher-order decision surfaces, and it achieves this with fewer hidden-layer neurons than other classifiers require, which gives better generalization. It uses new techniques to select the number of hidden-layer neurons and adaptive algorithms that avoid other such ad hoc parameter-selection problems, and it allows selection of the best classifier parameters without the need to analyse the test set results. The agricultural case study considered is the inspection and classification of pistachio nuts using X-ray imagery. Present inspection techniques cannot provide good rejection of worm-damaged nuts without rejecting too many good nuts. X-ray imagery has the potential to provide 100% inspection of such agricultural products in real time. Preliminary results presented indicate the potential to reduce major defects to 2% of the crop with only 1% of good nuts rejected, an improvement over present performance. Future image-processing techniques that should provide better features to improve performance and allow inspection of a larger variety of nuts are noted. These techniques and variations of them have uses in a number of other agricultural product inspection problems.
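The abstract's point that hidden-layer neurons produce higher-order decision surfaces can be seen on the classic XOR problem, which no single linear surface separates; the step activations and hand-chosen weights below are illustrative, not the paper's classifier.

```python
# Two hidden neurons give a nonlinear (higher-order) decision surface
# that a single linear unit cannot: the XOR function. Weights are
# hand-chosen for illustration, not learned.

def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # fires when at least one input is 1
    h2 = step(x1 + x2 - 1.5)          # fires only when both inputs are 1
    return step(h1 - 2 * h2 - 0.5)    # output: h1 AND NOT h2

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```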