Nonlinear Networks

The Experts below are selected from a list of 64,581 Experts worldwide, ranked by the ideXlab platform.

Leon O Chua - One of the best experts on this subject based on the ideXlab platform.

  • Flux-Charge Analysis Method of Memristor Circuits
    2021
    Co-Authors: Fernando Corinto, Mauro Forti, Leon O Chua
    Abstract:

    Let us consider a relevant class of Nonlinear Networks, denoted by \(\mathcal{L}\mathcal{M}\), containing at least one memristor in addition to ideal (linear) resistors, inductors, capacitors, and independent voltage or current sources. Thus, \(\mathcal{L}\mathcal{M}\) describes Nonlinear dynamic Networks including ideal memristors.
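
    For context, here is the standard flux-charge notation behind this class of circuits, sketched from the usual textbook definitions (background only, not the paper's derivation): an ideal memristor is specified by a constitutive relation in the charge-flux plane,

    \[ \varphi(t) = \int_{-\infty}^{t} v(\tau)\,d\tau, \qquad q(t) = \int_{-\infty}^{t} i(\tau)\,d\tau, \qquad q_M = \hat{q}(\varphi_M), \]

    so that \(i_M = \frac{d\hat{q}}{d\varphi_M}\, v_M = W(\varphi_M)\, v_M\), where \(W(\varphi_M)\) is the memductance. Rewriting Kirchhoff's laws of a network in \(\mathcal{L}\mathcal{M}\) in the \((\varphi, q)\) domain turns the ideal memristor into an algebraic element, with initial conditions entering only as constants of integration, which is what makes a systematic flux-charge analysis possible.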

  • Theoretical Foundations of Memristor Cellular Nonlinear Networks: A DRM2-Based Method to Design Memcomputers with Dynamic Memristors
    IEEE Transactions on Circuits and Systems I: Regular Papers, 2020
    Co-Authors: Alon Ascoli, Ronald Tetzlaff, Sungmo Kang, Leon O Chua
    Abstract:

    In the memristive version of a standard space-invariant Cellular Nonlinear Network, each cell accommodates one first-order non-volatile memristor in parallel with a capacitor. In case the resistance switching memory may only undergo almost-instantaneous switching transitions between two possible resistive states, acting at any time as either the on or the off resistor, the processing elements effectively operate as first-order dynamical systems, and the classical Dynamic Route Map technique may be applied to investigate their operating principles. On the contrary, in case the memristors experience smooth conductance changes, as when the bioinspired array implements memcomputing paradigms, each cell truly behaves as a second-order dynamical system. The recent extension of the Dynamic Route Map analysis tool to systems with two degrees of freedom constitutes a powerful technique to investigate the Nonlinear dynamics of memristive cellular Networks in these scenarios. This paper exploits this system-theoretic technique, called the Second-Order Dynamic Route Map, to introduce a novel systematic procedure to design memristive arrays in which a given memcomputing task is executed by ensuring that, depending upon the network inputs and initial conditions, the analogue dynamic routes of the states of the processing elements, namely capacitor voltages and memristor states, asymptotically converge toward pre-defined stable equilibria.
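
    As a side note, the first-order Dynamic Route Map mentioned above is easy to visualise; the sketch below uses a hypothetical cubic vector field (not the paper's memristive cell model) to show how stable equilibria are read off a plot of dx/dt against x:

```python
# A minimal Dynamic Route Map (DRM) sketch for a first-order cell: plot dx/dt
# against x and read off the flow direction. Equilibria are zero crossings;
# those crossed with negative slope are stable. The cubic f below is a
# hypothetical bistable example, not the cell model used in the paper.
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return x - x**3                     # equilibria at x = -1, 0, +1

x = np.linspace(-1.6, 1.6, 400)
plt.plot(x, f(x))
plt.axhline(0.0, linewidth=0.8)
plt.xlabel("state x")
plt.ylabel("dx/dt = f(x)")
# x = -1 and x = +1 are stable (negative slope), x = 0 is unstable: every
# trajectory converges to one of two states -- the bistability exploited in
# memcomputing cells.
plt.show()
```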

  • Theoretical Foundations of Memristor Cellular Nonlinear Networks: Memcomputing with Bistable-Like Memristors
    IEEE Transactions on Circuits and Systems I: Regular Papers, 2020
    Co-Authors: Ronald Tetzlaff, Alon Ascoli, I Messaris, Leon O Chua
    Abstract:

    This paper presents the theory of a novel memcomputing paradigm based upon a memristive version of standard Cellular Nonlinear Networks. The insertion of a nonvolatile memristor in the circuit of each cell endows the dynamic array with the capability to store and retrieve data into and from the resistance switching memories, obviating the current need for extra memory blocks. Choosing the parameters of each cell circuit so that the memristors may undergo solely sharp transitions between two states, each processing element may be approximately described at any time as one of two first-order systems. Under this assumption, the classical Dynamic Route Map may be employed to synthesise and analyse the data storage and retrieval genes. A new system-theoretic methodology, called the Second-Order Dynamic Route Map, is also introduced for the first time in this paper. This technique allows one to study the operating principles of arrays with second-order processing elements, as is the case in the proposed network if the set-up of cell circuit parameters induces analogue memristive dynamics. This paper shows how the novel tool may be adopted to investigate the operating mechanisms of a cellular array with second-order cells, which compute the element-wise logical OR between two binary images.
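
    For reference, the memcomputing task named at the end of the abstract has a one-line functional specification (this is only the target computation, not the analogue circuit that realises it):

```python
# Element-wise logical OR of two binary images: the reference computation the
# memristive array is designed to settle to. Example images are hypothetical.
import numpy as np

img_a = np.array([[0, 1], [1, 0]], dtype=bool)
img_b = np.array([[0, 0], [1, 1]], dtype=bool)
print(np.logical_or(img_a, img_b).astype(int))   # [[0 1]
                                                 #  [1 1]]
```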

  • Turing Patterns in Memristive Cellular Nonlinear Networks
    IEEE Transactions on Circuits and Systems, 2016
    Co-Authors: Arturo Buscarino, Claudia Corradino, L Fortuna, Mattia Frasca, Leon O Chua
    Abstract:

    The formation of ordered structures, in particular Turing patterns, in complex spatially extended systems has been observed in many different contexts, spanning from the natural sciences (chemistry, physics, and biology) to technology (mechanics and electronics). In this paper, it is shown that the use of memristors in a simple cell of a spatially extended circuit architecture allows us to design systems able to generate Turing patterns. In addition, the memristor parameters play a key role in the selection of the type and characteristics of the emerging pattern, which is also influenced by the initial conditions. The problem of finding the regions of parameters where Turing patterns may emerge in the proposed cellular architecture is solved analytically, and numerical results are shown to illustrate the system behavior with respect to its parameters.
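
    A generic reaction-diffusion simulation conveys the kind of pattern formation at stake; the sketch below uses the standard Gray-Scott model on a 2-D grid as a stand-in, not the memristive circuit analysed in the paper:

```python
# Minimal Gray-Scott reaction-diffusion simulation producing Turing-like
# spatial patterns. Parameters are a classic spot/stripe regime; the model is
# a generic illustration, unrelated to the paper's specific cell circuit.
import numpy as np

n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060
u = np.ones((n, n)); v = np.zeros((n, n))
u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50      # local perturbation seeds the pattern
v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def lap(a):                                  # 5-point Laplacian, periodic boundary
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(steps):
    uvv = u * v * v
    u += Du * lap(u) - uvv + F * (1 - u)
    v += Dv * lap(v) + uvv - (F + k) * v

# u now holds a stationary spatial pattern; visualise with, e.g.:
# import matplotlib.pyplot as plt; plt.imshow(u); plt.show()
```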

  • Awakening Dynamics via Passive Coupling and Synchronization Mechanism in Oscillatory Cellular Neural/Nonlinear Networks
    International Journal of Circuit Theory and Applications, 2008
    Co-Authors: Istvan Szatmari, Leon O Chua
    Abstract:

    We have studied the synchronization mechanism in locally coupled Nonlinear oscillators. Here, synchronization takes place by passive coupling based on a reaction–diffusion process. We compare this mechanism with basic synchronization techniques, showing their similarities and specific properties. In addition to synchronization, passive and local coupling can also ‘awaken’ non-oscillating cell circuits and trigger oscillation, provided that the cells are locally active. This result resembles Turing’s and Smale’s work showing that locally communicating simple elements can produce very different patterns even if the separate elements do not show any activity. This property is demonstrated for two second-order cells and also for a large ensemble of oscillatory cells. In the latter case, the network of oscillatory cells exhibits very sophisticated spatio-temporal waves, e.g. spiral waves.
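
    The passive-coupling mechanism can be caricatured with two diffusively coupled oscillators; the sketch below uses hypothetical van der Pol cells in place of the paper's cell model:

```python
# Two van der Pol oscillators with passive (diffusive) coupling through the
# state difference. With g = 0 the cells evolve independently; with g > 0
# their trajectories synchronize. Parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

mu, g = 1.0, 0.5

def rhs(t, s):
    x1, y1, x2, y2 = s
    dx1 = y1 + g * (x2 - x1)            # diffusive coupling acts only through
    dx2 = y2 + g * (x1 - x2)            # the difference of the cell states
    dy1 = mu * (1 - x1**2) * y1 - x1
    dy2 = mu * (1 - x2**2) * y2 - x2
    return [dx1, dy1, dx2, dy2]

sol = solve_ivp(rhs, (0.0, 50.0), [2.0, 0.0, -1.0, 1.0], max_step=0.01)
print("final mismatch |x1 - x2| =", abs(sol.y[0, -1] - sol.y[2, -1]))
```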

Surya Ganguli - One of the best experts on this subject based on the ideXlab platform.

  • Resurrecting the Sigmoid in Deep Learning Through Dynamical Isometry: Theory and Practice
    Neural Information Processing Systems, 2017
    Co-Authors: Jeffrey Pennington, Samuel S Schoenholz, Surya Ganguli
    Abstract:

    It is well known that weight initialization in deep Networks can have a dramatic impact on learning speed. For example, ensuring the mean squared singular value of a network's input-output Jacobian is O(1) is essential for avoiding exponentially vanishing or exploding gradients. Moreover, in deep linear Networks, ensuring that all singular values of the Jacobian are concentrated near 1 can yield a dramatic additional speed-up in learning; this is a property known as dynamical isometry. However, it is unclear how to achieve dynamical isometry in Nonlinear deep Networks. We address this question by employing powerful tools from free probability theory to analytically compute the entire singular value distribution of a deep network's input-output Jacobian. We explore the dependence of the singular value distribution on the depth of the network, the weight initialization, and the choice of Nonlinearity. Intriguingly, we find that ReLU Networks are incapable of dynamical isometry. On the other hand, sigmoidal Networks can achieve isometry, but only with orthogonal weight initialization. Moreover, we demonstrate empirically that deep Nonlinear Networks achieving dynamical isometry learn orders of magnitude faster than Networks that do not. Indeed, we show that properly-initialized deep sigmoidal Networks consistently outperform deep ReLU Networks. Overall, our analysis reveals that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning.
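
    The headline claim is easy to probe numerically; a minimal sketch (illustrative depth, width, and gain, not the paper's settings) comparing the Jacobian spectrum of an orthogonally initialized tanh network against a ReLU one:

```python
# Singular values of the input-output Jacobian of a deep network with
# orthogonal weight matrices: tanh with orthogonal weights stays far better
# conditioned (approximate dynamical isometry) than ReLU at the same depth.
import numpy as np

depth, width, gain = 50, 200, 1.05
rng = np.random.default_rng(0)

def orthogonal(n):
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))      # sign fix for a well-spread sample

def jacobian_singular_values(phi, phi_prime):
    x = rng.standard_normal(width)
    J = np.eye(width)
    for _ in range(depth):
        W = gain * orthogonal(width)
        h = W @ x
        J = np.diag(phi_prime(h)) @ W @ J    # chain rule through one layer
        x = phi(h)
    return np.linalg.svd(J, compute_uv=False)

sv_tanh = jacobian_singular_values(np.tanh, lambda h: 1 - np.tanh(h) ** 2)
sv_relu = jacobian_singular_values(lambda h: np.maximum(h, 0.0),
                                   lambda h: (h > 0).astype(float))
print("tanh:", sv_tanh.min(), "to", sv_tanh.max())
print("relu:", sv_relu.min(), "to", sv_relu.max())
```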

  • Resurrecting the Sigmoid in Deep Learning Through Dynamical Isometry: Theory and Practice
    arXiv: Learning, 2017
    Co-Authors: Jeffrey Pennington, Samuel S Schoenholz, Surya Ganguli
    Abstract:

    It is well known that the initialization of weights in deep neural Networks can have a dramatic impact on learning speed. For example, ensuring the mean squared singular value of a network's input-output Jacobian is $O(1)$ is essential for avoiding the exponential vanishing or explosion of gradients. The stronger condition that all singular values of the Jacobian concentrate near $1$ is a property known as dynamical isometry. For deep linear Networks, dynamical isometry can be achieved through orthogonal weight initialization and has been shown to dramatically speed up learning; however, it has remained unclear how to extend these results to the Nonlinear setting. We address this question by employing powerful tools from free probability theory to compute analytically the entire singular value distribution of a deep network's input-output Jacobian. We explore the dependence of the singular value distribution on the depth of the network, the weight initialization, and the choice of Nonlinearity. Intriguingly, we find that ReLU Networks are incapable of dynamical isometry. On the other hand, sigmoidal Networks can achieve isometry, but only with orthogonal weight initialization. Moreover, we demonstrate empirically that deep Nonlinear Networks achieving dynamical isometry learn orders of magnitude faster than Networks that do not. Indeed, we show that properly-initialized deep sigmoidal Networks consistently outperform deep ReLU Networks. Overall, our analysis reveals that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning.

  • Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks
    International Conference on Learning Representations, 2014
    Co-Authors: Andrew M Saxe, James L Mcclelland, Surya Ganguli
    Abstract:

    Despite the widespread practical success of deep learning methods, our theoretical understanding of the dynamics of learning in deep neural Networks remains quite sparse. We attempt to bridge the gap between the theory and practice of deep learning by systematically analyzing learning dynamics for the restricted case of deep linear neural Networks. Despite the linearity of their input-output map, such Networks have Nonlinear gradient descent dynamics on weights that change with the addition of each new hidden layer. We show that deep linear Networks exhibit Nonlinear learning phenomena similar to those seen in simulations of Nonlinear Networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions. We provide an analytical description of these phenomena by finding new exact solutions to the Nonlinear dynamics of deep learning. Our theoretical analysis also reveals the surprising finding that as the depth of a network approaches infinity, learning speed can nevertheless remain finite: for a special class of initial conditions on the weights, very deep Networks incur only a finite, depth-independent delay in learning speed relative to shallow Networks. We show that, under certain conditions on the training data, unsupervised pretraining can find this special class of initial conditions, while scaled random Gaussian initializations cannot. We further exhibit a new class of random orthogonal initial conditions on weights that, like unsupervised pretraining, enjoys depth-independent learning times. We further show that these initial conditions also lead to faithful propagation of gradients even in deep Nonlinear Networks, as long as they operate in a special regime known as the edge of chaos.
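
    The plateau-then-transition dynamics can be reproduced in a few lines; the toy below runs gradient descent on a depth-5 chain of scalar weights (a drastic simplification of the paper's matrix analysis):

```python
# Gradient descent on the product of five scalar weights fitting one target.
# With a small balanced initialisation the loss sits on a long plateau while
# the weights grow, then drops sharply -- the Nonlinear learning phenomenon
# described above. All numbers here are illustrative.
import numpy as np

target, lr, layers = 2.0, 0.01, 5
w = np.full(layers, 0.3)
for step in range(2001):
    err = np.prod(w) - target
    grad = np.array([err * np.prod(np.delete(w, i)) for i in range(layers)])
    w -= lr * grad
    if step % 250 == 0:
        print(f"step {step:4d}   loss {0.5 * err ** 2:.6f}")
```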

Ronald Tetzlaff - One of the best experts on this subject based on the ideXlab platform.

  • Theoretical Foundations of Memristor Cellular Nonlinear Networks: A DRM2-Based Method to Design Memcomputers with Dynamic Memristors
    IEEE Transactions on Circuits and Systems I: Regular Papers, 2020
    Co-Authors: Alon Ascoli, Ronald Tetzlaff, Sungmo Kang, Leon O Chua
    Abstract:

    In the memristive version of a standard space-invariant Cellular Nonlinear Network, each cell accommodates one first-order non-volatile memristor in parallel with a capacitor. In case the resistance switching memory may only undergo almost-instantaneous switching transitions between two possible resistive states, acting at any time as either the on or the off resistor, the processing elements effectively operate as first-order dynamical systems, and the classical Dynamic Route Map technique may be applied to investigate their operating principles. On the contrary, in case the memristors experience smooth conductance changes, as when the bioinspired array implements memcomputing paradigms, each cell truly behaves as a second-order dynamical system. The recent extension of the Dynamic Route Map analysis tool to systems with two degrees of freedom constitutes a powerful technique to investigate the Nonlinear dynamics of memristive cellular Networks in these scenarios. This paper exploits this system-theoretic technique, called the Second-Order Dynamic Route Map, to introduce a novel systematic procedure to design memristive arrays in which a given memcomputing task is executed by ensuring that, depending upon the network inputs and initial conditions, the analogue dynamic routes of the states of the processing elements, namely capacitor voltages and memristor states, asymptotically converge toward pre-defined stable equilibria.

  • Theoretical Foundations of Memristor Cellular Nonlinear Networks: Memcomputing with Bistable-Like Memristors
    IEEE Transactions on Circuits and Systems I: Regular Papers, 2020
    Co-Authors: Ronald Tetzlaff, Alon Ascoli, I Messaris, Leon O Chua
    Abstract:

    This paper presents the theory of a novel memcomputing paradigm based upon a memristive version of standard Cellular Nonlinear Networks. The insertion of a nonvolatile memristor in the circuit of each cell endows the dynamic array with the capability to store and retrieve data into and from the resistance switching memories, obviating the current need for extra memory blocks. Choosing the parameters of each cell circuit so that the memristors may undergo solely sharp transitions between two states, each processing element may be approximately described at any time as one of two first-order systems. Under this assumption, the classical Dynamic Route Map may be employed to synthesise and analyse the data storage and retrieval genes. A new system-theoretic methodology, called the Second-Order Dynamic Route Map, is also introduced for the first time in this paper. This technique allows one to study the operating principles of arrays with second-order processing elements, as is the case in the proposed network if the set-up of cell circuit parameters induces analogue memristive dynamics. This paper shows how the novel tool may be adopted to investigate the operating mechanisms of a cellular array with second-order cells, which compute the element-wise logical OR between two binary images.

  • An Improved Cellular Nonlinear Network Architecture for Binary and Grayscale Image Processing
    IEEE Transactions on Circuits and Systems II: Express Briefs, 2018
    Co-Authors: Jens Muller, Robert Wittig, Jan Muller, Ronald Tetzlaff
    Abstract:

    Cellular Nonlinear Networks (CNNs) constitute a very powerful paradigm for single-instruction/multiple-data computers with fine granularity. Analog and mixed-signal implementations have proven suitable for applications in high-speed image processing, robot control, medical signal processing, and many more. Digital emulations on field-programmable gate arrays (FPGAs), in particular, allow the development of general-purpose computers based on the CNN universal machine, with an inherently parallel structure, a high degree of flexibility, and superior computational precision. However, these emulations turn out to be inefficient for the execution of binary operations, which account for more than two-thirds of all processing steps in a typical CNN algorithm. In this contribution, we present an architecture for the emulation of CNNs that supports both fast, efficient processing of binary images and high computational accuracy when needed. With the FPGA implementation of this architecture, a speed-up factor of up to 5 is achieved for binary-data operations.
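
    For concreteness, a software emulation of one binary CNN operation takes only a few lines; the template below is a common textbook edge-extraction gene (pixels coded +1 black, -1 white), not the specific architecture proposed in the brief:

```python
# Forward-Euler emulation of the CNN state equation dx/dt = -x + A*y + B*u + z
# with the standard saturation output y = clip(x, -1, 1), applied to a binary
# image. Template values are a common edge-extraction choice, illustrative only.
import numpy as np
from scipy.signal import convolve2d

def cnn_run(u, A, B, z, steps=200, h=0.05):
    x = np.zeros_like(u, dtype=float)
    w = convolve2d(u, B, mode="same", boundary="fill", fillvalue=-1) + z
    for _ in range(steps):
        y = np.clip(x, -1.0, 1.0)                 # CNN output Nonlinearity
        x += h * (-x + convolve2d(y, A, mode="same") + w)
    return np.clip(x, -1.0, 1.0)

A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], dtype=float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
u = -np.ones((8, 8)); u[2:6, 2:6] = 1.0           # black square on white ground
y = cnn_run(u, A, B, z=-1.0)
print((y > 0).astype(int))                        # only the square's outline remains
```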

Jeffrey Pennington - One of the best experts on this subject based on the ideXlab platform.

  • The Spectrum of the Fisher Information Matrix of a Single-Hidden-Layer Neural Network
    Neural Information Processing Systems, 2018
    Co-Authors: Jeffrey Pennington, Pratik Worah
    Abstract:

    An important factor contributing to the success of deep learning has been the remarkable ability to optimize large neural Networks using simple first-order optimization algorithms like stochastic gradient descent. While the efficiency of such methods depends crucially on the local curvature of the loss surface, very little is actually known about how this geometry depends on network architecture and hyperparameters. In this work, we extend a recently-developed framework for studying spectra of Nonlinear random matrices to characterize an important measure of curvature, namely the eigenvalues of the Fisher information matrix. We focus on a single-hidden-layer neural network with Gaussian data and weights and provide an exact expression for the spectrum in the limit of infinite width. We find that linear Networks suffer worse conditioning than Nonlinear Networks and that Nonlinear Networks are generically non-degenerate. We also predict and demonstrate empirically that by adjusting the Nonlinearity, the spectrum can be tuned so as to improve the efficiency of first-order optimization methods.
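
    At finite width the Fisher spectrum is easy to estimate by Monte Carlo; a minimal sketch (squared loss, scalar output, illustrative sizes; the paper itself works analytically in the infinite-width limit):

```python
# Empirical Fisher information matrix of a single-hidden-layer tanh network
# f(x) = v . tanh(W x) with Gaussian data and weights, via stacked per-example
# output gradients. Sizes are small so the eigendecomposition stays cheap.
import numpy as np

rng = np.random.default_rng(0)
d, n, N = 30, 30, 2000                    # input dim, hidden width, samples
W = rng.standard_normal((n, d)) / np.sqrt(d)
v = rng.standard_normal(n) / np.sqrt(n)

grads = []
for _ in range(N):
    x = rng.standard_normal(d)
    h = W @ x
    phi, dphi = np.tanh(h), 1 - np.tanh(h) ** 2
    gW = np.outer(v * dphi, x)            # df/dW
    grads.append(np.concatenate([gW.ravel(), phi]))   # phi = df/dv

G = np.array(grads)
fisher = G.T @ G / N                      # empirical Fisher for squared loss
eig = np.linalg.eigvalsh(fisher)
print("largest eigenvalue:", eig[-1], "  median:", np.median(eig))
```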

  • Resurrecting the Sigmoid in Deep Learning Through Dynamical Isometry: Theory and Practice
    Neural Information Processing Systems, 2017
    Co-Authors: Jeffrey Pennington, Samuel S Schoenholz, Surya Ganguli
    Abstract:

    It is well known that weight initialization in deep Networks can have a dramatic impact on learning speed. For example, ensuring the mean squared singular value of a network's input-output Jacobian is O(1) is essential for avoiding exponentially vanishing or exploding gradients. Moreover, in deep linear Networks, ensuring that all singular values of the Jacobian are concentrated near 1 can yield a dramatic additional speed-up in learning; this is a property known as dynamical isometry. However, it is unclear how to achieve dynamical isometry in Nonlinear deep Networks. We address this question by employing powerful tools from free probability theory to analytically compute the entire singular value distribution of a deep network's input-output Jacobian. We explore the dependence of the singular value distribution on the depth of the network, the weight initialization, and the choice of Nonlinearity. Intriguingly, we find that ReLU Networks are incapable of dynamical isometry. On the other hand, sigmoidal Networks can achieve isometry, but only with orthogonal weight initialization. Moreover, we demonstrate empirically that deep Nonlinear Networks achieving dynamical isometry learn orders of magnitude faster than Networks that do not. Indeed, we show that properly-initialized deep sigmoidal Networks consistently outperform deep ReLU Networks. Overall, our analysis reveals that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning.

  • Resurrecting the Sigmoid in Deep Learning Through Dynamical Isometry: Theory and Practice
    arXiv: Learning, 2017
    Co-Authors: Jeffrey Pennington, Samuel S Schoenholz, Surya Ganguli
    Abstract:

    It is well known that the initialization of weights in deep neural Networks can have a dramatic impact on learning speed. For example, ensuring the mean squared singular value of a network's input-output Jacobian is $O(1)$ is essential for avoiding the exponential vanishing or explosion of gradients. The stronger condition that all singular values of the Jacobian concentrate near $1$ is a property known as dynamical isometry. For deep linear Networks, dynamical isometry can be achieved through orthogonal weight initialization and has been shown to dramatically speed up learning; however, it has remained unclear how to extend these results to the Nonlinear setting. We address this question by employing powerful tools from free probability theory to compute analytically the entire singular value distribution of a deep network's input-output Jacobian. We explore the dependence of the singular value distribution on the depth of the network, the weight initialization, and the choice of Nonlinearity. Intriguingly, we find that ReLU Networks are incapable of dynamical isometry. On the other hand, sigmoidal Networks can achieve isometry, but only with orthogonal weight initialization. Moreover, we demonstrate empirically that deep Nonlinear Networks achieving dynamical isometry learn orders of magnitude faster than Networks that do not. Indeed, we show that properly-initialized deep sigmoidal Networks consistently outperform deep ReLU Networks. Overall, our analysis reveals that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning.

Alon Ascoli - One of the best experts on this subject based on the ideXlab platform.

  • Theoretical Foundations of Memristor Cellular Nonlinear Networks: A DRM2-Based Method to Design Memcomputers with Dynamic Memristors
    IEEE Transactions on Circuits and Systems I: Regular Papers, 2020
    Co-Authors: Alon Ascoli, Ronald Tetzlaff, Sungmo Kang, Leon O Chua
    Abstract:

    In the memristive version of a standard space-invariant Cellular Nonlinear Network, each cell accommodates one first-order non-volatile memristor in parallel with a capacitor. In case the resistance switching memory may only undergo almost-instantaneous switching transitions between two possible resistive states, acting at any time as either the on or the off resistor, the processing elements effectively operate as first-order dynamical systems, and the classical Dynamic Route Map technique may be applied to investigate their operating principles. On the contrary, in case the memristors experience smooth conductance changes, as when the bioinspired array implements memcomputing paradigms, each cell truly behaves as a second-order dynamical system. The recent extension of the Dynamic Route Map analysis tool to systems with two degrees of freedom constitutes a powerful technique to investigate the Nonlinear dynamics of memristive cellular Networks in these scenarios. This paper exploits this system-theoretic technique, called the Second-Order Dynamic Route Map, to introduce a novel systematic procedure to design memristive arrays in which a given memcomputing task is executed by ensuring that, depending upon the network inputs and initial conditions, the analogue dynamic routes of the states of the processing elements, namely capacitor voltages and memristor states, asymptotically converge toward pre-defined stable equilibria.

  • Theoretical Foundations of Memristor Cellular Nonlinear Networks: Memcomputing with Bistable-Like Memristors
    IEEE Transactions on Circuits and Systems I: Regular Papers, 2020
    Co-Authors: Ronald Tetzlaff, Alon Ascoli, I Messaris, Leon O Chua
    Abstract:

    This paper presents the theory of a novel memcomputing paradigm based upon a memristive version of standard Cellular Nonlinear Networks. The insertion of a nonvolatile memristor in the circuit of each cell endows the dynamic array with the capability to store and retrieve data into and from the resistance switching memories, obviating the current need for extra memory blocks. Choosing the parameters of each cell circuit so that the memristors may undergo solely sharp transitions between two states, each processing element may be approximately described at any time as one of two first-order systems. Under this assumption, the classical Dynamic Route Map may be employed to synthesise and analyse the data storage and retrieval genes. A new system-theoretic methodology, called the Second-Order Dynamic Route Map, is also introduced for the first time in this paper. This technique allows one to study the operating principles of arrays with second-order processing elements, as is the case in the proposed network if the set-up of cell circuit parameters induces analogue memristive dynamics. This paper shows how the novel tool may be adopted to investigate the operating mechanisms of a cellular array with second-order cells, which compute the element-wise logical OR between two binary images.