Mathematical Analysis

The Experts below are selected from a list of 496,023 Experts worldwide, ranked by the ideXlab platform.

Mathias Quoy - One of the best experts on this subject based on the ideXlab platform.

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    Neural Computation, 2008
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy
    Abstract:

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
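
The setup described in this abstract can be sketched in code. The following is a minimal, illustrative simulation, not the paper's exact model: the logistic sigmoid, the parameter values, and the outer-product form of the Hebbian term are assumptions chosen only to mirror the ingredients the abstract names (a random recurrent network, a generic Hebbian rule with passive forgetting, and a slower learning timescale than the neuronal dynamics).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                  # number of neurons
g = 5.0                  # synaptic gain (assumed value; controls how chaotic the initial regime is)
lam = 0.99               # passive-forgetting factor (lambda < 1)
alpha = 0.01             # Hebbian learning rate
T_fast = 200             # neuron-dynamics steps per learning step (timescale separation)
n_learning_epochs = 50

# Random recurrent connectivity, scaled so the initial dynamics can be chaotic.
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
pattern = rng.uniform(0.0, 1.0, size=N)   # hypothetical input pattern to be learned
x = rng.uniform(0.0, 1.0, size=N)         # neuron states

def f(u):
    """Logistic sigmoid transfer function (one common choice; the paper's exact form may differ)."""
    return 1.0 / (1.0 + np.exp(-u))

for epoch in range(n_learning_epochs):
    # Fast timescale: iterate the neuron dynamics with the synapses frozen.
    activity_sum = np.zeros(N)
    for _ in range(T_fast):
        x = f(W @ x + pattern)
        activity_sum += x
    m = activity_sum / T_fast             # time-averaged activity feeding the Hebbian term

    # Slow timescale: generic Hebbian update with passive forgetting,
    # W <- lambda * W + (alpha / N) * outer(m, m).
    W = lam * W + (alpha / N) * np.outer(m, m)

print("mean |W| after learning:", np.abs(W).mean())
```

With lambda close to 1 the old synaptic structure decays slowly while the Hebbian term gradually imprints the averaged activity, which is the qualitative mechanism the abstract attributes to the transition from chaos toward a steady state.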

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    arXiv: Chaotic Dynamics, 2007
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy
    Abstract:

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on the evolution of the neural network. Furthermore, we show that the sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
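
The analysis in this abstract rests on Jacobian matrices of the network map and on the largest Lyapunov exponent. As an illustration of how that exponent can be estimated numerically, here is a hedged sketch under assumed details: a logistic sigmoid, a fixed synaptic matrix, and the update x(t+1) = f(W x(t) + I), whose Jacobian is diag(f'(u)) W. The sketch propagates a tangent vector through the product of Jacobians, renormalizing at each step.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100
g = 5.0
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))   # hypothetical fixed synaptic matrix
I_ext = rng.uniform(0.0, 1.0, size=N)              # hypothetical constant external input
x = rng.uniform(0.0, 1.0, size=N)

def f(u):
    return 1.0 / (1.0 + np.exp(-u))

def f_prime(u):
    s = f(u)
    return s * (1.0 - s)

T_transient, T_measure = 500, 2000
v = rng.normal(size=N)
v /= np.linalg.norm(v)      # tangent vector to be stretched by the Jacobians
log_stretch = 0.0

for t in range(T_transient + T_measure):
    u = W @ x + I_ext
    J = f_prime(u)[:, None] * W     # Jacobian of the map: diag(f'(u)) @ W
    x = f(u)                        # advance the trajectory
    v = J @ v                       # advance the tangent vector
    norm = np.linalg.norm(v)
    v /= norm
    if t >= T_transient:            # discard the transient before averaging
        log_stretch += np.log(norm)

lyap_max = log_stretch / T_measure
print("estimated largest Lyapunov exponent:", lyap_max)
```

An exponent near 0, the regime the abstract singles out, corresponds to the network sitting at the edge between contraction and chaotic expansion, which is where sensitivity to a learned pattern is reported to be maximal.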

Benoit Siri - One of the best experts on this subject based on the ideXlab platform.

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    Neural Computation, 2008 (abstract identical to the corresponding entry under Mathias Quoy above)
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    arXiv: Chaotic Dynamics, 2007 (abstract identical to the corresponding entry under Mathias Quoy above)
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy

Hugues Berry - One of the best experts on this subject based on the ideXlab platform.

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    Neural Computation, 2008 (abstract identical to the corresponding entry under Mathias Quoy above)
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    arXiv: Chaotic Dynamics, 2007 (abstract identical to the corresponding entry under Mathias Quoy above)
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy

Bruno Cessac - One of the best experts on this subject based on the ideXlab platform.

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    Neural Computation, 2008 (abstract identical to the corresponding entry under Mathias Quoy above)
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    arXiv: Chaotic Dynamics, 2007 (abstract identical to the corresponding entry under Mathias Quoy above)
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy

Bruno Delord - One of the best experts on this subject based on the ideXlab platform.

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    Neural Computation, 2008 (abstract identical to the corresponding entry under Mathias Quoy above)
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy

  • A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks
    arXiv: Chaotic Dynamics, 2007 (abstract identical to the corresponding entry under Mathias Quoy above)
    Co-Authors: Benoit Siri, Hugues Berry, Bruno Cessac, Bruno Delord, Mathias Quoy