Processing Elements

The experts below are selected from a list of 139,575 experts worldwide, ranked by the ideXlab platform.

Emilio Del Moral Hernandez - One of the best experts on this subject based on the ideXlab platform.

  • Studying Neural Networks of Bifurcating Recursive Processing Elements - Quantitative Methods for Architecture Design and Performance Analysis
    IWANN (1), 2001
    Co-Authors: Emilio Del Moral Hernandez
    Abstract:

    This paper addresses quantitative techniques for the design and characterization of artificial neural networks based on Chaotic Neural Nodes, Recursive Processing Elements, and Bifurcation Neurons. Such architectures can be programmed to store cyclic patterns, with spatio-temporal Processing and computation with non-fixed-point attractors as important applications. The paper also addresses the performance measurement of associative memories based on Recursive Processing Elements, considering situations of analog and digital noise in the prompting patterns, and evaluating how this noise is reflected in the Hamming distance between the desired stored pattern and the answer pattern produced by the neural network. (A code sketch illustrating this class of recursive element appears at the end of this list.)

  • Studying Neural Networks of Bifurcating Recursive Processing Elements - Quantitative Methods for Architecture Design
    Connectionist Models of Neurons, Learning Processes and Artificial Intelligence, 2001
    Co-Authors: Emilio Del Moral Hernandez
    Abstract:

    This paper addresses quantitative techniques for the design and characterization of artificial neural networks based on Chaotic Neural Nodes, Recursive Processing Elements, and Bifurcation Neurons. Such architectures can be programmed to store cyclic patterns, with spatio-temporal Processing and computation with non-fixed-point attractors as important applications. The paper also addresses the performance measurement of associative memories based on Recursive Processing Elements, considering situations of analog and digital noise in the prompting patterns, and evaluating how this noise is reflected in the Hamming distance between the desired stored pattern and the answer pattern produced by the neural network.

  • Pattern recovery in networks of recursive Processing Elements with continuous learning
    2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541), 2004
    Co-Authors: Emilio Del Moral Hernandez, H. Sandmann, L. A. Da Silva
    Abstract:

    This paper addresses a continuous learning method using associative memories based on recursive Processing Elements (RPEs). In order to decide whether a pattern recovered by the associative RPEs is known or unknown, two discriminators are used: a network stabilization criterion and a Hamming distance criterion. The network stabilization criterion is based on the disagreement between the current and the next state, and the Hamming distance criterion checks the number of bits flipped between the prompting pattern and the recovered pattern. Experiments on the performance of continuous learning when the prompting patterns are exposed to digital noise, and on the network's storage capacity, are presented and analyzed.

  • A novel time-based neural coding for artificial neural networks with bifurcating recursive Processing Elements
    IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222), 2001
    Co-Authors: Emilio Del Moral Hernandez
    Abstract:

    This paper addresses a novel temporal coding technique, particularly suited to artificial neural networks based on globally coupled maps, recursive Processing Elements, and model neurons with spiking dynamics defined by first-order recursive maps. Such networks are used here to store quaternary patterns through programmed period-4 limit cycles. Important applications are the Processing of spatio-temporal information, computation with non-fixed-point attractors, and compact neural architectures.
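
The entries above describe Processing Elements whose dynamics are first-order recursive maps that can be tuned, via a bifurcation parameter, into programmed limit cycles (for example, period-4 cycles encoding quaternary symbols), with recall quality measured by the Hamming distance between a stored pattern and the network's answer. The following is a minimal sketch of those two ingredients only, not the authors' implementation: it uses the logistic map as the recursive element, an assumed parameter value inside the period-4 window, and an illustrative quaternary pattern.

```python
import numpy as np

def logistic_map(x, r):
    """One step of the first-order recursive map x_{n+1} = r * x_n * (1 - x_n)."""
    return r * x * (1.0 - x)

def attractor_period(x0, r, transient=500, max_period=16, tol=1e-6):
    """Estimate the period of the attractor reached from x0 (illustrative check)."""
    x = x0
    for _ in range(transient):           # discard the transient
        x = logistic_map(x, r)
    cycle = [x]
    for _ in range(max_period):
        cycle.append(logistic_map(cycle[-1], r))
    for p in range(1, max_period + 1):
        if abs(cycle[p] - cycle[0]) < tol:
            return p
    return None                          # chaotic, or period longer than max_period

def hamming_distance(a, b):
    """Number of positions where two equal-length discrete patterns differ."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

# r = 3.5 lies inside the period-4 window of the logistic map (assumed value).
print("estimated period:", attractor_period(0.3, r=3.5))       # -> 4

# Quaternary coding sketch: each of the four phases of the cycle stands for one symbol.
stored = [0, 3, 1, 2, 0, 1]        # programmed quaternary pattern (illustrative)
answer = [0, 3, 2, 2, 0, 1]        # recalled pattern with one corrupted symbol
print("Hamming distance:", hamming_distance(stored, answer))   # -> 1
```

The same Hamming-distance measure is what the performance evaluations above use to quantify how analog and digital noise in the prompting patterns degrades the recovered pattern.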

Yasuhiko Nakashima - One of the best experts on this subject based on the ideXlab platform.

  • Cellular neural network formed by simplified Processing Elements composed of thin-film transistors
    Neurocomputing, 2017
    Co-Authors: Mutsumi Kimura, Ryohei Morita, Sumio Sugisaki, Tokiyoshi Matsuda, Tomoya Kameda, Yasuhiko Nakashima
    Abstract:

    We have developed a cellular neural network formed by simplified Processing Elements composed of thin-film transistors. First, we simplified the neuron circuit into a two-inverter two-switch circuit and the synapse device into a single transistor. Next, we composed the Processing Elements of thin-film transistors, which are promising for giant microelectronics applications, and formed a cellular neural network from these Processing Elements. Finally, we confirmed that the cellular neural network can learn multiple logic functions even as a small-scale neural network. Moreover, we verified that the cellular neural network can simultaneously recognize multiple simple alphabet letters. These results should serve as a theoretical basis for realizing ultra-large-scale integration of brain-type integrated circuits. (A behavioral code sketch of such a simplified Processing Element appears at the end of this list.)

  • Simplification of Processing Elements in Cellular Neural Network
    Journal of Electrical & Electronic Systems, 2017
    Co-Authors: Mutsumi Kimura, Tokiyoshi Matsuda, Tomoya Kameda, Tomoharu Yokoyama, Nao Nakamura, Hiroki Nakanishi, Yasuhiko Nakashima
    Abstract:

    We have succeeded in simplifying the Processing Elements in a cellular neural network. First, we reduce a neuron to a two-inverter two-switch circuit, a two-inverter one-switch circuit, or a two-inverter circuit. Next, we reduce a synapse to only one variable resistor or one variable capacitor. Finally, we confirm the correct operation of the cellular neural network through the learning of arbitrary logic functions. These results will serve as a theoretical basis for realizing ultra-large-scale integration of brain-type integrated circuits.

  • Simplification of Processing Elements in Cellular Neural Networks
    Neural Information Processing, 2016
    Co-Authors: Mutsumi Kimura, Tokiyoshi Matsuda, Tomoya Kameda, Tomoharu Yokoyama, Nao Nakamura, Yasuhiko Nakashima
    Abstract:

    Simplification of Processing Elements is greatly desired in cellular neural networks to realize ultra-large-scale integration. First, we propose reducing a neuron to a two-inverter two-switch circuit, a two-inverter one-switch circuit, or a two-inverter circuit. Next, we propose reducing a synapse to only one variable resistor or one variable capacitor. Finally, we confirm the correct operation of the cellular neural networks using circuit simulation. These results provide one of the theoretical bases for applying cellular neural networks to brain-type integrated circuits.
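
The three entries above reduce each Processing Element to a thresholding neuron (the two-inverter circuit) fed through single-device synapses, and verify the network by teaching it logic functions. Below is a behavioral sketch of that idea in software, not a circuit-level model of the TFT implementation: the perceptron-style weight update, learning rate, and OR-logic example are illustrative assumptions standing in for the on-chip weight adjustment.

```python
import itertools

def neuron_output(inputs, weights, threshold=0.0):
    """Behavioral stand-in for the two-inverter neuron: threshold the weighted sum
    of its inputs (each synapse contributes a single conductance-like weight)."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else 0

def train_logic(truth_table, n_inputs, lr=0.1, epochs=100):
    """Learn one logic function with a perceptron-style update (an assumed stand-in
    for the on-chip adjustment of the single-device synapses)."""
    weights = [0.0] * (n_inputs + 1)      # the last weight acts as a bias synapse
    for _ in range(epochs):
        for inputs, target in truth_table:
            x = list(inputs) + [1]        # append the bias input
            y = neuron_output(x, weights)
            err = target - y
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights

# Example: teach one cell the OR logic (linearly separable, so a single cell suffices).
or_table = [((a, b), int(a or b)) for a, b in itertools.product([0, 1], repeat=2)]
w = train_logic(or_table, n_inputs=2)
for (a, b), t in or_table:
    print((a, b), "->", neuron_output([a, b, 1], w), "target", t)
```

Replacing or_table with another linearly separable truth table (for example, AND) re-trains the same cell, which mirrors the claim above that one simplified element can learn multiple logic functions.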

Mutsumi Kimura - One of the best experts on this subject based on the ideXlab platform.

  • Cellular neural network formed by simplified Processing Elements composed of thin-film transistors
    Neurocomputing, 2017
    Co-Authors: Mutsumi Kimura, Ryohei Morita, Sumio Sugisaki, Tokiyoshi Matsuda, Tomoya Kameda, Yasuhiko Nakashima
    Abstract:

    We have developed a cellular neural network formed by simplified Processing Elements composed of thin-film transistors. First, we simplified the neuron circuit into a two-inverter two-switch circuit and the synapse device into a single transistor. Next, we composed the Processing Elements of thin-film transistors, which are promising for giant microelectronics applications, and formed a cellular neural network from these Processing Elements. Finally, we confirmed that the cellular neural network can learn multiple logic functions even as a small-scale neural network. Moreover, we verified that the cellular neural network can simultaneously recognize multiple simple alphabet letters. These results should serve as a theoretical basis for realizing ultra-large-scale integration of brain-type integrated circuits.

  • Simplification of Processing Elements in Cellular Neural Network
    Journal of Electrical & Electronic Systems, 2017
    Co-Authors: Mutsumi Kimura, Tokiyoshi Matsuda, Tomoya Kameda, Tomoharu Yokoyama, Nao Nakamura, Hiroki Nakanishi, Yasuhiko Nakashima
    Abstract:

    We have succeeded in simplifying the Processing Elements in a cellular neural network. First, we reduce a neuron to a two-inverter two-switch circuit, a two-inverter one-switch circuit, or a two-inverter circuit. Next, we reduce a synapse to only one variable resistor or one variable capacitor. Finally, we confirm the correct operation of the cellular neural network through the learning of arbitrary logic functions. These results will serve as a theoretical basis for realizing ultra-large-scale integration of brain-type integrated circuits.

  • Simplification of Processing Elements in Cellular Neural Networks
    Neural Information Processing, 2016
    Co-Authors: Mutsumi Kimura, Tokiyoshi Matsuda, Tomoya Kameda, Tomoharu Yokoyama, Nao Nakamura, Yasuhiko Nakashima
    Abstract:

    Simplification of Processing Elements is greatly desired in cellular neural networks to realize ultra-large-scale integration. First, we propose reducing a neuron to a two-inverter two-switch circuit, a two-inverter one-switch circuit, or a two-inverter circuit. Next, we propose reducing a synapse to only one variable resistor or one variable capacitor. Finally, we confirm the correct operation of the cellular neural networks using circuit simulation. These results provide one of the theoretical bases for applying cellular neural networks to brain-type integrated circuits.

  • Study on simplification of Processing Elements in neural networks using circuit simulation
    2016 IEEE International Meeting for Future of Electron Devices Kansai (IMFEDK), 2016
    Co-Authors: Tomoharu Yokoyama, Tokiyoshi Matsuda, Nao Nakamura, Hiroki Nakanishi, Yuki Watada, Mutsumi Kimura
    Abstract:

    We are developing cellular neural networks using thin-film transistors (TFTs). Although simplification of the Processing Elements, such as neurons and synapses, is also needed for cellular neural networks, it is difficult and time-consuming to fabricate and evaluate actual devices. Therefore, we are studying the simplification of the Processing Elements in these neural networks by circuit simulation. We confirmed that the neuron can be realized using only a two-inverter two-switch circuit, and that the synapse can be realized using only a resistor. These results indicate a future possibility for ultra-large-scale integrated brain chips for artificial intelligence.
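
The circuit-simulation study above realizes the neuron as a two-inverter circuit and the synapse as a single resistor. The sketch below is a rough behavioral stand-in for such a node, not a SPICE-level model: presynaptic outputs drive the neuron's input node through resistor synapses (modeled as a conductance-weighted voltage divider), and two ideal inverters restore a full-swing output. All component values and the switching threshold are assumed.

```python
def node_voltage(pre_outputs, conductances):
    """Input node of the neuron: presynaptic inverter outputs (0 V or VDD) drive the
    node through resistor synapses, so the node settles at the conductance-weighted
    average of the driving voltages (simple resistive-divider model)."""
    num = sum(g * v for g, v in zip(conductances, pre_outputs))
    den = sum(conductances)
    return num / den if den > 0 else 0.0

def two_inverter_neuron(v_in, vdd=1.0, v_switch=0.5):
    """Two cascaded ideal inverters: the first thresholds the input node voltage,
    the second restores the original polarity with a full-swing output."""
    first = 0.0 if v_in > v_switch else vdd
    second = 0.0 if first > v_switch else vdd
    return second

# Illustrative example (all values assumed): two presynaptic neurons, one strong
# and one weak synapse, expressed as conductances in arbitrary units.
pre = [1.0, 0.0]              # first presynaptic output high, second low
g   = [3.0, 1.0]              # stronger synapse to the first input
v_node = node_voltage(pre, g)
print("input node voltage:", v_node)                   # -> 0.75
print("neuron output:", two_inverter_neuron(v_node))   # -> 1.0 (fires high)
```

A stronger synapse (larger conductance) pulls the input node closer to its driver, which is the behavior the resistor synapse in the entry above relies on.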

L. A. Da Silva - One of the best experts on this subject based on the ideXlab platform.

  • Pattern recovery in networks of recursive Processing Elements with continuous learning
    2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541), 2004
    Co-Authors: Emilio Del Moral Hernandez, H. Sandmann, L. A. Da Silva
    Abstract:

    This paper addresses a continuous learning method using associative memories based on recursive Processing Elements (RPEs). In order to decide whether a pattern recovered by the associative RPEs is known or unknown, two discriminators are used: a network stabilization criterion and a Hamming distance criterion. The network stabilization criterion is based on the disagreement between the current and the next state, and the Hamming distance criterion checks the number of bits flipped between the prompting pattern and the recovered pattern. Experiments on the performance of continuous learning when the prompting patterns are exposed to digital noise, and on the network's storage capacity, are presented and analyzed.
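
The abstract above decides whether a recovered pattern is known using two discriminators: a network stabilization criterion (disagreement between the current and the next state) and a Hamming distance criterion (bits flipped between the prompting and the recovered pattern). Here is a minimal sketch of those two tests on binary patterns; the threshold values and the example patterns are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hamming(a, b):
    """Number of differing bits between two equal-length binary patterns."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def is_known(prompt, recovered, next_state, stab_tol=0, ham_tol=2):
    """Treat a recovered pattern as known only if both discriminators pass.

    Stabilization criterion: the disagreement between the current state and the
    next state is at most stab_tol bits (the network has stopped changing).
    Hamming criterion: the recovered pattern is within ham_tol bits of the
    prompting pattern.  Both thresholds are illustrative assumptions.
    """
    stabilized = hamming(recovered, next_state) <= stab_tol
    close_enough = hamming(prompt, recovered) <= ham_tol
    return stabilized and close_enough

# Example: an 8-bit prompt with one noisy bit that the network cleaned up.
prompt     = [1, 0, 1, 1, 0, 0, 1, 0]
recovered  = [1, 0, 1, 0, 0, 0, 1, 0]   # the network's answer after settling
next_state = [1, 0, 1, 0, 0, 0, 1, 0]   # identical to recovered: the network is stable
print("treat as known:", is_known(prompt, recovered, next_state))   # -> True
```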

Paul Chow - One of the best experts on this subject based on the ideXlab platform.

  • Simplifying the integration of Processing Elements in computing systems using a programmable controller
    Field-Programmable Custom Computing Machines, 2005
    Co-Authors: Lesley Shannon, Paul Chow
    Abstract:

    As technology sizes decrease and die areas increase, designers are creating increasingly complex computing systems using FPGAs. To reduce design time for new products, the reuse of previously designed intellectual property (IP) cores is essential. However, since no universally accepted interface standards exist for IP cores, a certain amount of redesign is often necessary before they can be incorporated into a new system. Furthermore, a core's functionality may need updating to support the requirements of the new application. This paper demonstrates how the SIMPPL system model allows designers to rapidly implement on-chip systems comprising multiple computing elements (CEs). Furthermore, using a controller-based interface to manage inter-CE transfers enables users to easily adapt the control sequence of individual CEs to suit the needs of new applications without necessitating the redesign of other elements in the system. Two systems using three different hardware modules adapted to CEs are described to illustrate the power and simplicity of the SIMPPL model. It required a total of six hours to implement both designs on-chip once the individual CEs had been designed.
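
The abstract above centers on pairing each computing element (CE) with a programmable controller so that inter-CE transfers follow a control sequence that can be changed without redesigning the other elements in the system. The toy model below captures only that separation in software; it is not the SIMPPL controller or its actual instruction set, and the Link class, the RECV/EXEC/SEND operations, and the program format are invented for illustration.

```python
from collections import deque

class Link:
    """Point-to-point FIFO between two computing elements (CEs)."""
    def __init__(self):
        self.fifo = deque()
    def send(self, word):
        self.fifo.append(word)
    def receive(self):
        return self.fifo.popleft() if self.fifo else None

class ComputingElement:
    """A CE pairs a fixed computation core with a small programmable controller.
    The controller runs a control sequence of (op, argument) steps that manages
    transfers over the links, so changing the sequence re-targets the CE without
    touching the core."""
    def __init__(self, name, core, program):
        self.name, self.core, self.program = name, core, program
        self.links = {}
        self.acc = None                  # last received or produced datum
    def connect(self, other_name, link):
        self.links[other_name] = link
    def step(self):
        for op, arg in self.program:
            if op == "RECV":
                self.acc = self.links[arg].receive()
            elif op == "EXEC":
                self.acc = self.core(self.acc)
            elif op == "SEND":
                self.links[arg].send(self.acc)

# Example: a producer CE doubles a value and forwards it to a consumer CE.
link = Link()
producer = ComputingElement("double", core=lambda x: 2 * (x or 0),
                            program=[("EXEC", None), ("SEND", "sink")])
consumer = ComputingElement("sink", core=lambda x: x,
                            program=[("RECV", "double"), ("EXEC", None)])
producer.connect("sink", link)
consumer.connect("double", link)

producer.acc = 21
producer.step()
consumer.step()
print("consumer received:", consumer.acc)   # -> 42
```

Re-targeting a CE then amounts to editing its program list, which is the adaptability the entry above attributes to the controller-based interface.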