Function Mapping

The experts below are selected from a list of 128,511 experts worldwide, ranked by the ideXlab platform.

R. S. Neville - One of the best experts on this subject based on the ideXlab platform.

  • RAM-based sigma-pi nets for high accuracy function mapping
    Fifth International Conference on Artificial Neural Networks, 1997
    Co-Authors: R. S. Neville
    Abstract:

    We investigate the use of digital "Higher Order" sigma-pi nodes and study continuous-input RAM-based sigma-pi units trained with the backpropagation training regime to learn functions to a high accuracy, using hardware-realisable units which may be implemented in microelectronic technology. One of our goals was to achieve accuracies of better than one percent for target output functions in the range Y ∈ [0,1]. This is equivalent to an average mean square error (MSE) over all training vectors of 0.0001, or an error modulus of 0.01. We present a development of a sigma-pi node which enables us to provide high accuracy outputs, utilising the cubic node's methodology of storing quantised weights (site-values) in locations held in RAM-based units. The networks we present are trained with the backpropagation training regime, which may be implemented on-line in hardware. One of the novelties of this article is that we show how one may utilise the bounded, quantised site-values (weights) of sigma-pi nodes to make training of these neurocomputing systems relatively simple. We do this by using pre-calculated, constrained look-up tables to train these nets.
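
    The pre-calculated look-up table idea lends itself to a small illustration. The Python sketch below shows one way a RAM-based unit with bounded, quantised site-values could be trained through a pre-computed table of weight updates; it is a minimal sketch under my own assumptions (the names QuantisedRamUnit, SITE_LEVELS, ERR_LEVELS and UPDATE_LUT are invented here, and the single-node rule stands in for the full sigma-pi backpropagation regime described in the abstract).

      # Minimal sketch (not the authors' implementation) of a RAM-based unit with
      # quantised site-values trained via a pre-calculated look-up table of updates.
      import numpy as np

      SITE_LEVELS = 256   # assume 8-bit quantised site-values (weights)
      ERR_LEVELS = 64     # assume a coarse quantisation of the error signal
      LEARNING_RATE = 0.1

      # Pre-calculated update table: maps a quantised error index to a signed
      # increment on the addressed site-value, so training needs only look-ups.
      UPDATE_LUT = np.round(
          LEARNING_RATE * (np.arange(ERR_LEVELS) / (ERR_LEVELS - 1) * 2.0 - 1.0)
          * (SITE_LEVELS - 1)
      ).astype(int)

      class QuantisedRamUnit:
          def __init__(self, n_inputs, bits_per_input=3, rng=None):
              self.bits = bits_per_input
              rng = rng or np.random.default_rng(0)
              # One RAM location (site) per joint quantised-input address.
              self.sites = rng.integers(0, SITE_LEVELS,
                                        size=2 ** (bits_per_input * n_inputs))

          def _address(self, x):
              # Quantise each continuous input in [0,1] and pack into an address.
              addr = 0
              for xi in x:
                  q = min(int(xi * 2 ** self.bits), 2 ** self.bits - 1)
                  addr = (addr << self.bits) | q
              return addr

          def forward(self, x):
              # Output is the stored site-value rescaled to [0,1].
              return self.sites[self._address(x)] / (SITE_LEVELS - 1)

          def train_step(self, x, target):
              addr = self._address(x)
              err = target - self.sites[addr] / (SITE_LEVELS - 1)        # in [-1, 1]
              e_idx = int(round((err + 1.0) / 2.0 * (ERR_LEVELS - 1)))   # quantised error
              new_val = self.sites[addr] + UPDATE_LUT[e_idx]             # table look-up only
              self.sites[addr] = int(np.clip(new_val, 0, SITE_LEVELS - 1))  # bounded weights

    Because both the error quantisation and the table are fixed in advance, the update path in this sketch contains no run-time multiplications, which is the property that makes pre-calculated, constrained look-up tables attractive for hardware training.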

R. J. Glover - One of the best experts on this subject based on the ideXlab platform.

  • Partially pre-calculated weights for the backpropagation learning regime and high accuracy function mapping using continuous input RAM-based sigma-pi nets
    Neural Networks, 2000
    Co-Authors: R. Neville, T. J. Stonham, R. J. Glover
    Abstract:

    In this article we present a methodology that partially pre-calculates the weight updates of the backpropagation learning regime and obtains high accuracy function mapping. The paper shows how to implement neural units in a digital formulation which enables the weights to be quantised to 8 bits and the activations to 9 bits. A novel methodology is introduced to increase the accuracy of sigma-pi units by expanding their internal state space. We also introduce a novel means of implementing bit-streams in ring memories instead of shift registers. The investigation utilises digital "Higher Order" sigma-pi nodes and studies continuous-input RAM-based sigma-pi units. The units are trained with the backpropagation learning regime to learn functions to a high accuracy. The neural model is the sigma-pi unit, which can be implemented in digital microelectronic technology. The ability to perform tasks that require the input of real-valued information is one of the central requirements of any cognitive system that utilises artificial neural network methodologies. In this article we present recent research which investigates a technique that can be used for mapping accurate real-valued functions onto RAM-nets. One of our goals was to achieve accuracies of better than 1% for target output functions in the range Y ∈ [0,1]; this is equivalent to an average mean square error (MSE) over all training vectors of 0.0001, or an error modulus of 0.01. We present a development of the sigma-pi node which enables the provision of high accuracy outputs. The sigma-pi neural model was initially developed by Gurney (Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK, 1989; available as Technical Memo CN/R/144). Gurney's neuron model, the Time Integration Node (TIN), utilises an activation derived from a bit-stream. In this article we present a new methodology for storing sigma-pi nodes' activations as single values which are averages. In the course of the article we state what we define as a real number, and how we represent real numbers and input continuous values in our neural system. We show how to utilise the bounded, quantised site-values (weights) of sigma-pi nodes to make training of these neurocomputing systems simple, using pre-calculated look-up tables to train the nets. In order to meet our accuracy goal, we introduce a means of increasing the bandwidth capability of sigma-pi units by expanding their internal state-space. In our implementation we utilise bit-streams when we calculate the real-valued outputs of the net. To simplify the hardware implementation of bit-streams we present a method of mapping them to RAM-based hardware using "ring memories". Finally, we study the sigma-pi units' ability to generalise once they are trained to map real-valued, high accuracy, continuous functions. We use sigma-pi units as they have been shown to have shorter training times than their analogue counterparts and can also overcome some of the drawbacks of semi-linear units (Gurney, 1992. Neural Networks, 5, 289-303).
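
    The bit-stream and ring-memory ideas can also be pictured with a short sketch. The Python below is an illustrative reading, not the authors' hardware design (RingMemory, N_BITS and the stochastic encoding are assumptions made here): an activation in [0,1] is stored as the proportion of 1s in a fixed buffer that is read cyclically, so only a read pointer advances instead of a shift register, and the activation is recovered as the average of the stored bits, matching the idea of storing activations as single values which are averages.

      # Minimal sketch of a "ring memory" holding a bit-stream; the names and the
      # stochastic encoding are illustrative assumptions, not the paper's design.
      import numpy as np

      N_BITS = 512  # assumed length of the stored bit-stream

      class RingMemory:
          """Hold a bit-stream in a fixed buffer read cyclically, so no shift
          register is needed: only the read pointer moves."""

          def __init__(self, activation, n_bits=N_BITS, rng=None):
              rng = rng or np.random.default_rng(0)
              # Encode a real-valued activation in [0,1] as the fraction of 1s.
              self.bits = (rng.random(n_bits) < activation).astype(np.uint8)
              self.ptr = 0

          def next_bit(self):
              b = int(self.bits[self.ptr])
              self.ptr = (self.ptr + 1) % len(self.bits)  # wrap around the ring
              return b

          def average(self):
              # The activation read back as a single value: the mean of the bits.
              return float(self.bits.mean())

      # Usage: encode 0.73, stream a few bits, and recover the stored average.
      ring = RingMemory(0.73)
      stream = [ring.next_bit() for _ in range(16)]
      print(stream, ring.average())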

Andrej Sali - One of the best experts on this subject based on the ideXlab platform.

  • Structure–function mapping of a heptameric module in the nuclear pore complex
    Journal of Cell Biology, 2012
    Co-Authors: Javier Fernandez-Martinez, Brian T Chait, Rosemary Williams, David L Stokes, Jeremy Phillips, Matthew D Sekedat, Ruben Diaz-Avalos, Javier Velázquez-Muriel, Josef D Franke, Andrej Sali
    Abstract:

    The nuclear pore complex (NPC) is a multiprotein assembly that serves as the sole mediator of nucleocytoplasmic exchange in eukaryotic cells. In this paper, we use an integrative approach to determine the structure of an essential component of the yeast NPC, the ∼600-kD heptameric Nup84 complex, to a precision of ∼1.5 nm. The configuration of the subunit structures was determined by satisfaction of spatial restraints derived from a diverse set of negative-stain electron microscopy and protein domain-mapping data. Phenotypic data were mapped onto the complex, allowing us to identify regions that stabilize the NPC’s interaction with the nuclear envelope membrane and connect the complex to the rest of the NPC. Our data allow us to suggest how the Nup84 complex is assembled into the NPC and propose a scenario for the evolution of the Nup84 complex through a series of gene duplication and loss events. This work demonstrates that integrative approaches based on low-resolution data of sufficient quality can generate functionally informative structures at intermediate resolution.
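
    "Satisfaction of spatial restraints" names an optimisation problem, which a toy sketch can make concrete. The Python below is purely illustrative (the bead-per-subunit representation, the restraint list and the random-perturbation refinement are assumptions made here, not the integrative modeling protocol actually applied to the Nup84 complex): it scores candidate subunit positions against hypothetical pairwise distance restraints and keeps perturbations that reduce the violation.

      # Toy illustration of structure determination by satisfaction of spatial
      # restraints; all restraints and numbers here are invented for the demo.
      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical restraints: (subunit_i, subunit_j, target_distance_nm)
      restraints = [(0, 1, 5.0), (1, 2, 4.0), (2, 3, 6.0), (0, 3, 9.0)]
      n_subunits = 4

      def score(coords):
          # Sum of squared violations of the distance restraints (lower is better).
          return sum((np.linalg.norm(coords[i] - coords[j]) - d) ** 2
                     for i, j, d in restraints)

      def refine(coords, steps=2000, scale=0.1):
          # Crude refinement: accept random perturbations that reduce the score
          # (a stand-in for the real samplers used in integrative modeling).
          best = score(coords)
          for _ in range(steps):
              trial = coords + rng.normal(scale=scale, size=coords.shape)
              s = score(trial)
              if s < best:
                  coords, best = trial, s
          return coords, best

      coords0 = rng.normal(scale=5.0, size=(n_subunits, 3))
      coords, final_score = refine(coords0)
      print("remaining restraint violation:", round(final_score, 3))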

John Klein - One of the best experts on this subject based on the ideXlab platform.

  • Complementary Lipschitz continuity results for the distribution of intersections or unions of independent random sets in finite discrete spaces
    International Journal of Approximate Reasoning, 2019
    Co-Authors: John Klein
    Abstract:

    We prove that intersections and unions of independent random sets in finite spaces achieve a form of Lipschitz continuity. More precisely, given the distribution of a random set Ξ, the function mapping any random set distribution to the distribution of its intersection (under an independence assumption) with Ξ is Lipschitz continuous with unit Lipschitz constant when the space of random set distributions is endowed with a metric defined as the L_k norm distance between inclusion functionals, also known as commonalities. Moreover, the function mapping any random set distribution to the distribution of its union (under an independence assumption) with Ξ is Lipschitz continuous with unit Lipschitz constant when the space of random set distributions is endowed with a metric defined as the L_k norm distance between hitting functionals, also known as plausibilities. Using the epistemic random set interpretation of belief functions, we also discuss the ability of these distances to yield conflict measures. All the proofs in this paper are derived in the framework of Dempster-Shafer belief functions. Apart from the discussion on conflict measures, it is straightforward to transcribe the proofs into the general (not necessarily epistemic) random set terminology.
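
    The intersection half of this result is easy to check numerically, because combining with an independent random set multiplies commonality functions pointwise and commonalities lie in [0,1], so L_k distances between commonalities cannot grow. The sketch below is my own illustration on a three-element frame, not code from the paper: it samples random mass functions, combines each with a fixed Ξ by taking pointwise products of commonalities, and verifies that the L_k distance does not increase.

      # Numerical check of the unit-Lipschitz claim for intersections of
      # independent random sets; the frame and sampled masses are illustrative.
      from itertools import chain, combinations
      import numpy as np

      FRAME = (0, 1, 2)  # small frame of discernment, chosen for the demo

      SUBSETS = list(chain.from_iterable(
          combinations(FRAME, r) for r in range(len(FRAME) + 1)))

      def random_mass(rng):
          # Random mass function: non-negative weights on all subsets, summing to 1.
          w = rng.random(len(SUBSETS))
          return dict(zip(SUBSETS, w / w.sum()))

      def commonality(mass):
          # q(A) = sum of masses of all supersets of A.
          return {A: sum(m for B, m in mass.items() if set(A) <= set(B))
                  for A in SUBSETS}

      def lk_distance(q1, q2, k=2):
          return sum(abs(q1[A] - q2[A]) ** k for A in SUBSETS) ** (1.0 / k)

      rng = np.random.default_rng(0)
      q_xi = commonality(random_mass(rng))  # fixed random set Xi

      for _ in range(5):
          q1, q2 = commonality(random_mass(rng)), commonality(random_mass(rng))
          # Intersection with an independent Xi multiplies commonalities pointwise.
          c1 = {A: q_xi[A] * q1[A] for A in SUBSETS}
          c2 = {A: q_xi[A] * q2[A] for A in SUBSETS}
          assert lk_distance(c1, c2) <= lk_distance(q1, q2) + 1e-12
      print("unit-Lipschitz bound held on all sampled pairs")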

  • Complementary Lipschitz continuity results for the distribution of intersections or unions of independent random sets in finite spaces
    arXiv: Other Statistics, 2018
    Co-Authors: John Klein
    Abstract:

    We prove that intersections and unions of independent random sets in finite spaces achieve a form of Lipschitz continuity. More precisely, given the distribution of a random set $\Xi$, the function mapping any random set distribution to the distribution of its intersection (under an independence assumption) with $\Xi$ is Lipschitz continuous with unit Lipschitz constant when the space of random set distributions is endowed with a metric defined as the $L_k$ norm distance between inclusion functionals, also known as commonalities. Moreover, the function mapping any random set distribution to the distribution of its union (under an independence assumption) with $\Xi$ is Lipschitz continuous with unit Lipschitz constant when the space of random set distributions is endowed with a metric defined as the $L_k$ norm distance between hitting functionals, also known as plausibilities. Using the epistemic random set interpretation of belief functions, we also discuss the ability of these distances to yield conflict measures. All the proofs in this paper are derived in the framework of Dempster-Shafer belief functions. Apart from the discussion on conflict measures, it is straightforward to transcribe the proofs into the general (not necessarily epistemic) random set terminology.