Limited Precision

The experts below are selected from a list of 67,125 experts worldwide, ranked by the ideXlab platform.

Sorin Draghici - One of the best experts on this subject based on the ideXlab platform.

  • On the capabilities of neural networks using limited precision weights
    Neural Networks, 2002
    Co-Authors: Sorin Draghici
    Abstract:

    This paper analyzes some aspects of the computational power of neural networks using integer weights in a very restricted range. Using limited-range integer values opens the road for efficient VLSI implementations because (i) a limited range for the weights can be translated into reduced storage requirements and (ii) integer computation can be implemented more efficiently than floating-point computation. The paper concentrates on classification problems and shows that, if the weights are restricted in a drastic way (in both range and precision), the existence of a solution is no longer to be taken for granted. The paper presents an existence result which relates the difficulty of the problem, as characterized by the minimum distance between patterns of different classes, to the weight range necessary to ensure that a solution exists. This result allows us to calculate a weight range for a given category of problems and be confident that the network has the capability to solve those problems with integer weights in that range. Worst-case lower bounds are given for the number of entropy bits and weights necessary to solve a given problem. Various practical issues, such as the relationship between information entropy bits and storage bits, are also discussed. Because the approach uses a worst-case analysis, it tends to overestimate the values obtained for the weight range, the number of bits and the number of weights. The paper therefore also presents some statistical considerations that trade the absolute guarantee of successful training for values more appropriate for practical use. Finally, the approach is discussed in the context of VC complexity.

  • On the computational power of limited precision weights neural networks in classification problems: how to calculate the weight range so that a solution will exist
    International Work-Conference on Artificial and Natural Neural Networks, 1999
    Co-Authors: Sorin Draghici
    Abstract:

    This paper analyzes some aspects of the computational power of neural networks using integer weights in a very restricted range. Using limited-range integer values opens the road for efficient VLSI implementations because (i) a limited range for the weights can be translated into reduced storage requirements and (ii) integer computation can be implemented more efficiently than floating-point computation. The paper concentrates on classification problems and shows that, if the weights are restricted in a drastic way (in both range and precision), the existence of a solution is no longer to be taken for granted. We show that, if the weight range is not chosen carefully, the network will be unable to implement a solution regardless of the number of units available in the first hidden layer. The paper presents an existence result which relates the difficulty of the problem, as characterized by the minimum distance between patterns of different classes, to the weight range necessary to ensure that a solution exists. This result allows us to calculate a weight range for a given category of problems and be confident that the network has the capability to solve those problems with integer weights in that range.

  • On the possibilities of the limited precision weights neural networks in classification problems
    International Work-Conference on Artificial and Natural Neural Networks, 1997
    Co-Authors: Sorin Draghici, Ishwar K Sethi
    Abstract:

    Limited precision neural networks are better suited for hardware implementations. Several researchers have proposed algorithms able to train neural networks with limited precision weights, and it has been suggested that the limits introduced by limited precision weights can be compensated for by an increased number of layers. This paper shows that, from a theoretical point of view, neural networks with integer weights in the range [-p, p] can solve classification problems for which the minimum Euclidean distance between two patterns from opposite classes is 1/p. This result can be used in an information-theoretic context to calculate a bound on the number of bits necessary for solving a problem. It is shown that the number of bits is limited by m*n*log(2pD), where m is the number of patterns, n is the dimensionality of the space, p is the weight range and D is the radius of a sphere including all patterns.
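
    The stated relationship lends itself to a quick back-of-the-envelope calculation. The sketch below is an illustrative reading of the abstract, not code from the paper: it picks the smallest integer weight range p compatible with a given minimum inter-class distance and evaluates the quoted bound m*n*log(2pD), assuming the logarithm is base 2.

        import math

        def required_weight_range(min_distance: float) -> int:
            # Smallest integer p such that 1/p <= min_distance, i.e. integer
            # weights in [-p, p] suffice under the existence result above.
            return math.ceil(1.0 / min_distance)

        def worst_case_bit_bound(m: int, n: int, p: int, D: float) -> float:
            # The m*n*log(2pD) bound quoted in the abstract; the base of the
            # logarithm is not given there, so base 2 is assumed here.
            return m * n * math.log2(2 * p * D)

        # Hypothetical problem: 100 patterns in 8 dimensions, minimum distance
        # 0.05 between opposite classes, all patterns inside a unit sphere.
        p = required_weight_range(0.05)                    # -> 20
        bits = worst_case_bit_bound(100, 8, p, 1.0)
        print(f"weight range [-{p}, {p}], bit bound ~ {bits:.0f} bits")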

Scott E Fahlman - One of the best experts on this subject based on the ideXlab platform.

  • Probabilistic rounding in neural network learning with limited precision
    Neurocomputing, 1992
    Co-Authors: Markus Hohfeld, Scott E Fahlman
    Abstract:

    A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited precision can be used with existing learning algorithms. Several studies of the backpropagation algorithm report a collapse of learning ability at around 12 to 16 bits of precision, depending on the details of the problem. In this paper, we investigate the effects of limited precision in the Cascade-Correlation learning algorithm. As a general result, we introduce techniques for dynamic rescaling and probabilistic rounding that facilitate learning by gradient descent down to 6 bits of precision.
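
    Probabilistic rounding means rounding a value up or down at random, with probabilities proportional to its distance from the two nearest representable fixed-point values, so that small weight updates are preserved on average rather than being truncated away. The sketch below is a minimal illustration of that general idea for a grid with a given number of fractional bits; it is not the authors' implementation.

        import math
        import random

        def probabilistic_round(x: float, frac_bits: int) -> float:
            # Round x to a fixed-point grid with frac_bits fractional bits,
            # choosing floor or ceil at random so the result is unbiased.
            scale = 1 << frac_bits
            scaled = x * scale
            lower = math.floor(scaled)
            frac = scaled - lower
            return (lower + (1 if random.random() < frac else 0)) / scale

        # With 6 fractional bits the grid step is 1/64 ~ 0.0156, so a weight
        # update of 0.004 would always round to zero with round-to-nearest;
        # probabilistic rounding keeps it alive in expectation.
        updates = [probabilistic_round(0.004, 6) for _ in range(10000)]
        print(sum(updates) / len(updates))    # close to 0.004 on average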

  • Learning with limited numerical precision using the cascade-correlation algorithm
    IEEE Transactions on Neural Networks, 1992
    Co-Authors: M Hoehfeld, Scott E Fahlman
    Abstract:

    A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited numerical precision can be used with existing learning algorithms. An empirical study of the effects of limited precision in cascade-correlation networks on three different learning problems is presented. It is shown that learning can fail abruptly as the precision of network weights or weight-update calculations is reduced below a certain level, typically about 13 bits including the sign. Techniques for dynamic rescaling and probabilistic rounding that allow reliable convergence down to 7 bits of precision or less, with only a small and gradual reduction in the quality of the solutions, are introduced.
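
    Dynamic rescaling targets the complementary failure mode: intermediate quantities such as error signals can fall below the smallest representable fixed-point step and vanish. The sketch below shows one generic way to do this, keeping a shared power-of-two exponent next to a block of fixed-point values; it is only an illustration of the idea under that assumption, not the specific scheme used in the paper.

        import math

        def rescale_to_fixed_point(values, total_bits=8):
            # Pick a power-of-two scale so the largest magnitude fills the
            # signed fixed-point range, then quantize; callers divide by
            # 2**exponent to interpret the stored integers.
            max_mag = max(abs(v) for v in values) or 1.0
            max_int = (1 << (total_bits - 1)) - 1          # 127 for 8 bits
            exponent = math.floor(math.log2(max_int / max_mag))
            scale = 2.0 ** exponent
            return [int(round(v * scale)) for v in values], exponent

        # Tiny error signals that would all quantize to zero at a fixed scale:
        grads = [1.5e-4, -3.0e-4, 7.5e-5]
        q, e = rescale_to_fixed_point(grads)
        print(q, e)    # e.g. [39, -79, 20] with exponent 18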

Warwick P Bowen - One of the best experts on this subject based on the ideXlab platform.

  • Evanescent single-molecule biosensing with quantum-limited precision
    Nature Photonics, 2017
    Co-Authors: Nicolas P Mauranyapin, Lars S Madsen, Michael A Taylor, Muhammad Waleed, Warwick P Bowen
    Abstract:

    An evanescent single-molecule biosensor that operates at the fundamental precision limit, allowing a four-order-of-magnitude reduction in optical intensity while maintaining state-of-the-art sensitivity, is demonstrated.

  • Evanescent single-molecule biosensing with quantum-limited precision
    European Quantum Electronics Conference, 2017
    Co-Authors: Nicolas P Mauranyapin, Lars S Madsen, Michael A Taylor, Muhammad Waleed, Warwick P Bowen
    Abstract:

    Techniques to observe and track single unlabelled biomolecules are crucial for many areas of nanobiotechnology, offering the possibility of lab-on-a-chip medical diagnostics operating at their ultimate detection limits and of shedding light on important nanoscale biological processes such as binding reactions, conformational changes, and motor molecule dynamics. Impressive progress has been made over the past few years to extend the sensitivity of such techniques, primarily via the evanescent field enhancement provided by optical microcavities [1, 2] or plasmonic resonators [3]. However, such approaches expose the biological system to greatly increased optical intensity levels, which can severely impact biological function, growth, structure and viability [4]. Here, we introduce an evanescent biosensing platform that operates at the fundamental precision limit introduced by the quantisation of light. This allows a five-order-of-magnitude reduction in optical intensity whilst maintaining state-of-the-art sensitivity and enabling quantum-noise-limited tracking of single biomolecules as small as 3.5 nm.

  • Evanescent single-molecule biosensing with quantum-limited precision
    arXiv: Optics, 2016
    Co-Authors: Nicolas P Mauranyapin, Lars S Madsen, Michael A Taylor, Muhammad Waleed, Warwick P Bowen
    Abstract:

    Sensors that are able to detect and track single unlabelled biomolecules are an important tool both for understanding biomolecular dynamics and interactions at the nanoscale and for medical diagnostics operating at their ultimate detection limits. Recently, exceptional sensitivity has been achieved using the strongly enhanced evanescent fields provided by optical microcavities and nano-sized plasmonic resonators. However, at high field intensities photodamage to the biological specimen becomes increasingly problematic. Here, we introduce an optical nanofibre based evanescent biosensor that operates at the fundamental precision limit introduced by the quantisation of light. This allows a four-order-of-magnitude reduction in optical intensity whilst maintaining state-of-the-art sensitivity. It enables quantum-noise-limited tracking of single biomolecules as small as 3.5 nm and allows surface-molecule interactions to be monitored over extended periods. By achieving quantum-noise-limited precision, our approach provides a pathway towards quantum-enhanced single-molecule biosensors.
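
    For context, "quantum-limited" here refers to the shot-noise limit set by the quantisation of light: once technical noise is removed, the signal-to-noise ratio of an optical measurement scales with the square root of the number of detected photons. A standard statement of that scaling (general background, not a formula taken from these papers) is

        \mathrm{SNR} \propto \sqrt{N}, \qquad \delta_{\min} \propto \frac{1}{\sqrt{N}},

    where N is the number of detected probe photons and \delta_{\min} is the smallest resolvable signal. A sensor operating at this limit is constrained only by its photon budget, so it can reach a given sensitivity with far less optical power than one dominated by technical noise.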

Daniel E. Steffy - One of the best experts on this subject based on the ideXlab platform.

  • Linear programming using limited-precision oracles
    Mathematical Programming, 2019
    Co-Authors: Ambros Gleixner, Daniel E. Steffy
    Abstract:

    Since the elimination algorithm of Fourier and Motzkin, many different methods have been developed for solving linear programs. When analyzing the time complexity of LP algorithms, it is typically either assumed that calculations are performed exactly, with bounds derived on the number of elementary arithmetic operations necessary, or the cost of all arithmetic operations is considered through a bit-complexity analysis. Yet in practice, implementations typically use limited-precision arithmetic. In this paper we introduce the idea of a limited-precision LP oracle and study how such an oracle could be used within a larger framework to compute exact precision solutions to LPs. Under mild assumptions, it is shown that a polynomial number of calls to such an oracle and a polynomial number of bit operations are sufficient to compute an exact solution to an LP. This work provides a foundation for understanding and analyzing the behavior of the methods that are currently most effective in practice for solving LPs exactly.

  • Linear programming using limited-precision oracles
    Integer Programming and Combinatorial Optimization, 2019
    Co-Authors: Ambros M. Gleixner, Daniel E. Steffy
    Abstract:

    Linear programming is a foundational tool for many aspects of integer and combinatorial optimization. This work studies the complexity of solving linear programs exactly over the rational numbers through the use of an oracle capable of returning limited-precision LP solutions. Under mild assumptions, it is shown that a polynomial number of calls to such an oracle and a polynomial number of bit operations are sufficient to compute an exact solution to an LP. Previous work has often considered oracles that provide solutions of an arbitrary specified precision. While this leads to polynomial-time algorithms, the level of precision required is often unrealistic for practical computation. In contrast, our work provides a foundation for understanding and analyzing the behavior of the methods that are currently most effective in practice for solving LPs exactly.

  • Improving the accuracy of linear programming solvers with iterative refinement
    International Symposium on Symbolic and Algebraic Computation, 2012
    Co-Authors: Ambros M. Gleixner, Daniel E. Steffy, Kati Wolter
    Abstract:

    We describe an iterative refinement procedure for computing extended-precision or exact solutions to linear programming problems (LPs). Arbitrarily precise solutions can be computed by solving a sequence of closely related LPs with limited-precision arithmetic. The LPs solved share the same constraint matrix as the original problem instance and are transformed only by modification of the objective function, right-hand side, and variable bounds. Exact computation is used to compute and store the exact representation of the transformed problems, while numeric computation is used for solving LPs. At all steps of the algorithm the LP bases encountered in the transformed problems correspond directly to LP bases in the original problem description. We demonstrate that this algorithm is effective in practice for computing extended-precision solutions and that this leads to direct improvement of the best known methods for solving LPs exactly over the rational numbers.
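
    The structure of the refinement loop carries over from classical iterative refinement of linear systems, which makes a compact illustration possible: an imprecise floating-point solve plays the role of the limited-precision solver, residuals of the accumulated answer are computed exactly in rational arithmetic, and each correction step reuses the same matrix with only a rescaled right-hand side. The sketch below demonstrates this for a square linear system as a stand-in; it mirrors the structure described in the abstract but is not the LP algorithm from the paper.

        from fractions import Fraction
        import numpy as np

        def refine(A_exact, b_exact, iterations=5):
            # Iterative refinement of A x = b: float solve as the
            # limited-precision "oracle", exact residuals via Fractions,
            # and a scaled right-hand side for each correction step.
            A_float = np.array([[float(a) for a in row] for row in A_exact])
            x = [Fraction(0)] * len(b_exact)       # exact running solution
            r = list(b_exact)                      # exact residual b - A x
            for _ in range(iterations):
                scale = max(abs(ri) for ri in r)
                if scale == 0:
                    break                          # residual is exactly zero
                d = np.linalg.solve(A_float,
                                    np.array([float(ri / scale) for ri in r]))
                # Snap each float correction to a nearby simple rational
                # (continued-fraction style rounding), then accumulate it.
                d_exact = [Fraction(di).limit_denominator(10**6) * scale
                           for di in d]
                x = [xi + di for xi, di in zip(x, d_exact)]
                r = [bi - sum(aij * xj for aij, xj in zip(row, x))
                     for row, bi in zip(A_exact, b_exact)]
            return x, r

        A = [[Fraction(3), Fraction(1)], [Fraction(1), Fraction(2)]]
        b = [Fraction(1), Fraction(1)]
        x, r = refine(A, b)
        print(x, r)    # x = [1/5, 2/5] with a zero residual for this example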

Richard D Wesel - One of the best experts on this subject based on the ideXlab platform.

  • The cycle consistency matrix approach to absorbing sets in separable circulant-based LDPC codes
    IEEE Transactions on Information Theory, 2013
    Co-Authors: Jiadong Wang, Lara Dolecek, Richard D Wesel
    Abstract:

    For low-density parity-check (LDPC) codes operating over additive white Gaussian noise channels and decoded using message-passing decoders with limited precision, absorbing sets have been shown to be a key factor in error floor behavior. Focusing on this scenario, this paper introduces the cycle consistency matrix (CCM) as a powerful analytical tool for characterizing and avoiding absorbing sets in separable circulant-based (SCB) LDPC codes. SCB codes include a wide variety of regular LDPC codes such as array-based LDPC codes as well as many common quasi-cyclic codes. As a consequence of its cycle structure, each potential absorbing set in an SCB LDPC code has a CCM, and an absorbing set can be present in an SCB LDPC code only if the associated CCM has a nontrivial null space. CCM-based analysis can determine the multiplicity of an absorbing set in an SCB code, and CCM-based constructions avoid certain small absorbing sets completely. While these techniques can be applied to an SCB code of any rate, lower-rate SCB codes can usually avoid small absorbing sets because of their higher variable-node degree. This paper focuses attention on the high-rate scenario in which the CCM constructions provide the most benefit. Simulation results demonstrate that under limited-precision decoding the new codes have steeper error-floor slopes and can provide one order of magnitude of improvement in the low-frame-error-rate region.
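
    The necessary condition stated above (a candidate absorbing set can be present only if its CCM has a nontrivial null space) is easy to picture with a toy check. The sketch below computes an exact null space for a small, made-up integer matrix standing in for a CCM; how the CCM itself is built from the candidate's cycle structure, and the exact algebraic setting tied to the circulant size, follow the paper rather than this illustration.

        from sympy import Matrix

        def has_nontrivial_null_space(ccm_rows) -> bool:
            # True if the candidate's cycle consistency matrix admits a
            # nonzero null-space vector, i.e. the necessary condition for
            # the corresponding absorbing set to appear is met.
            return len(Matrix(ccm_rows).nullspace()) > 0

        # Hypothetical 3x4 integer matrix standing in for a candidate's CCM;
        # its third row is the sum of the first two, so the rank is 2 and a
        # nontrivial null space exists.
        candidate_ccm = [[1, -1,  0, 0],
                         [0,  1, -1, 0],
                         [1,  0, -1, 0]]
        print(has_nontrivial_null_space(candidate_ccm))    # True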

  • LDPC decoding with limited-precision soft information in flash memories
    arXiv: Information Theory, 2012
    Co-Authors: Jiadong Wang, Guiqiang Dong, Thomas A Courtade, Hari Shankar, Tong Zhang, Richard D Wesel
    Abstract:

    This paper investigates the application of low-density parity-check (LDPC) codes to flash memories. Multiple cell reads with distinct word-line voltages provide limited-precision soft information for the LDPC decoder. The values of the word-line voltages (also called reference voltages) are optimized by maximizing the mutual information (MI) between the input and output of the multiple-read channel. Constraining the maximum mutual-information (MMI) quantization to enforce a constant-ratio constraint provides a significant simplification with no noticeable loss in performance. Our simulation results suggest that for a well-designed LDPC code, the quantization that maximizes the mutual information will also minimize the frame error rate. However, care must be taken to design the code to perform well in the quantized channel. An LDPC code designed for a full-precision Gaussian channel may perform poorly in the quantized setting. Our LDPC code designs provide an example where quantization increases the importance of absorbing sets, thus changing how the LDPC code should be optimized. Simulation results show that small increases in precision enable the LDPC code to significantly outperform a BCH code with comparable rate and block length (but without the benefit of the soft information) over a range of frame error rates.
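
    The threshold-selection idea can be illustrated with a toy single-read version of the problem: model the two stored levels as Gaussian cell-voltage distributions, sweep one candidate word-line voltage, and keep the value that maximizes the mutual information of the induced binary channel. This is a deliberately simplified, hypothetical example of MMI quantization for a single read; the paper optimizes several reads jointly under a constant-ratio constraint.

        import math

        def mutual_information(p_y_given_x):
            # I(X;Y) in bits for a uniform binary input X and conditional
            # distributions p(y|x) over a finite output alphabet.
            n_out = len(p_y_given_x[0])
            p_y = [0.5 * (p_y_given_x[0][y] + p_y_given_x[1][y]) for y in range(n_out)]
            mi = 0.0
            for x in range(2):
                for y in range(n_out):
                    p = p_y_given_x[x][y]
                    if p > 0:
                        mi += 0.5 * p * math.log2(p / p_y[y])
            return mi

        def prob_above(threshold, mean, sigma):
            # P(cell voltage > threshold) under a Gaussian cell model.
            return 0.5 * math.erfc((threshold - mean) / (sigma * math.sqrt(2)))

        def best_single_read_threshold(mean0=0.0, mean1=1.0, sigma=0.35):
            # Sweep one read threshold and return the MMI choice (toy example).
            best_t, best_mi = None, -1.0
            for i in range(201):
                t = -0.5 + 2.0 * i / 200
                above0 = prob_above(t, mean0, sigma)
                above1 = prob_above(t, mean1, sigma)
                channel = [[1 - above0, above0],    # level 0: below / above
                           [1 - above1, above1]]    # level 1: below / above
                mi = mutual_information(channel)
                if mi > best_mi:
                    best_t, best_mi = t, mi
            return best_t, best_mi

        print(best_single_read_threshold())    # threshold near 0.5 by symmetry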

  • The cycle consistency matrix approach to absorbing sets in separable circulant-based LDPC codes
    arXiv: Information Theory, 2012
    Co-Authors: Jiadong Wang, Lara Dolecek, Richard D Wesel
    Abstract:

    For LDPC codes operating over additive white Gaussian noise channels and decoded using message-passing decoders with limited precision, absorbing sets have been shown to be a key factor in error floor behavior. Focusing on this scenario, this paper introduces the cycle consistency matrix (CCM) as a powerful analytical tool for characterizing and avoiding absorbing sets in separable circulant-based (SCB) LDPC codes. SCB codes include a wide variety of regular LDPC codes such as array-based LDPC codes as well as many common quasi-cyclic codes. As a consequence of its cycle structure, each potential absorbing set in an SCB LDPC code has a CCM, and an absorbing set can be present in an SCB LDPC code only if the associated CCM has a nontrivial null space. CCM-based analysis can determine the multiplicity of an absorbing set in an SCB code, and CCM-based constructions avoid certain small absorbing sets completely. While these techniques can be applied to an SCB code of any rate, lower-rate SCB codes can usually avoid small absorbing sets because of their higher variable-node degree. This paper focuses attention on the high-rate scenario in which the CCM constructions provide the most benefit. Simulation results demonstrate that under limited-precision decoding the new codes have steeper error-floor slopes and can provide one order of magnitude of improvement in the low-FER region.