Arithmetic

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 364,545 Experts worldwide, ranked by the ideXlab platform.

Javier Garrido - One of the best experts on this subject based on the ideXlab platform.

  • Parametrizable Fixed-Point Arithmetic for HIL With Small Simulation Steps
    IEEE Journal of Emerging and Selected Topics in Power Electronics, 2019
    Co-Authors: Alberto Sanchez, Angel De Castro, Javier Garrido
    Abstract:

    Hardware-in-the-loop (HIL) techniques are increasingly used for test purposes because of their advantages over classical simulations. Field-programmable gate arrays (FPGAs) are becoming popular in HIL systems because of their parallel computing capabilities. In most cases, FPGAs are used mainly for signal processing, such as input pulse-width modulation sampling and conditioning, while processors model the system; however, other HIL systems implement the model in the FPGA itself. For FPGA implementation, there are two main arithmetic choices: fixed-point and floating-point. Fixed-point is the best choice only when real-time simulations with small simulation steps are needed, while floating-point is the common choice because of its flexibility and ease of use. This paper presents a novel hybrid arithmetic for FPGAs, called parametrizable fixed-point, which takes advantage of both: the internal operations are accomplished using simple signed integers, while the binary-point location of the variables can be adjusted as necessary without redesigning the model of the plant. The experimental results show that a buck converter can be modeled using this arithmetic with a simulation step below 20 ns. Moreover, the experiments prove that the proposed model can be adjusted to any set of values (voltages, currents, capacitances, etc.) while keeping its accuracy and without resynthesizing, a significant advantage over plain fixed-point arithmetic.
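The core idea of the abstract, integer-only internal operations with the binary-point position as a runtime parameter rather than a synthesis-time constant, can be sketched as follows. This is an illustrative Python sketch, not the authors' FPGA implementation; the function names and the Q-format convention are assumptions.

```python
def to_fixed(x: float, q: int) -> int:
    """Encode a real value as a signed integer with q fractional bits."""
    return round(x * (1 << q))

def from_fixed(n: int, q: int) -> float:
    """Decode a signed integer with q fractional bits back to a real value."""
    return n / (1 << q)

def fixed_mul(a: int, b: int, q: int) -> int:
    """Multiply two Q-format numbers; the raw integer product carries
    2q fractional bits, so shift right by q to restore the format."""
    return (a * b) >> q

# The same integer datapath serves two different scalings chosen at run time,
# which is the advantage claimed over ordinary fixed-point:
for q in (8, 16):
    v = to_fixed(3.3, q)        # e.g. a voltage
    i = to_fixed(0.5, q)        # e.g. a current
    p = fixed_mul(v, i, q)
    print(q, from_fixed(p, q))  # approximately 1.65 in both cases
```

Changing `q` rescales every variable without touching the integer operations themselves, which mirrors the paper's claim of adjusting the model without resynthesizing.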

Marek Landowski - One of the best experts on this subject based on the ideXlab platform.

  • Comparison of RDM Complex Interval Arithmetic and Rectangular Complex Arithmetic
    Applied Categorical Structures, 2016
    Co-Authors: Marek Landowski
    Abstract:

    The article presents RDM complex interval arithmetic in comparison with rectangular complex arithmetic. The basic operations and the main properties of both complex interval arithmetics are described. To show the application of RDM complex interval arithmetic, examples with complex variables were solved using both the RDM and the rectangular complex interval arithmetic. RDM stands for relative distance measure. RDM complex interval arithmetic is multidimensional; this property makes it possible to find a full solution of a problem with complex interval variables.
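The contrast the abstract draws can be illustrated on the self-subtraction Z − Z. A minimal sketch, assuming the usual rectangular representation (a real and an imaginary interval) and the RDM parametrization in which each interval variable carries its own RDM parameters α ∈ [0, 1]; this is not the article's code:

```python
def rect_sub(z1, z2):
    """Rectangular complex interval subtraction.
    Each interval is ((re_lo, re_hi), (im_lo, im_hi))."""
    (a1, a2), (b1, b2) = z1
    (c1, c2), (d1, d2) = z2
    return ((a1 - c2, a2 - c1), (b1 - d2, b2 - d1))

def rdm_self_sub(z, samples=5):
    """Z - Z evaluated in RDM form: both occurrences of Z share the same
    RDM variables, so every sampled point collapses to 0."""
    (a1, a2), (b1, b2) = z
    results = set()
    n = samples - 1
    for i in range(samples):
        for j in range(samples):
            ar, ai = i / n, j / n            # RDM variables in [0, 1]
            re = a1 + ar * (a2 - a1)
            im = b1 + ai * (b2 - b1)
            results.add(complex(re, im) - complex(re, im))
    return results

Z = ((1.0, 2.0), (3.0, 4.0))
print(rect_sub(Z, Z))    # ((-1.0, 1.0), (-1.0, 1.0)): spurious nonzero width
print(rdm_self_sub(Z))   # {0j}: the multidimensional view gives the exact result
```

The rectangular rule treats the two occurrences of Z as independent, which is exactly the one-dimensional limitation the RDM approach avoids.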

  • Differences Between Moore and RDM Interval Arithmetic
    IEEE Conf. on Intelligent Systems (1), 2015
    Co-Authors: Marek Landowski
    Abstract:

    Uncertainty theory solves problems with uncertain data. Often, performing arithmetic operations on uncertain data requires calculations on intervals. Interval arithmetic applies traditional mathematics to calculations on intervals. There are many methods that solve problems with uncertain data presented in the form of intervals, and each of them can give different results in some cases. The best-known arithmetic, often used by scientists in calculations, is Moore interval arithmetic. The article presents a comparison of Moore interval arithmetic and multidimensional RDM interval arithmetic. The basic operations and their properties are described for both. Solved examples show that the results obtained using RDM arithmetic are multidimensional, while Moore arithmetic gives a one-dimensional solution.
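The one-dimensional versus multidimensional distinction can be seen already on real intervals. A hedged sketch (not the article's code) of Moore subtraction next to the RDM parametrization x = a + α(b − a), α ∈ [0, 1], where both occurrences of the same variable share one α:

```python
def moore_sub(x, y):
    """Moore interval subtraction: [a, b] - [c, d] = [a - d, b - c]."""
    a, b = x
    c, d = y
    return (a - d, b - c)

def rdm_self_sub(x, samples=101):
    """x - x with a single RDM variable alpha shared by both occurrences."""
    a, b = x
    return {(a + k / (samples - 1) * (b - a)) - (a + k / (samples - 1) * (b - a))
            for k in range(samples)}

X = (1.0, 3.0)
print(moore_sub(X, X))   # (-2.0, 2.0): over-wide, one-dimensional result
print(rdm_self_sub(X))   # {0.0}: tracking alpha collapses X - X to zero
```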

  • Is the Conventional Interval Arithmetic Correct?
    Applied Computer Science, 2012
    Co-Authors: Andrzej Piegat, Marek Landowski
    Abstract:

    Interval arithmetic, as part of interval mathematics and granular computing, is unusually important for the development of science and engineering, given the necessity of taking into account the uncertainty and approximate nature of data occurring in almost all calculations. Interval arithmetic also conditions the development of artificial intelligence, and especially of automatic thinking, computing with words, grey systems, fuzzy arithmetic, and probabilistic arithmetic. However, the mostly used conventional Moore arithmetic has evident weak points. These weak points are well known, but it is nonetheless still frequently used. The paper presents the basic operations of RDM arithmetic, which does not possess the faults of Moore arithmetic. RDM arithmetic is based on a multidimensional approach; Moore arithmetic takes a one-dimensional approach to interval calculations. The paper also presents a testing method that allows clear checking of whether the results of any interval arithmetic are correct. It contains many examples and illustrations for better understanding of RDM arithmetic. Because of volume limitations, only the operations of addition and subtraction are discussed; multiplication and division of intervals will be presented in a subsequent publication. The author of the RDM arithmetic concept is Andrzej Piegat.
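One natural form of the correctness test mentioned above is substitution: solve A + X = C for the interval X, then check whether A + X actually reproduces C. The exact test used in the paper is not reproduced here; this sketch assumes the simple endpoint-equation form of the solution, so treat it as an illustration only:

```python
def add(x, y):
    """Interval addition: [a, b] + [c, d] = [a + c, b + d]."""
    return (x[0] + y[0], x[1] + y[1])

def moore_sub(x, y):
    """Moore interval subtraction: [a, b] - [c, d] = [a - d, b - c]."""
    return (x[0] - y[1], x[1] - y[0])

A, C = (1.0, 2.0), (4.0, 6.0)

# Moore candidate: X = C - A by interval subtraction.
X_moore = moore_sub(C, A)     # (2.0, 5.0)
print(add(A, X_moore))        # (3.0, 7.0) != C: fails the substitution check

# Candidate from the endpoint equations a1 + x1 = c1, a2 + x2 = c2:
X_endpoint = (C[0] - A[0], C[1] - A[1])   # (3.0, 4.0)
print(add(A, X_endpoint))                 # (4.0, 6.0) == C: passes
```

The failure of the Moore candidate under back-substitution is one of the well-known weak points the abstract alludes to.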

Pranesh Srikara - One of the best experts on this subject based on the ideXlab platform.

  • Simulating Low Precision Floating-Point Arithmetic
    2019
    Co-Authors: Higham, Nicholas J., Pranesh Srikara
    Abstract:

    The half precision (fp16) floating-point format, defined in the 2008 revision of the IEEE standard for floating-point arithmetic, and the more recently proposed half precision format bfloat16 are increasingly available in GPUs and other accelerators. While the support for low precision arithmetic is mainly motivated by machine learning applications, general purpose numerical algorithms can benefit from it too, gaining in speed and energy usage and reducing communication costs. Since the appropriate hardware is not always available, and one may wish to experiment with new arithmetics not yet implemented in hardware, software simulations of low precision arithmetic are needed. We discuss how to simulate low precision arithmetic using arithmetic of higher precision. We examine the correctness of such simulations and explain via rounding error analysis why a natural method of simulation can provide results that are more accurate than actual computations at low precision. We provide a MATLAB function chop that can be used to efficiently simulate fp16, bfloat16, and other low precision arithmetics, with or without the representation of subnormal numbers, and with the options of round to nearest, directed rounding, stochastic rounding, and random bit flips in the significand. We demonstrate the advantages of this approach over defining a new MATLAB class and overloading operators.
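The basic mechanism, rounding a higher-precision value to a target number of significand bits, can be approximated in a few lines. This is a rough Python analogue of the idea, not the authors' MATLAB chop: it handles only round to nearest and ignores exponent range and subnormals.

```python
import math

def chop(x: float, t: int = 11) -> float:
    """Round x to t significand bits with round to nearest
    (t = 11 mimics fp16, t = 8 mimics bfloat16).
    Exponent limits and subnormals are ignored in this sketch."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)             # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << t
    return math.ldexp(round(m * scale) / scale, e)

# fp16-like rounding of 1/3 (agrees with the IEEE binary16 value):
print(chop(1/3, t=11))   # 0.333251953125
# bfloat16-like rounding is coarser:
print(chop(1/3, t=8))    # 0.333984375
```

Because the rounding is done once per operation result in double precision, intermediate quantities are computed exactly before being chopped, which is related to the paper's observation that simulated low precision can be more accurate than native low-precision hardware.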
