Computational Cost

The experts below are selected from a list of 258,096 experts worldwide, ranked by the ideXlab platform.

Raúl Santiago-montero - One of the best experts on this subject based on the ideXlab platform.

  • On the accuracy and computational cost of spiking neuron implementation.
    Neural Networks: the official journal of the International Neural Network Society, 2019
    Co-Authors: Sergio Valadez-godínez, Humberto Sossa, Raúl Santiago-montero
    Abstract:

    For more than a decade, three statements about spiking neuron (SN) implementations have been widely accepted: 1) the Hodgkin and Huxley (HH) model is computationally prohibitive, 2) the Izhikevich (IZH) artificial neuron is as efficient as the Leaky Integrate-and-Fire (LIF) model, and 3) the IZH model is more efficient than the HH model (Izhikevich, 2004). As suggested by Hodgkin and Huxley (1952), their model operates in two modes: by using the α and β rate functions directly (HH model) or by storing them in tables (HHT model) to reduce computational cost. Recently, it has been stated that: 1) the HHT model (HH using tables) is not prohibitive, 2) the IZH model is not efficient, and 3) the HHT and IZH models are comparable in computational cost (Skocik & Long, 2014). This controversy shows that there is no consensus concerning SN simulation capacities. Hence, in this work, we introduce a refined approach, based on multiobjective optimization theory, for describing SN simulation capacities and ultimately choosing optimal simulation parameters. We used normalized metrics to define the capacity levels of accuracy, computational cost, and efficiency; normalized metrics allow comparisons between SNs at the same level or scale. We conducted tests for balanced, lower, and upper boundary conditions under a regular spiking mode with constant and random current stimuli, and we found optimal simulation parameters leading to a balance between computational cost and accuracy. Importantly, and in general, we found that 1) the HH model (without tables) is the most accurate, the least computationally expensive, and the most efficient, 2) the IZH model is the most expensive and the least efficient, 3) the LIF and HHT models are the least accurate, 4) the HHT model is more expensive and less accurate than the HH model because of the discretization of the α and β tables, and 5) the HHT model is not comparable in computational cost to the IZH model. These results refute the theory formulated over a decade ago (Izhikevich, 2004) and go deeper into the statements formulated by Skocik and Long (2014). Our findings imply that the number of dimensions or FLOPS of an SN is a theoretical, but not a practical, indicator of its true computational cost. The metric we propose for computational cost is more precise than FLOPS and was found to be invariant to computer architecture. Moreover, we found that the firing frequency used in previous works is a necessary but insufficient metric for evaluating simulation accuracy. We also show that our results are consistent with the theory of numerical methods and the theory of SN discontinuity: discontinuous SNs, such as the LIF and IZH models, introduce a considerable error every time a spike is generated. In addition, compared to a constant input current, a random input current increases both the computational cost and the inaccuracy. We also found that the search for optimal simulation parameters is problem-specific. This is important because most previous works have tried to find a single, general optimal simulation; here, we show that such a solution cannot exist because this is a multiobjective optimization problem that depends on several factors. This work sets up a renewed thesis concerning SN simulation that is useful to several related research areas, including the emergent Deep Spiking Neural Networks.
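
    The trade-off described in this abstract can be made concrete with a small experiment. The following Python sketch (not the authors' benchmark; the neuron parameters, stimuli, and step sizes are illustrative assumptions) integrates the LIF and IZH models with forward Euler and records wall-clock time and spike counts at several step sizes, the kind of raw measurement from which normalized accuracy and cost metrics can be built.

    # Minimal sketch (not the paper's benchmark): compare the wall-clock cost and
    # spike counts of two spiking-neuron models integrated with forward Euler.
    # All parameter values below are the usual textbook ones and are assumptions.
    import time

    def simulate_lif(I, dt, T=1000.0, tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
        # Leaky Integrate-and-Fire; returns spike times in ms.
        v, spikes = v_rest, []
        for k in range(int(T / dt)):
            v += dt * (-(v - v_rest) + I) / tau
            if v >= v_th:                 # discontinuity: reset on spike
                spikes.append(k * dt)
                v = v_reset
        return spikes

    def simulate_izh(I, dt, T=1000.0, a=0.02, b=0.2, c=-65.0, d=8.0):
        # Izhikevich regular-spiking neuron; returns spike times in ms.
        v, u, spikes = -65.0, -65.0 * b, []
        for k in range(int(T / dt)):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                 # discontinuity: reset on spike
                spikes.append(k * dt)
                v, u = c, u + d
        return spikes

    def cost_and_spikes(sim, I, dt):
        # Return (runtime in seconds, number of spikes) for one simulation run.
        t0 = time.perf_counter()
        spikes = sim(I, dt)
        return time.perf_counter() - t0, len(spikes)

    for dt in (1.0, 0.1, 0.01):
        c_lif, n_lif = cost_and_spikes(simulate_lif, 20.0, dt)
        c_izh, n_izh = cost_and_spikes(simulate_izh, 10.0, dt)
        print(f"dt={dt:5.2f} ms  LIF: {c_lif:.4f} s / {n_lif} spikes   "
              f"IZH: {c_izh:.4f} s / {n_izh} spikes")

    Smaller step sizes trade longer runtimes for spike trains that change less between refinements, which is the accuracy-versus-cost tension the abstract formalizes with normalized metrics.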

  • The step size impact on the computational cost of spiking neuron simulation
    2017 Computing Conference, 2017
    Co-Authors: Sergio Valadez-godínez, Humberto Sossa, Raúl Santiago-montero
    Abstract:

    Spiking neurons are mathematical models that simulate the generation of the electrical pulse at the neuron membrane. Most spiking neurons are expressed as a non-linear system of ordinary differential equations. Because these systems are hard to solve analytically, they must be solved with a numerical method over a discrete sequence of time steps. The step length is a factor affecting both the accuracy and the computational cost of spiking neuron simulation. The implications of the step size for accuracy are known for some spiking neurons; however, it is unknown how the step size affects the computational cost. We found that the computational cost, as a function of the step length, follows a power-law distribution. We reviewed the Leaky Integrate-and-Fire, Izhikevich, and Hodgkin-Huxley spiking neurons. Additionally, we found that, at any step size, simulating the cerebral cortex on a sequential-processing computer is prohibitive.
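
    A quick way to see the reported power-law relationship is to time a single neuron over a range of step sizes and fit a straight line in log-log space. The sketch below uses a hypothetical forward-Euler LIF helper and a least-squares fit with NumPy; it illustrates the measurement, not the paper's methodology, and all parameter values are assumptions.

    # Hedged sketch: time a forward-Euler LIF simulation at several step sizes and
    # fit cost ≈ a * dt**b, which is a straight line in log-log coordinates.
    import time
    import numpy as np

    def euler_lif(I, dt, T=1000.0, tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
        # Illustrative LIF neuron; returns the number of spikes generated.
        v, spikes = v_rest, 0
        for k in range(int(T / dt)):
            v += dt * (-(v - v_rest) + I) / tau
            if v >= v_th:
                spikes += 1
                v = v_reset
        return spikes

    step_sizes = np.array([1.0, 0.5, 0.1, 0.05, 0.01])        # ms
    costs = []
    for dt in step_sizes:
        t0 = time.perf_counter()
        euler_lif(20.0, dt)
        costs.append(time.perf_counter() - t0)

    # Least-squares fit of log(cost) against log(dt); the slope is the exponent.
    b, log_a = np.polyfit(np.log(step_sizes), np.log(costs), 1)
    print(f"fitted exponent b = {b:.2f}  (b close to -1 means cost grows as 1/dt)")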

Sergio Valadez-godínez - One of the best experts on this subject based on the ideXlab platform.

  • On the accuracy and computational cost of spiking neuron implementation.
    Neural Networks: the official journal of the International Neural Network Society, 2019
    Co-Authors: Sergio Valadez-godínez, Humberto Sossa, Raúl Santiago-montero
    Abstract:

    Identical to the abstract listed under Raúl Santiago-montero above.

  • The step size impact on the computational cost of spiking neuron simulation
    2017 Computing Conference, 2017
    Co-Authors: Sergio Valadez-godínez, Humberto Sossa, Raúl Santiago-montero
    Abstract:

    Identical to the abstract listed under Raúl Santiago-montero above.

Humberto Sossa - One of the best experts on this subject based on the ideXlab platform.

  • On the accuracy and computational cost of spiking neuron implementation.
    Neural Networks: the official journal of the International Neural Network Society, 2019
    Co-Authors: Sergio Valadez-godínez, Humberto Sossa, Raúl Santiago-montero
    Abstract:

    Identical to the abstract listed under Raúl Santiago-montero above.

  • The step size impact on the computational cost of spiking neuron simulation
    2017 Computing Conference, 2017
    Co-Authors: Sergio Valadez-godínez, Humberto Sossa, Raúl Santiago-montero
    Abstract:

    Identical to the abstract listed under Raúl Santiago-montero above.

Idris A Eckley - One of the best experts on this subject based on the ideXlab platform.

  • Optimal detection of changepoints with a linear computational cost
    Journal of the American Statistical Association, 2012
    Co-Authors: Rebecca Killick, Paul Fearnhead, Idris A Eckley
    Abstract:

    In this article, we consider the problem of detecting multiple changepoints in large datasets. Our focus is on applications where the number of changepoints will increase as we collect more data: for example, in genetics as we analyze larger regions of the genome, or in finance as we observe time series over longer periods. We consider the common approach of detecting changepoints by minimizing a cost function over possible numbers and locations of changepoints. This includes several established procedures for detecting changepoints, such as penalized likelihood and minimum description length. We introduce a new method for finding the minimum of such cost functions, and hence the optimal number and location of changepoints, that has a computational cost which, under mild conditions, is linear in the number of observations. This compares favorably with existing methods for the same problem, whose computational cost can be quadratic or even cubic. In simulation studies, we show that our new method can be orders of magnitude faster than these alternative exact methods.

  • Optimal detection of changepoints with a linear computational cost
    arXiv: Methodology, 2011
    Co-Authors: Rebecca Killick, Paul Fearnhead, Idris A Eckley
    Abstract:

    We consider the problem of detecting multiple changepoints in large data sets. Our focus is on applications where the number of changepoints will increase as we collect more data: for example in genetics as we analyse larger regions of the genome, or in finance as we observe time-series over longer periods. We consider the common approach of detecting changepoints by minimising a cost function over possible numbers and locations of changepoints. This includes several established procedures for detecting changepoints, such as penalised likelihood and minimum description length. We introduce a new method for finding the minimum of such cost functions, and hence the optimal number and location of changepoints, that has a computational cost which, under mild conditions, is linear in the number of observations. This compares favourably with existing methods for the same problem, whose computational cost can be quadratic or even cubic. In simulation studies we show that our new method can be orders of magnitude faster than these alternative exact methods. We also compare with the Binary Segmentation algorithm for identifying changepoints, showing that the exactness of our approach can lead to substantial improvements in the accuracy of the inferred segmentation of the data.
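
    The core idea, minimizing a penalized cost over all possible segmentations with a pruned dynamic program, can be sketched in a few dozen lines. The Python sketch below implements a PELT-style search with an l2 (change-in-mean) segment cost and a log-based penalty; these particular choices, and the toy data, are illustrative assumptions rather than the exact configuration studied in the paper.

    # Hedged sketch of penalized changepoint detection in the spirit of PELT
    # (Killick, Fearnhead & Eckley). The l2 segment cost and the BIC-style
    # penalty below are common choices, not necessarily the paper's exact setup.
    import numpy as np

    def l2_cost(cs, cs2, s, t):
        # Cost of segment y[s:t] (0-based, half-open): residual sum of squares
        # around the segment mean, computed in O(1) from cumulative sums.
        n = t - s
        seg_sum = cs[t] - cs[s]
        seg_sq = cs2[t] - cs2[s]
        return seg_sq - seg_sum * seg_sum / n

    def pelt(y, penalty):
        # Minimize sum of segment costs plus a penalty per changepoint,
        # pruning candidate split points that can never be optimal again.
        y = np.asarray(y, dtype=float)
        n = len(y)
        cs = np.concatenate(([0.0], np.cumsum(y)))
        cs2 = np.concatenate(([0.0], np.cumsum(y * y)))
        F = np.full(n + 1, np.inf)
        F[0] = -penalty
        last = np.zeros(n + 1, dtype=int)
        candidates = [0]
        for t in range(1, n + 1):
            vals = [F[s] + l2_cost(cs, cs2, s, t) + penalty for s in candidates]
            best = int(np.argmin(vals))
            F[t] = vals[best]
            last[t] = candidates[best]
            # Pruning step: keep only split points that may still be optimal.
            candidates = [s for s, v in zip(candidates, vals) if v - penalty <= F[t]]
            candidates.append(t)
        # Backtrack the optimal segmentation.
        cps, t = [], n
        while t > 0:
            t = last[t]
            if t > 0:
                cps.append(int(t))
        return sorted(cps)

    # Toy usage: a signal with mean shifts at 100 and 200.
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100), rng.normal(0, 1, 100)])
    print(pelt(y, penalty=3.0 * np.log(len(y))))   # expected: roughly [100, 200]

    Without the pruning line, the same loop is the quadratic "optimal partitioning" recursion; the pruning is what brings the cost down to roughly linear under the conditions discussed in the paper.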

Peter W. Sauer - One of the best experts on this subject based on the ideXlab platform.

  • Modeling Approaches for Computational Cost Reduction in Stochastic Unit Commitment Formulations
    IEEE Transactions on Power Systems, 2010
    Co-Authors: Pablo A. Ruiz, Russ C. Philbrick, Peter W. Sauer
    Abstract:

    Although stochastic commitment policies provide robust and efficient solutions, they have high computational costs. This letter considers two modeling approaches for reducing their computational effort: relaxing the integrality constraint of fast-start units, and modeling generation outages as load increments. These approaches are shown to reduce the computational cost while maintaining the good properties of stochastic policies.
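
    As a rough illustration of the first approach, relaxing the integrality constraint of fast-start units, the Python sketch below builds a toy two-unit, two-scenario stochastic commitment problem with PuLP and solves it once with a binary fast-start commitment variable and once with that variable relaxed to the interval [0, 1]. All data and the model structure are illustrative assumptions; the letter's actual formulations are far richer.

    # Hedged sketch: toy stochastic unit commitment in PuLP, showing how relaxing
    # the fast-start unit's integrality removes binaries from the second stage.
    import pulp

    def solve(relax_fast_start):
        scenarios = {"s1": 120.0, "s2": 160.0}       # demand per scenario (MW)
        prob = pulp.LpProblem("toy_stochastic_uc", pulp.LpMinimize)

        # First-stage commitment of the slow (baseload) unit: always binary.
        u_slow = pulp.LpVariable("u_slow", cat="Binary")
        # Commitment of the fast-start unit, per scenario: binary or relaxed to [0, 1].
        cat = "Continuous" if relax_fast_start else "Binary"
        u_fast = {s: pulp.LpVariable(f"u_fast_{s}", lowBound=0, upBound=1, cat=cat)
                  for s in scenarios}
        # Dispatch variables per scenario.
        p_slow = {s: pulp.LpVariable(f"p_slow_{s}", lowBound=0) for s in scenarios}
        p_fast = {s: pulp.LpVariable(f"p_fast_{s}", lowBound=0) for s in scenarios}

        # Expected cost: no-load plus energy costs, equal scenario probabilities.
        prob += 0.5 * pulp.lpSum(
            100 * u_slow + 20 * p_slow[s] + 50 * u_fast[s] + 40 * p_fast[s]
            for s in scenarios)

        for s, demand in scenarios.items():
            prob += p_slow[s] + p_fast[s] == demand          # power balance
            prob += p_slow[s] <= 150 * u_slow                # slow-unit capacity
            prob += p_fast[s] <= 60 * u_fast[s]              # fast-unit capacity

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return pulp.value(prob.objective)

    print("binary fast-start :", solve(relax_fast_start=False))
    print("relaxed fast-start:", solve(relax_fast_start=True))

    The relaxed version has fewer binary variables, which is the source of the computational savings, at the price of a commitment level for the fast-start unit that may be fractional.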