Activation Function


The Experts below are selected from a list of 450,228 Experts worldwide, ranked by the ideXlab platform.

Pravin Chandra - One of the best experts on this subject based on the ideXlab platform.

  • Bi-modal derivative adaptive Activation Function sigmoidal feedforward artificial neural networks
    Applied Soft Computing, 2017
    Co-Authors: Akash Mishra, Pravin Chandra, Udayan Ghose, Sartaj Singh Sodhi
    Abstract:

    In this work an adaptive mechanism for choosing the Activation Function is proposed and described. Four bi-modal derivative sigmoidal adaptive Activation Functions are used as the Activation Function at the hidden layer of a single hidden layer sigmoidal feedforward artificial neural network. These four bi-modal derivative Activation Functions are grouped as asymmetric and anti-symmetric Activation Functions (two in each group). For comparison, the logistic Function (an asymmetric Function) and the Function obtained by subtracting 0.5 from it (an anti-symmetric Function) are also used as Activation Functions for the hidden layer nodes. The resilient backpropagation algorithm with improved weight-backtracking (iRprop+) is used to adapt the parameter of the Activation Functions as well as the weights and biases of the networks. The learning tasks used to demonstrate the efficacy and efficiency of the proposed mechanism are 10 Function approximation tasks and four real benchmark problems taken from the UCI machine learning repository. The results demonstrate that, for both asymmetric and anti-symmetric usage, the proposed adaptive Activation Functions are as good as, if not better than, the corresponding sigmoidal Function without any adaptive parameter when used as the Activation Function of the hidden layer nodes.
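
    A minimal sketch of the kind of Activation described above (not the authors' exact parameterisation): a sum of shifted log-sigmoids whose shift parameter a would, in the paper, be adapted by iRprop+ together with the network weights; subtracting the offset gives the anti-symmetric counterpart. The 0.5 scaling and the parameter names are assumptions made for illustration.

        import numpy as np

        def logsig(x):
            # Standard logistic (log-sigmoid) Function.
            return 1.0 / (1.0 + np.exp(-x))

        def bimodal_asymmetric(x, a):
            # Sum of two shifted log-sigmoids, rescaled to stay in (0, 1).
            # For a suitable nonzero shift a, its derivative has two maxima
            # of equal height (a "bi-modal derivative" Activation Function).
            return 0.5 * (logsig(x + a) + logsig(x - a))

        def bimodal_antisymmetric(x, a):
            # Anti-symmetric variant: f(-x) = -f(x), obtained by removing the offset.
            return bimodal_asymmetric(x, a) - 0.5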

  • A non-sigmoidal Activation Function for feedforward artificial neural networks
    International Joint Conference on Neural Networks, 2015
    Co-Authors: Pravin Chandra, Udayan Ghose, Apoorvi Sood
    Abstract:

    For a single hidden layer feedforward artificial neural network to possess the universal approximation property, it is sufficient that the hidden layer nodes' Activation Function be a continuous non-polynomial Function; it is not required that the Activation Function be sigmoidal. In this paper a simple continuous, bounded, non-constant, differentiable, non-sigmoid and non-polynomial Function is proposed for use as the Activation Function at the hidden layer nodes. The proposed Activation Function does not require the computation of an exponential Function, and is thus computationally less intensive than either the log-sigmoid or the hyperbolic tangent Function. On a set of 10 Function approximation tasks we demonstrate the efficiency and efficacy of the proposed Activation Function. The results allow us to assert that, at least on these 10 tasks, in equal epochs of training the networks using the proposed Activation Function reach deeper minima of the error Functional and generalize better in most cases, and are statistically as good as, if not better than, networks using the logistic Function as the Activation Function at the hidden nodes.
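
    The specific Function proposed in the paper is not reproduced here. As an illustration only, the Function below is one example of a continuous, bounded, non-constant, differentiable, non-sigmoidal, non-polynomial choice that avoids the exponential; it is an assumption for the sketch, not necessarily the authors' Function.

        import numpy as np

        def nonsigmoid_activation(x):
            # Bounded (values lie in [-0.5, 0.5]), differentiable, non-polynomial,
            # non-monotone (hence non-sigmoidal), and free of exp() calls, so it is
            # cheaper to evaluate than the log-sigmoid or hyperbolic tangent.
            return x / (1.0 + x ** 2)

        def nonsigmoid_activation_deriv(x):
            # Derivative used during backpropagation.
            return (1.0 - x ** 2) / (1.0 + x ** 2) ** 2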

  • Bi-modal derivative Activation Function for sigmoidal feedforward networks
    Neurocomputing, 2014
    Co-Authors: Sartaj Singh Sodhi, Pravin Chandra
    Abstract:

    A new class of Activation Functions is proposed as the sum of shifted log-sigmoid Activation Functions. This has the effect of making the derivative of the Activation Function with respect to the net input bi-modal; that is, the derivative has two maxima of equal value for nonzero values of the parameter that parametrises the proposed class of Activation Functions. On a set of ten Function approximation tasks, there exist networks using the proposed Activation that achieve lower generalisation error, in equal epochs of training, with the resilient backpropagation algorithm. On a set of four benchmark problems taken from the UCI machine learning repository, for which the networks are trained using the resilient backpropagation algorithm, the scaled conjugate gradient algorithm, the Levenberg-Marquardt algorithm and the quasi-Newton BFGS algorithm, we observe that the proposed Activation Functions lead to better generalisation results, similar to the results for the ten Function approximation tasks wherein the networks were trained using the resilient backpropagation algorithm.
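
    A quick numerical illustration of the bi-modal derivative property of a sum of shifted log-sigmoids; the shift value used below is arbitrary and only serves to make the two peaks visible.

        import numpy as np

        def logsig(x):
            return 1.0 / (1.0 + np.exp(-x))

        a = 3.0                                  # illustrative shift, not a value from the paper
        x = np.linspace(-10.0, 10.0, 2001)
        f = logsig(x + a) + logsig(x - a)        # sum of shifted log-sigmoids
        df = np.gradient(f, x)                   # numerical derivative

        # The derivative shows two local maxima of (numerically) equal height,
        # located near x = -a and x = +a.
        is_peak = (df[1:-1] > df[:-2]) & (df[1:-1] > df[2:])
        print("local maxima of the derivative at:", x[1:-1][is_peak])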

  • An adaptive sigmoidal Activation Function cascading neural networks
    Soft Computing, 2011
    Co-Authors: Sudhir Kumar Sharma, Pravin Chandra
    Abstract:

    In this paper, we propose a cascading neural network algorithm with an adaptive sigmoidal Activation Function. The proposed algorithm emphasizes both architectural adaptation and Functional adaptation during training; it is a constructive approach that builds the cascading architecture dynamically. To achieve Functional adaptation, an adaptive sigmoidal Activation Function is proposed for the hidden layer nodes. The algorithm determines not only the optimum number of hidden layer nodes but also the optimum sigmoidal Function for them. Four variants of the proposed algorithm are developed and discussed on the basis of the Activation Function used. All the variants are empirically evaluated on five regression Functions in terms of learning accuracy and generalization capability. Simulation results reveal that the adaptive sigmoidal Activation Function presents several advantages over the traditional fixed sigmoid Function, resulting in increased flexibility, smoother learning, better learning accuracy and better generalization performance.
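
    One common way to realise an adaptive sigmoid is to keep the logistic shape but learn a slope parameter for each hidden node added during the constructive process; the parameterisation below is an illustrative assumption, not the paper's exact definition.

        import numpy as np

        def adaptive_sigmoid(x, slope):
            # Logistic sigmoid with a trainable slope; slope = 1.0 recovers the
            # traditional fixed sigmoid Function.
            return 1.0 / (1.0 + np.exp(-slope * x))

        def adaptive_sigmoid_grads(x, slope):
            # Gradients with respect to the input and to the slope parameter,
            # both needed when the slope is trained along with the weights.
            y = adaptive_sigmoid(x, slope)
            return slope * y * (1.0 - y), x * y * (1.0 - y)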

  • An Activation Function adapting training algorithm for sigmoidal feedforward networks
    Neurocomputing, 2004
    Co-Authors: Pravin Chandra, Yogesh Singh
    Abstract:

    The universal approximation results for sigmoidal feedforward artificial neural networks do not recommend a preferred Activation Function. In this paper a new Activation Function adapting algorithm is proposed for sigmoidal feedforward neural network training. The algorithm is compared against the backpropagation algorithm on four Function approximation tasks. The results demonstrate that the proposed algorithm can be an order of magnitude faster than the backpropagation algorithm.
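
    A minimal sketch of the general idea of adapting the Activation Function during training: a single sigmoidal unit whose slope parameter is updated by gradient descent together with the weight and bias. The slope parameterisation, learning rate and toy data are assumptions for illustration; the paper's algorithm is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-1.0, 1.0, size=200)
        t = 1.0 / (1.0 + np.exp(-3.0 * x))        # toy target: a steeper sigmoid

        w, b, slope = 0.1, 0.0, 1.0               # weight, bias, adaptive slope
        lr = 0.5

        for epoch in range(5000):
            z = w * x + b
            y = 1.0 / (1.0 + np.exp(-slope * z))  # adaptive sigmoid output
            err = y - t                           # derivative of squared error w.r.t. y
            dz = err * slope * y * (1.0 - y)      # chain rule back to the net input
            w -= lr * np.mean(dz * x)
            b -= lr * np.mean(dz)
            slope -= lr * np.mean(err * z * y * (1.0 - y))

        # Since y = sigmoid(slope * (w * x + b)), the product slope * w should
        # drift towards the target gain of 3 as training proceeds.
        print("slope * w after training:", slope * w)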

Aurelio Uncini - One of the best experts on this subject based on the ideXlab platform.

  • Multilayer feedforward networks with adaptive spline Activation Function
    IEEE Transactions on Neural Networks, 1999
    Co-Authors: S Guarnieri, Francesco Piazza, Aurelio Uncini
    Abstract:

    In this paper, a new adaptive spline Activation Function neural network (ASNN) is presented. Due to the ASNN's high representation capabilities, networks with a small number of interconnections can be trained to solve real-time pattern recognition and data processing problems. The main idea is to use a Catmull-Rom cubic spline as the neuron's Activation Function, which ensures a simple structure suitable for both software and hardware implementation. Experimental results demonstrate improvements in terms of both generalization capability and learning speed in pattern recognition and data processing tasks.
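
    A minimal sketch of a uniform Catmull-Rom spline used as an Activation Function, with the control-point ordinates acting as the adaptable parameters; the grid placement, range and initialisation below are illustrative assumptions rather than the ASNN's actual configuration.

        import numpy as np

        def catmull_rom_activation(x, q, x_min=-2.0, x_max=2.0):
            # q holds the adaptable control-point ordinates on a uniform grid
            # over [x_min, x_max]; in an adaptive-spline neuron these would be
            # trained together with the connection weights.
            n = len(q)
            h = (x_max - x_min) / (n - 1)
            x = np.clip(x, x_min, x_max - 1e-9)
            i = np.floor((x - x_min) / h).astype(int)   # segment index
            t = (x - x_min) / h - i                     # local coordinate in [0, 1)
            # Neighbouring control points (boundary points are repeated at the edges).
            p0 = q[np.clip(i - 1, 0, n - 1)]
            p1 = q[np.clip(i,     0, n - 1)]
            p2 = q[np.clip(i + 1, 0, n - 1)]
            p3 = q[np.clip(i + 2, 0, n - 1)]
            # Standard uniform Catmull-Rom basis on the segment between p1 and p2.
            return 0.5 * (2.0 * p1
                          + (-p0 + p2) * t
                          + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                          + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

        # Control points initialised from tanh and then left free to adapt.
        q0 = np.tanh(np.linspace(-2.0, 2.0, 9))
        print(catmull_rom_activation(np.array([-1.0, 0.0, 1.0]), q0))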

  • Neural networks with adaptive spline Activation Function
    Mediterranean Electrotechnical Conference, 1996
    Co-Authors: P Campolucci, F Capperelli, S Guarnieri, Francesco Piazza, Aurelio Uncini
    Abstract:

    In this paper a new neural network architecture based on an adaptive Activation Function, called the generalized sigmoidal neural network (GSNN), is proposed. Activation Functions are usually sigmoidal, but other Functions, also depending on some free parameters, have been studied and applied. Most approaches tend to use relatively simple Functions (such as adaptive sigmoids), primarily due to computational complexity and difficulties in hardware realization. The proposed adaptive Activation Function, built as a piecewise approximation with suitable cubic splines, can have an arbitrary shape and makes it possible to reduce the overall size of the neural network, trading connection complexity for Activation Function complexity.

Sartaj Singh Sodhi - One of the best experts on this subject based on the ideXlab platform.

  • Bi-modal derivative adaptive Activation Function sigmoidal feedforward artificial neural networks
    Applied Soft Computing, 2017
    Co-Authors: Akash Mishra, Pravin Chandra, Udayan Ghose, Sartaj Singh Sodhi
    Abstract:

    In this work an adaptive mechanism for choosing the Activation Function is proposed and described. Four bi-modal derivative sigmoidal adaptive Activation Functions are used as the Activation Function at the hidden layer of a single hidden layer sigmoidal feedforward artificial neural network. These four bi-modal derivative Activation Functions are grouped as asymmetric and anti-symmetric Activation Functions (two in each group). For comparison, the logistic Function (an asymmetric Function) and the Function obtained by subtracting 0.5 from it (an anti-symmetric Function) are also used as Activation Functions for the hidden layer nodes. The resilient backpropagation algorithm with improved weight-backtracking (iRprop+) is used to adapt the parameter of the Activation Functions as well as the weights and biases of the networks. The learning tasks used to demonstrate the efficacy and efficiency of the proposed mechanism are 10 Function approximation tasks and four real benchmark problems taken from the UCI machine learning repository. The results demonstrate that, for both asymmetric and anti-symmetric usage, the proposed adaptive Activation Functions are as good as, if not better than, the corresponding sigmoidal Function without any adaptive parameter when used as the Activation Function of the hidden layer nodes.

  • Bi-modal derivative Activation Function for sigmoidal feedforward networks
    Neurocomputing, 2014
    Co-Authors: Sartaj Singh Sodhi, Pravin Chandra
    Abstract:

    A new class of Activation Functions is proposed as the sum of shifted log-sigmoid Activation Functions. This has the effect of making the derivative of the Activation Function with respect to the net input bi-modal; that is, the derivative has two maxima of equal value for nonzero values of the parameter that parametrises the proposed class of Activation Functions. On a set of ten Function approximation tasks, there exist networks using the proposed Activation that achieve lower generalisation error, in equal epochs of training, with the resilient backpropagation algorithm. On a set of four benchmark problems taken from the UCI machine learning repository, for which the networks are trained using the resilient backpropagation algorithm, the scaled conjugate gradient algorithm, the Levenberg-Marquardt algorithm and the quasi-Newton BFGS algorithm, we observe that the proposed Activation Functions lead to better generalisation results, similar to the results for the ten Function approximation tasks wherein the networks were trained using the resilient backpropagation algorithm.

Nikos Pleros - One of the best experts on this subject based on the ideXlab platform.

  • All-optical recurrent neural network with sigmoid Activation Function
    Optical Fiber Communication Conference, 2020
    Co-Authors: George Mourgias-Alexandris, Nikolaos Passalis, Anastasios Tefas, G Dabos, Angelina R Totovic, Nikos Pleros
    Abstract:

    We experimentally demonstrate the first all-optical recurrent neuron with a sigmoid Activation Function and four WDM inputs with 100 psec pulses. The proposed neuron was employed in a neural network for financial prediction tasks, exhibiting an accuracy of 42.57% on FI-2010.

  • An all-optical neuron with sigmoid Activation Function
    Optics Express, 2019
    Co-Authors: George Mourgias-Alexandris, Apostolos Tsakyridis, Nikolaos Passalis, Anastasios Tefas, Konstantinos Vyrsokinos, Nikos Pleros
    Abstract:

    We present an all-optical neuron that utilizes a logistic sigmoid Activation Function, using a Wavelength-Division Multiplexing (WDM) input and weighting scheme. The Activation Function is realized by means of a deeply-saturated, differentially-biased Semiconductor Optical Amplifier Mach-Zehnder Interferometer (SOA-MZI) followed by an SOA Cross-Gain-Modulation (XGM) gate. Its transfer Function is analyzed both experimentally and theoretically, showing excellent agreement between theory and experiment and an almost perfect fit to a logistic sigmoid Function. The optical sigmoid transfer Function is then exploited in the experimental demonstration of a photonic neuron, demonstrating successful thresholding over a sequence of 100 psec-long pulses with 4 different weighted-and-summed power levels.
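
    A sketch of the kind of fit reported above: a parametric logistic sigmoid fitted to measured input/output power samples of the transfer Function. The data below are synthetic placeholders and the parameterisation is an assumption; no experimental values from the paper are used.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(p_in, amplitude, gain, midpoint, offset):
            # Parametric logistic sigmoid: output power versus input power.
            return amplitude / (1.0 + np.exp(-gain * (p_in - midpoint))) + offset

        # Synthetic "measured" samples standing in for the experimental transfer Function.
        p_in = np.linspace(0.0, 1.0, 21)
        p_out = logistic(p_in, 1.0, 12.0, 0.5, 0.05)
        p_out += 0.01 * np.random.default_rng(0).normal(size=p_in.size)

        params, _ = curve_fit(logistic, p_in, p_out, p0=[1.0, 10.0, 0.5, 0.0])
        print("fitted (amplitude, gain, midpoint, offset):", params)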

  • Experimental demonstration of an optical neuron with a logistic sigmoid Activation Function
    Optical Fiber Communication Conference, 2019
    Co-Authors: George Mourgias-Alexandris, Apostolos Tsakyridis, Nikolaos Passalis, Anastasios Tefas, Nikos Pleros
    Abstract:

    We experimentally demonstrate an optical neuron using an optical logistic sigmoid Activation Function. Successful thresholding at 4 different power levels was achieved, yielding a 100% improvement over the state of the art, using a sequence of 100 psec-long pulses.

Masayuki Tanaka - One of the best experts on this subject based on the ideXlab platform.

  • Weighted sigmoid gate unit for an Activation Function of deep neural network
    Pattern Recognition Letters, 2020
    Co-Authors: Masayuki Tanaka
    Abstract:

    An Activation Function has a crucial role in a deep neural network. The simple rectified linear unit (ReLU) is widely used as the Activation Function. In this paper, a weighted sigmoid gate unit (WiG) is proposed as the Activation Function. The proposed WiG consists of a multiplication of the inputs and a weighted sigmoid gate. It is shown that the WiG includes the ReLU and similar Activation Functions as special cases. Many Activation Functions have been proposed to outperform the ReLU; in the literature, their performance is mainly evaluated with an object recognition task. The proposed WiG is evaluated with both an object recognition task and an image restoration task. The experimental comparisons demonstrate that the proposed WiG outperforms the existing Activation Functions, including the ReLU.
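
    The basic form of the gate can be sketched as follows; the scalar weight and bias below are a simplification of the paper's weighted gate (which is applied with learned weights inside the network), so treat the exact parameterisation as an assumption.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def wig(x, w, b):
            # Weighted sigmoid gate unit: the input is multiplied element-wise by a
            # sigmoid gate driven by a weighted, biased copy of the same input.
            # With b = 0 and a very large positive w the gate approaches a step
            # Function, so wig(x, w, 0) approaches ReLU(x).
            return x * sigmoid(w * x + b)

        x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
        print(wig(x, 1.0, 0.0))     # w = 1 gives the SiLU/Swish-style x * sigmoid(x)
        print(wig(x, 50.0, 0.0))    # large w: output is close to ReLU(x)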

  • Weighted sigmoid gate unit for an Activation Function of deep neural network
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Masayuki Tanaka
    Abstract:

    An Activation Function has a crucial role in a deep neural network. The simple rectified linear unit (ReLU) is widely used as the Activation Function. In this paper, a weighted sigmoid gate unit (WiG) is proposed as the Activation Function. The proposed WiG consists of a multiplication of the inputs and a weighted sigmoid gate. It is shown that the WiG includes the ReLU and similar Activation Functions as special cases. Many Activation Functions have been proposed to outperform the ReLU; in the literature, their performance is mainly evaluated with an object recognition task. The proposed WiG is evaluated with both an object recognition task and an image restoration task. The experimental comparisons demonstrate that the proposed WiG outperforms the existing Activation Functions, including the ReLU.