Neural Nets

The Experts below are selected from a list of 15,573 Experts worldwide, ranked by the ideXlab platform

David Casasent - One of the best experts on this subject based on the ideXlab platform.

  • Optical processing and Neural Nets for scene analysis
    Optical Computing, 1993
    Co-Authors: David Casasent
    Abstract:

    We consider the problem of identifying each of multiple objects in a scene in the presence of object distortions and background clutter. Attention is given to the role of Neural Nets (NNs) and optics and to the type of NN used. A hierarchical/inference system is employed, with correlation Neural Nets at the low levels and new NNs with higher-order decision surfaces for classification.

  • Neural Nets for Scene Analysis
    1992
    Co-Authors: David Casasent
    Abstract:

    This project involved various new optical and digital Neural net techniques for scene analysis. The original Neural net concept was the adaptive clustering Neural net (ACNN), detailed in Chapter 2. Our original associative processor concept was the Ho-Kashyap Neural net, detailed in Chapter 3. Our overview of how Neural Nets should be used in scene analysis is given in Chapter 4, which also includes an overview of our two new higher-order Neural Nets. Our new PQNN Neural net (which produces higher-order decision surfaces much more efficiently than other Neural Nets) is noted in Chapter 5. To achieve high performance on systems whose components have limited analog accuracy and various nonidealities, we developed a new algorithm and technique, discussed in Chapter 6. We have fabricated our optical laboratory Neural net and tested it on several different case studies, achieving excellent results, as noted in Chapter 7.
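
    For reference, the Ho-Kashyap procedure named above is a classical algorithm that trains a two-class linear discriminant by jointly adjusting a weight vector and a positive margin vector. A minimal sketch of the textbook rule (not necessarily the report's exact associative-processor variant) in Python:

    ```python
    import numpy as np

    def ho_kashyap(X, y, rho=0.9, iters=1000, tol=1e-6):
        """Textbook Ho-Kashyap training of a two-class linear discriminant.
        X: (n, d) samples; y: (n,) labels in {+1, -1}. Returns an augmented
        weight vector a such that Y @ a > 0 when the classes are separable."""
        # Augment with a bias input and negate the second class, so that
        # correct classification of every sample reads Y @ a > 0.
        Y = np.hstack([X, np.ones((len(X), 1))]) * y[:, None]
        b = np.ones(len(Y))            # positive margin vector
        Y_pinv = np.linalg.pinv(Y)     # pseudo-inverse, computed once
        a = Y_pinv @ b
        for _ in range(iters):
            e = Y @ a - b              # error against current margins
            if np.all(np.abs(e) < tol):
                break
            b = b + rho * (e + np.abs(e))  # margins only ever increase
            a = Y_pinv @ b             # least-squares refit of the weights
        return a
    ```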

  • Accuracy effects in pattern recognition Neural Nets
    [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992
    Co-Authors: David Casasent
    Abstract:

    Various errors, including analog accuracy, nonlinearities, and noise, are present in all Neural networks. The author considers their effects in training and testing on two different pattern recognition Neural Nets. He shows that the Neural Nets considered allow some such effects to be included inherently in the Neural net synthesis algorithm and that the effect of the other error sources can be trained out by proper selection of Neural net design parameters. Multiclass distortion-invariant pattern recognition Neural Nets are considered. The results are applicable to analog VLSI and optical Neural Nets.
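
    The abstract does not spell out the synthesis algorithm, but one standard way to make a net tolerate analog component errors is to inject matching noise into the weights during training, so that the learned solution is robust by construction. A purely illustrative sketch under that assumption (the single-layer net, delta rule, and noise scale below are not from the paper):

    ```python
    import numpy as np

    def train_with_analog_noise(X, y, sigma=0.05, lr=0.1, epochs=200, seed=0):
        """Illustrative only: delta-rule training of a single tanh unit with
        Gaussian weight noise injected at every step, so the learned weights
        tolerate analog inaccuracy of scale `sigma`. X: (n, d); y in {-1, +1}."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = rng.normal(scale=0.01, size=d)
        for _ in range(epochs):
            W_noisy = W + rng.normal(scale=sigma, size=d)  # simulated analog error
            pred = np.tanh(X @ W_noisy)
            grad = X.T @ ((pred - y) * (1.0 - pred**2)) / n  # MSE gradient
            W -= lr * grad
        return W
    ```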

  • Higher-order decision surfaces in Neural Nets
    [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992
    Co-Authors: David Casasent
    Abstract:

    Several recent advances are described that use Neural-network methods to produce the higher-order decision surface required for difficult pattern recognition discrimination problems. Work at Carnegie Mellon University is emphasized and includes new hyperspherical Ho-Kashyap Neural Nets and new piecewise quadratic Neural Nets. Also addressed are Fourier Neural-net interconnections to handle multiple objects and achieve morphological, image processing, and enhancement functions.
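
    As a generic illustration of how a higher-order (here, quadratic) decision surface can be realised: augment the input with its second-order product terms and fit a linear classifier in the expanded space. This textbook construction stands in for, and is not, the hyperspherical Ho-Kashyap or piecewise quadratic nets described above:

    ```python
    import numpy as np

    def quadratic_expand(X):
        """Map each input x to [x, all products x_i * x_j]. A linear decision
        rule in this expanded space is a quadratic surface in the original
        space (a generic stand-in for the higher-order nets above)."""
        n, d = X.shape
        cross = np.einsum('ni,nj->nij', X, X).reshape(n, d * d)
        return np.hstack([X, cross])

    # A circular class boundary needs a second-order surface; a plain linear
    # classifier in the expanded space separates it.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0]**2 + X[:, 1]**2 - 1.0)
    Phi = np.hstack([quadratic_expand(X), np.ones((len(X), 1))])
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]
    accuracy = np.mean(np.sign(Phi @ w) == y)
    ```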

  • Invariance and Neural Nets
    IEEE Transactions on Neural Networks, 1991
    Co-Authors: Etienne Barnard, David Casasent
    Abstract:

    Application of Neural Nets to invariant pattern recognition is considered. The authors study various techniques for obtaining this invariance with Neural net classifiers and identify the invariant-feature technique as the most suitable for current Neural classifiers. A novel formulation of invariance in terms of constraints on the feature values leads to a general method for transforming any given feature space so that it becomes invariant to specified transformations. A case study using range imagery is used to exemplify these ideas, and good performance is obtained.
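
    One concrete instance of the invariant-feature technique: the magnitude of the 2-D Fourier transform does not change under circular translation of the image, so a classifier fed these features is translation invariant by construction. A small sketch of this single case (the paper's constraint-based method is more general):

    ```python
    import numpy as np

    def translation_invariant_features(img):
        """Fourier-magnitude features: circularly shifting the image only
        changes the phase of its 2-D FFT, so the magnitude is a
        translation-invariant feature vector."""
        return np.abs(np.fft.fft2(img))

    img = np.random.rand(32, 32)
    shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
    assert np.allclose(translation_invariant_features(img),
                       translation_invariant_features(shifted))
    ```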

Stanley Osher - One of the best experts on this subject based on the ideXlab platform.

  • Deep Neural Nets with Interpolating Function as Output Activation
    arXiv: Learning, 2018
    Co-Authors: Bao Wang, Xiyang Luo, Wei Zhu, Zuoqiang Shi, Stanley Osher
    Abstract:

    We replace the output layer of deep Neural Nets, typically the softmax function, by a novel interpolating function, and we propose end-to-end training and testing algorithms for this new architecture. Compared to classical Neural Nets with the softmax function as output activation, the surrogate with an interpolating function as output activation combines the advantages of both deep and manifold learning. The new framework demonstrates two major advantages: first, it is better suited to cases with insufficient training data; second, it significantly improves the generalization accuracy on a wide variety of networks. The algorithm is implemented in PyTorch, and code will be made publicly available.

  • Deep Neural Nets with Interpolating Function as Output Activation
    Neural Information Processing Systems, 2018
    Co-Authors: Bao Wang, Xiyang Luo, Wei Zhu, Zuoqiang Shi, Stanley Osher
    Abstract:

    We replace the output layer of deep Neural Nets, typically the softmax function, by a novel interpolating function, and we propose end-to-end training and testing algorithms for this new architecture. Compared to classical Neural Nets with the softmax function as output activation, the surrogate with an interpolating function as output activation combines the advantages of both deep and manifold learning. The new framework demonstrates two major advantages: first, it is better suited to cases with insufficient training data; second, it significantly improves the generalization accuracy on a wide variety of networks. The algorithm is implemented in PyTorch, and the code is available at https://github.com/BaoWangMath/DNN-DataDependentActivation.
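
    As a rough sketch of the idea (not the authors' implementation, which lives at the repository above): instead of a softmax layer, the class of a test point is interpolated from the labels of nearby training points in the network's feature space. Here a simple inverse-distance-weighted k-nearest-neighbour interpolation stands in for the paper's interpolating function:

    ```python
    import numpy as np

    def interpolating_output(feat_test, feat_train, labels_train, k=5, eps=1e-8):
        """Predict test labels by interpolating training labels in feature
        space. feat_*: (n, d) penultimate-layer features; labels_train: (n,)
        integer class labels. Inverse-distance k-NN weighting is an assumed
        simplification of the paper's interpolating function."""
        n_classes = int(labels_train.max()) + 1
        preds = []
        for f in feat_test:
            dist = np.linalg.norm(feat_train - f, axis=1)
            nn = np.argsort(dist)[:k]              # k nearest training points
            w = 1.0 / (dist[nn] + eps)             # closer points weigh more
            scores = np.zeros(n_classes)
            for idx, wi in zip(nn, w):
                scores[labels_train[idx]] += wi
            preds.append(int(scores.argmax()))
        return np.array(preds)
    ```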

Jürgen Schmidhuber - One of the best experts on this subject based on the ideXlab platform.

  • ICDAR - Better Digit Recognition with a Committee of Simple Neural Nets
    2011 International Conference on Document Analysis and Recognition, 2011
    Co-Authors: Ueli Meier, Dan Ciresan, Luca Maria Gambardella, Jürgen Schmidhuber
    Abstract:

    We present a new method to train the members of a committee of one-hidden-layer Neural Nets. Instead of training the various Nets on subsets of the training data, we preprocess the training data for each individual model such that the corresponding errors are decorrelated. On the MNIST digit recognition benchmark we obtain a recognition error rate of 0.39% using a committee of 25 one-hidden-layer Neural Nets, which is on par with state-of-the-art recognition rates of more complicated systems.
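
    A hedged sketch of the committee idea (the scikit-learn nets and the per-member random feature scaling below are illustrative assumptions; the paper preprocesses the digit images themselves): each member sees a differently preprocessed copy of the data so that the members' errors decorrelate, and their output probabilities are averaged:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_committee(X, y, n_members=5, seed=0):
        """Train committee members on differently preprocessed copies of the
        data. The random per-member feature scaling is a stand-in for the
        paper's image-level preprocessing."""
        rng = np.random.default_rng(seed)
        members = []
        for _ in range(n_members):
            scale = rng.uniform(0.8, 1.2, size=X.shape[1])  # member-specific preprocessing
            net = MLPClassifier(hidden_layer_sizes=(100,), max_iter=300,
                                random_state=int(rng.integers(2**31)))
            net.fit(X * scale, y)
            members.append((scale, net))
        return members

    def committee_predict(members, X):
        # Average the members' class probabilities, then take the arg-max.
        probs = np.mean([net.predict_proba(X * s) for s, net in members], axis=0)
        return probs.argmax(axis=1)
    ```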

H.M. El-Bakry - One of the best experts on this subject based on the ideXlab platform.

  • ICIP (1) - Fast cooperative modular Neural Nets for human face detection
    Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205), 2001
    Co-Authors: H.M. El-Bakry
    Abstract:

    A new approach to reduce the computation time taken by Neural Nets during the searching process is introduced. Fast and cooperative modular Neural Nets are combined to enhance the performance of the detection process. Such an approach is applied to identify human faces automatically in cluttered scenes. In the detection phase, Neural Nets are used to test whether a window of 20×20 pixels contains a face or not. The major difficulty in the learning process comes from the large database required for face/non-face images. A simple design for cooperative modular Neural Nets is presented to solve this problem by dividing these data into three groups. Such a division reduces the computational complexity and thus decreases the time and memory needed during the testing of an image. Simulation results for the proposed algorithm show good performance. Also presented is a correction to the calculation of the speed-up ratio (for the object detection process) in another paper (see S. Ben-Yacoub, "Fast Object Detection using MLP and FFT", IDIAP-RR 11, IDIAP, 1997).

  • AICCSA - Human face detection using fast co-operative modular Neural Nets
    Proceedings ACS IEEE International Conference on Computer Systems and Applications, 2001
    Co-Authors: H.M. El-Bakry
    Abstract:

    In this paper, a new approach to reduce the computation time taken by Neural Nets for the searching process is introduced. Both fast and co-operative modular Neural Nets are combined to enhance the performance of the detection process. Such an approach is applied to identify human faces automatically in cluttered scenes. In the detection phase, Neural Nets are used to test whether a window of 20×20 pixels contains a face or not. The major difficulty in the learning process comes from the large database required for face/non-face images. A simple design for cooperative modular Neural Nets is presented to solve this problem by dividing these data into three groups. Such a division results in a reduction in the computational complexity, and thus a decrease in the time and memory needed during the testing of an image. Simulation results for the proposed algorithm show good performance. Also, a correction to the calculation of the speed-up ratio (for the object detection process) made by S. Ben-Yacoub (1997) is presented.

  • Fast iris detection using Neural Nets
    Canadian Conference on Electrical and Computer Engineering 2001. Conference Proceedings (Cat. No.01TH8555), 2001
    Co-Authors: H.M. El-Bakry
    Abstract:

    In this paper, a combination of fast and cooperative modular Neural Nets to enhance the performance of the detection process is introduced. We have applied such an approach successfully to detect human faces in cluttered scenes (El-Bakry et al. 2000). Here, this technique is used to identify human irises automatically in a given image. In the detection phase, Neural Nets are used to test whether a window of 20×20 pixels contains an iris or not. The major difficulty in the learning process comes from the large database required for iris/non-iris images. A simple design for cooperative modular Neural Nets is presented to solve this problem by dividing these data into three groups. Such a division reduces the computational complexity and thus decreases the time and memory needed during the testing of an image. Simulation results for the proposed algorithm show good performance.

  • Fast Iris Detection Using Cooperative Modular Neural Nets
    Artificial Neural Nets and Genetic Algorithms, 2001
    Co-Authors: H.M. El-Bakry
    Abstract:

    In this paper, a combination of fast and cooperative modular Neural Nets to enhance the performance of the detection process is introduced. We have applied such an approach successfully to detect human faces in cluttered scenes [10]. Here, this technique is used to identify human irises automatically in a given image. In the detection phase, Neural Nets are used to test whether a window of 20×20 pixels contains an iris or not. The major difficulty in the learning process comes from the large database required for iris/non-iris images. A simple design for cooperative modular Neural Nets is presented to solve this problem by dividing these data into three groups. Such a division reduces the computational complexity and thus decreases the time and memory needed during the testing of an image. Simulation results for the proposed algorithm show good performance.

  • Fast modular Neural Nets for detection of human faces
    ICM 2000. Proceedings of the 12th International Conference on Microelectronics. (IEEE Cat. No.00EX453), 2000
    Co-Authors: H.M. El-Bakry, M.A. Abo-Elsoud, Mohamed S. Kamel
    Abstract:

    In this paper, a new approach to reduce the computation time taken by Neural Nets for the searching process is introduced. We combine both fast and cooperative modular Neural Nets to enhance the detection process performance. Such an approach is applied to identify human faces automatically in cluttered scenes. In the detection phase, Neural Nets are used to test whether a window of 20×20 pixels contains a face or not. The major difficulty in the learning process comes from the large database required for face/non-face images. A simple design for cooperative modular Neural Nets is presented to solve this problem by dividing these data into three groups. Such a division reduces the computational complexity and thus decreases the time and memory needed during the testing of an image. Simulation results for the proposed algorithm show good performance.
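
    A hedged sketch of the cooperative-modular scheme shared by the papers above: split the training windows into three groups, train one small net per group, and let the sub-nets vote while a 20×20 window scans the image. The grouping rule, the scikit-learn nets, and all parameters below are illustrative assumptions; the papers' fast frequency-domain search is not reproduced:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_modular_detector(windows, labels, n_groups=3, seed=0):
        """windows: (N, 20, 20) training patches; labels: (N,) with 1 = face,
        0 = non-face. The data are split into three groups (here, simply at
        random) and one small net is trained per group."""
        rng = np.random.default_rng(seed)
        order = rng.permutation(len(windows))
        nets = []
        for g in range(n_groups):
            idx = order[g::n_groups]                    # one third of the data
            net = MLPClassifier(hidden_layer_sizes=(30,), max_iter=300, random_state=g)
            net.fit(windows[idx].reshape(len(idx), -1), labels[idx])
            nets.append(net)
        return nets

    def detect(nets, image, win=20, stride=4, thresh=0.5):
        """Slide a win x win window over the image and report positions where
        the sub-nets' averaged face probability exceeds the threshold."""
        hits = []
        H, W = image.shape
        for r in range(0, H - win + 1, stride):
            for c in range(0, W - win + 1, stride):
                patch = image[r:r + win, c:c + win].reshape(1, -1)
                p = np.mean([n.predict_proba(patch)[0, 1] for n in nets])
                if p > thresh:
                    hits.append((r, c, p))
        return hits
    ```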

Adrian R.M. Upton - One of the best experts on this subject based on the ideXlab platform.

  • ICNN - Learning with ease: smart Neural Nets
    Proceedings of ICNN'95 - International Conference on Neural Networks, 1995
    Co-Authors: B.W. Dahanayake, Adrian R.M. Upton
    Abstract:

    Introduces smart Neural Nets that learn quickly and smoothly under regular backpropagation. This is achieved by avoiding the conventional, or Socratic, neurons driven by the sigmoid nonlinearity, and by choosing the neurons of the hidden layers and the output layer appropriately. To develop the smart Neural Nets, the authors introduce what they call 'smart neurons' and 'intelligent neurons', which have the underpinning of 'fuzzy thinking' or 'de Bono thinking'. The intelligent neurons are obtained by introducing non-emotional innovation feedback into the smart neurons, and they asymptotically become identical to the smart neurons. The smart Neural Nets are constructed from both neuron types: smart neurons alone form the hidden layer (or layers), while intelligent neurons alone form the output layer. The authors compare the performance of the smart Neural Nets against that of conventional Neural Nets under regular innovation backpropagation learning; unlike conventional Neural Nets, the smart Neural Nets appear to learn quickly and smoothly. Further, sigmoid-driven conventional (Socratic) neurons are not essential for building feedforward Neural Nets; much more efficient, fast-learning Neural Nets can be built by avoiding them.

  • Paralysis free fast learning: smart Neural Nets
    Proceedings of 1995 IEEE International Conference on Fuzzy Systems. The International Joint Conference of the Fourth IEEE International Conference on , 1995
    Co-Authors: B.W. Dahanayake, Adrian R.M. Upton
    Abstract:

    We introduce fast-learning, fully connected feedforward smart Neural Nets that avoid the conventional neurons driven by the sigmoid nonlinearity. We achieve this by introducing what we call 'smart neurons'. The smart neurons, together with linear ADALINEs, are used to construct the fast-learning smart Neural Nets: the smart neurons alone form the hidden layers, and the output layer is built from linear ADALINEs alone. Like conventional Neural Nets, the smart Neural Nets can be trained using the regular innovation backpropagation algorithm. We compare the performance of the smart Neural Nets against conventional Neural Nets and show that the smart Neural Nets learn far faster. Unlike conventional Neural Nets, the smart Neural Nets proposed here can learn without ever becoming paralysed, and they behave well during learning. In addition, we show that much more efficient, fast-learning Neural Nets can be built by avoiding conventional neurons altogether.
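
    The one concrete architectural statement here, nonlinear hidden layers feeding a purely linear (ADALINE-style) output layer trained by backpropagation, can be sketched generically. The 'smart neuron' nonlinearity is not specified in the abstract, so tanh is used below purely as a placeholder:

    ```python
    import numpy as np

    def train_linear_output_net(X, y, hidden=8, lr=0.5, epochs=5000, seed=0):
        """Backprop (MSE) for one nonlinear hidden layer feeding a purely
        linear output layer. tanh is a placeholder for the papers'
        unspecified 'smart neuron' nonlinearity."""
        rng = np.random.default_rng(seed)
        Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias input
        n, d = Xb.shape
        W1 = rng.normal(scale=0.5, size=(d, hidden))
        W2 = rng.normal(scale=0.5, size=hidden)
        for _ in range(epochs):
            h = np.tanh(Xb @ W1)      # nonlinear hidden layer
            out = h @ W2              # linear (ADALINE-style) output: no squashing
            err = out - y
            W2 -= lr * h.T @ err / n
            W1 -= lr * Xb.T @ (np.outer(err, W2) * (1.0 - h**2)) / n
        return W1, W2

    # Two-input exclusive-OR, the test case used in the 1994 paper below.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 1., 1., 0.])
    W1, W2 = train_linear_output_net(X, y)
    Xb = np.hstack([X, np.ones((4, 1))])
    pred = np.tanh(Xb @ W1) @ W2    # should approach [0, 1, 1, 0]
    ```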

  • Smart Neural Nets for fast learning
    ETFA '94. 1994 IEEE Symposium on Emerging Technologies and Factory Automation (SEIKEN Symposium) - Novel Disciplines for the Next Century - Proceedings, 1994
    Co-Authors: B.W. Dahanayake, Adrian R.M. Upton
    Abstract:

    We introduce what we call 'smart Neural Nets' for fast and well-behaved learning. The smart Neural Nets are formed by completely avoiding the conventional neurons driven by the sigmoid nonlinearity and by re-designing the neurons appropriately. To develop the smart Neural Nets, we introduce what we call 'smart neurons'. The smart Neural Nets are constructed by forming layers of smart neurons and interconnecting the adjacent layers. Like conventional Neural Nets, the smart Neural Nets are trained using the regular innovation backpropagation learning algorithm. We compare the performance of the smart Neural Nets against conventional Neural Nets under regular innovation backpropagation learning, using an implementation of the two-input exclusive-OR gate.