Visual Neuroscience

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 3,321 experts worldwide, ranked by the ideXlab platform.

Shuye Zhang - One of the best experts on this subject based on the ideXlab platform.

  • ICFHR - Handwritten/Printed Receipt Classification Using Attention-Based Convolutional Neural Network
    2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016
    Co-Authors: Fan Yang, Weixin Yang, Ziyong Feng, Shuye Zhang
    Abstract:

    This paper presents an approach to classifying handwritten and printed receipts with a convolutional neural network (CNN). One of the main challenges in this classification task is the diversity of background interference in receipt images. To overcome this problem, we propose a new technique named "attention-based CNN" (ABCNN), inspired by the concept of "attention" in visual neuroscience. This approach helps the network focus on the receipt in an image without bounding-box annotation. Our experimental results showed that the proposed ABCNN (i) significantly improves classification accuracy over a plain CNN (from 95% to 98.25%), (ii) enables the network to process images directly, without a separate object-detection step, and (iii) is faster to train and test.
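    The abstract does not describe the ABCNN architecture in detail. As a rough illustration of the general idea — a spatial attention mask that reweights CNN feature maps so the classifier concentrates on the receipt region rather than the background — here is a minimal NumPy sketch. The function name, tensor shapes, and softmax pooling below are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def spatial_attention(features):
        """Compute a spatial attention map over CNN feature maps.

        features: array of shape (C, H, W), channel-first feature maps.
        Returns an (H, W) attention map that sums to 1 (a softmax over
        the channel-summed activations) and the reweighted features.
        """
        # Aggregate evidence across channels at each spatial location.
        energy = features.sum(axis=0)            # (H, W)
        # Softmax over all spatial positions (numerically stabilized).
        e = np.exp(energy - energy.max())
        attn = e / e.sum()                       # (H, W), sums to 1
        # Reweight feature maps: high-attention regions are emphasized,
        # background regions are suppressed before classification.
        weighted = features * attn[None, :, :]
        return attn, weighted

    rng = np.random.default_rng(0)
    feats = rng.standard_normal((8, 4, 4))       # toy 8-channel feature map
    attn, weighted = spatial_attention(feats)
    ```

    In a trained network the attention map would typically be produced by learned parameters rather than a fixed channel sum; this sketch only shows the masking mechanism itself.
    
    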

Fan Yang - One of the best experts on this subject based on the ideXlab platform.

  • ICFHR - Handwritten/Printed Receipt Classification Using Attention-Based Convolutional Neural Network
    2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016
    Co-Authors: Fan Yang, Weixin Yang, Ziyong Feng, Shuye Zhang
    Abstract:

    This paper presents an approach to classifying handwritten and printed receipts with a convolutional neural network (CNN). One of the main challenges in this classification task is the diversity of background interference in receipt images. To overcome this problem, we propose a new technique named "attention-based CNN" (ABCNN), inspired by the concept of "attention" in visual neuroscience. This approach helps the network focus on the receipt in an image without bounding-box annotation. Our experimental results showed that the proposed ABCNN (i) significantly improves classification accuracy over a plain CNN (from 95% to 98.25%), (ii) enables the network to process images directly, without a separate object-detection step, and (iii) is faster to train and test.

Paul E Downing - One of the best experts on this subject based on the ideXlab platform.

Frank Bremmer - One of the best experts on this subject based on the ideXlab platform.

Gregg J. Suaning - One of the best experts on this subject based on the ideXlab platform.

  • EMBC - A Cortical Integrate-and-Fire Neural Network Model for Blind Decoding of Visual Prosthetic Stimulation
    2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014
    Co-Authors: Calvin D. Eiber, John W. Morley, Nigel H. Lovell, Gregg J. Suaning
    Abstract:

    We present a computational model of the optic pathway that has been adapted to simulate cortical responses to visual-prosthetic stimulation. This model reproduces the statistically observed spike distributions in cortical recordings of sham and maximum-intensity stimuli, while simultaneously generating cellular receptive fields consistent with those observed using traditional visual neuroscience methods. By inverting this model to generate candidate phosphenes that could account for the responses observed under novel stimulation strategies, we hope to aid in developing such strategies in vivo before they are deployed in clinical settings.
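    The paper's model is a network of cortical integrate-and-fire units. As a minimal illustration of the integrate-and-fire mechanism such models build on — not the paper's actual network or decoding procedure — here is a single leaky integrate-and-fire neuron in NumPy. All parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def simulate_lif(input_current, dt=1e-3, tau=0.02,
                     v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Simulate one leaky integrate-and-fire neuron.

        input_current: 1-D array giving the input drive at each time step.
        Returns the membrane-potential trace and the spike-time indices.
        """
        v = v_rest
        trace, spikes = [], []
        for t, i_in in enumerate(input_current):
            # Leaky integration: decay toward rest, driven by the input.
            v += dt / tau * (v_rest - v) + dt * i_in
            if v >= v_thresh:        # threshold crossing -> emit a spike
                spikes.append(t)
                v = v_reset          # reset the membrane after firing
            trace.append(v)
        return np.array(trace), spikes

    # A constant suprathreshold drive produces regular, repetitive firing.
    trace, spikes = simulate_lif(np.full(1000, 60.0))
    ```

    Fitting such a model to recorded spike distributions, and then inverting it to infer which phosphene could have produced an observed response, is the "blind decoding" idea the abstract describes.
    
    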
