The experts below are selected from a list of 3,321 experts worldwide, ranked by the ideXlab platform.
Shuye Zhang - One of the best experts on this subject based on the ideXlab platform.
-
ICFHR - Handwritten/Printed Receipt Classification Using Attention-Based Convolutional Neural Network
2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016
Co-Authors: Fan Yang, Weixin Yang, Ziyong Feng, Shuye Zhang
Abstract: This paper presents an approach for classifying handwritten and printed receipts based on a convolutional neural network (CNN). One of the main challenges in such classification is the diversity of background interference in receipt images. To overcome this problem, we propose a new technique named "attention-based CNN" (ABCNN), inspired by the concept of "attention" in Visual Neuroscience. This approach lets the network focus on the receipt in an image without bounding-box annotation. Our experimental results showed that the proposed ABCNN (i) significantly improves classification accuracy compared to a standard CNN (from 95% to 98.25%), (ii) enables the network to process images directly without object detection, and (iii) is faster to train and test.
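The abstract gives no implementation details of the attention mechanism; as a minimal sketch of the general idea (all function and parameter names here are hypothetical, not from the paper), a soft spatial attention map can reweight CNN feature positions so that background clutter contributes less to the final descriptor:

```python
import numpy as np

def softmax2d(x):
    # Numerically stable softmax over all spatial positions.
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention(features, w_att):
    """Attention-weighted pooling over a CNN feature map.

    features: (C, H, W) feature map from a convolutional backbone.
    w_att:    (C,) learned projection that scores each spatial position.
    Returns a (C,) attended descriptor for the classifier head.
    """
    C, H, W = features.shape
    scores = np.tensordot(w_att, features, axes=([0], [0]))  # (H, W) relevance scores
    attn = softmax2d(scores)                                 # normalized: sums to 1
    # Weighted average pooling: positions with high attention dominate.
    return (features * attn[None, :, :]).reshape(C, -1).sum(axis=1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4))
desc = spatial_attention(feats, rng.normal(size=8))
print(desc.shape)  # (8,)
```

Because the attention weights are produced by the network itself, no bounding-box annotation is needed at training time, which matches the claim in the abstract.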
Fan Yang - One of the best experts on this subject based on the ideXlab platform.
-
ICFHR - Handwritten/Printed Receipt Classification Using Attention-Based Convolutional Neural Network
2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016
Co-Authors: Fan Yang, Weixin Yang, Ziyong Feng, Shuye Zhang
Abstract: This paper presents an approach for classifying handwritten and printed receipts based on a convolutional neural network (CNN). One of the main challenges in such classification is the diversity of background interference in receipt images. To overcome this problem, we propose a new technique named "attention-based CNN" (ABCNN), inspired by the concept of "attention" in Visual Neuroscience. This approach lets the network focus on the receipt in an image without bounding-box annotation. Our experimental results showed that the proposed ABCNN (i) significantly improves classification accuracy compared to a standard CNN (from 95% to 98.25%), (ii) enables the network to process images directly without object detection, and (iii) is faster to train and test.
Paul E Downing - One of the best experts on this subject based on the ideXlab platform.
-
Visual Neuroscience: a hat trick for modularity
Current Biology, 2009
Co-Authors: Paul E Downing
Abstract: A new study using transcranial magnetic stimulation of the brain shows that each of three neighboring areas of Visual cortex plays a specific and causal role in perceiving faces, bodies and other kinds of objects.
Frank Bremmer - One of the best experts on this subject based on the ideXlab platform.
-
Visual Neuroscience: the puzzle of perceptual stability
Current Biology, 2016
Co-Authors: Eckart Zimmermann, Frank Bremmer
Abstract: Our world appears stable, although our eyes constantly shift its image across the retina. What brain mechanisms allow for this perceptual stability? A recent study has brought us a step closer to answering this age-old question.
-
Visual Neuroscience: the brain's interest in natural flow
Current Biology, 2008
Co-Authors: Frank Bremmer
Abstract: Optic flow is a key signal for heading perception. A new study has shown that the human brain can dissociate between consistent (natural) and inconsistent flow, revealing what is likely a new hierarchy in Visual motion processing.
Gregg J. Suaning - One of the best experts on this subject based on the ideXlab platform.
-
EMBC - A Cortical Integrate-and-Fire Neural Network Model for Blind Decoding of Visual Prosthetic Stimulation
2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2014
Co-Authors: Calvin D. Eiber, John W. Morley, Nigel H. Lovell, Gregg J. Suaning
Abstract: We present a computational model of the optic pathway that has been adapted to simulate cortical responses to Visual-prosthetic stimulation. This model reproduces the statistically observed distributions of spikes in cortical recordings of sham and maximum-intensity stimuli, while simultaneously generating cellular receptive fields consistent with those observed using traditional Visual Neuroscience methods. By inverting this model to generate candidate phosphenes that could produce the responses observed under novel stimulation strategies, we hope to aid the development of such strategies in vivo before they are deployed in clinical settings.
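The abstract does not state the model equations; as a minimal sketch of the building block such cortical network models rest on (all parameter values below are illustrative, not from the paper), a leaky integrate-and-fire neuron integrates its input current and emits a spike whenever the membrane potential crosses threshold:

```python
import numpy as np

def lif_spike_train(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron.

    current: 1-D array of input-current samples (arbitrary units),
             one sample per time step of length dt.
    Returns a boolean array marking the time steps at which spikes occur.
    """
    v = v_reset
    spikes = np.zeros(len(current), dtype=bool)
    for t, i_in in enumerate(current):
        # Euler step of the membrane equation dv/dt = (-v + i_in) / tau
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:          # threshold crossing: spike and reset
            spikes[t] = True
            v = v_reset
    return spikes

# A constant supra-threshold drive produces regular, repetitive firing.
spikes = lif_spike_train(np.full(1000, 2.0))
print(spikes.sum())
```

A network of such units, fitted to recorded spike statistics, could then be run in reverse to search for candidate phosphenes, which is the "blind decoding" direction the abstract describes.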