The Experts below are selected from a list of 64866 Experts worldwide ranked by ideXlab platform
Hee-heon Song - One of the best experts on this subject based on the ideXlab platform.
-
A new recurrent Neural-Network Architecture for visual pattern recognition
IEEE Transactions on Neural Networks, 1997
Co-Authors: Hee-heon Song
Abstract: We propose a new type of recurrent Neural-Network Architecture in which each output unit is connected to itself and is also fully connected to the other output units and to all hidden units. The proposed recurrent Neural Network differs from Jordan's and Elman's recurrent Neural Networks in both function and Architecture, because it was originally extended from a multilayer feedforward Neural Network to improve discrimination and generalization power. We also prove the convergence properties of the learning algorithm for the proposed recurrent Neural Network, and analyze its performance through recognition experiments with the totally unconstrained handwritten numeral database of Concordia University, Montreal, Canada. The experimental results confirm that the proposed recurrent Neural Network improves discrimination and generalization power in the recognition of visual patterns.
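The topology described above can be sketched in a few lines. This is a minimal illustration of the described connectivity only, with assumed layer sizes and random (untrained) weights, not the authors' model or learning algorithm: the previous output vector feeds back into every hidden unit and into every output unit (self- and cross-connections).

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 6, 3
W_ih = rng.normal(size=(n_hid, n_in))   # input -> hidden
W_oh = rng.normal(size=(n_hid, n_out))  # previous output -> hidden (feedback)
W_ho = rng.normal(size=(n_out, n_hid))  # hidden -> output
W_oo = rng.normal(size=(n_out, n_out))  # previous output -> output (self + cross)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, y_prev):
    """One forward pass given input x and the previous output vector y_prev."""
    h = sigmoid(W_ih @ x + W_oh @ y_prev)   # hidden units see input and fed-back output
    y = sigmoid(W_ho @ h + W_oo @ y_prev)   # output units see hidden and fed-back output
    return y

y = np.zeros(n_out)
for x in rng.normal(size=(5, n_in)):        # a short input sequence
    y = step(x, y)
```

Unlike an Elman network (hidden-to-hidden feedback) or a Jordan network (output-to-hidden feedback only), both `W_oh` and `W_oo` carry the fed-back output here.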
-
ICPR - A new recurrent Neural Network Architecture for pattern recognition
Proceedings of 13th International Conference on Pattern Recognition, 1996
Co-Authors: Hee-heon Song, Sun-mee Kang
Abstract: In this paper, we propose a new type of recurrent Neural Network Architecture in which each output unit is connected to itself and fully connected to the other output units and all hidden units. The proposed recurrent Neural Network differs from Jordan's and Elman's recurrent Neural Networks in both function and Architecture, because it was originally extended from the multilayer feedforward Neural Network to improve discrimination and generalization power. We also prove the convergence property of the learning algorithm in the proposed recurrent Neural Network and analyze its performance through recognition experiments with the totally unconstrained handwritten numeral database of Concordia University, Canada. Experimental results confirmed that the proposed recurrent Neural Network improves discrimination and generalization power in recognizing spatial patterns.
Gang Quan - One of the best experts on this subject based on the ideXlab platform.
-
A Fault-Tolerant Neural Network Architecture
2019 56th ACM/IEEE Design Automation Conference (DAC), 2019
Co-Authors: Lei Jiang, Chengmo Yang, Yanzhi Wang, Gang Quan
Abstract: New DNN accelerators based on emerging technologies, such as resistive random access memory (ReRAM), are gaining increasing research attention given their potential for "in-situ" data processing. Unfortunately, device-level physical limitations that are unique to these technologies may cause weight disturbance in memory and thus compromise the performance and stability of DNN accelerators. In this work, we propose a novel fault-tolerant Neural Network Architecture to mitigate the weight-disturbance problem without expensive retraining. Specifically, we propose a novel collaborative logistic classifier that enhances DNN stability by redesigning the binary classifiers augmented from both the traditional error-correcting output code (ECOC) and modern DNN training algorithms. We also develop an optimized variable-length "decode-free" scheme to further boost accuracy with fewer classifiers. Experimental results on cutting-edge DNN models and complex datasets show that the proposed fault-tolerant Neural Network Architecture can effectively rectify the accuracy degradation caused by weight disturbance in DNN accelerators at low cost, allowing for its deployment in a variety of mainstream DNNs.
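The classical ECOC idea that the collaborative logistic classifier builds on can be shown in a toy sketch. The codebook and classifier outputs below are invented illustrative values, not taken from the paper; the point is only why redundant codewords tolerate a flipped bit (e.g. one caused by a disturbed weight).

```python
import numpy as np

# One row (codeword) per class, one column per binary classifier.
codebook = np.array([
    [0, 0, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 0, 0],
])

def ecoc_decode(bits):
    """Pick the class whose codeword is nearest in Hamming distance.

    A single flipped bit is tolerated as long as the corrupted word stays
    closer to the true codeword than to any other codeword."""
    dists = np.sum(codebook != np.asarray(bits), axis=1)
    return int(np.argmin(dists))

clean   = [1, 0, 0, 1, 1]   # exactly class 1's codeword
corrupt = [1, 0, 0, 1, 0]   # one bit flipped by a fault
```

Both `ecoc_decode(clean)` and `ecoc_decode(corrupt)` recover class 1 here, since the flipped word is still at Hamming distance 1 from class 1's codeword but distance 2 from the others.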
-
DAC - A Fault-Tolerant Neural Network Architecture
Proceedings of the 56th Annual Design Automation Conference 2019, 2019
Co-Authors: Tao Liu, Chengmo Yang, Lei Jiang, Yanzhi Wang, Wujie Wen, Gang Quan
Olivia Mendoza - One of the best experts on this subject based on the ideXlab platform.
-
A Hybrid Modular Neural Network Architecture with Fuzzy Sugeno Integration for Time Series Forecasting
Applied Soft Computing, 2007
Co-Authors: Patricia Melin, Alejandra Mancilla, Miguel Lopez, Olivia Mendoza
Abstract: We describe the application of a modular Neural Network Architecture to the problem of simulating and predicting the dynamic behavior of complex economic time series. We use several Neural Network models and training algorithms, compare the results, and decide which one is best for this application. We also compare the simulation results with the traditional approach of using a statistical model. In this case, we use real time series of consumer-goods prices to test our models. Real tomato prices in the U.S. show complex fluctuations over time and are very difficult to predict with traditional statistical approaches. For this reason, we have chosen a Neural Network approach to simulate and predict the evolution of these prices in the U.S. market.
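The Sugeno integral used to fuse the modules' outputs can be sketched as follows. The module scores and the simple monotone fuzzy measure below are invented for illustration; the paper's actual measure over its modules is not specified here.

```python
def sugeno_integral(scores, measure_of_top_k):
    """Sugeno integral: max over k of min(k-th highest score, g(top-k set)).

    scores           -- confidence/output of each module
    measure_of_top_k -- g(k): fuzzy measure of the set of the k highest-scoring
                        modules; must be monotone non-decreasing in k
    """
    ranked = sorted(scores, reverse=True)
    return max(min(ranked[k - 1], measure_of_top_k(k))
               for k in range(1, len(ranked) + 1))

scores = [0.9, 0.4, 0.7]
# A simple monotone measure for the sketch: g(top k modules) = k / n.
fused = sugeno_integral(scores, lambda k: k / len(scores))
```

With these toy values the integral evaluates to 2/3: the second-ranked score 0.7 capped by the measure 2/3 dominates the other min terms.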
David B Rosen - One of the best experts on this subject based on the ideXlab platform.
-
Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps
IEEE Transactions on Neural Networks, 1992
Co-Authors: Gail A Carpenter, Stephen Grossberg, Natalya Markuzon, John H Reynolds, David B Rosen
Abstract: A Neural Network Architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors, which may represent fuzzy or crisp sets of features. The Architecture, called fuzzy ARTMAP, achieves a synthesis of fuzzy logic and adaptive resonance theory (ART) Neural Networks by exploiting a close formal similarity between the computations of fuzzy subsethood and of ART category choice, resonance, and learning. Four classes of simulation illustrate fuzzy ARTMAP performance in relation to benchmark backpropagation and genetic algorithm systems. These simulations include finding points inside versus outside a circle, learning to tell two spirals apart, incremental approximation of a piecewise-continuous function, and a letter recognition database. The fuzzy ARTMAP system is also compared with Salzberg's NGE system and with Simpson's FMMC system.
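The "close formal similarity" the abstract mentions comes from replacing ART's crisp AND with the fuzzy AND (componentwise minimum) in both the category-choice and vigilance computations. A minimal sketch of those two formulas follows; the input, weight vectors, and the values of alpha and rho are illustrative, not from the paper's simulations.

```python
import numpy as np

alpha, rho = 0.001, 0.7   # choice parameter and vigilance (illustrative values)

def choice(I, w):
    """Category choice T_j = |I ^ w_j| / (alpha + |w_j|),
    where ^ is the fuzzy AND (componentwise min) and |.| the L1 norm."""
    return np.minimum(I, w).sum() / (alpha + w.sum())

def match(I, w):
    """Resonance (vigilance) test: |I ^ w_j| / |I| >= rho."""
    return np.minimum(I, w).sum() / I.sum() >= rho

I  = np.array([0.8, 0.2, 0.5, 0.5])   # an analog input pattern
w1 = np.array([0.9, 0.1, 0.6, 0.4])   # a category close to I
w2 = np.array([0.1, 0.9, 0.1, 0.9])   # a category far from I
```

Here `w1` wins the choice competition and passes the vigilance test, while `w2` fails it, which in the full system would trigger search for (or creation of) another category.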
Ah Chung Tsoi - One of the best experts on this subject based on the ideXlab platform.
-
FIR and IIR Synapses, a New Neural Network Architecture for Time Series Modeling
Neural Computation, 1991
Co-Authors: Andrew D Back, Ah Chung Tsoi
Abstract: A new Neural Network Architecture involving a local-feedforward global-feedforward and/or a local-recurrent global-feedforward structure is proposed. A learning rule that minimizes a mean-square-error criterion is derived. The performance of this algorithm (the local-recurrent global-feedforward Architecture) is compared with that of a local-feedforward global-feedforward Architecture. It is shown that the local-recurrent global-feedforward model performs better than the local-feedforward global-feedforward model.
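The "local feedforward" structure refers to an FIR synapse: each connection is a finite-impulse-response filter over a tapped delay line of past inputs rather than a single scalar weight. A minimal sketch, with illustrative tap weights:

```python
import numpy as np

def fir_synapse(x, taps):
    """y(t) = sum_k taps[k] * x(t - k), with x(t) = 0 for t < 0."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        for k, w in enumerate(taps):
            if t - k >= 0:
                y[t] += w * x[t - k]
    return y

x = np.array([1.0, 0.0, 0.0, 0.0])   # unit impulse
taps = [0.5, 0.3, 0.2]
y = fir_synapse(x, taps)             # impulse response equals the tap weights
```

An IIR synapse (the "local recurrent" structure) would additionally feed past values of `y` back into the sum, giving an infinite impulse response from the same finite set of coefficients.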