Traffic Sign

The Experts below are selected from a list of 14,358 Experts worldwide, ranked by the ideXlab platform.

Bogdan Stanciulescu - One of the best experts on this subject based on the ideXlab platform.

  • Real-Time Traffic-Sign Recognition Using Tree Classifiers
    IEEE Transactions on Intelligent Transportation Systems, 2012
    Co-Authors: Fatin Zaklouta, Bogdan Stanciulescu
    Abstract:

    Traffic-Sign recognition (TSR) is an essential component of a driver assistance system (DAS), providing drivers with safety and precaution information. In this paper, we evaluate the performance of k-d trees, random forests, and support vector machines (SVMs) for Traffic-Sign classification using different-sized histogram-of-oriented-gradient (HOG) descriptors and distance transforms (DTs). We also use Fisher's criterion and random forests for feature selection to reduce the memory requirements and enhance the performance. We use the German Traffic Sign Recognition Benchmark (GTSRB) data set, containing 43 classes and more than 50,000 images.
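
    As an illustration of the kind of pipeline this abstract evaluates, the following is a minimal sketch that trains a random forest on HOG descriptors of resized sign crops, using scikit-image and scikit-learn. The 40x40 crop size, the HOG parameters, and the data loading are illustrative assumptions rather than the authors' exact settings; a K-d tree variant would simply replace the forest with a nearest-neighbour lookup over the same descriptors.

```python
# Minimal sketch: HOG descriptors + a random forest for traffic-sign classification.
# Assumptions (not from the paper): 40x40 grayscale crops, 8-bin HOG with 8x8 cells,
# and that `images`/`labels` come from a GTSRB-style loader supplied by the caller.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def hog_descriptor(img):
    """Resize a grayscale crop to a fixed size and compute its HOG descriptor."""
    img = resize(img, (40, 40), anti_aliasing=True)
    return hog(img, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train_tree_classifier(images, labels):
    """Fit a random forest on HOG features and report held-out accuracy."""
    X = np.array([hog_descriptor(im) for im in images])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf
```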

  • Real-time Traffic Sign recognition in three stages
    Robotics and Autonomous Systems, 2012
    Co-Authors: Fatin Zaklouta, Bogdan Stanciulescu
    Abstract:

    Traffic Sign Recognition (TSR) is an important component of Advanced Driver Assistance Systems (ADAS). The Traffic Signs enhance Traffic safety by informing the driver of speed limits or possible dangers such as icy roads, imminent road works or pedestrian crossings. We present a three-stage real-time Traffic Sign Recognition system in this paper, consisting of a segmentation, a detection and a classification phase. We combine the color enhancement with an adaptive threshold to extract red regions in the image. The detection is performed using an efficient linear Support Vector Machine (SVM) with Histogram of Oriented Gradients (HOG) features. The tree classifiers, K-d tree and Random Forest, identify the content of the Traffic Signs found. A spatial weighting approach is proposed to improve the performance of the K-d tree. The Random Forest and Fisher's Criterion are used to reduce the feature space and accelerate the classification. We show that only a subset of about one third of the features is sufficient to attain a high classification accuracy on the German Traffic Sign Recognition Benchmark (GTSRB).
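
    The red-region segmentation stage described above can be sketched as a normalized red-enhancement map followed by a simple adaptive threshold and a size filter on the connected components. The enhancement formula and the mean-plus-k-standard-deviations threshold below are common choices assumed for illustration, not necessarily the authors' exact ones.

```python
# Sketch of a red-region segmentation stage: enhance red pixels, threshold the map,
# and keep connected components of plausible size as sign candidates.
# The enhancement formula and the adaptive (mean + k*std) threshold are assumptions.
import numpy as np
import cv2

def red_enhancement(bgr):
    """Per-pixel score that is high where red dominates both green and blue."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    s = b + g + r + 1e-6
    return np.maximum(0.0, np.minimum(r - g, r - b) / s)

def red_regions(bgr, k=4.0, min_area=100):
    """Return (x, y, w, h) boxes of red blobs to hand to the detection stage."""
    enh = red_enhancement(bgr)
    thresh = enh.mean() + k * enh.std()        # simple image-adaptive threshold
    mask = (enh > thresh).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```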

  • Traffic Sign classification using K-d trees and Random Forests
    International Joint Conference on Neural Networks, 2011
    Co-Authors: Fatin Zaklouta, Bogdan Stanciulescu, Omar Hamdoun
    Abstract:

    In this paper, we evaluate the performance of K-d trees and Random Forests for Traffic Sign classification using different-sized Histogram of Oriented Gradients (HOG) descriptors and Distance Transforms. We use the German Traffic Sign Benchmark data set [1], containing 43 classes and more than 50,000 images. The K-d tree is fast to build and to search. Combining the tree classifiers with the HOG descriptors and with the Distance Transforms, we achieve classification rates of up to 97% and 81.8%, respectively.
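
    To make the Distance Transform descriptor concrete, the sketch below turns an edge map into a distance-transform feature vector and classifies a query crop by its single nearest neighbour in a K-d tree (SciPy's cKDTree). The 40x40 size, the Canny thresholds, and the plain 1-NN decision rule are illustrative assumptions.

```python
# Sketch: distance-transform descriptors queried through a K-d tree (1-nearest neighbour).
# The crop size, Canny thresholds, and 1-NN vote are illustrative assumptions.
import numpy as np
import cv2
from scipy.spatial import cKDTree

def dt_descriptor(gray):
    """Distance to the nearest edge pixel, flattened into a feature vector (uint8 input)."""
    gray = cv2.resize(gray, (40, 40))
    edges = cv2.Canny(gray, 100, 200)
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)
    return dist.flatten()

class KdTreeClassifier:
    def __init__(self, train_images, train_labels):
        self.labels = np.asarray(train_labels)
        self.tree = cKDTree(np.array([dt_descriptor(im) for im in train_images]))

    def predict(self, gray):
        _, idx = self.tree.query(dt_descriptor(gray), k=1)
        return self.labels[idx]
```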

Y.-j. Zheng - One of the best experts on this subject based on the ideXlab platform.

  • An adaptive system for Traffic Sign recognition
    Proceedings of the Intelligent Vehicles '94 Symposium, 1994
    Co-Authors: Y.-j. Zheng, Wolfgang Ritter, R. Janssen
    Abstract:

    Traffic Sign recognition is a primary goal of almost all road environment understanding systems. A vision system for Traffic Sign recognition was developed by Daimler-Benz Research Center Ulm. The two main modules of the system are detection and verification (recognition). Here regions of possible Traffic Signs in a color image sequence are first detected before each of them is verified and recognized. In this paper the authors pay attention to the verification and recognition process. The authors present an adaptive approach and emphasize the importance of the adaptability to various road and Traffic Sign environments. The authors utilize a distance-weighted k-nearest-neighbor classifier for Traffic Sign recognition and show its equivalence to the kind of radial basis function networks which can be easily integrated into chips. The authors also present a way to evaluate the uncertainty of recognized Traffic Signs and demonstrate their approach using real images.
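
    A distance-weighted k-nearest-neighbour rule of the kind described here can be sketched in a few lines; with Gaussian weights the class score becomes a sum of radial basis functions centred on the stored prototypes, which is the RBF-network view the abstract mentions. The choice of k, the kernel width, and the confidence measure are assumptions made for illustration.

```python
# Sketch of a distance-weighted k-NN classifier. With Gaussian weights
# exp(-d^2 / (2*sigma^2)) each class score is a sum of radial basis functions
# centred on the stored prototypes. k, sigma, and the feature space are assumptions.
import numpy as np

def weighted_knn_predict(x, prototypes, labels, k=5, sigma=1.0):
    """Return the predicted label and a crude confidence for feature vector x."""
    d = np.linalg.norm(prototypes - x, axis=1)          # distances to all prototypes
    nearest = np.argsort(d)[:k]                         # indices of the k closest
    w = np.exp(-d[nearest] ** 2 / (2.0 * sigma ** 2))   # Gaussian (RBF) weights
    scores = {}
    for i, wi in zip(nearest, w):
        scores[labels[i]] = scores.get(labels[i], 0.0) + wi
    best = max(scores, key=scores.get)
    confidence = scores[best] / (sum(scores.values()) + 1e-12)
    return best, confidence
```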

  • A real-time Traffic Sign recognition system
    IEEE Intelligent Vehicles Symposium, 1994
    Co-Authors: S. Estable, Ronald Ott, Wolfgang Ritter, F. Stein, R. Janssen, Jonathan Schick, Y.-j. Zheng
    Abstract:

    The ability to recognise Traffic Signs in a road Traffic scenario is an important feature of the Daimler-Benz autonomous vehicle VITA II. This real-time vision-based Traffic Sign recognition system was developed by Daimler-Benz in the European research project PROMETHEUS. In this paper we focus on the overall system design, the real-time implementation, and the field test evaluation. The software architecture of the system integrates three hierarchical levels of data processing, with the specific tasks isolated on each level. The lowest level comprises specialists for colour, shape and pictogram analysis; they perform the iconic-to-symbolic data transformation. On the highest level, the administration processes organise the data flow as a combined bottom-up and top-down mechanism to dynamically interpret the image sequence. A hybrid parallel machine was designed to run the Traffic Sign recognition system in real time on a transputer network coupled to PowerPC processors.

Hengliang Luo - One of the best experts on this subject based on the ideXlab platform.

  • Traffic Sign recognition using a multi-task convolutional neural network
    IEEE Transactions on Intelligent Transportation Systems, 2018
    Co-Authors: Hengliang Luo, Yi Yang, Bei Tong, Bin Fan
    Abstract:

    Although Traffic Sign recognition has been studied for many years, most existing work focuses on symbol-based Traffic Signs. This paper proposes a new data-driven system to recognize all categories of Traffic Signs, both symbol-based and text-based, in video sequences captured by a camera mounted on a car. The system consists of three stages: Traffic Sign region-of-interest (ROI) extraction, ROI refinement and classification, and post-processing. Traffic Sign ROIs are first extracted from each frame using maximally stable extremal regions on the gray and normalized RGB channels. They are then refined and assigned to their detailed classes by the proposed multi-task convolutional neural network, which is trained with a large amount of data, including synthetic Traffic Signs and images labeled from street views. The post-processing stage finally combines the results across all frames to make a recognition decision. Experimental results demonstrate the effectiveness of the proposed system.
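
    The ROI-extraction stage can be sketched with OpenCV's MSER detector run on the grayscale image and on the normalized RGB channels, as below. The channel scaling, the MSER parameters, and the size filter are illustrative assumptions rather than the authors' exact settings.

```python
# Sketch of ROI extraction: run MSER on the grayscale image and on the three
# normalized RGB channels and collect candidate bounding boxes for the CNN stage.
# Channel scaling, MSER defaults, and the size filter are illustrative assumptions.
import numpy as np
import cv2

def candidate_rois(bgr, min_side=10):
    b, g, r = cv2.split(bgr.astype(np.float32))
    s = b + g + r + 1e-6
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Grayscale plus the normalized channels, rescaled to 8-bit for MSER.
    channels = [gray] + [np.uint8(255 * c / s) for c in (r, g, b)]
    mser = cv2.MSER_create()
    boxes = []
    for ch in channels:
        _, bboxes = mser.detectRegions(ch)
        boxes += [tuple(bb) for bb in bboxes if min(bb[2], bb[3]) >= min_side]
    return boxes  # (x, y, w, h) candidates to refine and classify
```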

  • Towards Real-Time Traffic Sign Detection and Classification
    IEEE Transactions on Intelligent Transportation Systems, 2016
    Co-Authors: Yi Yang, Hengliang Luo, Huarong Xu, Fuchao Wu
    Abstract:

    Traffic Sign recognition plays an important role in driver assistance systems and intelligent autonomous vehicles. Real-time performance is highly desirable in addition to recognition performance. This paper addresses real-time Traffic Sign recognition, i.e., localizing what type of Traffic Sign appears in which area of an input image at a fast processing time. To achieve this goal, we first propose an extremely fast detection module, which is 20 times faster than the existing best detection module. Our detection module is based on Traffic Sign proposal extraction and classification built upon a color probability model and a color HOG. We then use a convolutional neural network to further classify the detected Signs into their subclasses within each superclass. Experimental results on both German and Chinese roads show that both our detection and classification methods achieve performance comparable with the state-of-the-art methods, with significantly improved computational efficiency.
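
    To illustrate the subclass-classification step, the sketch below defines a small per-superclass CNN in PyTorch that maps a detected sign crop to one of the subclasses of its superclass. The architecture, the 32x32 input size, and the usage shown are assumptions for illustration, not the network described in the paper.

```python
# Minimal sketch of a per-superclass CNN that classifies a detected sign crop
# into one of the subclasses of its superclass. Architecture and input size
# (32x32 RGB) are illustrative assumptions, not the paper's network.
import torch
import torch.nn as nn

class SubclassCNN(nn.Module):
    def __init__(self, num_subclasses):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_subclasses),
        )

    def forward(self, x):  # x: (N, 3, 32, 32) normalized crops
        return self.classifier(self.features(x))

# One such network (or one output head) per superclass, applied to the crops
# that the detection module assigned to that superclass.
model = SubclassCNN(num_subclasses=8)
logits = model(torch.randn(4, 3, 32, 32))  # -> shape (4, 8)
```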

  • Towards real-time Traffic Sign detection and classification
    International Conference on Intelligent Transportation Systems, 2014
    Co-Authors: Yi Yang, Hengliang Luo
    Abstract:

    This paper addresses real-time Traffic Sign recognition, i.e. localizing what type of Traffic Sign appears in which area of an input image at a fast processing time. To achieve this goal, a two-module framework (a detection module and a classification module) is proposed. In the detection module, the authors first transform the input color image into probability maps using a color probability model. Traffic Sign proposals are then extracted by finding maximally stable extremal regions on these maps. Finally, a support vector machine (SVM) classifier trained with color Histogram of Oriented Gradients (HOG) features is used to filter out false positives and classify the remaining proposals into their super classes. In the classification module, the authors use a CNN to classify the detected Signs into their sub-classes within each super class. Experiments on the German Traffic Sign Detection Benchmark (GTSDB) show that the authors' method achieves performance comparable to the state-of-the-art methods with significantly improved computational efficiency, being 20 times faster than the existing best method.
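
    The proposal-filtering step can be sketched as a linear SVM over per-channel HOG features of each proposal crop, with an extra background label so false positives can be rejected before the CNN stage. The crop size, the HOG parameters, and the hypothetical BACKGROUND label are assumptions for illustration.

```python
# Sketch of proposal filtering: a linear SVM over concatenated per-channel HOG
# features ("color HOG" in spirit), with a background class to reject false positives.
# Crop size, HOG parameters, and the BACKGROUND label are illustrative assumptions.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

BACKGROUND = -1  # hypothetical label for non-sign proposals

def proposal_features(crop):
    """HOG per color channel of a resized proposal crop, concatenated."""
    crop = resize(crop, (32, 32), anti_aliasing=True)
    return np.concatenate([hog(crop[:, :, c], orientations=9,
                               pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                           for c in range(3)])

def train_filter(crops, superclass_labels):
    """Fit the SVM; labels include BACKGROUND for negative (non-sign) crops."""
    X = np.array([proposal_features(c) for c in crops])
    return LinearSVC(C=1.0).fit(X, superclass_labels)

def keep_proposals(clf, crops):
    """Drop background detections, keep (crop, superclass) pairs for the CNN."""
    preds = clf.predict(np.array([proposal_features(c) for c in crops]))
    return [(c, p) for c, p in zip(crops, preds) if p != BACKGROUND]
```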

Yi Yang - One of the best experts on this subject based on the ideXlab platform.

  • Traffic Sign recognition using a multi-task convolutional neural network
    IEEE Transactions on Intelligent Transportation Systems, 2018
    Co-Authors: Hengliang Luo, Yi Yang, Bei Tong, Bin Fan
    Abstract:

    Although Traffic Sign recognition has been studied for many years, most existing work focuses on symbol-based Traffic Signs. This paper proposes a new data-driven system to recognize all categories of Traffic Signs, both symbol-based and text-based, in video sequences captured by a camera mounted on a car. The system consists of three stages: Traffic Sign region-of-interest (ROI) extraction, ROI refinement and classification, and post-processing. Traffic Sign ROIs are first extracted from each frame using maximally stable extremal regions on the gray and normalized RGB channels. They are then refined and assigned to their detailed classes by the proposed multi-task convolutional neural network, which is trained with a large amount of data, including synthetic Traffic Signs and images labeled from street views. The post-processing stage finally combines the results across all frames to make a recognition decision. Experimental results demonstrate the effectiveness of the proposed system.

  • Towards Real-Time Traffic Sign Detection and Classification
    IEEE Transactions on Intelligent Transportation Systems, 2016
    Co-Authors: Yi Yang, Hengliang Luo, Huarong Xu, Fuchao Wu
    Abstract:

    Traffic Sign recognition plays an important role in driver assistance systems and intelligent autonomous vehicles. Real-time performance is highly desirable in addition to recognition performance. This paper addresses real-time Traffic Sign recognition, i.e., localizing what type of Traffic Sign appears in which area of an input image at a fast processing time. To achieve this goal, we first propose an extremely fast detection module, which is 20 times faster than the existing best detection module. Our detection module is based on Traffic Sign proposal extraction and classification built upon a color probability model and a color HOG. We then use a convolutional neural network to further classify the detected Signs into their subclasses within each superclass. Experimental results on both German and Chinese roads show that both our detection and classification methods achieve performance comparable with the state-of-the-art methods, with significantly improved computational efficiency.

  • Towards real-time Traffic Sign detection and classification
    International Conference on Intelligent Transportation Systems, 2014
    Co-Authors: Yi Yang, Hengliang Luo
    Abstract:

    This paper addresses real-time Traffic Sign recognition, i.e. localizing what type of Traffic Sign appears in which area of an input image at a fast processing time. To achieve this goal, a two-module framework (a detection module and a classification module) is proposed. In the detection module, the authors first transform the input color image into probability maps using a color probability model. Traffic Sign proposals are then extracted by finding maximally stable extremal regions on these maps. Finally, a support vector machine (SVM) classifier trained with color Histogram of Oriented Gradients (HOG) features is used to filter out false positives and classify the remaining proposals into their super classes. In the classification module, the authors use a CNN to classify the detected Signs into their sub-classes within each super class. Experiments on the German Traffic Sign Detection Benchmark (GTSDB) show that the authors' method achieves performance comparable to the state-of-the-art methods with significantly improved computational efficiency, being 20 times faster than the existing best method.

Fatin Zaklouta - One of the best experts on this subject based on the ideXlab platform.

  • Real-Time Traffic-Sign Recognition Using Tree Classifiers
    IEEE Transactions on Intelligent Transportation Systems, 2012
    Co-Authors: Fatin Zaklouta, Bogdan Stanciulescu
    Abstract:

    Traffic-Sign recognition (TSR) is an essential component of a driver assistance system (DAS), providing drivers with safety and precaution information. In this paper, we evaluate the performance of k-d trees, random forests, and support vector machines (SVMs) for Traffic-Sign classification using different-sized histogram-of-oriented-gradient (HOG) descriptors and distance transforms (DTs). We also use Fisher's criterion and random forests for feature selection to reduce the memory requirements and enhance the performance. We use the German Traffic Sign Recognition Benchmark (GTSRB) data set, containing 43 classes and more than 50,000 images.

  • Real-time Traffic Sign recognition in three stages
    Robotics and Autonomous Systems, 2012
    Co-Authors: Fatin Zaklouta, Bogdan Stanciulescu
    Abstract:

    Traffic Sign Recognition (TSR) is an important component of Advanced Driver Assistance Systems (ADAS). The Traffic Signs enhance Traffic safety by informing the driver of speed limits or possible dangers such as icy roads, imminent road works or pedestrian crossings. We present a three-stage real-time Traffic Sign Recognition system in this paper, consisting of a segmentation, a detection and a classification phase. We combine the color enhancement with an adaptive threshold to extract red regions in the image. The detection is performed using an efficient linear Support Vector Machine (SVM) with Histogram of Oriented Gradients (HOG) features. The tree classifiers, K-d tree and Random Forest, identify the content of the Traffic Signs found. A spatial weighting approach is proposed to improve the performance of the K-d tree. The Random Forest and Fisher's Criterion are used to reduce the feature space and accelerate the classification. We show that only a subset of about one third of the features is sufficient to attain a high classification accuracy on the German Traffic Sign Recognition Benchmark (GTSRB).

  • Traffic Sign classification using K-d trees and Random Forests
    International Joint Conference on Neural Networks, 2011
    Co-Authors: Fatin Zaklouta, Bogdan Stanciulescu, Omar Hamdoun
    Abstract:

    In this paper, we evaluate the performance of K-d trees and Random Forests for Traffic Sign classification using different-sized Histogram of Oriented Gradients (HOG) descriptors and Distance Transforms. We use the German Traffic Sign Benchmark data set [1], containing 43 classes and more than 50,000 images. The K-d tree is fast to build and to search. Combining the tree classifiers with the HOG descriptors and with the Distance Transforms, we achieve classification rates of up to 97% and 81.8%, respectively.