Output Code

14,000,000 Leading Edge Experts on the ideXlab platform


The Experts below are selected from a list of 3,654 Experts worldwide ranked by the ideXlab platform

Terry Windeatt - One of the best experts on this subject based on the ideXlab platform.

  • class separability weighting and bootstrapping in error correcting Output Code ensembles
    International Conference on Multiple Classifier Systems, 2010
    Co-Authors: Raymond S Smith, Terry Windeatt
    Abstract:

A method for applying weighted decoding to error-correcting output code ensembles of binary classifiers is presented. This method is sensitive to the target class in that a separate weight is computed for each combination of base classifier and target class. Experiments on 11 UCI datasets show that the method tends to improve classification accuracy when using neural network or support vector machine base classifiers. It is further shown that weighted decoding combines well with bootstrapping to improve classification accuracy still further.
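The weighted-decoding idea can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the code matrix and the per-(classifier, class) weights below are assumed values standing in for the paper's class-separability measures.

```python
import numpy as np

# Illustrative sketch of class-sensitive weighted decoding for an ECOC
# ensemble. Rows of the code matrix are class codewords; columns are the
# binary base classifiers. weights[c, j] is the trust placed in base
# classifier j when scoring class c (values here are assumed).

code_matrix = np.array([
    [+1, +1, -1],   # codeword for class 0
    [+1, -1, +1],   # codeword for class 1
    [-1, +1, +1],   # codeword for class 2
])

weights = np.array([
    [0.9, 0.5, 0.7],
    [0.6, 0.8, 0.9],
    [0.7, 0.7, 0.6],
])

def weighted_decode(output_code, M, W):
    # Score each class by the weighted agreement between the ensemble's
    # output code and that class's codeword; predict the highest score.
    agreement = (output_code == M).astype(float)
    return int(np.argmax((W * agreement).sum(axis=1)))

print(weighted_decode(np.array([+1, -1, +1]), code_matrix, weights))  # 1
```

With uniform weights this reduces to ordinary Hamming decoding; the class-sensitive weights let a classifier that is reliable for one target class but noisy for another contribute accordingly.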

  • A Bias-Variance Analysis of Bootstrapped Class-Separability Weighting for Error-Correcting Output Code Ensembles
    2010 20th International Conference on Pattern Recognition, 2010
    Co-Authors: Raymond Stuart Smith, Terry Windeatt
    Abstract:

We investigate the effects, in terms of a bias-variance decomposition of error, of applying class-separability weighting plus bootstrapping in the construction of error-correcting output code ensembles of binary classifiers. Evidence is presented to show that bias tends to be reduced at low training-strength values whilst variance tends to be reduced across the full range. The relative importance of these effects, however, varies depending on the stability of the base classifier type.
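As a generic illustration of the bookkeeping such an analysis involves (using squared error for simplicity, not the 0/1-loss decomposition suited to classification that the paper analyses), one can bootstrap a trivial predictor and check that its expected error splits exactly into squared bias plus variance:

```python
import random
import statistics

# Toy bias-variance decomposition under squared error. We "train" a
# trivial predictor (the sample mean) on many bootstrap resamples and
# decompose its expected squared error at an assumed true target value.
random.seed(0)
data = [1.0, 2.0, 3.0, 4.0, 10.0]
target = 4.0   # assumed ground truth for this illustration

preds = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]  # bootstrap sample
    preds.append(statistics.mean(resample))         # fitted predictor

mean_pred = statistics.mean(preds)
bias_sq = (mean_pred - target) ** 2
variance = statistics.pvariance(preds)
mse = statistics.mean((p - target) ** 2 for p in preds)

# Squared error decomposes exactly: E[(f - y)^2] = bias^2 + variance
assert abs(mse - (bias_sq + variance)) < 1e-8
```

Bootstrapping perturbs the training set and so mainly shows up in the variance term, which is why its interaction with ensemble weighting is studied through this lens.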

  • decoding rules for error correcting Output Code ensembles
    International Conference on Multiple Classifier Systems, 2005
    Co-Authors: Raymond S Smith, Terry Windeatt
    Abstract:

The ECOC technique for solving multi-class pattern recognition problems can be broken down into two distinct stages: encoding and decoding. Given a pattern vector of unknown class, the encoding stage consists in constructing a corresponding output code vector by applying each of the base classifiers in the ensemble to it. The decoding stage consists in making a classification decision based on the value of the output code. This paper focuses on the latter stage. Firstly, three different approaches to decoding rule design are reviewed and a new algorithm is presented. This new algorithm is then compared experimentally with two common decoding rules, and evidence is presented that the new rule has some advantages in the form of slightly improved classification accuracy and reduced sensitivity to optimal training.
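The two stages can be sketched minimally as follows. The code matrix and the classifier outputs are hypothetical, and decoding here uses plain Hamming distance, one of the standard rules such papers compare against rather than the new rule itself.

```python
import numpy as np

# ECOC in two stages. Encoding: each of the three binary base
# classifiers emits +1/-1 for a sample, forming its output code vector.
# Decoding: predict the class whose codeword (matrix row) is nearest.

code_matrix = np.array([
    [+1, +1, -1],   # codeword for class 0
    [+1, -1, +1],   # codeword for class 1
    [-1, +1, +1],   # codeword for class 2
])

def decode(output_code, M):
    # Hamming-distance decoding: count disagreements with each codeword
    # and return the index of the closest class.
    distances = np.sum(output_code != M, axis=1)
    return int(np.argmin(distances))

# Suppose the three base classifiers produce this output code:
print(decode(np.array([+1, -1, +1]), code_matrix))  # exact match -> 1
```

With longer codewords (more base classifiers) the minimum distance between rows grows, which is what gives the scheme its error-correcting capability.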

Boonserm Kijsirikul - One of the best experts on this subject based on the ideXlab platform.

  • Sub-classifier construction for error correcting Output Code using minimum weight perfect matching
    2014 International Joint Conference on Neural Networks (IJCNN), 2014
    Co-Authors: Patoomsiri Songsiri, Thimaporn Phetkaew, Ryutaro Ichise, Boonserm Kijsirikul
    Abstract:

Multi-class classification is mandatory for real-world problems, and one of the promising techniques for it is the Error Correcting Output Code. We propose a method for constructing the Error Correcting Output Code to obtain a suitable combination of positive and negative classes encoded to represent binary classifiers. The minimum weight perfect matching algorithm is applied to find the optimal pairs of subsets of classes, using generalization performance as a weighting criterion. Based on our method, each subset of classes with positive and negative labels is appropriately combined for learning the binary classifiers. Experimental results show that our technique gives significantly higher performance than traditional methods, including One-Versus-All, the dense random code, and the sparse random code. Moreover, our method requires a significantly smaller number of binary classifiers than One-Versus-One while maintaining accuracy.
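The pairing step can be illustrated with a tiny brute-force minimum-weight perfect matching. The cost matrix below is hypothetical (the paper derives its weights from measured generalization performance), and a real implementation would use an efficient matching algorithm such as Blossom instead of exhaustive search:

```python
# Brute-force minimum-weight perfect matching over a small, even-sized
# set of classes; adequate for illustration, whereas a real solver
# (e.g. the blossom algorithm) scales to many classes.

def min_weight_perfect_matching(cost):
    def best(rest):
        if not rest:
            return 0.0, []
        a = rest[0]
        best_total, best_pairs = float("inf"), []
        for i in range(1, len(rest)):
            b = rest[i]
            sub_total, sub_pairs = best(rest[1:i] + rest[i + 1:])
            if cost[a][b] + sub_total < best_total:
                best_total = cost[a][b] + sub_total
                best_pairs = [(a, b)] + sub_pairs
        return best_total, best_pairs

    return best(list(range(len(cost))))

# Symmetric pairwise "cost" of pairing classes (hypothetical numbers,
# standing in for a generalization-performance weighting):
cost = [
    [0, 1, 4, 3],
    [1, 0, 2, 5],
    [4, 2, 0, 1],
    [3, 5, 1, 0],
]

total, pairs = min_weight_perfect_matching(cost)
print(total, pairs)   # 2.0 [(0, 1), (2, 3)]
```

Each selected pair then defines one binary subproblem (positive subset versus negative subset), so the number of base classifiers grows with the matching rather than with all class pairs as in One-Versus-One.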

  • Sub-Classifier Construction for Error Correcting Output Code Using Minimum Weight Perfect Matching
    arXiv: Learning, 2013
    Co-Authors: Patoomsiri Songsiri, Thimaporn Phetkaew, Ryutaro Ichise, Boonserm Kijsirikul
    Abstract:

Multi-class classification is mandatory for real-world problems, and one of the promising techniques for it is the Error Correcting Output Code. We propose a method for constructing the Error Correcting Output Code to obtain a suitable combination of positive and negative classes encoded to represent binary classifiers. The minimum weight perfect matching algorithm is applied to find the optimal pairs of subsets of classes, using generalization performance as a weighting criterion. Based on our method, each subset of classes with positive and negative labels is appropriately combined for learning the binary classifiers. Experimental results show that our technique gives significantly higher performance than traditional methods, including the dense random code and the sparse random code, in terms of both accuracy and classification time. Moreover, our method requires a significantly smaller number of binary classifiers than One-Versus-One while maintaining accuracy.

Patoomsiri Songsiri - One of the best experts on this subject based on the ideXlab platform.

  • Sub-classifier construction for error correcting Output Code using minimum weight perfect matching
    2014 International Joint Conference on Neural Networks (IJCNN), 2014
    Co-Authors: Patoomsiri Songsiri, Thimaporn Phetkaew, Ryutaro Ichise, Boonserm Kijsirikul
    Abstract:

Multi-class classification is mandatory for real-world problems, and one of the promising techniques for it is the Error Correcting Output Code. We propose a method for constructing the Error Correcting Output Code to obtain a suitable combination of positive and negative classes encoded to represent binary classifiers. The minimum weight perfect matching algorithm is applied to find the optimal pairs of subsets of classes, using generalization performance as a weighting criterion. Based on our method, each subset of classes with positive and negative labels is appropriately combined for learning the binary classifiers. Experimental results show that our technique gives significantly higher performance than traditional methods, including One-Versus-All, the dense random code, and the sparse random code. Moreover, our method requires a significantly smaller number of binary classifiers than One-Versus-One while maintaining accuracy.

  • Sub-Classifier Construction for Error Correcting Output Code Using Minimum Weight Perfect Matching
    arXiv: Learning, 2013
    Co-Authors: Patoomsiri Songsiri, Thimaporn Phetkaew, Ryutaro Ichise, Boonserm Kijsirikul
    Abstract:

Multi-class classification is mandatory for real-world problems, and one of the promising techniques for it is the Error Correcting Output Code. We propose a method for constructing the Error Correcting Output Code to obtain a suitable combination of positive and negative classes encoded to represent binary classifiers. The minimum weight perfect matching algorithm is applied to find the optimal pairs of subsets of classes, using generalization performance as a weighting criterion. Based on our method, each subset of classes with positive and negative labels is appropriately combined for learning the binary classifiers. Experimental results show that our technique gives significantly higher performance than traditional methods, including the dense random code and the sparse random code, in terms of both accuracy and classification time. Moreover, our method requires a significantly smaller number of binary classifiers than One-Versus-One while maintaining accuracy.

Roger Reynaud - One of the best experts on this subject based on the ideXlab platform.

  • evidential framework for error correcting Output Code classification
    Engineering Applications of Artificial Intelligence, 2018
    Co-Authors: Marie Lachaize, Sylvie Le Hégarat-Mascle, Emanuel Aldea, Aude Maitrot, Roger Reynaud
    Abstract:

    The Error Correcting Output Codes offer a proper matrix framework to model the decomposition of a multiclass classification problem into simpler subproblems. How to perform the decomposition so as to best fit the data while using a small number of classifiers has been a research hotspot, as has the decoding step, which combines the subproblem outputs. In this work, we propose a unified evidential framework that handles both the coding and decoding steps. Using Belief Function Theory, we propose an efficient model in which each dichotomizer in the ECOC strategy is considered an independent information source. This framework allows us to easily model the refutation information provided by sparse dichotomizers and to derive measures that detect tricky samples for which additional dichotomizers may be needed to ensure a decision. Our approach was tested on hyperspectral data used to classify nine different types of material. According to the results obtained, our approach achieves top performance using compact ECOCs while presenting a high level of modularity.
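A toy sketch of the evidential view: two dichotomizers are treated as independent sources over a three-class frame and fused with Dempster's rule. The mass assignments below are assumed for illustration; the paper's construction of masses from ECOC dichotomizers is more involved.

```python
from itertools import product

# Each dichotomizer yields a mass function over subsets of the frame
# {a, b, c}; Dempster's rule multiplies masses, keeps intersections,
# and renormalizes away the conflicting (empty-intersection) mass.

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (A, wA), (B, wB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wA * wB
        else:
            conflict += wA * wB
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Dichotomizer 1 separates {a} from {b, c}; dichotomizer 2 separates
# {a, b} from {c}. Residual mass on the whole frame models ignorance.
m1 = {frozenset("a"): 0.6, frozenset("bc"): 0.3, frozenset("abc"): 0.1}
m2 = {frozenset("ab"): 0.7, frozenset("c"): 0.2, frozenset("abc"): 0.1}

fused = dempster(m1, m2)
print(max(fused, key=fused.get))   # frozenset({'a'})
```

The amount of mass lost to conflict, and the mass left on non-singleton subsets, are the kind of quantities from which one could derive measures flagging samples that need additional dichotomizers.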

Raymond S Smith - One of the best experts on this subject based on the ideXlab platform.

  • class separability weighting and bootstrapping in error correcting Output Code ensembles
    International Conference on Multiple Classifier Systems, 2010
    Co-Authors: Raymond S Smith, Terry Windeatt
    Abstract:

A method for applying weighted decoding to error-correcting output code ensembles of binary classifiers is presented. This method is sensitive to the target class in that a separate weight is computed for each combination of base classifier and target class. Experiments on 11 UCI datasets show that the method tends to improve classification accuracy when using neural network or support vector machine base classifiers. It is further shown that weighted decoding combines well with bootstrapping to improve classification accuracy still further.

  • decoding rules for error correcting Output Code ensembles
    International Conference on Multiple Classifier Systems, 2005
    Co-Authors: Raymond S Smith, Terry Windeatt
    Abstract:

The ECOC technique for solving multi-class pattern recognition problems can be broken down into two distinct stages: encoding and decoding. Given a pattern vector of unknown class, the encoding stage consists in constructing a corresponding output code vector by applying each of the base classifiers in the ensemble to it. The decoding stage consists in making a classification decision based on the value of the output code. This paper focuses on the latter stage. Firstly, three different approaches to decoding rule design are reviewed and a new algorithm is presented. This new algorithm is then compared experimentally with two common decoding rules, and evidence is presented that the new rule has some advantages in the form of slightly improved classification accuracy and reduced sensitivity to optimal training.
