Kernel Optimization

The Experts below are selected from a list of 20,373 Experts worldwide, ranked by the ideXlab platform.

Zheng Bao - One of the best experts on this subject based on the ideXlab platform.

  • Optimizing the data-dependent Kernel under a unified Kernel Optimization framework
    Pattern Recognition, 2008
    Co-Authors: Bo Chen, Hongwei Liu, Zheng Bao
    Abstract:

    Kernel functions play a central role in Kernel methods, so the Optimization of Kernel functions has long been a promising research area. Ideally, the Fisher discriminant criterion can be used as an objective function that optimizes the Kernel function to enlarge the margin between classes. Unfortunately, the Fisher criterion is optimal only when all classes are generated from underlying multivariate normal distributions with a common covariance matrix but different means, and each class forms a single cluster. Because of these assumptions, the Fisher criterion is not a suitable Kernel Optimization rule in applications such as multimodally distributed data. To address this problem, many improved discriminant criteria (DC) have recently been developed. To apply these criteria to Kernel Optimization, this paper proposes a unified Kernel Optimization framework based on a data-dependent Kernel function; the framework can use any discriminant criterion formulated in a pairwise manner as its objective function. Under this framework, employing a different discriminant criterion only requires changing the corresponding affinity matrices, with no complex derivations in the feature space. Experimental results on benchmark data demonstrate the effectiveness of our method.
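
    As a concrete illustration of the ingredients named above, the sketch below builds the common data-dependent Kernel k(x, z) = q(x) q(z) k0(x, z) and tunes the coefficients of q(.) by gradient ascent on one simple pairwise criterion read off affinity matrices. The choice of criterion, the finite-difference gradient, and all names are assumptions of this sketch, not the authors' exact formulation:

      import numpy as np

      def rbf(X, Y, gamma=0.5):
          # Base Kernel k0 (Gaussian RBF) between the rows of X and Y.
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
          return np.exp(-gamma * d2)

      def data_dependent_kernel(X, anchors, alpha, gamma=0.5):
          # q(x) = alpha_0 + sum_i alpha_i * k1(x, a_i);  k(x, z) = q(x) q(z) k0(x, z)
          q = alpha[0] + rbf(X, anchors, gamma) @ alpha[1:]
          return np.outer(q, q) * rbf(X, X, gamma)

      def pairwise_criterion(K, y):
          # One pairwise criterion: mean within-class affinity minus mean
          # between-class affinity, each defined by an indicator affinity matrix.
          same = (y[:, None] == y[None, :]).astype(float)
          np.fill_diagonal(same, 0.0)
          diff = 1.0 - same
          np.fill_diagonal(diff, 0.0)
          return (K * same).sum() / same.sum() - (K * diff).sum() / diff.sum()

      def optimize_alpha(X, y, anchors, steps=50, lr=0.1, eps=1e-5):
          # Finite-difference gradient ascent on the chosen criterion; swapping
          # the criterion means swapping the affinity matrices, nothing else.
          alpha = np.ones(len(anchors) + 1)
          for _ in range(steps):
              base = pairwise_criterion(data_dependent_kernel(X, anchors, alpha), y)
              grad = np.zeros_like(alpha)
              for j in range(alpha.size):
                  a = alpha.copy()
                  a[j] += eps
                  grad[j] = (pairwise_criterion(
                      data_dependent_kernel(X, anchors, a), y) - base) / eps
              alpha += lr * grad
          return alpha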

  • A Kernel Optimization method based on the localized Kernel Fisher criterion
    Pattern Recognition, 2008
    Co-Authors: Bo Chen, Hongwei Liu, Zheng Bao
    Abstract:

    It is widely recognized that whether the selected Kernel matches the data determines the performance of Kernel-based methods. Ideally, the data would be linearly separable in the Kernel-induced feature space, so the Fisher linear discriminant criterion can be used as a cost function to optimize the Kernel function. In many applications, however, the data may not be linearly separable even after the Kernel transformation; for example, the data may have a multimodally distributed structure. In this case a nonlinear classifier is preferred, and the Fisher criterion is clearly not a suitable Kernel Optimization rule. Motivated by this issue, we propose a localized Kernel Fisher criterion, in place of the traditional Fisher criterion, as the Kernel Optimization rule; it increases the local margins between the embedded classes in the Kernel-induced feature space. Experimental results on benchmark data and on measured radar high-resolution range profile (HRRP) data show that the proposed method improves classification performance.
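
    To make the localization concrete: instead of a global scatter ratio, one can score a candidate Kernel by within- versus between-class distances restricted to each point's nearest neighbours in the Kernel-induced feature space. The k-nearest-neighbour weighting and the ratio below are assumptions of this minimal sketch, not the paper's exact criterion:

      import numpy as np

      def localized_fisher_score(K, y, k=7):
          # K: n x n Gram matrix of the candidate Kernel; y: class labels.
          # Squared feature-space distances via the Kernel trick:
          # ||phi(xi) - phi(xj)||^2 = K_ii + K_jj - 2 K_ij.
          d2 = np.diag(K)[:, None] + np.diag(K)[None, :] - 2.0 * K
          within, between = [], []
          for i in range(K.shape[0]):
              # Judge each point only against its k nearest neighbours, so a
              # multimodal class contributes local margins rather than a
              # single global mean.
              for j in np.argsort(d2[i])[1:k + 1]:  # skip the point itself
                  (within if y[j] == y[i] else between).append(d2[i, j])
          # Larger score: locally compact classes with larger local margins.
          return np.mean(between) / (np.mean(within) + 1e-12)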

  • A Kernel Optimization method based on the localized Kernel Fisher criterion
    Lecture Notes in Computer Science, 2006
    Co-Authors: Bo Chen, Hongwei Liu, Zheng Bao
    Abstract:

    It is widely recognized that whether the selected Kernel matches the data controls the performance of Kernel-based methods. Ideally, the data would be linearly separable in the Kernel-induced feature space, so the Fisher linear discriminant criterion can be used as a Kernel Optimization rule. In many applications, however, the data may not be linearly separable even after the Kernel transformation; a nonlinear classifier is then preferred, and the Fisher criterion is clearly not the best Kernel Optimization rule. Motivated by this issue, we present a novel Kernel Optimization method that maximizes local class linear separability in the Kernel space via a localized Kernel Fisher criterion, increasing the local margins between the embedded classes and thereby improving the classification performance of a nonlinear classifier in the Kernel-induced feature space. Extensive experiments evaluate the effectiveness of the proposed method.

Bo Chen - One of the best experts on this subject based on the ideXlab platform.

  • Co-author, with Hongwei Liu and Zheng Bao, of the three Kernel Optimization papers listed above under Zheng Bao (two in Pattern Recognition, 2008, and one in Lecture Notes in Computer Science, 2006).

Hongwei Liu - One of the best experts on this subject based on the ideXlab platform.

  • Co-author, with Bo Chen and Zheng Bao, of the three Kernel Optimization papers listed above under Zheng Bao (two in Pattern Recognition, 2008, and one in Lecture Notes in Computer Science, 2006).

Dimitris Samaras - One of the best experts on this subject based on the ideXlab platform.

  • Leave-one-out Kernel Optimization for shadow detection and removal
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018
    Co-Authors: Tomas Yago F Vicente, Minh Hoai, Dimitris Samaras
    Abstract:

    The objective of this work is to detect shadows in images. We pose this as the problem of labeling image regions, where each region corresponds to a group of superpixels. To predict the label of each region, we train a Kernel Least-Squares Support Vector Machine (LSSVM) for separating shadow and non-shadow regions. The parameters of the Kernel and the classifier are jointly learned to minimize the leave-one-out cross-validation error. Optimizing the leave-one-out cross-validation error is typically difficult, but it can be done efficiently in our framework. Experiments on two challenging shadow datasets, UCF and UIUC, show that our region classifier outperforms more complex methods. We further enhance the performance of the region classifier by embedding it in a Markov Random Field (MRF) framework and adding pairwise contextual cues, which leads to a method that outperforms the state of the art for shadow detection. In addition, we propose a new method for shadow removal based on region relighting. For each shadow region, we use a trained classifier to identify a neighboring lit region of the same material. Given a lit-shadow region pair, we perform a region relighting transformation based on histogram matching of luminance values between the shadow region and the lit region. Evaluating on a publicly available benchmark dataset, we demonstrate that, once a shadow is detected, our removal approach produces results that outperform the state of the art.
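
    The efficiency claim rests on a closed-form identity for least-squares Kernel machines: with C = K + lambda*I and alpha = C^{-1} y, the leave-one-out residual of sample i equals alpha_i / (C^{-1})_{ii}, so all n held-out errors follow from a single matrix inverse. Below is a minimal numpy sketch of that identity for a bias-free Kernel least-squares fit (the paper's full LSSVM also carries a bias term and jointly learned Kernel parameters):

      import numpy as np

      def loo_mse(K, y, lam):
          # Closed-form leave-one-out residuals for a bias-free least-squares
          # Kernel fit: e_i = alpha_i / Cinv_ii, one matrix inverse in total.
          Cinv = np.linalg.inv(K + lam * np.eye(K.shape[0]))
          alpha = Cinv @ y
          return np.mean((alpha / np.diag(Cinv)) ** 2)

      # Hypothetical usage: grid-search a Kernel width and regularizer by LOO,
      # where D2 holds pairwise squared distances between training samples.
      # best = min((loo_mse(np.exp(-g * D2), y, l), g, l)
      #            for g in gammas for l in lams)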

  • Leave-one-out Kernel Optimization for shadow detection
    International Conference on Computer Vision, 2015
    Co-Authors: Tomas Yago F Vicente, Minh Hoai, Dimitris Samaras
    Abstract:

    The objective of this work is to detect shadows in images. We pose this as the problem of labeling image regions, where each region corresponds to a group of superpixels. To predict the label of each region, we train a Kernel Least-Squares SVM for separating shadow and non-shadow regions. The parameters of the Kernel and the classifier are jointly learned to minimize the leave-one-out cross-validation error. Optimizing the leave-one-out cross-validation error is typically difficult, but it can be done efficiently in our framework. Experiments on two challenging shadow datasets, UCF and UIUC, show that our region classifier outperforms more complex methods. We further enhance the performance of the region classifier by embedding it in an MRF framework and adding pairwise contextual cues. This leads to a method that significantly outperforms the state of the art.

Tim Warburton - One of the best experts on this subject based on the ideXlab platform.

  • Acceleration of tensor-product operations for high-order finite element methods
    The International Journal of High Performance Computing Applications, 2019
    Co-Authors: Kasia Świrydowicz, Noel Chalmers, Ali Karakus, Tim Warburton
    Abstract:

    This article is devoted to graphics processing unit (GPU) Kernel Optimization and performance analysis of three tensor-product operations arising in finite element methods. We provide a mathematical background to these operations and implementation details.

  • Acceleration of tensor-product operations for high-order finite element methods
    arXiv: Mathematical Software, 2017
    Co-Authors: Kasia Świrydowicz, Noel Chalmers, Ali Karakus, Tim Warburton
    Abstract:

    This paper is devoted to GPU Kernel Optimization and performance analysis of three tensor-product operators arising in finite element methods. We provide a mathematical background to these operations and implementation details. Achieving close-to-peak performance for these operators requires extensive Optimization because of their properties: low arithmetic intensity, tiered structure, and the need to store intermediate results inside the Kernel. We give a guided overview of Optimization strategies and present a performance model that allows us to compare the efficacy of these Optimizations against an empirically calibrated roofline.
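
    For context on the operation itself: on a hexahedral element with p nodes per direction, these operators factor into 1D pieces, so applying a p x p matrix D along each coordinate of a p x p x p array costs on the order of 3p^4 multiply-adds instead of the p^6 of applying the assembled p^3 x p^3 matrix. The intermediate arrays produced between contractions are exactly what the GPU Kernels must stage in shared memory or registers. A small numpy sketch of the operation (illustrative only; the paper's Kernels are hand-optimized GPU code):

      import numpy as np

      def apply_1d_operator(D, u):
          # u: p x p x p nodal values on one element; D: p x p 1D operator.
          # Sum factorization: contract one index at a time.
          v = np.einsum('ai,ijk->ajk', D, u)   # along the first coordinate
          v = np.einsum('bj,ajk->abk', D, v)   # along the second coordinate
          v = np.einsum('ck,abk->abc', D, v)   # along the third coordinate
          return v

      # p = 8
      # D, u = np.random.rand(p, p), np.random.rand(p, p, p)
      # out = apply_1d_operator(D, u)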