Smoothness Assumption

The Experts below are selected from a list of 4,440 Experts worldwide, ranked by the ideXlab platform

Alexander Jung - One of the best experts on this subject based on the ideXlab platform.

  • EUSIPCO - Outlier Detection from Non-Smooth Sensor Data
    2019 27th European Signal Processing Conference (EUSIPCO), 2019
    Co-Authors: Timo Huuhtanen, Henrik Ambos, Alexander Jung
    Abstract:

    Outlier detection is usually based on a Smoothness Assumption about the data: most existing approaches for outlier detection from spatial sensor data assume the data to be a smooth function of location. Spatial discontinuities in the data, such as those arising from shadows in photovoltaic (PV) systems, may cause outlier detection methods built on the spatial Smoothness Assumption to fail. In this paper, we propose novel approaches for outlier detection in non-smooth spatial data. The methods are evaluated in numerical experiments involving PV panel measurements as well as synthetic data.
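
To make the failure mode concrete, here is a minimal sketch of a generic smoothness-based spatial outlier detector (our illustration, not the method proposed in the paper): each reading is compared against the mean of its k nearest neighbors, so a genuine discontinuity such as a shadow edge gets flagged along with the true fault. The grid, values, and threshold are assumptions for the demo.

```python
# Minimal sketch of a smoothness-based detector (not the paper's method):
# a reading is an outlier if it deviates from the mean of its k nearest
# spatial neighbors. A genuine discontinuity, such as a shadow edge on a
# PV array, violates the Smoothness Assumption and is flagged along with
# the true fault -- the failure mode the paper addresses.
import numpy as np

def smoothness_outliers(positions, readings, k=4, threshold=1.0):
    """Flag readings that deviate from their local neighborhood mean."""
    flags = np.zeros(len(readings), dtype=bool)
    for i in range(len(readings)):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]  # k nearest, excluding self
        flags[i] = abs(readings[i] - readings[neighbors].mean()) > threshold
    return flags

# Toy 5x5 grid of PV-panel readings: one faulty sensor plus a shadowed strip.
pos = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
vals = np.full(25, 10.0)
vals[pos[:, 0] >= 3] = 4.0  # shadow: a spatial discontinuity, not a fault
vals[6] = 25.0              # the single genuinely faulty sensor
flagged = np.where(smoothness_outliers(pos, vals))[0]
print(flagged)  # includes sensor 6, but also sensors along the shadow edge
```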

Michal Valko - One of the best experts on this subject based on the ideXlab platform.

  • A simple parameter-free and adaptive approach to optimization under a minimal local Smoothness Assumption
    Algorithmic Learning Theory, 2019
    Co-Authors: Peter L Bartlett, Victor Gabillon, Michal Valko
    Abstract:

    We study the problem of optimizing a function under a budgeted number of evaluations. We assume only that the function is locally smooth around one of its global optima. The difficulty of optimization is measured in terms of 1) the amount of noise b in the function evaluations and 2) the local Smoothness, d, of the function; a smaller d results in a smaller optimization error. We propose a new, simple, and parameter-free approach. First, for all values of b and d, this approach recovers at least the state-of-the-art regret guarantees. Second, it obtains these results while being agnostic to the values of both b and d. This leads to the first algorithm that naturally adapts to an unknown range of noise b, and to significant improvements in the moderate- and low-noise regimes. Third, our approach also obtains a remarkable improvement over the state-of-the-art SOO algorithm when the noise is very low, which includes the case of optimization under deterministic feedback (b=0). There, under our minimal local Smoothness Assumption, the improvement is of exponential magnitude and holds for a class of functions that covers the vast majority of functions that practitioners optimize (d=0). We show that this algorithmic improvement is borne out in experiments, where we empirically demonstrate faster convergence on common benchmarks.
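
For context, the sketch below implements a plain SOO-style optimizer under deterministic feedback (b = 0), the baseline the abstract compares against. The trisection of [0, 1], the depth cap, and the benchmark function are our illustrative choices; this is not the authors' adaptive, parameter-free algorithm.

```python
# A compact sketch of SOO-style (simultaneous optimistic optimization)
# search: sweep the depths of a partition tree and expand, at each depth,
# the best leaf not dominated by a better leaf at a smaller depth.
import math

def soo_maximize(f, budget, depth_max=20):
    v0 = f(0.5)
    evals, best = 1, (v0, 0.5)
    # Each leaf is (depth, left, right, value of f at the cell midpoint).
    leaves = [(0, 0.0, 1.0, v0)]
    while evals < budget:
        v_max = -math.inf
        for h in range(depth_max + 1):
            at_h = [leaf for leaf in leaves if leaf[0] == h]
            if not at_h:
                continue
            leaf = max(at_h, key=lambda l: l[3])
            if leaf[3] <= v_max:
                continue  # dominated by a better leaf at a smaller depth
            v_max = leaf[3]
            leaves.remove(leaf)  # expand the leaf: trisect its cell
            d, lo, hi, val = leaf
            w = (hi - lo) / 3.0
            for j in range(3):
                a, b = lo + j * w, lo + (j + 1) * w
                if j == 1:
                    v = val  # middle child inherits the parent's midpoint
                else:
                    v = f((a + b) / 2.0)
                    evals += 1
                    best = max(best, (v, (a + b) / 2.0))
                leaves.append((d + 1, a, b, v))
                if evals >= budget:
                    return best
    return best

# Maximize a standard multimodal test function on [0, 1].
val, x = soo_maximize(lambda u: math.sin(13 * u) * math.sin(27 * u), budget=200)
print(f"best value {val:.4f} at x = {x:.4f}")
```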

Saeed Shiry Ghidary - One of the best experts on this subject based on the ideXlab platform.

  • Semi-supervised metric learning in stratified spaces via integrating local constraints and information-theoretic non-local constraints
    Neurocomputing, 2018
    Co-Authors: Zohre Karimi, Saeed Shiry Ghidary
    Abstract:

    Considerable research effort has been devoted to semi-supervised distance metric learning based on both the manifold and cluster Assumptions in the past few years. However, these methods face a major problem when applied to data lying on a stratified space: the label Smoothness Assumption on manifolds and clusters may be violated in the regions where manifolds intersect. This problem is caused by an over-learning of locality that misleads the metric learning process in the absence of enough labeled data. In this paper, we propose a novel semi-supervised metric learning method for stratified spaces (S2MLS2) that removes the unsuitable local constraints of manifold-based methods, adapting the Smoothness Assumption to multiple manifolds. We also impose non-local constraints to detect structures shared across different positions when supervised information is scarce. In addition, we propose a novel bootstrapping method, based on the Smoothness Assumption on multiple manifolds, to enlarge the labeled data. The proposed algorithm exploits the fact that the Laplacian of a piecewise-smooth function on multiple manifolds behaves differently in the neighborhood of non-interior points of the manifolds than at their interior points. Experiments on artificial and real benchmark data sets demonstrate that the proposed metric learning method outperforms many state-of-the-art metric learning methods.
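
To see concretely why the label Smoothness Assumption can fail on a stratified space, the sketch below (our construction, not the paper's S2MLS2 method) builds a k-nearest-neighbor graph on two one-dimensional manifolds crossing in an 'X' and locates the cross-manifold edges, which are exactly the edges on which the label function is non-smooth; they concentrate at the intersection.

```python
# Two 1-D manifolds (y = x and y = -x) crossing at the origin form a
# stratified space. A k-NN graph connects points from different manifolds
# near the crossing, so the manifold-id label function f has a nonzero
# Laplacian quadratic form f^T L f there -- the Smoothness violation the
# paper addresses. All constructions here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 100)
line1 = np.c_[t, t] + rng.normal(0, 0.01, (100, 2))    # manifold A
line2 = np.c_[t, -t] + rng.normal(0, 0.01, (100, 2))   # manifold B
X = np.vstack([line1, line2])
y = np.r_[np.zeros(100), np.ones(100)]                 # label = manifold id

# Unnormalized Laplacian of a symmetric k-NN graph.
k = 5
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.zeros_like(D2)
idx = np.argsort(D2, axis=1)[:, 1:k + 1]
for i, nbrs in enumerate(idx):
    W[i, nbrs] = 1.0
W = np.maximum(W, W.T)                                 # symmetrize
L = np.diag(W.sum(1)) - W

# Only edges with differing labels contribute to f^T L f.
cross = np.argwhere(np.triu(W) * (y[:, None] != y[None, :]))
print("f^T L f =", y @ L @ y)
print("largest |coordinate| among cross-edge endpoints:", np.abs(X[cross]).max())
```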

Timo Huuhtanen - One of the best experts on this subject based on the ideXlab platform.

  • EUSIPCO - Outlier Detection from Non-Smooth Sensor Data
    2019 27th European Signal Processing Conference (EUSIPCO), 2019
    Co-Authors: Timo Huuhtanen, Henrik Ambos, Alexander Jung
    Abstract:

    Outlier detection is usually based on a Smoothness Assumption about the data: most existing approaches for outlier detection from spatial sensor data assume the data to be a smooth function of location. Spatial discontinuities in the data, such as those arising from shadows in photovoltaic (PV) systems, may cause outlier detection methods built on the spatial Smoothness Assumption to fail. In this paper, we propose novel approaches for outlier detection in non-smooth spatial data. The methods are evaluated in numerical experiments involving PV panel measurements as well as synthetic data.

Argyris Kalogeratos - One of the best experts on this subject based on the ideXlab platform.

  • Learning Laplacian Matrix from Bandlimited Graph Signals
    International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019
    Co-Authors: Batiste Le Bars, Pierre Humbert, Laurent Oudre, Argyris Kalogeratos
    Abstract:

    In this paper, we present a method for learning an underlying graph topology using observed graph signals as training data. The novelty of our method lies in the combination of two Assumptions that are imposed as constraints on the graph learning process: i) the standard Assumption used in the literature that signals are smooth with respect to the graph structure (i.e., small signal variation between adjacent nodes), and ii) the additional Assumption that signals are bandlimited, which implies sparsity in the signals' representation in the spectral domain. The latter Assumption affects the inference of the whole eigenvalue decomposition of the Laplacian matrix and leads to a challenging new optimization problem. The experimental evaluation shows that the proposed algorithm outperforms a reference state-of-the-art method based only on the Smoothness Assumption, on the graph learning task with both synthetic and real graph signals.
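
Both Assumptions can be stated as measurable quantities. The sketch below (our illustration, not the authors' solver) synthesizes bandlimited signals on a random graph, then checks i) the smoothness measure tr(X^T L X) and ii) the K-sparsity of the signals' spectral representation; the graph model, sizes, and variable names are assumptions for the demo.

```python
# Smoothness: tr(X^T L X) is small when signals use only low Laplacian
# frequencies. Bandlimitedness: each signal lies in the span of the first
# K eigenvectors of L, so its spectral representation U^T X is K-sparse.
import numpy as np

rng = np.random.default_rng(1)

# Random weighted graph on n nodes and its combinatorial Laplacian.
n, K, m = 20, 3, 50
W = np.triu(rng.random((n, n)) * (rng.random((n, n)) < 0.3), 1)
W = W + W.T
L = np.diag(W.sum(1)) - W

# Bandlimited training signals: combinations of the K lowest-frequency
# eigenvectors (eigh returns eigenvalues in ascending order).
eigval, U = np.linalg.eigh(L)
X = U[:, :K] @ rng.normal(size=(K, m))     # n x m signal matrix

# i) Smoothness measure used in graph-learning objectives.
print("tr(X^T L X) =", np.trace(X.T @ L @ X))

# ii) Spectral sparsity: energy outside the first K frequencies is ~0.
spectrum = U.T @ X
print("energy outside first K frequencies:", np.abs(spectrum[K:]).max())
```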
