Candidate State

The Experts below are selected from a list of 273 Experts worldwide, ranked by the ideXlab platform.

S. Bandyopadhyay - One of the best experts on this subject based on the ideXlab platform.

  • Simulated annealing using a reversible jump Markov chain Monte Carlo algorithm for fuzzy clustering
    IEEE Transactions on Knowledge and Data Engineering, 2005
    Co-Authors: S. Bandyopadhyay
    Abstract:

    In this paper, an approach for automatically clustering a data set into a number of fuzzy partitions using simulated annealing with a reversible jump Markov chain Monte Carlo algorithm is proposed. This is in contrast to the widely used fuzzy clustering scheme, the fuzzy c-means (FCM) algorithm, which requires a priori knowledge of the number of clusters. The proposed approach performs the clustering by optimizing a cluster validity index, the Xie-Beni index. It uses the homogeneous reversible jump Markov chain Monte Carlo (RJMCMC) kernel as the proposal, so that the algorithm is able to jump between different dimensions, i.e., numbers of clusters, until the correct value is obtained. Different moves, such as birth, death, split, merge, and update, are used to sample a Candidate State given the current State. The effectiveness of the proposed technique in optimizing the Xie-Beni index, and thereby determining the appropriate clustering, is demonstrated for both artificial and real-life data sets. In a part of the investigation, the utility of the fuzzy clustering scheme for classifying pixels in an IRS satellite image of Kolkata is studied. A technique for reducing the computational effort in the case of satellite image data is also incorporated.
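
    A minimal sketch of the loop described above is given below in Python. It is an illustrative reconstruction rather than the authors' code: it replaces the full reversible-jump acceptance ratio with a plain simulated-annealing rule, omits the split and merge moves, and all names, move details, and cooling settings are assumptions made for the demo.

      import numpy as np

      def fuzzy_memberships(X, centres, m=2.0):
          # Standard fuzzy c-means membership update for fixed centres.
          d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
          w = d ** (-2.0 / (m - 1.0))
          return w / w.sum(axis=1, keepdims=True)

      def xie_beni(X, centres, m=2.0):
          # Xie-Beni validity index: compactness over separation (lower is better).
          U = fuzzy_memberships(X, centres, m)
          d2 = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) ** 2
          compactness = np.sum((U ** m) * d2)
          separation = min(np.sum((a - b) ** 2)
                           for i, a in enumerate(centres)
                           for j, b in enumerate(centres) if i != j)
          return compactness / (len(X) * separation + 1e-12)

      def propose(centres, X, rng):
          # Sample a Candidate State from the current State via a birth, death, or update move.
          move = rng.choice(["birth", "death", "update"])
          c = centres.copy()
          if move == "birth":                      # add a centre at a random data point
              return np.vstack([c, X[rng.integers(len(X))]])
          if move == "death" and len(c) > 2:       # remove a random centre
              return np.delete(c, rng.integers(len(c)), axis=0)
          c[rng.integers(len(c))] += rng.normal(scale=0.1, size=c.shape[1])
          return c                                 # otherwise perturb one centre

      def anneal(X, n_iter=2000, T0=1.0, cooling=0.995, seed=0):
          rng = np.random.default_rng(seed)
          centres = X[rng.choice(len(X), size=2, replace=False)]
          cost, T = xie_beni(X, centres), T0
          for _ in range(n_iter):
              candidate = propose(centres, X, rng)
              candidate_cost = xie_beni(X, candidate)
              # Accept better Candidate States always, worse ones with annealing probability.
              if candidate_cost < cost or rng.random() < np.exp((cost - candidate_cost) / T):
                  centres, cost = candidate, candidate_cost
              T *= cooling
          return centres, cost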

Petros Dellaportas - One of the best experts on this subject based on the ideXlab platform.

  • NeurIPS - Gradient-based Adaptive Markov Chain Monte Carlo
    2019
    Co-Authors: Michalis K. Titsias, Petros Dellaportas
    Abstract:

    We introduce a gradient-based learning method to automatically adapt Markov chain Monte Carlo (MCMC) proposal distributions to intractable targets. We define a maximum-entropy regularised objective function, referred to as the generalised speed measure, which can be robustly optimised over the parameters of the proposal distribution by applying stochastic gradient optimisation. An advantage of our method compared to traditional adaptive MCMC methods is that the adaptation occurs even when Candidate State values are rejected. This is a highly desirable property of any adaptation strategy, because adaptation starts in the early iterations even if the initial proposal distribution is far from the optimum. We apply the framework to learn multivariate random walk Metropolis and Metropolis-adjusted Langevin proposals with full covariance matrices, and provide empirical evidence that our method can outperform other MCMC algorithms, including Hamiltonian Monte Carlo schemes.
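
    A rough illustration of the adapt-on-rejection idea follows. The sketch runs random walk Metropolis and nudges the proposal scale at every iteration using the acceptance probability of the Candidate State rather than the binary accept/reject outcome, so adaptation continues even when the candidate is rejected. The Gaussian target, the scalar scale, and the acceptance-rate objective are assumptions standing in for the paper's full-covariance proposals and generalised speed measure.

      import numpy as np

      def log_target(x):
          # Example target for the demo: a standard bivariate Gaussian (an assumption).
          return -0.5 * np.sum(x ** 2)

      def adaptive_rwm(n_iter=5000, dim=2, lr=0.02, target_accept=0.44, seed=0):
          rng = np.random.default_rng(seed)
          x = np.zeros(dim)
          log_scale = 0.0                            # proposal std-dev is exp(log_scale)
          samples = np.empty((n_iter, dim))
          for t in range(n_iter):
              candidate = x + np.exp(log_scale) * rng.normal(size=dim)
              alpha = np.exp(min(0.0, log_target(candidate) - log_target(x)))
              if rng.random() < alpha:               # usual Metropolis accept/reject
                  x = candidate
              # Adaptation uses alpha itself, so it runs even when the Candidate State
              # is rejected; here it simply steers the empirical acceptance probability
              # toward target_accept (a stand-in objective, not the paper's
              # generalised speed measure).
              log_scale += lr * (alpha - target_accept)
              samples[t] = x
          return samples, np.exp(log_scale)

      samples, adapted_scale = adaptive_rwm()
      print("adapted proposal scale:", adapted_scale)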

Jiao Li - One of the best experts on this subject based on the ideXlab platform.

  • Robust H∞ State Observer Design for a Class of Linear Switched Composite Large-Scale Systems
    2006 6th World Congress on Intelligent Control and Automation, 2006
    Co-Authors: Jiao Li
    Abstract:

    The problem of robust H∞ State observer design is considered for a class of switched composite large-scale systems. The problem is studied under the assumptions that a finite set of Candidate State observers exists and that the gain matrices of the observers are known, but that none of the individual State observers makes the error system stable. Based on a convex combination technique, the LMI method, and switching among the finite Candidate observers, the error system is shown to be stable with a bounded H∞ norm through the design of switching laws. The results show that the switching technique extends the admissible range of the State observer gain matrices and simplifies the system design. Simulations validate the correctness of the results.
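
    The design itself rests on LMIs and a convex combination argument that a short fragment cannot reproduce; the Python sketch below only illustrates the mechanics of switching among a finite set of Candidate State observer gains using a measurable output residual. The plant matrices, the candidate gains, and the switching rule are illustrative assumptions, and no H∞ guarantee is implied.

      import numpy as np

      A = np.array([[0.9, 0.2], [0.0, 0.8]])       # plant dynamics (assumed for the demo)
      C = np.array([[1.0, 0.0]])                    # measured output (assumed)
      candidate_gains = [np.array([0.7, 0.1]),      # finite set of Candidate State observer
                         np.array([0.3, 0.6])]      # gains (illustrative values only)

      def simulate(steps=60, seed=0):
          rng = np.random.default_rng(seed)
          x = np.array([1.0, -1.0])                 # true plant state
          xhat = np.zeros(2)                        # switched observer estimate
          errors = []
          for _ in range(steps):
              y = (C @ x)[0] + 0.01 * rng.normal()  # noisy scalar measurement
              r = y - (C @ xhat)[0]                 # output residual
              # Switching law (an illustrative stand-in for the designed law): pick the
              # candidate gain whose correction best explains the current measurement.
              best = min(candidate_gains,
                         key=lambda L: abs(y - (C @ (xhat + L * r))[0]))
              xhat = A @ (xhat + best * r)          # correct with the chosen gain, then predict
              x = A @ x                             # plant evolves (no input, for brevity)
              errors.append(np.linalg.norm(x - xhat))
          return errors

      print("estimation error after the switched-observer run:", simulate()[-1])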

Weiming Hu - One of the best experts on this subject based on the ideXlab platform.

  • Distance Map of Various Weights: A new feature for adaptive object tracking
    2013 IEEE International Conference on Acoustics Speech and Signal Processing, 2013
    Co-Authors: Junliang Xing, Xiaoqin Zhang, Weiming Hu
    Abstract:

    In this paper, we propose a new feature, the Distance Map of Various Weights (DMVW), based on distances between rows' textures, to perform tracking. The proposed feature provides an effective object appearance model that is both illumination-invariant and robust to occlusion. We also develop a 2D PCA based method to effectively evaluate the new feature, and demonstrate the validity of the rows' or columns' weights in computing 2D PCA subspaces. To balance the importance of local and global information, we define a coefficient that adjusts the locality extent of the proposed feature, and we propose a new method based on the entropy of Candidate State evaluation to select the most discriminative coefficient. Experimental results on challenging video sequences demonstrate the effectiveness of our method.
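
    Two ingredients mentioned above, the 2D PCA evaluation and the entropy-based scoring of Candidate States, can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation: the DMVW feature itself is not computed (raw patches stand in for it), and the patch size, the distance, and the score normalisation are assumptions.

      import numpy as np

      def fit_2dpca(patches, n_components=4):
          # 2D PCA: eigenvectors of the patch column-covariance G = E[(A - M)^T (A - M)].
          mean = patches.mean(axis=0)
          G = sum((p - mean).T @ (p - mean) for p in patches) / len(patches)
          vals, vecs = np.linalg.eigh(G)
          return mean, vecs[:, np.argsort(vals)[::-1][:n_components]]

      def project(patch, mean, basis):
          return (patch - mean) @ basis            # rows of the patch stay intact

      def candidate_scores(template, candidates, mean, basis):
          # Similarity of each Candidate State patch to the template in 2D PCA space.
          t = project(template, mean, basis)
          d = np.array([np.linalg.norm(project(c, mean, basis) - t) for c in candidates])
          return np.exp(-d)                        # turn distances into likelihood-like scores

      def score_entropy(scores):
          # Low entropy = one candidate clearly dominates = discriminative setting.
          p = scores / scores.sum()
          return -np.sum(p * np.log(p + 1e-12))

      # Usage on synthetic data: 20 training patches, one template, 5 Candidate States.
      rng = np.random.default_rng(0)
      train = rng.normal(size=(20, 16, 16))
      mean, basis = fit_2dpca(train)
      template = train[0]
      candidates = [template + 0.1 * rng.normal(size=(16, 16)) for _ in range(5)]
      print("entropy of candidate evaluation:",
            score_entropy(candidate_scores(template, candidates, mean, basis)))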

Michalis K. Titsias - One of the best experts on this subject based on the ideXlab platform.

  • NeurIPS - Gradient-based Adaptive Markov Chain Monte Carlo
    2019
    Co-Authors: Michalis K. Titsias, Petros Dellaportas
    Abstract:

    We introduce a gradient-based learning method to automatically adapt Markov chain Monte Carlo (MCMC) proposal distributions to intractable targets. We define a maximum-entropy regularised objective function, referred to as the generalised speed measure, which can be robustly optimised over the parameters of the proposal distribution by applying stochastic gradient optimisation. An advantage of our method compared to traditional adaptive MCMC methods is that the adaptation occurs even when Candidate State values are rejected. This is a highly desirable property of any adaptation strategy, because adaptation starts in the early iterations even if the initial proposal distribution is far from the optimum. We apply the framework to learn multivariate random walk Metropolis and Metropolis-adjusted Langevin proposals with full covariance matrices, and provide empirical evidence that our method can outperform other MCMC algorithms, including Hamiltonian Monte Carlo schemes.
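
    A random-walk sketch for this abstract is given earlier in the section; the fragment below instead illustrates the other proposal family it mentions, a Metropolis-adjusted Langevin (MALA) step with a full proposal covariance. It is a generic, assumed implementation of one preconditioned MALA step, not the paper's adaptive scheme; the target, the step size, and the fixed Cholesky factor are placeholders for quantities the method would adapt.

      import numpy as np

      def mala_step(x, log_target, grad_log_target, L, rng, step=0.5):
          # One MALA step whose proposal covariance is step * L @ L.T (L lower-triangular).
          C = step * L @ L.T
          mean_fwd = x + 0.5 * C @ grad_log_target(x)
          candidate = mean_fwd + np.sqrt(step) * L @ rng.normal(size=x.size)
          mean_rev = candidate + 0.5 * C @ grad_log_target(candidate)
          Cinv = np.linalg.inv(C)
          log_q_fwd = -0.5 * (candidate - mean_fwd) @ Cinv @ (candidate - mean_fwd)
          log_q_rev = -0.5 * (x - mean_rev) @ Cinv @ (x - mean_rev)
          # Metropolis-Hastings correction for the asymmetric, gradient-informed proposal.
          log_alpha = log_target(candidate) - log_target(x) + log_q_rev - log_q_fwd
          if rng.random() < np.exp(min(0.0, log_alpha)):
              return candidate, True
          return x, False

      # Usage on a correlated Gaussian target (illustrative placeholders throughout).
      Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
      Prec = np.linalg.inv(Sigma)
      log_target = lambda x: -0.5 * x @ Prec @ x
      grad_log_target = lambda x: -Prec @ x
      rng = np.random.default_rng(1)
      x, L = np.zeros(2), np.linalg.cholesky(Sigma)   # a fixed full-covariance preconditioner
      for _ in range(100):
          x, accepted = mala_step(x, log_target, grad_log_target, L, rng)
      print("current State after 100 MALA steps:", x)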
