The Experts below are selected from a list of 315 Experts worldwide ranked by the ideXlab platform
Alexander Jung - One of the best experts on this subject based on the ideXlab platform.
-
Learning conditional Independence Structure for high-dimensional uncorrelated vector processes
2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017. Co-Authors: Nguyen Tran Quang, Alexander Jung. Abstract: We formulate and analyze a graphical model selection method for inferring the conditional Independence graph of a high-dimensional nonstationary Gaussian random process (time series) from a finite-length observation. The observed process samples are assumed to be uncorrelated over time but to have a time-varying marginal distribution. The selection method is based on testing conditional variances obtained for small subsets of process components. This allows us to cope with the high-dimensional regime, where the sample size can be (much) smaller than the process dimension. We characterize the sample size required for the proposed selection method to succeed with high probability.
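A minimal sketch of the conditional-variance test described above, on a toy Gaussian sample whose sparse precision matrix defines the true graph. The data, threshold `tau`, and helper names are all invented for illustration, and the sketch conditions on all remaining components rather than the small subsets the paper uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (made up for illustration): a sparse precision matrix
# defines the true conditional Independence graph, with edges
# (0, 1) and (2, 3).
p, n = 5, 2000
K = np.eye(p)
K[0, 1] = K[1, 0] = 0.4
K[2, 3] = K[3, 2] = 0.4
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K), size=n)

def cond_var(X, i, S):
    """Empirical variance of x_i after regressing on the subset S."""
    if not S:
        return X[:, i].var()
    A = X[:, sorted(S)]
    coef, *_ = np.linalg.lstsq(A, X[:, i], rcond=None)
    return (X[:, i] - A @ coef).var()

def has_edge(X, i, j, tau=0.05):
    # Conditioning on x_j should shrink the conditional variance of
    # x_i noticeably iff (i, j) is an edge of the graph.
    others = set(range(X.shape[1])) - {i, j}
    v0 = cond_var(X, i, others)
    v1 = cond_var(X, i, others | {j})
    return (v0 - v1) / v0 > tau

edges = sorted((i, j) for i in range(p) for j in range(i + 1, p)
               if has_edge(X, i, j))
print(edges)
```

With this sample size the variance reduction for true edges is well above the threshold, while for non-edges it is near zero, which is the separation the paper's sample-size characterization quantifies.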
-
Learning the Conditional Independence Structure of Stationary Time Series: A Multitask Learning Approach
IEEE Transactions on Signal Processing, 2015. Co-Authors: Alexander Jung. Abstract: We propose a method for inferring the conditional Independence graph (CIG) of a high-dimensional Gaussian vector time series (discrete-time process) from a finite-length observation. In contrast to existing approaches, we do not rely on a parametric process model (such as an autoregressive model) for the observed random process. Instead, we only require certain smoothness properties (in the Fourier domain) of the process. The proposed inference scheme works even for sample sizes much smaller than the number of scalar process components if the true underlying CIG is sufficiently sparse. A theoretical performance analysis provides sufficient conditions on the sample size for the new method to be asymptotically consistent. Numerical experiments validate our theoretical performance analysis and demonstrate the superior performance of our scheme compared to an existing (parametric) approach in the case of model mismatch.
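For a stationary Gaussian time series, the CIG edges correspond to nonzero entries of the inverse spectral density matrix across frequencies. A hedged numpy sketch of that correspondence, using a block-averaged periodogram with a hand-picked threshold rather than the paper's smoothness-based estimator (the process, block sizes, and threshold are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy process (made up for illustration): components 0 and 1 are
# coupled, component 2 is an independent AR(1) stream.
p, n_blocks, blen = 3, 800, 32

def simulate(T):
    x = np.zeros((T, p))
    for t in range(1, T):
        e = rng.standard_normal(p)
        x[t, 0] = 0.5 * x[t - 1, 0] + e[0]
        x[t, 1] = 0.5 * x[t - 1, 0] + e[1]   # driven by component 0
        x[t, 2] = 0.5 * x[t - 1, 2] + e[2]
    return x

X = simulate(n_blocks * blen)

# Block-averaged periodogram estimate of the spectral density matrix.
F = np.fft.rfft(X.reshape(n_blocks, blen, p), axis=1)
S = np.einsum('bfi,bfj->fij', F, F.conj()) / (n_blocks * blen)

# An edge (i, j) corresponds to a nonzero (i, j) entry of the inverse
# spectral density at some frequency; we threshold the partial
# coherence |K_ij| / sqrt(K_ii * K_jj), maximized over frequency.
score = np.zeros((p, p))
for f in range(1, S.shape[0] - 1):               # skip DC and Nyquist
    K = np.linalg.inv(S[f])
    d = np.sqrt(np.real(np.diag(K)))
    score = np.maximum(score, np.abs(K) / np.outer(d, d))

edges = {(i, j) for i in range(p) for j in range(i + 1, p)
         if score[i, j] > 0.25}
print(edges)
```

The paper's contribution is precisely the regime this sketch glosses over: guaranteeing such an estimate succeeds when the sample size is much smaller than the number of components, given a sparse CIG.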
Srinivas Aluru - One of the best experts on this subject based on the ideXlab platform.
-
Cooperative neural networks (CoNN): Exploiting prior Independence Structure for improved classification
arXiv: Learning, 2019. Co-Authors: Harsh Shrivastava, Eugene Bart, Bob Price, Srinivas Aluru. Abstract: We propose a new approach, called cooperative neural networks (CoNN), which uses a set of cooperatively trained neural networks to capture latent representations that exploit prior given Independence Structure. The model is more flexible than traditional graphical models based on exponential family distributions, but incorporates more domain-specific prior Structure than traditional deep networks or variational autoencoders. The framework is very general and can be used to exploit the Independence Structure of any graphical model. We illustrate the technique by showing that we can transfer the Independence Structure of the popular Latent Dirichlet Allocation (LDA) model to a cooperative neural network, CoNN-sLDA. Empirical evaluation of CoNN-sLDA on supervised text classification tasks demonstrates that the theoretical advantages of prior Independence Structure can be realized in practice: we demonstrate a 23% reduction in error on the challenging MultiSent data set compared to the state of the art.
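The cooperative-training idea can be illustrated at a much smaller scale than CoNN-sLDA: two tiny "networks" each approximate one factor of a mean-field posterior, and each one's regression target depends on the other's current output, so they are trained cooperatively rather than independently. This is a conceptual sketch only; the toy model, coupling constant, and linear "networks" below are made up and far simpler than the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(200)        # observations (invented data)
c = 0.5                             # coupling between the two latents

# For this toy quadratic model the mean-field fixed point is
#   m1 = x - c * m2,   m2 = x - c * m1   =>   m1 = m2 = x / (1 + c).
# Each "network" is the linear map f_i(x) = w_i * x; cooperative
# training alternately refits each one to the target implied by the
# other's current output.
w1, w2 = 0.0, 0.0
for _ in range(50):
    t1 = x - c * (w2 * x)           # target for network 1
    w1 = (x @ t1) / (x @ x)         # least-squares refit
    t2 = x - c * (w1 * x)           # target for network 2
    w2 = (x @ t2) / (x @ x)

print(round(w1, 4), round(w2, 4), round(1 / (1 + c), 4))
```

The alternating refits converge to the analytic mean-field solution, which is the behavior CoNN generalizes by replacing each closed-form factor update with a trained network.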
-
Cooperative neural networks (CoNN): Exploiting prior Independence Structure for improved classification
Neural Information Processing Systems, 2018. Co-Authors: Harsh Shrivastava, Eugene Bart, Bob Price, Srinivas Aluru. Abstract: We propose a new approach, called cooperative neural networks (CoNN), which uses a set of cooperatively trained neural networks to capture latent representations that exploit prior given Independence Structure. The model is more flexible than traditional graphical models based on exponential family distributions, but incorporates more domain-specific prior Structure than traditional deep networks or variational autoencoders. The framework is very general and can be used to exploit the Independence Structure of any graphical model. We illustrate the technique by showing that we can transfer the Independence Structure of the popular Latent Dirichlet Allocation (LDA) model to a cooperative neural network, CoNN-sLDA. Empirical evaluation of CoNN-sLDA on supervised text classification tasks demonstrates that the theoretical advantages of prior Independence Structure can be realized in practice: we demonstrate a 23% reduction in error on the challenging MultiSent data set compared to the state of the art.
Les Atlas - One of the best experts on this subject based on the ideXlab platform.
-
Constrained robust submodular sensor selection with application to multistatic sonar arrays
IET Radar, Sonar & Navigation, 2017. Co-Authors: Thomas Powers, David W. Krout, Jeff Bilmes, Les Atlas. Abstract: The authors develop a framework to select a subset of sensors from a field in which the sensors have an ingrained Independence Structure. Given an arbitrary Independence pattern, the authors construct a graph that denotes pairwise Independence between sensors, which means those sensors may operate simultaneously without interfering. The set of all fully-connected subgraphs (cliques) of this Independence graph forms the independent sets of matroids over which the authors maximise the average and minimum of a set of submodular objective functions. The average case is submodular, so it can be approximated. The minimum case is both non-submodular and inapproximable. The authors propose a novel algorithm, GENSAT, that exploits submodularity and, as a result, returns a near-optimal solution with approximation guarantees on a relaxed problem that are within a small factor of the average-case scenario. The authors apply this framework to ping sequence optimisation for active multistatic sonar arrays by maximising sensor coverage for average- and minimum-case scenarios, and derive lower bounds on the minimum probability of detection for a fractional number of targets. In these ping sequence optimisation simulations, GENSAT exceeds the fractional lower bounds and reaches near-optimal performance, and submodular function optimisation vastly outperforms traditional approaches.
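A hedged sketch of clique-constrained greedy selection for a monotone submodular coverage objective. The sensor graph, coverage sets, and budget are invented for illustration; the papers build matroids from the cliques, whereas this sketch simply keeps the selected set a clique at every greedy step:

```python
import itertools

# Invented example data: edges of the sensor Independence graph and
# the coverage set of each sensor.
independent = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (2, 3)]}
covers = {0: {'a', 'b', 'e'}, 1: {'b', 'c'}, 2: {'d'}, 3: {'a', 'd'}}

def coverage(S):
    # Monotone submodular set-cover objective.
    return len(set().union(*(covers[s] for s in S))) if S else 0

def is_clique(S):
    # Feasible selections are cliques of the Independence graph,
    # i.e. sets of sensors that may operate simultaneously.
    return all(frozenset(pair) in independent
               for pair in itertools.combinations(S, 2))

def greedy(k):
    S = set()
    for _ in range(k):
        gains = [(coverage(S | {s}) - coverage(S), s)
                 for s in covers if s not in S and is_clique(S | {s})]
        if not gains or max(gains)[0] <= 0:
            break
        S.add(max(gains)[1])
    return S

S = greedy(3)
print(S, coverage(S))
```

Because the objective is monotone submodular, greedy selection under a matroid constraint carries the classic constant-factor approximation guarantee, which is the property these papers exploit.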
-
Constrained robust submodular sensor selection with applications to multistatic sonar arrays
2016 19th International Conference on Information Fusion (FUSION), 2016. Co-Authors: Thomas Powers, David W. Krout, Jeff Bilmes, Les Atlas. Abstract: We develop a framework to select a subset of sensors from a field in which the sensors have an ingrained Independence Structure. Given an arbitrary Independence pattern, we construct a graph that denotes pairwise Independence between sensors, which means those sensors may operate simultaneously. The set of all fully-connected subgraphs (cliques) of this Independence graph forms the independent sets of a matroid over which we maximize the minimum of a set of submodular objective functions. We propose a novel algorithm called MatSat that exploits submodularity and, as a result, returns a near-optimal solution with approximation guarantees that are within a small factor of the average-case scenario. We apply this framework to ping sequence optimization for active multistatic sonar arrays by maximizing sensor coverage, and derive lower bounds on the minimum probability of detection for a fractional number of targets. In these ping sequence optimization simulations, MatSat exceeds the fractional lower bounds and reaches near-optimal performance.
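The worst-case objective can be illustrated the same way. The minimum of submodular functions is not itself submodular, so plain greedy loses its approximation guarantee; MatSat's actual mechanism is more involved, and this sketch (with invented coverage sets and no clique constraint) only illustrates the max-min objective:

```python
# Two invented coverage objectives, one per "scenario"; we want a
# selection that is good under the worse of the two.
covers_a = {0: {'x'}, 1: {'y'}, 2: {'x', 'y'}}
covers_b = {0: {'u', 'v'}, 1: {'u'}, 2: set()}

def f(cov, S):
    return len(set().union(*(cov[s] for s in S))) if S else 0

def robust_greedy(k):
    # Greedily grow S by the sensor that most improves the WORST of
    # the two coverage objectives (no guarantee: min is not submodular).
    S = set()
    for _ in range(k):
        best = max((s for s in covers_a if s not in S),
                   key=lambda s: min(f(covers_a, S | {s}),
                                     f(covers_b, S | {s})),
                   default=None)
        if best is None:
            break
        S.add(best)
    return S

S = robust_greedy(2)
print(S, min(f(covers_a, S), f(covers_b, S)))
```

Note that picking sensor 2 alone would maximize objective A but leave objective B at zero; the max-min criterion avoids exactly that failure mode.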
-
Sensor selection from Independence graphs using submodularity
2015 18th International Conference on Information Fusion (FUSION), 2015. Co-Authors: Thomas Powers, David W. Krout, Les Atlas. Abstract: In this paper, we develop a framework to select a subset of sensors from a field in which the sensors have an ingrained Independence Structure. Given an arbitrary Independence pattern, we construct a graph that denotes pairwise Independence between sensors, which means those sensors can operate simultaneously. The set of all fully-connected subgraphs (cliques) of this Independence graph forms a set of matroid constraints over which we maximize a submodular objective function. Since the objective function is submodular, the algorithm returns a near-optimal solution with approximation guarantees. We also argue that this framework generalizes to any network with a defined Independence Structure between sensors and intuitively models problems where the goal is to gather information in a complex environment. We apply this framework to ping sequence optimization for active multistatic sonar arrays by maximizing sensor coverage, and not only achieve significant performance gains compared to conventional round-robin sensor selection but also approach optimal performance.
Harsh Shrivastava - One of the best experts on this subject based on the ideXlab platform.
-
Cooperative neural networks (CoNN): Exploiting prior Independence Structure for improved classification
arXiv: Learning, 2019. Co-Authors: Harsh Shrivastava, Eugene Bart, Bob Price, Srinivas Aluru. Abstract: We propose a new approach, called cooperative neural networks (CoNN), which uses a set of cooperatively trained neural networks to capture latent representations that exploit prior given Independence Structure. The model is more flexible than traditional graphical models based on exponential family distributions, but incorporates more domain-specific prior Structure than traditional deep networks or variational autoencoders. The framework is very general and can be used to exploit the Independence Structure of any graphical model. We illustrate the technique by showing that we can transfer the Independence Structure of the popular Latent Dirichlet Allocation (LDA) model to a cooperative neural network, CoNN-sLDA. Empirical evaluation of CoNN-sLDA on supervised text classification tasks demonstrates that the theoretical advantages of prior Independence Structure can be realized in practice: we demonstrate a 23% reduction in error on the challenging MultiSent data set compared to the state of the art.
-
Cooperative neural networks (CoNN): Exploiting prior Independence Structure for improved classification
Neural Information Processing Systems, 2018. Co-Authors: Harsh Shrivastava, Eugene Bart, Bob Price, Srinivas Aluru. Abstract: We propose a new approach, called cooperative neural networks (CoNN), which uses a set of cooperatively trained neural networks to capture latent representations that exploit prior given Independence Structure. The model is more flexible than traditional graphical models based on exponential family distributions, but incorporates more domain-specific prior Structure than traditional deep networks or variational autoencoders. The framework is very general and can be used to exploit the Independence Structure of any graphical model. We illustrate the technique by showing that we can transfer the Independence Structure of the popular Latent Dirichlet Allocation (LDA) model to a cooperative neural network, CoNN-sLDA. Empirical evaluation of CoNN-sLDA on supervised text classification tasks demonstrates that the theoretical advantages of prior Independence Structure can be realized in practice: we demonstrate a 23% reduction in error on the challenging MultiSent data set compared to the state of the art.
Nguyen Tran Quang - One of the best experts on this subject based on the ideXlab platform.
-
Learning conditional Independence Structure for high-dimensional uncorrelated vector processes
2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017. Co-Authors: Nguyen Tran Quang, Alexander Jung. Abstract: We formulate and analyze a graphical model selection method for inferring the conditional Independence graph of a high-dimensional nonstationary Gaussian random process (time series) from a finite-length observation. The observed process samples are assumed to be uncorrelated over time but to have a time-varying marginal distribution. The selection method is based on testing conditional variances obtained for small subsets of process components. This allows us to cope with the high-dimensional regime, where the sample size can be (much) smaller than the process dimension. We characterize the sample size required for the proposed selection method to succeed with high probability.