Proximity Operator

The Experts below are selected from a list of 186 Experts worldwide, ranked by the ideXlab platform

Lixin Shen - One of the best experts on this subject based on the ideXlab platform.

  • The Proximity Operator of the Log-Sum Penalty
    arXiv: Optimization and Control, 2021
    Co-Authors: Ashley Prater-Bennette, Lixin Shen, Erin E. Tripp
    Abstract:

    The log-sum penalty is often adopted as a replacement for the $\ell_0$ pseudo-norm in compressive sensing and low-rank optimization. The hard-thresholding Operator, i.e., the Proximity Operator of the $\ell_0$ penalty, plays an essential role in applications; similarly, we require an efficient method for evaluating the Proximity Operator of the log-sum penalty. Due to the nonconvexity of this function, its Proximity Operator is commonly computed through the iteratively reweighted $\ell_1$ method, which replaces the log-sum term with its first-order approximation. This paper reports that the Proximity Operator of the log-sum penalty actually has an explicit expression. With it, we show that the iteratively reweighted $\ell_1$ solution disagrees with the true Proximity Operator of the log-sum penalty in certain regions. As a by-product, the iteratively reweighted $\ell_1$ solution is precisely characterized in terms of the chosen initialization. We also give the explicit form of the Proximity Operator for the composition of the log-sum penalty with the singular value function, as seen in low-rank applications. These results should be useful in the development of efficient and accurate algorithms for optimization problems involving the log-sum penalty.
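As a sketch of how such an explicit evaluation can proceed (the candidate-comparison below is an illustration, not the authors' formula), the scalar prox of the log-sum penalty λ·log(1 + |x|/ε) reduces to comparing the objective at zero with the nonnegative roots of a quadratic obtained from the stationarity condition:

```python
import numpy as np

def prox_log_sum(y, lam, eps):
    """Proximity operator of x -> lam * log(1 + |x|/eps), applied elementwise.

    Solves min_x 0.5*(x - y)**2 + lam*log(1 + |x|/eps) by comparing the
    objective at x = 0 with the real nonnegative stationary points, which
    are roots of x**2 + (eps - |y|)*x + (lam - eps*|y|) = 0.
    (An illustrative candidate search, not the paper's closed form.)"""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    for i, yi in np.ndenumerate(y):
        s, a = np.sign(yi), abs(yi)
        obj = lambda x: 0.5 * (x - a) ** 2 + lam * np.log1p(x / eps)
        cands = [0.0]
        disc = (a + eps) ** 2 - 4.0 * lam     # discriminant of the quadratic
        if disc >= 0.0:
            for r in ((a - eps + np.sqrt(disc)) / 2.0,
                      (a - eps - np.sqrt(disc)) / 2.0):
                if r > 0.0:
                    cands.append(r)
        out[i] = s * min(cands, key=obj)      # odd symmetry handles y < 0
    return out
```

For large λ the discriminant becomes negative and the prox collapses to zero, the behavior the iteratively reweighted ℓ1 scheme can miss depending on its initialization.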

  • Multiplicative Noise Removal in Imaging: An Exp-Model and Its Fixed-Point Proximity Algorithm
    Applied and Computational Harmonic Analysis, 2016
    Co-Authors: Lixin Shen
    Abstract:

    We propose a variational model for restoration of images corrupted by multiplicative noise. The proposed model, formulated in the logarithm transform domain of the desirable images, consists of a data fitting term, a quadratic term, and a total variation regularizer. The data fitting term results directly from the presence of the multiplicative noise and the quadratic term reflects the statistics of the noise. We show that the proposed model is strictly convex under a mild condition. The solution of the model is then characterized in terms of the fixed point of a nonlinear map described by the Proximity Operator of a function involved in the model. Based on this characterization, we present a fixed-point Proximity algorithm for solving the model and analyze its convergence. Our numerical results indicate that the proposed model compares favorably to several existing state-of-the-art models, with better results in terms of the peak signal-to-noise ratio of the denoised images and the CPU time consumed.

  • Computing the Proximity Operator of the ℓp Norm with 0 < p < 1
    IET Signal Processing, 2016
    Co-Authors: Feishe Chen, Lixin Shen, Bruce W Suter
    Abstract:

    Sparse modelling with the ℓp norm, 0 ≤ p ≤ 1, requires the availability of the Proximity Operator of the ℓp norm. The Proximity Operators of the ℓ0 and ℓ1 norms are the well-known hard- and soft-thresholding estimators, respectively. In this study, the authors give a complete study of the properties of the Proximity Operator of the ℓp norm. Based on these properties, explicit formulas of the Proximity Operators of the ℓ1/2 norm and ℓ2/3 norm are derived with simple proofs; for other values of p, an iterative Newton's method is developed to compute the Proximity Operator of the ℓp norm by fully exploiting the available Proximity Operators of the ℓ0, ℓ1/2, ℓ2/3, and ℓ1 norms. As applications, the Proximity Operator of the ℓp norm with 0 ≤ p ≤ 1 is applied to ℓp-regularisation for compressive sensing and image restoration.
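The two thresholding estimators mentioned above, the end points p = 1 and p = 0 of the family, have the following standard elementwise forms (a minimal numpy sketch):

```python
import numpy as np

def prox_l1(y, lam):
    """Soft thresholding: the Proximity Operator of lam * ||x||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def prox_l0(y, lam):
    """Hard thresholding: the Proximity Operator of lam * ||x||_0.
    Keeps entries with |y_i| > sqrt(2*lam) and zeroes the rest."""
    return np.where(np.abs(y) > np.sqrt(2.0 * lam), y, 0.0)
```

Soft thresholding shrinks the surviving entries by lam, whereas hard thresholding leaves them untouched; the intermediate ℓ1/2 and ℓ2/3 formulas interpolate between these two behaviors.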

  • Finding Dantzig Selectors with a Proximity Operator Based Fixed-Point Algorithm
    Computational Statistics & Data Analysis, 2015
    Co-Authors: Ashley Prater, Lixin Shen, Bruce W Suter
    Abstract:

    A simple iterative method for finding the Dantzig selector, designed for linear regression problems, is introduced. The method consists of two stages. The first stage approximates the Dantzig selector through a fixed-point formulation of solutions to the Dantzig selector problem; the second stage constructs a new estimator by regressing data onto the support of the approximated Dantzig selector. The proposed method is compared to an alternating direction method. The results of numerical simulations using both the proposed method and the alternating direction method on synthetic and real-world data sets are presented. The numerical simulations demonstrate that the two methods produce results of similar quality; however, the proposed method tends to be significantly faster.
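The two-stage structure can be sketched as follows. Note that stage one below uses a generic iterative soft-thresholding pass rather than the paper's specific fixed-point map for the Dantzig selector, so this is illustrative only:

```python
import numpy as np

def two_stage_estimator(A, b, lam=0.1, step=None, n_iter=500):
    """Schematic two-stage estimator: (1) an iterative soft-thresholding
    pass to approximate a sparse solution, (2) least squares restricted
    to the recovered support. (Illustrative; the paper's fixed-point map
    for the Dantzig selector differs in its details.)"""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_iter):                  # stage 1: sparse approximation
        v = x - step * (A.T @ (A @ x - b))   # gradient step on the data fit
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    S = np.flatnonzero(np.abs(x) > 1e-8)     # estimated support
    x_hat = np.zeros(n)
    if S.size:                               # stage 2: refit on the support
        x_hat[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]
    return x_hat
```

The refitting stage removes the shrinkage bias of stage one: on noiseless data whose true support is recovered, the restricted least-squares step returns the exact coefficients.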

  • Finding Dantzig Selectors with a Proximity Operator Based Fixed-Point Algorithm
    arXiv: Numerical Analysis, 2015
    Co-Authors: Ashley Prater, Lixin Shen, Bruce W Suter
    Abstract:

    In this paper, we study a simple iterative method for finding the Dantzig selector, which was designed for linear regression problems. The method consists of two main stages. The first stage is to approximate the Dantzig selector through a fixed-point formulation of solutions to the Dantzig selector problem. The second stage is to construct a new estimator by regressing data onto the support of the approximated Dantzig selector. We compare our method to an alternating direction method, and present the results of numerical simulations using both the proposed method and the alternating direction method on synthetic and real data sets. The numerical simulations demonstrate that the two methods produce results of similar quality; however, the proposed method tends to be significantly faster.

Ryo Hayakawa - One of the best experts on this subject based on the ideXlab platform.

  • Error Analysis of Douglas-Rachford Algorithm for Linear Inverse Problems: Asymptotics of Proximity Operator for Squared Loss
    arXiv: Signal Processing, 2021
    Co-Authors: Ryo Hayakawa
    Abstract:

    Proximal splitting-based convex optimization is a promising approach to linear inverse problems because we can use some prior knowledge of the unknown variables explicitly. In this paper, we first analyze the asymptotic property of the Proximity Operator for the squared loss function, which appears in the update equations of some proximal splitting methods for linear inverse problems. The analysis shows that the output of the Proximity Operator can be characterized with a scalar random variable in the large system limit. Moreover, we investigate the asymptotic behavior of the Douglas-Rachford algorithm, which is one of the famous proximal splitting methods. From the asymptotic result, we can predict the evolution of the mean-square error (MSE) in the algorithm for large-scale linear inverse problems. Simulation results demonstrate that the MSE performance of the Douglas-Rachford algorithm can be well predicted by the analytical result in compressed sensing with $\ell_{1}$ optimization.
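The Proximity Operator of the squared loss f(x) = 0.5*||Ax - b||^2 has the closed form prox_{γf}(v) = (I + γAᵀA)⁻¹(v + γAᵀb), which makes a Douglas-Rachford sketch for the ℓ1-regularized problem straightforward (a minimal finite-size illustration, not the paper's large-system analysis):

```python
import numpy as np

def douglas_rachford_l1(A, b, lam=0.1, gamma=1.0, n_iter=300):
    """Douglas-Rachford iteration for min 0.5*||Ax - b||^2 + lam*||x||_1.
    The prox of the squared loss is the linear solve
    prox_{gamma*f}(v) = (I + gamma*A^T A)^{-1} (v + gamma*A^T b)."""
    m, n = A.shape
    M = np.eye(n) + gamma * (A.T @ A)
    Atb = A.T @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    z = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, z + gamma * Atb)   # prox of the squared loss
        y = soft(2.0 * x - z, gamma * lam)        # prox of lam*||.||_1
        z = z + y - x                             # Douglas-Rachford update
    return x
```

In practice one would factor M once (e.g., a Cholesky decomposition) and reuse it across iterations, since M does not change.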

  • Discrete-Valued Vector Reconstruction by Optimization with Sum of Sparse Regularizers
    European Signal Processing Conference, 2019
    Co-Authors: Ryo Hayakawa, Kazunori Hayashi
    Abstract:

    In this paper, we propose a possibly nonconvex optimization problem to reconstruct a discrete-valued vector from its underdetermined linear measurements. The proposed sum-of-sparse-regularizers (SSR) optimization uses the sum of sparse regularizers as a regularizer for the discrete-valued vector. We also propose two proximal splitting algorithms for the SSR optimization problem, on the basis of the alternating direction method of multipliers (ADMM) and primal-dual splitting (PDS). The ADMM-based algorithm can achieve faster convergence, whereas the PDS-based algorithm does not require the computation of any inverse matrix. Moreover, we extend the ADMM-based approach to the reconstruction of complex discrete-valued vectors. Note that the proposed approach can use any sparse regularizer as long as its Proximity Operator can be efficiently computed. Simulation results show that the proposed algorithms with nonconvex regularizers can achieve good reconstruction performance.
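As an illustration of a sparse regularizer with an easily computed prox, consider a single ℓ1 term centered at a candidate symbol c (a hypothetical instance of the kind of term such a sum could be built from, not the paper's exact regularizer): its prox is a shifted soft threshold that pulls entries exactly onto the symbol value.

```python
import numpy as np

def prox_shifted_l1(y, lam, c):
    """Prox of x -> lam * ||x - c*1||_1: soft thresholding toward the
    candidate symbol c, which sets entries of the output exactly to c
    whenever |y_i - c| <= lam."""
    d = y - c
    return c + np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
```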

Bruce W Suter - One of the best experts on this subject based on the ideXlab platform.

  • Computing the Proximity Operator of the ℓp Norm with 0 < p < 1
    IET Signal Processing, 2016
    Co-Authors: Feishe Chen, Lixin Shen, Bruce W Suter
    Abstract:

    Sparse modelling with the ℓp norm, 0 ≤ p ≤ 1, requires the availability of the Proximity Operator of the ℓp norm. The Proximity Operators of the ℓ0 and ℓ1 norms are the well-known hard- and soft-thresholding estimators, respectively. In this study, the authors give a complete study of the properties of the Proximity Operator of the ℓp norm. Based on these properties, explicit formulas of the Proximity Operators of the ℓ1/2 norm and ℓ2/3 norm are derived with simple proofs; for other values of p, an iterative Newton's method is developed to compute the Proximity Operator of the ℓp norm by fully exploiting the available Proximity Operators of the ℓ0, ℓ1/2, ℓ2/3, and ℓ1 norms. As applications, the Proximity Operator of the ℓp norm with 0 ≤ p ≤ 1 is applied to ℓp-regularisation for compressive sensing and image restoration.

  • Finding Dantzig Selectors with a Proximity Operator Based Fixed-Point Algorithm
    Computational Statistics & Data Analysis, 2015
    Co-Authors: Ashley Prater, Lixin Shen, Bruce W Suter
    Abstract:

    A simple iterative method for finding the Dantzig selector, designed for linear regression problems, is introduced. The method consists of two stages. The first stage approximates the Dantzig selector through a fixed-point formulation of solutions to the Dantzig selector problem; the second stage constructs a new estimator by regressing data onto the support of the approximated Dantzig selector. The proposed method is compared to an alternating direction method. The results of numerical simulations using both the proposed method and the alternating direction method on synthetic and real-world data sets are presented. The numerical simulations demonstrate that the two methods produce results of similar quality; however, the proposed method tends to be significantly faster.

  • Finding Dantzig Selectors with a Proximity Operator Based Fixed-Point Algorithm
    arXiv: Numerical Analysis, 2015
    Co-Authors: Ashley Prater, Lixin Shen, Bruce W Suter
    Abstract:

    In this paper, we study a simple iterative method for finding the Dantzig selector, which was designed for linear regression problems. The method consists of two main stages. The first stage is to approximate the Dantzig selector through a fixed-point formulation of solutions to the Dantzig selector problem. The second stage is to construct a new estimator by regressing data onto the support of the approximated Dantzig selector. We compare our method to an alternating direction method, and present the results of numerical simulations using both the proposed method and the alternating direction method on synthetic and real data sets. The numerical simulations demonstrate that the two methods produce results of similar quality; however, the proposed method tends to be significantly faster.

Jean-christophe Pesquet - One of the best experts on this subject based on the ideXlab platform.

  • Deep Unfolding of a Proximal Interior Point Method for Image Restoration
    Inverse Problems, 2019
    Co-Authors: Carla Bertocchi, Jean-christophe Pesquet, Emilie Chouzenoux, Marie-caroline Corbineau, Marco Prato
    Abstract:

    Variational methods are widely applied to ill-posed inverse problems for they have the ability to embed prior knowledge about the solution. However, the level of performance of these methods significantly depends on a set of parameters, which can be estimated through computationally expensive and time-consuming methods. In contrast, deep learning offers very generic and efficient architectures, at the expense of explainability, since it is often used as a black box, without any fine control over its output. Deep unfolding provides a convenient approach to combine variational-based and deep learning approaches. Starting from a variational formulation for image restoration, we develop iRestNet, a neural network architecture obtained by unfolding a proximal interior point algorithm. Hard constraints, encoding desirable properties for the restored image, are incorporated into the network thanks to a logarithmic barrier, while the barrier parameter, the stepsize, and the penalization weight are learned by the network. We derive explicit expressions for the gradient of the Proximity Operator for various choices of constraints, which allows training iRestNet with gradient descent and backpropagation. In addition, we provide theoretical results regarding the stability of the network for a common inverse problem example. Numerical experiments on image deblurring problems show that the proposed approach compares favorably with both state-of-the-art variational and machine learning methods in terms of image quality.
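The unfolding idea can be illustrated with a toy forward pass in which each "layer" is one proximal step whose stepsize stands in for a learned parameter (a sketch of the general principle only; iRestNet's layers unfold a proximal interior point step with a logarithmic barrier, which is not reproduced here):

```python
import numpy as np

def unfolded_forward(H, y, steps, lam=0.05):
    """Toy deep-unfolding forward pass: each layer is one proximal
    gradient step on 0.5*||Hx - y||^2 + lam*||x||_1, with a per-layer
    stepsize playing the role of a learned parameter. Training would
    backpropagate through these layers to fit the stepsizes."""
    x = np.zeros(H.shape[1])
    for s in steps:                        # one layer per learned stepsize
        v = x - s * (H.T @ (H @ x - y))    # gradient step on the data fit
        x = np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)  # prox layer
    return x
```

The key point of unfolding is that the iteration count is fixed in advance (the network depth), and the algorithm's tuning parameters become trainable weights.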

  • Dual Block-Coordinate Forward-Backward Algorithm with Application to Deconvolution and Deinterlacing of Video Sequences
    Journal of Mathematical Imaging and Vision, 2017
    Co-Authors: F. Abboud, Jean-christophe Pesquet, Emilie Chouzenoux, J.-h. Chenot, L. Laborelli
    Abstract:

    Optimization methods play a central role in the solution of a wide array of problems encountered in various application fields, such as signal and image processing. Especially when the problems are highly dimensional, proximal methods have shown their efficiency through their capability to deal with composite, possibly nonsmooth objective functions. The cornerstone of these approaches is the Proximity Operator, which has become a quite popular tool in optimization. In this work, we propose new dual forward-backward formulations for computing the Proximity Operator of a sum of convex functions involving linear Operators. The proposed algorithms are accelerated thanks to the introduction of a block-coordinate strategy combined with a preconditioning technique. Numerical simulations emphasize the good performance of our approach for the problem of jointly deconvolving and deinterlacing video sequences.
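As a minimal instance of such a dual forward-backward computation (illustrative only; not the block-coordinate, preconditioned algorithm of the paper), the prox of a one-dimensional total-variation term lam*||Dx||_1 can be evaluated by running forward-backward on the dual, which amounts to projecting a dual variable onto an ℓ∞ ball after each gradient step:

```python
import numpy as np

def prox_tv1d(y, lam, gamma=0.25, n_iter=2000):
    """Prox of x -> lam*||Dx||_1 (1D total variation, D = finite
    differences) via dual forward-backward: iterate
        u <- clip(u + gamma * D(y - D^T u), -lam, lam)
    and recover x = y - D^T u. gamma=0.25 satisfies gamma < 2/||D||^2."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n difference operator
    u = np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.clip(u + gamma * (D @ (y - D.T @ u)), -lam, lam)
    return y - D.T @ u
```

The dual update only needs applications of D and Dᵀ plus a cheap clipping step, which is exactly why dual formulations are attractive when the prox of g∘D has no closed form.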

  • A dual block coordinate proximal algorithm with application to deconvolution of interlaced video sequences
    2015 IEEE International Conference on Image Processing (ICIP), 2015
    Co-Authors: F. Abboud, Jean-christophe Pesquet, Emilie Chouzenoux, J.-h. Chenot, L. Laborelli
    Abstract:

    Inverse problems encountered in video processing often require minimizing criteria involving a high number of variables. Among available optimization techniques, proximal methods have shown their efficiency in solving large-scale, possibly nonsmooth problems. When some of the Proximity Operators involved in these methods do not have closed-form expressions, they may constitute a bottleneck in terms of computational complexity and memory requirements. In this paper, we address this problem and propose accelerated techniques for solving it. A new dual block-coordinate forward-backward algorithm computing the Proximity Operator of a sum of convex functions composed with linear Operators is proposed and theoretically analyzed. The numerical performance of the approach is assessed through an application to deconvolution and super-resolution of interlaced video sequences.

  • Proximal splitting methods in signal processing
    Springer Optimization and Its Applications, 2011
    Co-Authors: Patrick Louis Combettes, Jean-christophe Pesquet
    Abstract:

    The Proximity Operator of a convex function is a natural extension of the notion of a projection Operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of signal processing, where it has become increasingly important. In this paper, we review the basic properties of Proximity Operators which are relevant to signal processing and present optimization methods based on these Operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
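The projection viewpoint, and the Moreau decomposition that links a function's prox to that of its conjugate, can be checked numerically (a small sketch using the standard pair f = ||.||_1 and f* = the indicator of the unit ℓ∞ ball):

```python
import numpy as np

def prox_l1(y, lam=1.0):
    """Soft thresholding: the Proximity Operator of lam*||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def proj_linf_ball(y, radius=1.0):
    """Projection onto {u : ||u||_inf <= radius}, i.e. the Proximity
    Operator of the indicator of that set -- the sense in which the prox
    generalizes the projection onto a convex set."""
    return np.clip(y, -radius, radius)

# Moreau decomposition: prox_f(y) + prox_{f*}(y) = y, with f = ||.||_1
# and f* the indicator of the unit l-infinity ball.
```

This identity is what lets proximal splitting methods trade a hard primal prox for an easy dual one, as several of the dual algorithms listed on this page do.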

  • Proximal Splitting Methods in Signal Processing
    Fixed-Point Algorithms for Inverse Problems in Science and Engineering, 2011, ISBN 978-1-4419-9568-1, pp. 185-212
    Co-Authors: Patrick Louis Combettes, Jean-christophe Pesquet
    Abstract:

    The Proximity Operator of a convex function is a natural extension of the notion of a projection Operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of Proximity Operators which are relevant to signal processing and present optimization methods based on these Operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.

Pesquet Jean-Christophe - One of the best experts on this subject based on the ideXlab platform.

  • Distributed Algorithms for Scalable Proximity Operator Computation and Application to Video Denoising
    HAL CCSD, 2020
    Co-Authors: Abboud Feriel, Pesquet Jean-Christophe, Chouzenoux Emilie, Talbot Hugues
    Abstract:

    Optimization problems arising in signal and image processing involve an increasingly large number of variables. In addition to the curse of dimensionality, another difficulty to overcome is that the cost function usually reads as the sum of several loss/regularization terms, not necessarily smooth and possibly composed with large-size linear Operators. Proximal splitting approaches are fundamental tools to address such problems, with demonstrated efficiency in many application fields. In this paper, we present a new distributed algorithm for computing the Proximity Operator of a sum of not necessarily smooth convex functions composed with arbitrary linear Operators. Our algorithm relies on a primal-dual splitting strategy and benefits from established convergence guarantees. Each involved function is associated with a node of a hypergraph, with the ability to communicate with neighboring nodes sharing the same hyperedge. Thanks to this structure, our method can be efficiently implemented on modern parallel computing architectures, allowing computations to be distributed across different nodes or machines while limiting the need for synchronization steps. Its good numerical performance and scalability properties are illustrated on a problem of video sequence denoising.

  • Deep Unfolding of a Proximal Interior Point Method for Image Restoration
    IOP Publishing, 2020
    Co-Authors: Bertocchi Carla, Pesquet Jean-Christophe, Chouzenoux Emilie, Corbineau Marie-caroline, Prato Marco
    Abstract:

    Variational methods are widely applied to ill-posed inverse problems for they have the ability to embed prior knowledge about the solution. However, the level of performance of these methods significantly depends on a set of parameters, which can be estimated through computationally expensive and time-consuming methods. In contrast, deep learning offers very generic and efficient architectures, at the expense of explainability, since it is often used as a black box, without any fine control over its output. Deep unfolding provides a convenient approach to combine variational-based and deep learning approaches. Starting from a variational formulation for image restoration, we develop iRestNet, a neural network architecture obtained by unfolding a proximal interior point algorithm. Hard constraints, encoding desirable properties for the restored image, are incorporated into the network thanks to a logarithmic barrier, while the barrier parameter, the stepsize, and the penalization weight are learned by the network. We derive explicit expressions for the gradient of the Proximity Operator for various choices of constraints, which allows training iRestNet with gradient descent and backpropagation. In addition, we provide theoretical results regarding the stability of the network for a common inverse problem example. Numerical experiments on image deblurring problems show that the proposed approach compares favorably with both state-of-the-art variational and machine learning methods in terms of image quality.

  • Learned Image Deblurring by Unfolding a Proximal Interior Point Algorithm
    Institute of Electrical and Electronics Engineers (IEEE), 2019
    Co-Authors: Corbineau Marie-caroline, Chouzenoux Emilie, Bertocchi Carla, Prato Marco, Pesquet Jean-Christophe
    Abstract:

    Image restoration is frequently addressed by resorting to variational methods which account for some prior knowledge about the solution. The success of these methods, however, heavily depends on the estimation of a set of hyperparameters. Deep learning architectures are, on the contrary, very generic and efficient, but they offer limited control over their output. In this paper, we present iRestNet, a neural network architecture which combines the benefits of both approaches. iRestNet is obtained by unfolding a proximal interior point algorithm. This enables enforcing hard constraints on the pixel range of the restored image thanks to a logarithmic barrier strategy, without requiring any parameter setting. Explicit expressions for the involved Proximity Operator, and its differential, are derived, which allows training iRestNet with gradient descent and backpropagation. Numerical experiments on image deblurring show that the proposed approach provides good image quality results compared to state-of-the-art variational and machine learning methods.

  • Dual Block Coordinate Forward-Backward Algorithm with Application to Deconvolution and Deinterlacing of Video Sequences
    Springer Science and Business Media LLC, 2017
    Co-Authors: Abboud Feriel, Pesquet Jean-Christophe, Chouzenoux Emilie, Chenot Jean-hugues, Laborelli Louis
    Abstract:

    Optimization methods play a central role in the solution of a wide array of problems encountered in various application fields, such as signal and image processing. Especially when the problems are highly dimensional, proximal methods have shown their efficiency through their capability to deal with composite, possibly nonsmooth objective functions. The cornerstone of these approaches is the Proximity Operator, which has become a quite popular tool in optimization. In this work, we propose new dual forward-backward formulations for computing the Proximity Operator of a sum of convex functions involving linear Operators. The proposed algorithms are accelerated thanks to the introduction of a block-coordinate strategy combined with a preconditioning technique. Numerical simulations emphasize the good performance of our approach for the problem of jointly deconvolving and deinterlacing video sequences.

  • A proximal decomposition method for solving convex variational inverse problems
    Inverse Problems, 2015
    Co-Authors: Combettes Patrick Louis, Pesquet Jean-Christophe
    Abstract:

    A broad range of inverse problems can be abstracted into the problem of minimizing the sum of several convex functions in a Hilbert space. We propose a proximal decomposition algorithm for solving this problem with an arbitrary number of nonsmooth functions and establish its weak convergence. The algorithm fully decomposes the problem in that it involves each function individually via its own Proximity Operator. A significant improvement over the methods currently in use in the area of inverse problems is that it is not limited to two nonsmooth functions. Numerical applications to signal and image processing problems are demonstrated.
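The decomposition principle, each function entering only through its own Proximity Operator, can be sketched with Douglas-Rachford splitting in a product space with a consensus constraint (a classical construction in this spirit, not the authors' exact algorithm):

```python
import numpy as np

def decomposed_min(proxes, n, gamma=1.0, n_iter=2000):
    """Minimize a sum of convex functions, each accessed only through its
    own prox, via Douglas-Rachford splitting in a product space: the
    consensus constraint's prox is an average across copies, while each
    f_i's prox acts on its own copy. (A classical decomposition sketch,
    not the paper's exact method.)"""
    N = len(proxes)
    z = np.zeros((N, n))
    for _ in range(n_iter):
        xbar = z.mean(axis=0)                 # prox of the consensus indicator
        for i, prox in enumerate(proxes):
            z[i] += prox(2.0 * xbar - z[i], gamma) - xbar
    return z.mean(axis=0)

# Example with three nonsmooth/smooth terms (hypothetical choices):
#   0.5*||x - a||^2  +  lam*||x||_1  +  indicator(x >= 0)
a, lam = np.array([2.0, -1.0, 0.3]), 0.5
proxes = [
    lambda v, g: (v + g * a) / (1.0 + g),                            # quadratic
    lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g * lam, 0.0),  # l1 term
    lambda v, g: np.maximum(v, 0.0),                                 # x >= 0
]
x = decomposed_min(proxes, 3)   # closed-form answer here: max(a - lam, 0)
```

Because only individual proxes appear, the scheme handles an arbitrary number of nonsmooth terms, which is the improvement over two-function splittings that the abstract emphasizes.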