Gradient Computation

The experts below are selected from a list of 23,433 experts worldwide, ranked by the ideXlab platform.

Jan Dirk Jansen - One of the best experts on this subject based on the ideXlab platform.

  • Iterative multiscale gradient computation for heterogeneous subsurface flow
    Advances in Water Resources, 2019
    Co-Authors: Rafael Moraes, Jose Rodrigues, Hadi Hajibeygi, Wessel De Zeeuw, Jan Dirk Jansen
    Abstract:

    We introduce a semi-analytical iterative multiscale derivative computation methodology that allows for error control and reduction to any desired accuracy, up to fine-scale precision. The model responses are computed by multiscale forward simulation of flow in heterogeneous porous media. The derivative computation method is based on augmenting the model equations and state vectors with the smoothing stage defined by the iterative multiscale method. The formulation avoids the additional complexity of computing partial derivatives associated with the smoothing step by treating it as an approximate derivative computation stage. Numerical experiments illustrate how the newly introduced method computes misfit objective function gradients that converge to the fine-scale gradient as the iterative multiscale residual converges. The robustness of the methodology is investigated for test cases with high-contrast permeability fields. The iterative multiscale gradient method offers a promising approach, with a minimal accuracy-efficiency tradeoff, for large-scale optimization problems in heterogeneous porous media.
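    The convergence behavior described above, in which the gradient error shrinks together with the iterative-solver residual, can be illustrated on a toy problem. The Python sketch below uses a hypothetical quadratic misfit with a diagonally dominant linear system and plain Jacobi iterations; it is a minimal stand-in, not the authors' iterative multiscale method.

```python
import numpy as np

# Toy setup (not the authors' multiscale method):
# misfit J(u) = 0.5 ||u - d||^2 subject to A(theta) u = b,
# with A(theta) = K + diag(theta). The adjoint gradient is
# dJ/dtheta_i = -lam_i * u_i, where A^T lam = u - d.
rng = np.random.default_rng(0)
n = 50
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
theta = rng.uniform(0.5, 1.5, n)
A = K + np.diag(theta)
b = rng.normal(size=n)
d = rng.normal(size=n)

# "Fine-scale" reference gradient from exact solves.
u_ref = np.linalg.solve(A, b)
lam_ref = np.linalg.solve(A.T, u_ref - d)
g_ref = -lam_ref * u_ref

def jacobi(M, rhs, iters):
    """Plain Jacobi iterations, standing in for the smoothing stage."""
    x = np.zeros_like(rhs)
    D = np.diag(M)
    for _ in range(iters):
        x = x + (rhs - M @ x) / D
    return x

# Gradient error decays together with the forward-solve residual.
for iters in (10, 50, 200, 1000):
    u = jacobi(A, b, iters)
    lam = jacobi(A.T, u - d, iters)
    g = -lam * u
    print(f"iters={iters:4d}  residual={np.linalg.norm(b - A @ u):.2e}"
          f"  gradient error={np.linalg.norm(g - g_ref):.2e}")
```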

  • Multiscale Gradient Computation for Multiphase Flow in Porous Media
    SPE Reservoir Simulation Conference, 2017
    Co-Authors: Rafael J. De Moraes, Jose Rodrigues, Hadi Hajibeygi, Jan Dirk Jansen
    Abstract:

    A multiscale gradient computation method for multiphase flow in heterogeneous porous media is developed. The method constructs multiscale primal and dual coarse grids, imposed on the given fine-scale computational grid. Local multiscale basis functions are computed on (dual) coarse blocks, constructing an accurate map (prolongation operator) between the coarse- and fine-scale systems. While the expensive operations involved in computing the gradients are performed at the coarse scale, sensitivities with respect to uncertain parameters (e.g., grid-block permeabilities) are expressed at the fine scale via the partial derivatives of the prolongation operator. Hence, the method allows for updating of the geological model, rather than the dynamic model only, avoiding upscaling and the inevitable loss of information. The formulation and implementation are based on automatic differentiation (AD), allowing for convenient extensions to complex physics. In the forward simulation, an IMPES coupling strategy for flow and transport is followed. The flow equation is computed using a multiscale finite volume (MSFV) formulation, and the transport equation is computed at the fine scale, after reconstruction of a mass-conservative velocity field. To assess the performance of the method, a synthetic multiphase flow test case is considered. The multiscale gradients are compared against those obtained from a fine-scale reference strategy. Apart from its computational efficiency, the benefits of the method include the flexibility to accommodate variables expressed at different scales, especially in multiscale data assimilation and reservoir management studies.
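    To make the role of the prolongation operator concrete, the sketch below builds a one-dimensional toy system with piecewise-constant basis functions (an assumption for illustration, not the paper's MSFV implementation). The expensive solves happen on the coarse system, yet the resulting gradient carries one sensitivity per fine-scale parameter, and a finite-difference check confirms it.

```python
import numpy as np

# Toy 1D system (not the paper's MSFV operators). Fine-scale system
# A(theta) u = b is approximated by u ~ P u_c with (R A P) u_c = R b,
# where P prolongates coarse to fine (piecewise-constant basis here)
# and R = P^T restricts. For J = c^T P u_c, the coarse adjoint solves
# (R A P)^T lam_c = P^T c, and because dA/dtheta_i = e_i e_i^T, the
# gradient per *fine-scale* parameter is g_i = -(P lam_c)_i * (P u_c)_i.
n_f, n_c = 16, 4
P = np.kron(np.eye(n_c), np.ones((n_f // n_c, 1)))
R = P.T

rng = np.random.default_rng(1)
K = 2.0 * np.eye(n_f) - np.eye(n_f, k=1) - np.eye(n_f, k=-1)
theta = rng.uniform(1.0, 2.0, n_f)
b = rng.normal(size=n_f)
c = rng.normal(size=n_f)

A_c = R @ (K + np.diag(theta)) @ P
u_c = np.linalg.solve(A_c, R @ b)
lam_c = np.linalg.solve(A_c.T, P.T @ c)
g = -(P @ lam_c) * (P @ u_c)      # one sensitivity per fine cell

# Finite-difference check on the coarse-scale objective.
def J(th):
    Ac = R @ (K + np.diag(th)) @ P
    return c @ (P @ np.linalg.solve(Ac, R @ b))

eps = 1e-6
g_fd = np.array([(J(theta + eps * np.eye(n_f)[i]) -
                  J(theta - eps * np.eye(n_f)[i])) / (2 * eps)
                 for i in range(n_f)])
print(np.max(np.abs(g - g_fd)))   # small, ~1e-9
```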

  • Multiscale gradient computation for flow in heterogeneous porous media
    Journal of Computational Physics, 2017
    Co-Authors: Rafael J. De Moraes, Jose Rodrigues, Hadi Hajibeygi, Jan Dirk Jansen
    Abstract:

    An efficient multiscale (MS) gradient computation method for subsurface flow management and optimization is introduced. The general, algebraic framework allows for the calculation of gradients using both the direct and adjoint derivative methods. The framework also allows for the utilization of any MS formulation that can be algebraically expressed in terms of a restriction and a prolongation operator. This is achieved via an implicit differentiation formulation. The approach relies on algorithms for multiplying the sensitivity matrix and its transpose with arbitrary vectors, which provides a flexible way of computing gradients in a form suitable for any given gradient-based optimization algorithm. No assumption is made with respect to the nature of the problem or the specific optimization parameters; therefore, the framework can be applied to any gradient-based study. In the implementation, the extra partial derivative information required by the gradient computation is obtained via automatic differentiation. A detailed utilization of the framework using the MS finite volume (MSFV) simulation technique is presented. Numerical experiments are performed to demonstrate the accuracy of the method compared to a fine-scale simulator. In addition, an asymptotic analysis is presented to provide an estimate of its computational complexity. The investigations show that the presented method constitutes an accurate and efficient MS gradient computation strategy that can be successfully utilized in next-generation reservoir management studies.
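    The direct-versus-adjoint distinction can be made concrete with a generic linear model (a hypothetical matrix A(theta) and quadratic misfit, not the paper's operators): the direct method performs one solve per parameter, while the adjoint method needs a single transposed solve regardless of the parameter count.

```python
import numpy as np

# Generic linear model (hypothetical, not the paper's MSFV operators):
# A(theta) u = b with A = K + diag(theta), misfit J = 0.5 ||u - d||^2.
rng = np.random.default_rng(2)
n = 30
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
theta = rng.uniform(0.5, 1.5, n)
A = K + np.diag(theta)
b = rng.normal(size=n)
d = rng.normal(size=n)
u = np.linalg.solve(A, b)
r = u - d  # dJ/du

# Direct (forward sensitivity) method: one solve per parameter.
g_direct = np.empty(n)
for i in range(n):
    dA_u = np.zeros(n)
    dA_u[i] = u[i]                      # (dA/dtheta_i) u = u_i * e_i
    du = np.linalg.solve(A, -dA_u)      # du/dtheta_i
    g_direct[i] = r @ du

# Adjoint method: one transposed solve, independent of parameter count.
lam = np.linalg.solve(A.T, r)
g_adjoint = -lam * u

print(np.allclose(g_direct, g_adjoint))  # True
```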

  • Multiscale Gradient Computation for Subsurface Flow Models
    ECMOR XV - 15th European Conference on the Mathematics of Oil Recovery, 2016
    Co-Authors: Rafael J. De Moraes, Jose Rodrigues, Hadi Hajibeygi, Jan Dirk Jansen
    Abstract:

    We present an efficient multiscale (MS) gradient computation method that is suitable for reservoir management studies involving optimization techniques, e.g., computer-assisted history matching or life-cycle production optimization. The general, algebraic framework allows for the calculation of gradients using both the direct and adjoint derivative methods. The framework also allows for the utilization of any MS formulation in the forward reservoir simulation that can be algebraically expressed in terms of a restriction and a prolongation operator. In the implementation, the extra partial derivative information required by the gradient methods is computed via automatic differentiation. Numerical experiments demonstrate the accuracy of the method compared against gradients based on fine-scale simulation (the industry standard).
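    As a side note on the automatic-differentiation step mentioned above, the following minimal forward-mode AD sketch uses dual numbers and a hypothetical transmissibility law; it is meant only to show how such partial derivatives can be obtained without hand-coding them, not to represent the AD tool used in the paper.

```python
# Minimal forward-mode AD via dual numbers (toy, not the paper's AD tool).
class Dual:
    """Carries a value and its derivative with respect to one input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

# Residual of one grid block: r = T(theta) * (u_left - u_right), with a
# hypothetical transmissibility law T(theta) = theta**2.
def residual(theta, u_left, u_right):
    return theta * theta * (u_left - u_right)

theta = Dual(1.5, 1.0)            # seed d/dtheta = 1
r = residual(theta, 2.0, 1.0)
print(r.val, r.dot)               # 2.25 and dr/dtheta = 2*theta = 3.0
```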

Scott Yang - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Gradient Computation for Structured Output Learning with Rational and Tropical Losses
    Neural Information Processing Systems, 2018
    Co-Authors: Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Dmitry Storcheus, Scott Yang
    Abstract:

    Many structured prediction problems admit a natural loss function for evaluation, such as the edit-distance or n-gram loss. However, existing learning algorithms are typically designed to optimize alternative objectives such as the cross-entropy. This is because a naïve implementation of the natural loss functions often results in intractable gradient computations. In this paper, we design efficient gradient computation algorithms for two broad families of structured prediction loss functions: rational and tropical losses. These families include as special cases the n-gram loss, the edit-distance loss, and many other loss functions commonly used in natural language processing and computational biology tasks that are based on sequence similarity measures. Our algorithms make use of weighted automata and graph operations over appropriate semirings to design efficient solutions. They facilitate efficient gradient computation and hence enable one to train learning models such as neural networks with complex structured losses.
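    A minimal sketch of the underlying mechanism, assuming a toy five-arc lattice rather than the paper's automata constructions: one shortest-distance recursion, evaluated in the tropical semiring, yields a Viterbi-style minimum path cost, while the same recursion in the log semiring yields log Z, whose partial derivative with respect to each arc weight is that arc's posterior probability and is obtained by a forward-backward pass.

```python
import math
import numpy as np

# Toy lattice (not the paper's constructions). States 0..3 are in
# topological order and arcs are sorted by source state, so a single
# pass computes shortest distances from the start state.
arcs = [(0, 1, 1.0), (0, 2, 2.5), (1, 2, 0.5), (1, 3, 3.0), (2, 3, 1.0)]
n_states, start, final = 4, 0, 3

def forward(plus, zero):
    d = [zero] * n_states
    d[start] = 0.0          # semiring "one" for both (min, +) and (log, +)
    for s, t, w in arcs:
        d[t] = plus(d[t], d[s] + w)
    return d

# Tropical semiring (min, +): Viterbi-style minimum path cost.
print("min path cost:", forward(min, math.inf)[final])   # 2.5

# Log semiring (logaddexp, +) over arc scores: the forward pass gives
# log Z, and d(logZ)/dw_e is the posterior probability of arc e,
# recovered with a matching backward pass.
fwd = forward(np.logaddexp, -math.inf)
bwd = [-math.inf] * n_states
bwd[final] = 0.0
for s, t, w in reversed(arcs):
    bwd[s] = np.logaddexp(bwd[s], w + bwd[t])
logZ = fwd[final]
for s, t, w in arcs:
    print(f"arc {s}->{t}: dlogZ/dw = {math.exp(fwd[s] + w + bwd[t] - logZ):.3f}")
```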

Mohamed Abouhawwash - One of the best experts on this subject based on the ideXlab platform.

  • Karush-Kuhn-Tucker Proximity Measure for Multi-Objective Optimization Based on Numerical Gradients
    Genetic and Evolutionary Computation Conference, 2016
    Co-Authors: Mohamed Abouhawwash
    Abstract:

    A measure for estimating the convergence characteristics of a set of non-dominated points obtained by a multi-objective optimization algorithm was recently developed. The measure is based on the Karush-Kuhn-Tucker (KKT) optimality conditions, which require the gradients of the objective and constraint functions. In this paper, we extend the scope of the proposed KKT proximity measure by computing the gradients numerically and comparing the accuracy of the numerically computed KKT proximity measure against the same measure computed using exact gradients. The results are encouraging and open up the possibility of applying the proposed KKTPM to non-differentiable problems as well.
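    A minimal sketch of the numerical-gradient idea, using a hypothetical bi-objective test problem rather than the paper's KKTPM computation: central differences recover the exact gradients of both objectives to high accuracy, which is what justifies substituting them into the KKT proximity measure.

```python
import numpy as np

# Toy bi-objective problem (hypothetical, not the paper's KKTPM code):
# compare central-difference gradients against exact analytic gradients.
def num_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
grad_f1 = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])
grad_f2 = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * x[1]])

x = np.array([0.3, 0.2])
print(np.max(np.abs(num_grad(f1, x) - grad_f1(x))))  # ~1e-10
print(np.max(np.abs(num_grad(f2, x) - grad_f2(x))))  # ~1e-10
```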

Corinna Cortes - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Gradient Computation for Structured Output Learning with Rational and Tropical Losses
    Neural Information Processing Systems, 2018
    Co-Authors: Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Dmitry Storcheus, Scott Yang
    Abstract:

    Many structured prediction problems admit a natural loss function for evaluation, such as the edit-distance or n-gram loss. However, existing learning algorithms are typically designed to optimize alternative objectives such as the cross-entropy. This is because a naïve implementation of the natural loss functions often results in intractable gradient computations. In this paper, we design efficient gradient computation algorithms for two broad families of structured prediction loss functions: rational and tropical losses. These families include as special cases the n-gram loss, the edit-distance loss, and many other loss functions commonly used in natural language processing and computational biology tasks that are based on sequence similarity measures. Our algorithms make use of weighted automata and graph operations over appropriate semirings to design efficient solutions. They facilitate efficient gradient computation and hence enable one to train learning models such as neural networks with complex structured losses.
