Dynamic Programming Algorithm

The Experts below are selected from a list of 50,880 Experts worldwide, ranked by the ideXlab platform

Sean R Eddy - One of the best experts on this subject based on the ideXlab platform.

  • A memory-efficient dynamic programming algorithm for optimal alignment of a sequence to an RNA secondary structure
    BMC Bioinformatics, 2002
    Co-Authors: Sean R Eddy
    Abstract:

    Covariance models (CMs) are probabilistic models of RNA secondary structure, analogous to profile hidden Markov models of linear sequence. The dynamic programming algorithm for aligning a CM to an RNA sequence of length N is O(N³) in memory. This is only practical for small RNAs. I describe a divide-and-conquer variant of the alignment algorithm that is analogous to memory-efficient Myers/Miller dynamic programming algorithms for linear sequence alignment. The new algorithm has an O(N² log N) memory complexity, at the expense of a small constant factor in time. Optimal ribosomal RNA structural alignments that previously required up to 150 GB of memory now require less than 270 MB.
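
    The divide-and-conquer idea generalizes the Myers/Miller (Hirschberg) technique from linear sequence alignment to CMs. The Python sketch below illustrates only that underlying linear-sequence technique, not the CM algorithm itself: it recovers an optimal global alignment while keeping just one DP row in memory at a time, by scoring the top half forwards and the bottom half backwards and recursing on the best split column. The match/mismatch/gap scores and the example sequences are illustrative assumptions.

        # Linear-memory alignment sketch in the Myers/Miller (Hirschberg) style for
        # plain sequences, not for covariance models. Scoring values are illustrative.
        MATCH, MISMATCH, GAP = 1, -1, -2

        def last_row(a, b):
            """Last row of the alignment score matrix for a vs b, in O(len(b)) memory."""
            prev = [j * GAP for j in range(len(b) + 1)]
            for i in range(1, len(a) + 1):
                curr = [i * GAP] + [0] * len(b)
                for j in range(1, len(b) + 1):
                    diag = prev[j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
                    curr[j] = max(diag, prev[j] + GAP, curr[j - 1] + GAP)
                prev = curr
            return prev

        def full_dp(a, b):
            """Quadratic-memory alignment with traceback; used only on tiny base cases."""
            n, m = len(a), len(b)
            S = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(n + 1):
                S[i][0] = i * GAP
            for j in range(m + 1):
                S[0][j] = j * GAP
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = S[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
                    S[i][j] = max(diag, S[i - 1][j] + GAP, S[i][j - 1] + GAP)
            out_a, out_b, i, j = [], [], n, m
            while i > 0 or j > 0:
                if i > 0 and j > 0 and S[i][j] == S[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH):
                    out_a.append(a[i - 1])
                    out_b.append(b[j - 1])
                    i -= 1
                    j -= 1
                elif i > 0 and S[i][j] == S[i - 1][j] + GAP:
                    out_a.append(a[i - 1])
                    out_b.append("-")
                    i -= 1
                else:
                    out_a.append("-")
                    out_b.append(b[j - 1])
                    j -= 1
            return "".join(reversed(out_a)), "".join(reversed(out_b))

        def align(a, b):
            """Optimal global alignment using memory linear in the sequence lengths."""
            if len(a) <= 1 or len(b) <= 1:
                return full_dp(a, b)
            mid = len(a) // 2
            # Score the top half forwards and the bottom half backwards, then choose
            # the column where the two half-alignments meet with the best total score.
            fwd = last_row(a[:mid], b)
            rev = last_row(a[mid:][::-1], b[::-1])[::-1]
            split = max(range(len(b) + 1), key=lambda j: fwd[j] + rev[j])
            a1, b1 = align(a[:mid], b[:split])
            a2, b2 = align(a[mid:], b[split:])
            return a1 + a2, b1 + b2

        print(align("GGCACUUCGGU", "GGCAUUCGAGU"))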

  • A dynamic programming algorithm for RNA structure prediction including pseudoknots
    Journal of Molecular Biology, 1999
    Co-Authors: Elena Rivas, Sean R Eddy
    Abstract:

    We describe a dynamic programming algorithm for predicting optimal RNA secondary structure, including pseudoknots. The algorithm has a worst-case complexity of O(N⁶) in time and O(N⁴) in storage. The description of the algorithm is complex, which led us to adopt a useful graphical representation (Feynman diagrams) borrowed from quantum field theory. We present an implementation of the algorithm that generates the optimal minimum-energy structure for a single RNA sequence, using standard RNA folding thermodynamic parameters augmented by a few parameters describing the thermodynamic stability of pseudoknots. We demonstrate the properties of the algorithm by using it to predict structures for several small pseudoknotted and non-pseudoknotted RNAs. Although the time and memory demands of the algorithm are steep, we believe this is the first algorithm to be able to fold optimal (minimum-energy) pseudoknotted RNAs with the accepted RNA thermodynamic model.
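
    For contrast with the O(N⁶)-time pseudoknot recursion, the sketch below shows the much simpler nested-structure dynamic program (Nussinov-style base-pair maximization, O(N³) time and O(N²) memory) that pseudoknot-capable algorithms generalize. It is offered only to illustrate the family of recursions: it cannot represent pseudoknots and counts base pairs rather than using thermodynamic energies, both simplifying assumptions here.

        # Nussinov-style dynamic program: maximise the number of nested base pairs.
        # This baseline cannot fold pseudoknots and ignores thermodynamics (assumptions).
        PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

        def max_pairs(seq, min_loop=3):
            n = len(seq)
            # dp[i][j] = maximum number of base pairs in seq[i..j] (inclusive)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):            # shorter spans stay unpaired
                for i in range(n - span):
                    j = i + span
                    best = dp[i][j - 1]                     # position j left unpaired
                    for k in range(i, j - min_loop):        # or j pairs with some k
                        if (seq[k], seq[j]) in PAIRS:
                            left = dp[i][k - 1] if k > i else 0
                            best = max(best, left + 1 + dp[k + 1][j - 1])
                    dp[i][j] = best
            return dp[0][n - 1] if n else 0

        print(max_pairs("GGGAAAUCC"))   # small example sequence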

Warren B. Powell - One of the best experts on this subject based on the ideXlab platform.

  • Benchmarking a scalable approximate dynamic programming algorithm for stochastic control of grid-level energy storage
    INFORMS Journal on Computing, 2018
    Co-Authors: Daniel Salas, Warren B. Powell
    Abstract:

    We present and benchmark an approximate dynamic programming algorithm that is capable of designing near-optimal control policies for a portfolio of heterogeneous storage devices in a time-dependent environment, where wind supply, demand, and electricity prices may evolve stochastically. We found that the algorithm was able to design storage policies that are within 0.08% of optimal on deterministic models, and within 0.86% on stochastic models. We use the algorithm to analyze a dual-storage system with different capacities and losses, and show that the policy properly uses the low-loss device (which is typically much more expensive) for high-frequency variations. We close by demonstrating the algorithm on a five-device system. The algorithm easily scales to handle heterogeneous portfolios of storage devices distributed over the grid and more complex storage networks. The online supplement is available at https://doi.org/10.1287/ijoc.2017.0768.
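
    As a point of reference for what such an approximate method is approximating, here is a hedged toy: an exact backward dynamic program for a single storage device doing price arbitrage over a short horizon, with a state space small enough to enumerate. The horizon, capacity, efficiency, and two-point price distribution are all illustrative assumptions, and this is not the authors' ADP algorithm, which is designed precisely to avoid this kind of exhaustive enumeration.

        # Exact backward dynamic program for a single storage device doing price
        # arbitrage. All numbers (horizon, capacity, efficiency, prices) are
        # illustrative assumptions. Decisions are made before the price is observed,
        # so the expectation is over the two equally likely price outcomes.
        T = 6                      # decision epochs
        CAPACITY = 4               # discrete storage levels 0..CAPACITY
        EFFICIENCY = 0.9           # fraction of discharged energy actually sold
        PRICES = [[20, 30], [25, 35], [40, 50], [15, 25], [30, 40], [45, 55]]

        def backward_dp():
            # V[t][s] = expected profit-to-go from epoch t with storage level s
            V = [[0.0] * (CAPACITY + 1) for _ in range(T + 1)]
            policy = [[0] * (CAPACITY + 1) for _ in range(T)]
            for t in reversed(range(T)):
                for s in range(CAPACITY + 1):
                    best_val, best_act = float("-inf"), 0
                    for a in range(-s, CAPACITY - s + 1):   # a > 0 buys, a < 0 sells
                        expected = 0.0
                        for price in PRICES[t]:
                            cash = -a * price if a > 0 else -a * price * EFFICIENCY
                            expected += 0.5 * (cash + V[t + 1][s + a])
                        if expected > best_val:
                            best_val, best_act = expected, a
                    V[t][s] = best_val
                    policy[t][s] = best_act
            return V, policy

        V, policy = backward_dp()
        print("expected profit starting empty:", round(V[0][0], 2))
        print("first-epoch decision by storage level:", policy[0])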

  • An approximate dynamic programming algorithm for monotone value functions
    Operations Research, 2015
    Co-Authors: Daniel R Jiang, Warren B. Powell
    Abstract:

    Many sequential decision problems can be formulated as Markov decision processes (MDPs) where the optimal value function (or cost-to-go function) can be shown to satisfy a monotone structure in some or all of its dimensions. When the state space becomes large, traditional techniques, such as the backward dynamic programming algorithm (i.e., backward induction or value iteration), may no longer be effective in finding a solution within a reasonable time frame, and thus we are forced to consider other approaches, such as approximate dynamic programming (ADP). We propose a provably convergent ADP algorithm called Monotone-ADP that exploits the monotonicity of the value functions to increase the rate of convergence. In this paper, we describe a general finite-horizon problem setting where the optimal value function is monotone, present a convergence proof for Monotone-ADP under various technical assumptions, and show numerical results for three application domains: optimal stopping, energy storage/allocation, and glycemic control for diabetes patients. The empirical results indicate that by taking advantage of monotonicity, we can attain high-quality solutions within a relatively small number of iterations, using up to two orders of magnitude less computation than is needed to compute the optimal solution exactly.
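
    The distinguishing step of a Monotone-ADP-style method is easy to isolate: after each stochastic observation updates one state's value estimate, the estimate is projected so that the approximation stays monotone, which propagates information to states that were never sampled. The Python sketch below illustrates that projection on a one-dimensional, nondecreasing value function; the step-size rule, the sampling scheme, and the toy "true" values are simplifying assumptions rather than the paper's exact algorithm.

        # Sketch of the monotonicity-preserving update at the heart of a
        # Monotone-ADP-style method: smooth a noisy observation into one state's
        # estimate, then project so V stays nondecreasing in the state index.
        import random

        def monotone_projection(V, s, new_value):
            """Enforce V[0] <= V[1] <= ... after setting V[s] = new_value."""
            V[s] = new_value
            for i in range(s + 1, len(V)):        # push the change upward
                V[i] = max(V[i], V[s])
            for i in range(s - 1, -1, -1):        # and downward
                V[i] = min(V[i], V[s])
            return V

        def monotone_adp(true_values, iterations=2000, seed=0):
            """Estimate a monotone value function from noisy single-state samples."""
            rng = random.Random(seed)
            V = [0.0] * len(true_values)
            for n in range(1, iterations + 1):
                s = rng.randrange(len(true_values))             # sampled state
                observation = true_values[s] + rng.gauss(0, 1)  # noisy value sample
                alpha = 1.0 / n                                 # declining step size
                smoothed = (1 - alpha) * V[s] + alpha * observation
                V = monotone_projection(V, s, smoothed)
            return V

        estimate = monotone_adp(true_values=[0, 1, 3, 6, 10])   # toy monotone values
        print([round(v, 2) for v in estimate])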

  • An approximate dynamic programming algorithm for monotone value functions
    arXiv: Optimization and Control, 2014
    Co-Authors: Daniel R Jiang, Warren B. Powell
    Abstract:

    Many sequential decision problems can be formulated as Markov decision processes (MDPs) where the optimal value function (or cost-to-go function) can be shown to satisfy a monotone structure in some or all of its dimensions. When the state space becomes large, traditional techniques, such as the backward dynamic programming algorithm (i.e., backward induction or value iteration), may no longer be effective in finding a solution within a reasonable time frame, and thus we are forced to consider other approaches, such as approximate dynamic programming (ADP). We propose a provably convergent ADP algorithm called Monotone-ADP that exploits the monotonicity of the value functions in order to increase the rate of convergence. In this paper, we describe a general finite-horizon problem setting where the optimal value function is monotone, present a convergence proof for Monotone-ADP under various technical assumptions, and show numerical results for three application domains: optimal stopping, energy storage/allocation, and glycemic control for diabetes patients. The empirical results indicate that by taking advantage of monotonicity, we can attain high-quality solutions within a relatively small number of iterations, using up to two orders of magnitude less computation than is needed to compute the optimal solution exactly.

  • An Optimal Approximate Dynamic Programming Algorithm for Concave, Scalar Storage Problems With Vector-Valued Controls
    IEEE Transactions on Automatic Control, 2013
    Co-Authors: Juliana Nascimento, Warren B. Powell
    Abstract:

    We prove convergence of an approximate dynamic programming algorithm for a class of high-dimensional stochastic control problems linked by a scalar storage device, given a technical condition. Our problem is motivated by the problem of optimizing energy flows for a power grid supported by grid-level storage. The problem is formulated as a stochastic, dynamic program, where we estimate the value of resources in storage using a piecewise linear value function approximation. Given the technical condition, we provide a rigorous convergence proof for an approximate dynamic programming algorithm, which can capture the presence of both the amount of energy held in storage and other exogenous variables. Our algorithm exploits the natural concavity of the problem to avoid any need for explicit exploration policies.
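
    The piecewise linear concave value function approximation mentioned here can be represented by the marginal value of each additional unit of energy in storage, with concavity meaning those marginal values are nonincreasing. The sketch below shows one hedged way to maintain such an approximation: smooth a noisy observed marginal value into one slope, then restore concavity with a pool-adjacent-violators pass. The update rule and numbers are illustrative assumptions, not the authors' exact algorithm.

        # Piecewise linear concave value of energy in storage, stored as marginal
        # values (slopes) v[0] >= v[1] >= ...: v[k] is the value of the (k+1)-th unit.
        # Step size and example numbers are illustrative assumptions.
        def project_concave(v):
            """Pool-adjacent-violators pass that makes the slopes nonincreasing."""
            blocks = []                                  # each block is [total, count]
            for slope in v:
                blocks.append([slope, 1])
                # merge while the newest block's mean exceeds the previous block's mean
                while len(blocks) > 1 and blocks[-1][0] / blocks[-1][1] > blocks[-2][0] / blocks[-2][1]:
                    total, count = blocks.pop()
                    blocks[-1][0] += total
                    blocks[-1][1] += count
            out = []
            for total, count in blocks:
                out.extend([total / count] * count)
            return out

        def update_marginals(v, level, observed_slope, alpha=0.5):
            """Smooth a noisy marginal-value observation into slope `level`, keep concavity."""
            v = list(v)
            v[level] = (1 - alpha) * v[level] + alpha * observed_slope
            return project_concave(v)

        def value_of_storage(v, units):
            """Value of holding `units` of energy: sum of the first `units` slopes."""
            return sum(v[:units])

        slopes = [10.0, 8.0, 5.0, 1.0]
        slopes = update_marginals(slopes, 2, 12.0)       # noisy estimate pushes slope 2 up
        print(slopes)                                    # concavity restored by averaging
        print(value_of_storage(slopes, 3))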

  • An optimal approximate dynamic programming algorithm for the lagged asset acquisition problem
    Mathematics of Operations Research, 2009
    Co-Authors: Juliana Nascimento, Warren B. Powell
    Abstract:

    We consider a multistage asset acquisition problem where assets are purchased now, at a price that varies randomly over time, to be used to satisfy a random demand at a particular point in time in the future. We provide a rare proof of convergence for an approximate dynamic programming algorithm using pure exploitation, where the states we visit depend on the decisions produced by solving the approximate problem. The resulting algorithm does not require knowing the probability distribution of prices or demands, nor does it require any assumptions about the functional form of these distributions. The algorithm and its proof rely on the fact that the true value function is a family of piecewise linear concave functions.

Juan C Lopez - One of the best experts on this subject based on the ideXlab platform.

  • A dynamic programming algorithm for high-level task scheduling in energy harvesting IoT
    IEEE Internet of Things Journal, 2018
    Co-Authors: Antonio Caruso, Stefano Chessa, Soledad Escolar, Xavier Del Toro, Juan C Lopez
    Abstract:

    Outdoor Internet of Things (IoT) applications usually exploit energy harvesting systems to guarantee virtually uninterrupted operation. However, the use of energy harvesting poses issues concerning the optimization of the utility of the application while guaranteeing energy neutrality of the devices. In this context, we propose a new dynamic programming algorithm for optimizing the scheduling of tasks on IoT devices that harvest energy by means of a solar panel. We show that the problem is NP-hard and that the algorithm finds the optimal solution in pseudo-polynomial time. Furthermore, we show that the algorithm can be executed with a small overhead on three popular IoT platforms (namely TMote, Raspberry Pi, and Arduino) and, by simulation, we show the behavior of the algorithm under different settings and different conditions of energy production.
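
    To make "pseudo-polynomial" concrete, the hedged sketch below runs a knapsack-style dynamic program over a discretized battery level: one task (or an idle mode) is chosen per time slot, harvested energy is added each slot, and total utility is maximized, so the running time grows with the number of discrete energy levels as well as with the number of slots and tasks. The task set, harvest profile, and battery capacity are illustrative assumptions and not the formulation or data from the paper.

        # Pseudo-polynomial DP for per-slot task selection under an energy budget.
        # State: (time slot, residual battery level); decision: which task to run.
        # All numbers below are illustrative assumptions.
        TASKS = [          # (utility, energy cost); task 0 is an idle/sleep mode
            (0, 0),
            (2, 1),
            (5, 3),
            (9, 6),
        ]
        HARVEST = [4, 2, 0, 3, 5]      # energy harvested at the start of each slot
        BATTERY = 8                    # maximum energy the battery can hold

        def best_utility(tasks, harvest, battery, initial=4):
            NEG = float("-inf")
            # dp[b] = best utility achievable so far, ending with residual energy b
            dp = [NEG] * (battery + 1)
            dp[initial] = 0
            for t in range(len(harvest)):
                new = [NEG] * (battery + 1)
                for b in range(battery + 1):
                    if dp[b] == NEG:
                        continue
                    avail = min(b + harvest[t], battery)   # charge first, cap at capacity
                    for utility, cost in tasks:
                        if cost <= avail:
                            nb = avail - cost
                            new[nb] = max(new[nb], dp[b] + utility)
                dp = new
            return max(dp)

        print("best total utility:", best_utility(TASKS, HARVEST, BATTERY))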

Elena Rivas - One of the best experts on this subject based on the ideXlab platform.

  • A dynamic programming algorithm for RNA structure prediction including pseudoknots
    Journal of Molecular Biology, 1999
    Co-Authors: Elena Rivas, Sean R Eddy
    Abstract:

    We describe a dynamic programming algorithm for predicting optimal RNA secondary structure, including pseudoknots. The algorithm has a worst-case complexity of O(N⁶) in time and O(N⁴) in storage. The description of the algorithm is complex, which led us to adopt a useful graphical representation (Feynman diagrams) borrowed from quantum field theory. We present an implementation of the algorithm that generates the optimal minimum-energy structure for a single RNA sequence, using standard RNA folding thermodynamic parameters augmented by a few parameters describing the thermodynamic stability of pseudoknots. We demonstrate the properties of the algorithm by using it to predict structures for several small pseudoknotted and non-pseudoknotted RNAs. Although the time and memory demands of the algorithm are steep, we believe this is the first algorithm to be able to fold optimal (minimum-energy) pseudoknotted RNAs with the accepted RNA thermodynamic model.

Guang R Gao - One of the best experts on this subject based on the ideXlab platform.

  • A parallel dynamic programming algorithm on a multi-core architecture
    ACM Symposium on Parallel Algorithms and Architectures, 2007
    Co-Authors: Guangming Tan, Ninghui Sun, Guang R Gao
    Abstract:

    Dynamic programming is an efficient technique for solving combinatorial search and optimization problems, and many parallel dynamic programming algorithms have been developed. The purpose of this paper is to study a family of dynamic programming algorithms in which data dependences appear between non-consecutive stages; in other words, the data dependence is non-uniform. This kind of dynamic programming is typically called nonserial polyadic dynamic programming. Owing to the non-uniform data dependence, it is harder to optimize this problem for parallelism and locality on parallel architectures. In this paper, we address the challenge of exploiting fine-grain parallelism and locality of nonserial polyadic dynamic programming on a multi-core architecture. We present a programming and execution model for multi-core architectures with a memory hierarchy. In the framework of the new model, the parallelism and locality benefit from a data dependence transformation. We propose a parallel pipelined algorithm for filling the dynamic programming matrix by decomposing the computation operators. The new parallel algorithm tolerates memory access latency using multithreading and is easily improved with a tiling technique. We formulate and analytically solve the optimization problem of determining the tile size that minimizes the total execution time. Experiments on a simulator validate the proposed model and show that the fine-grain parallel algorithm achieves sub-linear speedup and potentially high scalability on a multi-core architecture.
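
    Matrix-chain ordering is a small, standard example of the nonserial polyadic recurrences discussed here: cell (i, j) depends on pairs of cells drawn from non-adjacent diagonals. The sketch below fills the table diagonal by diagonal, which is exactly the structure that makes the cells of one diagonal independent and therefore divisible into tiles across cores; the tiling, pipelining, and multithreading themselves are not reproduced, and the example dimensions are illustrative assumptions.

        # Matrix-chain ordering: a classic nonserial polyadic dynamic program.
        # Each anti-diagonal's cells are mutually independent, which is what a tiled,
        # pipelined multi-core schedule exploits; this sketch is purely serial.
        def matrix_chain_order(dims):
            """dims[i], dims[i+1] are the dimensions of matrix i; returns the minimal
            number of scalar multiplications needed to multiply the whole chain."""
            n = len(dims) - 1                     # number of matrices
            cost = [[0] * n for _ in range(n)]
            for span in range(1, n):              # one anti-diagonal of the table per span
                for i in range(n - span):         # cells on this diagonal are independent
                    j = i + span
                    cost[i][j] = min(
                        cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                        for k in range(i, j)
                    )
            return cost[0][n - 1]

        # Example: chain of four matrices with dimensions 5x4, 4x6, 6x2, 2x7.
        print(matrix_chain_order([5, 4, 6, 2, 7]))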

  • A multithreaded parallel implementation of a dynamic programming algorithm for sequence comparison
    Pacific Symposium on Biocomputing, 2000
    Co-Authors: Wellington Santos Martins, J Del Cuvillo, F J Useche, Kevin B Theobald, Guang R Gao
    Abstract:

    This paper discusses the issues involved in implementing a dynamic programming algorithm for biological sequence comparison on a general-purpose parallel computing platform based on a fine-grain event-driven multithreaded program execution model. Fine-grain multithreading permits efficient parallelism exploitation in this application both by taking advantage of asynchronous point-to-point synchronizations and communication with low overheads and by effectively tolerating latency through the overlapping of computation and communication. We have implemented our scheme on EARTH, a fine-grain event-driven multithreaded execution and architecture model which has been ported to a number of parallel machines with off-the-shelf processors. Our experimental results show that the dynamic programming algorithm can be efficiently implemented on EARTH systems with high performance (e.g., speedup of 90 on 120 nodes), good programmability and reasonable cost.
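
    The parallelism being exploited comes from the shape of the dependences in the sequence-comparison matrix: each cell needs only its left, upper, and upper-left neighbours, so every cell on an anti-diagonal can be computed concurrently. The sketch below illustrates that wavefront schedule with a simple edit-distance DP and a Python thread pool; it does not reproduce EARTH's fine-grain threading model, and CPython threads will not yield real speedup on this compute-bound loop. The example sequences are illustrative assumptions.

        # Wavefront (anti-diagonal) schedule for a sequence-comparison DP. Cells on
        # one diagonal are independent; the pool.map call finishes a whole diagonal
        # before the next one starts, which preserves the DP dependences.
        from concurrent.futures import ThreadPoolExecutor

        def edit_distance_wavefront(a, b, workers=4):
            n, m = len(a), len(b)
            D = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(n + 1):
                D[i][0] = i
            for j in range(m + 1):
                D[0][j] = j

            def fill(cell):
                i, j = cell
                sub = D[i - 1][j - 1] + (0 if a[i - 1] == b[j - 1] else 1)
                D[i][j] = min(sub, D[i - 1][j] + 1, D[i][j - 1] + 1)

            with ThreadPoolExecutor(max_workers=workers) as pool:
                for d in range(2, n + m + 1):               # anti-diagonal index i + j
                    cells = [(i, d - i) for i in range(1, n + 1) if 1 <= d - i <= m]
                    list(pool.map(fill, cells))             # whole diagonal in parallel
            return D[n][m]

        print(edit_distance_wavefront("GATTACA", "GCATGCA"))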