Stochastic Dynamic

The experts below are selected from a list of 113,844 experts worldwide, ranked by the ideXlab platform.

Claire J. Tomlin - One of the best experts on this subject based on the ideXlab platform.

  • A Risk-Sensitive Finite-Time Reachability Approach for Safety of Stochastic Dynamic Systems
    2019 American Control Conference (ACC), 2019
    Co-Authors: Margaret P. Chapman, Marco Pavone, Jonathan Lacotte, Aviv Tamar, Kevin M. Smith, Victoria Cheng, Jaime F. Fisac, Claire J. Tomlin
    Abstract:

    A classic reachability problem for safety of dynamic systems is to compute the set of initial states from which the state trajectory is guaranteed to stay inside a given constraint set over a given time horizon. In this paper, we leverage existing theory of reachability analysis and risk measures to devise a risk-sensitive reachability approach for safety of stochastic dynamic systems under non-adversarial disturbances over a finite time horizon. Specifically, we first introduce the notion of a risk-sensitive safe set as a set of initial states from which the risk of large constraint violations can be reduced to a required level via a control policy, where risk is quantified using the Conditional Value-at-Risk (CVaR) measure. Second, we show how the computation of a risk-sensitive safe set can be reduced to the solution of a Markov Decision Process (MDP), where cost is assessed according to CVaR. Third, leveraging this reduction, we devise a tractable algorithm to approximate a risk-sensitive safe set and provide arguments about its correctness. Finally, we present a realistic example inspired by stormwater catchment design to demonstrate the utility of risk-sensitive reachability analysis. In particular, our approach allows a practitioner to tune the level of risk sensitivity from worst-case (which is typical for Hamilton-Jacobi reachability analysis) to risk-neutral (which is the case for stochastic reachability analysis).
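
The reduction described above, from a risk-sensitive safe set to a CVaR-assessed control problem, can be illustrated without the paper's full MDP machinery. Below is a minimal Monte Carlo sketch, not the authors' algorithm: it checks whether, under one fixed policy and an assumed toy scalar system, the CVaR of the worst constraint violation over the horizon stays below a required level. The dynamics, policy, and names (`risk_sensitive_safe`, `violation`) are illustrative assumptions.

```python
import numpy as np

def cvar(samples, alpha):
    """Conditional Value-at-Risk: mean of the worst alpha-fraction of samples.
    (alpha -> 0 approaches the worst case; alpha = 1 is the plain mean.)"""
    samples = np.sort(samples)                      # ascending: worst (largest) costs last
    k = max(1, int(np.ceil(alpha * len(samples))))
    return samples[-k:].mean()

def risk_sensitive_safe(x0, policy, step, violation, alpha=0.05,
                        level=0.0, horizon=20, n_samples=2000, rng=None):
    """Monte Carlo check of risk-sensitive safety for one initial state under a
    *fixed* policy: is CVaR_alpha of the worst constraint violation <= level?"""
    rng = np.random.default_rng() if rng is None else rng
    worst = np.empty(n_samples)
    for i in range(n_samples):
        x, w = x0, 0.0
        for t in range(horizon):
            x = step(x, policy(x, t), rng)          # stochastic dynamics
            w = max(w, violation(x))                # track the largest violation so far
        worst[i] = w
    return cvar(worst, alpha) <= level

# Toy example: keep a scalar state inside [-1, 1] under additive Gaussian noise.
step      = lambda x, u, rng: 0.9 * x + u + 0.1 * rng.standard_normal()
policy    = lambda x, t: -0.3 * x                   # simple stabilizing feedback
violation = lambda x: max(abs(x) - 1.0, 0.0)        # distance outside the constraint set

print(risk_sensitive_safe(0.0, policy, step, violation))
```

Sweeping `x0` over a grid of initial states and taking the best available policy at each state would approximate a risk-sensitive safe set in the sense used above.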

  • A Risk-Sensitive Finite-Time Reachability Approach for Safety of Stochastic Dynamic Systems
    arXiv: Systems and Control, 2019
    Co-Authors: Margaret P. Chapman, Marco Pavone, Jonathan Lacotte, Aviv Tamar, Kevin M. Smith, Victoria Cheng, Jaime F. Fisac, Donggun Lee, Susmit Jha, Claire J. Tomlin
    Abstract:

    A classic reachability problem for safety of dynamic systems is to compute the set of initial states from which the state trajectory is guaranteed to stay inside a given constraint set over a given time horizon. In this paper, we leverage existing theory of reachability analysis and risk measures to devise a risk-sensitive reachability approach for safety of stochastic dynamic systems under non-adversarial disturbances over a finite time horizon. Specifically, we first introduce the notion of a risk-sensitive safe set as a set of initial states from which the risk of large constraint violations can be reduced to a required level via a control policy, where risk is quantified using the Conditional Value-at-Risk (CVaR) measure. Second, we show how the computation of a risk-sensitive safe set can be reduced to the solution of a Markov Decision Process (MDP), where cost is assessed according to CVaR. Third, leveraging this reduction, we devise a tractable algorithm to approximate a risk-sensitive safe set and provide theoretical arguments about its correctness. Finally, we present a realistic example inspired by stormwater catchment design to demonstrate the utility of risk-sensitive reachability analysis. In particular, our approach allows a practitioner to tune the level of risk sensitivity from worst-case (which is typical for Hamilton-Jacobi reachability analysis) to risk-neutral (which is the case for stochastic reachability analysis).
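
For readers unfamiliar with the risk measure named in both versions of this abstract, CVaR at level α is commonly given by the Rockafellar-Uryasev formula below; this is the standard definition, and the paper's exact notation and sign conventions may differ.

```latex
\[
\mathrm{CVaR}_{\alpha}(Z)
  \;=\; \min_{z \in \mathbb{R}} \Bigl\{ z + \tfrac{1}{\alpha}\, \mathbb{E}\bigl[(Z - z)^{+}\bigr] \Bigr\},
  \qquad (Z - z)^{+} := \max(Z - z,\, 0),
\]
\[
\text{which, for a continuous cost } Z, \text{ equals } \;
\mathbb{E}\bigl[\, Z \mid Z \ge \mathrm{VaR}_{\alpha}(Z) \,\bigr].
\]
```

In words, CVaR is the expected cost over the worst α-fraction of outcomes, so α → 0 recovers the worst case and α = 1 the risk-neutral expectation, matching the tuning knob described at the end of the abstract.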

Margaret P. Chapman - One of the best experts on this subject based on the ideXlab platform.

  • A Risk-Sensitive Finite-Time Reachability Approach for Safety of Stochastic Dynamic Systems
    2019 American Control Conference (ACC), 2019
    Co-Authors: Margaret P. Chapman, Marco Pavone, Jonathan Lacotte, Aviv Tamar, Kevin M. Smith, Victoria Cheng, Jaime F. Fisac, Claire J. Tomlin
    Abstract:

    A classic reachability problem for safety of dynamic systems is to compute the set of initial states from which the state trajectory is guaranteed to stay inside a given constraint set over a given time horizon. In this paper, we leverage existing theory of reachability analysis and risk measures to devise a risk-sensitive reachability approach for safety of stochastic dynamic systems under non-adversarial disturbances over a finite time horizon. Specifically, we first introduce the notion of a risk-sensitive safe set as a set of initial states from which the risk of large constraint violations can be reduced to a required level via a control policy, where risk is quantified using the Conditional Value-at-Risk (CVaR) measure. Second, we show how the computation of a risk-sensitive safe set can be reduced to the solution of a Markov Decision Process (MDP), where cost is assessed according to CVaR. Third, leveraging this reduction, we devise a tractable algorithm to approximate a risk-sensitive safe set and provide arguments about its correctness. Finally, we present a realistic example inspired by stormwater catchment design to demonstrate the utility of risk-sensitive reachability analysis. In particular, our approach allows a practitioner to tune the level of risk sensitivity from worst-case (which is typical for Hamilton-Jacobi reachability analysis) to risk-neutral (which is the case for stochastic reachability analysis).

  • A Risk-Sensitive Finite-Time Reachability Approach for Safety of Stochastic Dynamic Systems
    arXiv: Systems and Control, 2019
    Co-Authors: Margaret P. Chapman, Marco Pavone, Jonathan Lacotte, Aviv Tamar, Kevin M. Smith, Victoria Cheng, Jaime F. Fisac, Donggun Lee, Susmit Jha, Claire J. Tomlin
    Abstract:

    A classic reachability problem for safety of dynamic systems is to compute the set of initial states from which the state trajectory is guaranteed to stay inside a given constraint set over a given time horizon. In this paper, we leverage existing theory of reachability analysis and risk measures to devise a risk-sensitive reachability approach for safety of stochastic dynamic systems under non-adversarial disturbances over a finite time horizon. Specifically, we first introduce the notion of a risk-sensitive safe set as a set of initial states from which the risk of large constraint violations can be reduced to a required level via a control policy, where risk is quantified using the Conditional Value-at-Risk (CVaR) measure. Second, we show how the computation of a risk-sensitive safe set can be reduced to the solution of a Markov Decision Process (MDP), where cost is assessed according to CVaR. Third, leveraging this reduction, we devise a tractable algorithm to approximate a risk-sensitive safe set and provide theoretical arguments about its correctness. Finally, we present a realistic example inspired by stormwater catchment design to demonstrate the utility of risk-sensitive reachability analysis. In particular, our approach allows a practitioner to tune the level of risk sensitivity from worst-case (which is typical for Hamilton-Jacobi reachability analysis) to risk-neutral (which is the case for stochastic reachability analysis).

Hoijun Yoo - One of the best experts on this subject based on the ideXlab platform.

  • HNPU: An Adaptive DNN Training Processor Utilizing Stochastic Dynamic Fixed-Point and Active Bit-Precision Searching
    IEEE Journal of Solid-state Circuits, 2021
    Co-Authors: Donghyeon Han, Gwangtae Park, Youngwoo Kim, Seokchan Song, Juhyoung Lee, Hoijun Yoo
    Abstract:

    This article presents HNPU, an energy-efficient deep neural network (DNN) training processor built through algorithm-hardware co-design. HNPU supports a stochastic dynamic fixed-point representation and a layer-wise adaptive precision-searching unit for low-bit-precision training. It additionally exploits slice-level reconfigurability and sparsity to maximize efficiency in both DNN inference and training. An adaptive-bandwidth reconfigurable accumulation network enables flexible DNN allocation and maintains high core utilization across varying bit-precision conditions. Fabricated in a 28-nm process, HNPU achieves at least 5.9x higher energy efficiency and 2.5x higher area efficiency in actual DNN training than previous state-of-the-art on-chip learning processors.
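
The article's key number format, stochastic dynamic fixed-point, combines per-tensor dynamic scaling with stochastic (unbiased) rounding. The snippet below is a generic software sketch of that idea, assuming a simple shared-exponent scaling rule; it is not HNPU's hardware datapath, and the function name, bit width, and scaling choice are illustrative assumptions.

```python
import numpy as np

def stochastic_dynamic_fixed_point(x, bits=8, rng=None):
    """Quantize a tensor to a low-bit fixed-point grid whose scale is chosen
    per tensor from its dynamic range, using stochastic (unbiased) rounding."""
    rng = np.random.default_rng() if rng is None else rng
    # Dynamic scaling: pick a shared exponent so the largest magnitude
    # roughly fills the signed `bits`-bit range (clipped below if needed).
    max_abs = np.max(np.abs(x)) + 1e-12
    exp = np.ceil(np.log2(max_abs)) - (bits - 1)
    step = 2.0 ** exp
    y = x / step
    # Stochastic rounding: round up with probability equal to the fractional
    # part, so the quantization error is zero-mean on average.
    floor = np.floor(y)
    y = floor + (rng.random(y.shape) < (y - floor))
    qmax = 2 ** (bits - 1) - 1
    return np.clip(y, -qmax - 1, qmax) * step

grads = np.random.randn(4, 4).astype(np.float32)
print(stochastic_dynamic_fixed_point(grads, bits=8))
```

Because rounding up happens with probability equal to the fractional remainder, the quantization error is zero-mean, which is what makes aggressive low-bit training workable on average.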

Hugh P. Possingham - One of the best experts on this subject based on the ideXlab platform.

  • The Use of Stochastic Dynamic Programming in Optimal Landscape Reconstruction for Metapopulations
    Ecological Applications, 2003
    Co-Authors: Marcus Pickett, Michael I Westphal, Wayne M Getz, Hugh P. Possingham
    Abstract:

    A decision theory framework can be a powerful technique to derive optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following different management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations. It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
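
To make the SDP machinery referenced above concrete, here is a minimal backward-induction sketch. The three-state occupancy model, action set, transition probabilities, and rewards are invented for illustration and are not the Emu-wren metapopulation model; only the algorithmic pattern (finite-horizon value iteration returning a state- and time-dependent policy) reflects the approach described in the abstract.

```python
import numpy as np

def sdp_backward_induction(P, reward, terminal, horizon):
    """Finite-horizon stochastic dynamic programming by backward induction.
    P[a] is an (S x S) transition matrix for action a; reward is (S x A);
    terminal is the length-S value at the final time step."""
    S, A = reward.shape
    V = terminal.copy()
    policy = np.zeros((horizon, S), dtype=int)
    for t in reversed(range(horizon)):
        # Q[s, a] = immediate reward + expected future value.
        Q = reward + np.stack([P[a] @ V for a in range(A)], axis=1)
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V

# Tiny illustrative model (NOT the Emu-wren model): 3 occupancy states,
# actions = {0: do nothing, 1: enlarge a patch, 2: build a corridor}.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.7, 0.2], [0.0, 0.2, 0.8]],     # do nothing
    [[0.6, 0.4, 0.0], [0.05, 0.6, 0.35], [0.0, 0.1, 0.9]],   # enlarge patch
    [[0.7, 0.3, 0.0], [0.05, 0.65, 0.3], [0.0, 0.05, 0.95]], # build corridor
])
reward = np.array([[0., -1., -1.], [0., -1., -1.], [0., -1., -1.]])  # action costs
terminal = np.array([0., 5., 10.])   # value of ending with more patches occupied
policy, V = sdp_backward_induction(P, reward, terminal, horizon=30)
print(policy[0], V)
```

The returned `policy` is exactly the kind of state-dependent strategy the abstract contrasts with fixed, state-independent rules of thumb.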

  • Optimal Release Strategies for Biological Control Agents: An Application of Stochastic Dynamic Programming to Population Management
    Journal of Applied Ecology, 2000
    Co-Authors: Katriona Shea, Hugh P. Possingham
    Abstract:

    1. Establishing biological control agents in the field is a major step in any classical biocontrol programme, yet there are few general guidelines to help the practitioner decide what factors might enhance the establishment of such agents. 2. A stochastic dynamic programming (SDP) approach, linked to a metapopulation model, was used to find optimal release strategies (number and size of releases), given constraints on time and the number of biocontrol agents available. By modelling within a decision-making framework we derived rules of thumb that will enable biocontrol workers to choose between management options, depending on the current state of the system. 3. When there are few well-established sites, making a few large releases is the optimal strategy. For other states of the system, the optimal strategy ranges from a few large releases, through a mixed strategy (a variety of release sizes), to many small releases, as the probability of establishment of smaller inocula increases. 4. Given that the probability of establishment is rarely a known entity, we also strongly recommend a mixed strategy in the early stages of a release programme, to accelerate learning and improve the chances of finding the optimal approach.
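
A compact dynamic-programming sketch of the release-sizing trade-off discussed in the abstract is given below. The establishment-probability curve, the agent budget, and the assumption that establishments are additive in expectation are all illustrative; the paper's metapopulation-linked SDP is richer than this.

```python
from functools import lru_cache
import math

# Assumed establishment curve (illustrative): small inocula rarely establish,
# large ones almost always do -- a logistic in release size.
def p_establish(size, midpoint=20.0, slope=5.0):
    return 1.0 / (1.0 + math.exp(-(size - midpoint) / slope))

@lru_cache(maxsize=None)
def best_plan(agents, occasions):
    """Maximum expected number of established sites achievable with `agents`
    individuals left and `occasions` release occasions remaining, together
    with the first-occasion decision (number of releases, size of each)."""
    if occasions == 0 or agents == 0:
        return 0.0, (0, 0)
    # Option: release nothing on this occasion.
    best_value, best_action = best_plan(agents, occasions - 1)[0], (0, 0)
    for size in range(1, agents + 1):                 # individuals per release
        for count in range(1, agents // size + 1):    # releases this occasion
            future, _ = best_plan(agents - count * size, occasions - 1)
            value = count * p_establish(size) + future
            if value > best_value:
                best_value, best_action = value, (count, size)
    return best_value, best_action

value, (count, size) = best_plan(agents=100, occasions=4)
print(f"expected establishments: {value:.2f}; "
      f"first occasion: {count} release(s) of size {size}")
```

With the sigmoidal curve assumed here, a handful of medium-to-large releases beats many tiny ones, consistent with point 3 of the abstract; flattening the curve so that small inocula establish more readily tends to push the optimum toward many small releases.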

Jonathan Lacotte - One of the best experts on this subject based on the ideXlab platform.

  • A Risk-Sensitive Finite-Time Reachability Approach for Safety of Stochastic Dynamic Systems
    2019 American Control Conference (ACC), 2019
    Co-Authors: Margaret P. Chapman, Marco Pavone, Jonathan Lacotte, Aviv Tamar, Kevin M. Smith, Victoria Cheng, Jaime F. Fisac, Claire J. Tomlin
    Abstract:

    A classic reachability problem for safety of dynamic systems is to compute the set of initial states from which the state trajectory is guaranteed to stay inside a given constraint set over a given time horizon. In this paper, we leverage existing theory of reachability analysis and risk measures to devise a risk-sensitive reachability approach for safety of stochastic dynamic systems under non-adversarial disturbances over a finite time horizon. Specifically, we first introduce the notion of a risk-sensitive safe set as a set of initial states from which the risk of large constraint violations can be reduced to a required level via a control policy, where risk is quantified using the Conditional Value-at-Risk (CVaR) measure. Second, we show how the computation of a risk-sensitive safe set can be reduced to the solution of a Markov Decision Process (MDP), where cost is assessed according to CVaR. Third, leveraging this reduction, we devise a tractable algorithm to approximate a risk-sensitive safe set and provide arguments about its correctness. Finally, we present a realistic example inspired by stormwater catchment design to demonstrate the utility of risk-sensitive reachability analysis. In particular, our approach allows a practitioner to tune the level of risk sensitivity from worst-case (which is typical for Hamilton-Jacobi reachability analysis) to risk-neutral (which is the case for stochastic reachability analysis).

  • A Risk-Sensitive Finite-Time Reachability Approach for Safety of Stochastic Dynamic Systems
    arXiv: Systems and Control, 2019
    Co-Authors: Margaret P. Chapman, Marco Pavone, Jonathan Lacotte, Aviv Tamar, Kevin M. Smith, Victoria Cheng, Jaime F. Fisac, Donggun Lee, Susmit Jha, Claire J. Tomlin
    Abstract:

    A classic reachability problem for safety of dynamic systems is to compute the set of initial states from which the state trajectory is guaranteed to stay inside a given constraint set over a given time horizon. In this paper, we leverage existing theory of reachability analysis and risk measures to devise a risk-sensitive reachability approach for safety of stochastic dynamic systems under non-adversarial disturbances over a finite time horizon. Specifically, we first introduce the notion of a risk-sensitive safe set as a set of initial states from which the risk of large constraint violations can be reduced to a required level via a control policy, where risk is quantified using the Conditional Value-at-Risk (CVaR) measure. Second, we show how the computation of a risk-sensitive safe set can be reduced to the solution of a Markov Decision Process (MDP), where cost is assessed according to CVaR. Third, leveraging this reduction, we devise a tractable algorithm to approximate a risk-sensitive safe set and provide theoretical arguments about its correctness. Finally, we present a realistic example inspired by stormwater catchment design to demonstrate the utility of risk-sensitive reachability analysis. In particular, our approach allows a practitioner to tune the level of risk sensitivity from worst-case (which is typical for Hamilton-Jacobi reachability analysis) to risk-neutral (which is the case for stochastic reachability analysis).