Online Computation

The Experts below are selected from a list of 21009 Experts worldwide ranked by ideXlab platform

Gunter Hommel - One of the best experts on this subject based on the ideXlab platform.

  • Control and Online Computation of stable movement for biped robots
    Intelligent Robots and Systems, 2003
    Co-Authors: Konstantin Kondak, Gunter Hommel
    Abstract:

    The presented algorithm enables a biped to perform a stable movement to a defined goal without using any precomputed trajectories. The algorithm merges trajectory generation with control along the generated trajectory, and can be used for global control, for local control along an existing trajectory, and for Online Computation of trajectories for stable movement. It is based on decoupling the non-linear model and adjusting the steering torques to account both for the overall stability of the biped and for reaching the goal. The algorithm is applied to the model of a biped moving in the sagittal plane. By specifying the goal as an arbitrary pelvis position, the biped can perform movements such as sitting down or standing up. The performance of the algorithm is demonstrated in simulation.
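The key property claimed above — the motion is computed online from the current state rather than replayed from a stored trajectory — can be illustrated with a toy model. The sketch below is a generic one-degree-of-freedom example (a PD law on a unit-mass double integrator standing in for a single pelvis coordinate), not the authors' decoupling-based controller; the function name, gains, and time step are all illustrative.

```python
# Illustrative sketch (not the authors' algorithm): the control input is
# evaluated at every time step from the current state, so the trajectory to
# the goal emerges online instead of being planned in advance.

def simulate_online_goal_reaching(x0, v0, x_goal, kp=20.0, kd=9.0,
                                  dt=0.001, steps=5000):
    """Integrate a 1-DOF unit-mass model under an online PD control law."""
    x, v = x0, v0
    for _ in range(steps):
        u = kp * (x_goal - x) - kd * v   # computed online; no trajectory lookup
        v += u * dt                       # unit-mass double integrator
        x += v * dt
    return x, v

# e.g. lowering a "pelvis height" coordinate from 0.9 m to 0.5 m (sitting down)
x, v = simulate_online_goal_reaching(x0=0.9, v0=0.0, x_goal=0.5)
```

Because the goal enters only through the feedback law, changing `x_goal` at any time simply redirects the motion, which mirrors the paper's point that no precomputed trajectory is needed.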

  • IROS - Control and Online Computation of stable movement for biped robots
    Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), 2003
    Co-Authors: Konstantin Kondak, Gunter Hommel
    Abstract:

    The presented algorithm enables a biped to perform a stable movement to a defined goal without using any precomputed trajectories. The algorithm merges trajectory generation with control along the generated trajectory, and can be used for global control, for local control along an existing trajectory, and for Online Computation of trajectories for stable movement. It is based on decoupling the non-linear model and adjusting the steering torques to account both for the overall stability of the biped and for reaching the goal. The algorithm is applied to the model of a biped moving in the sagittal plane. By specifying the goal as an arbitrary pelvis position, the biped can perform movements such as sitting down or standing up. The performance of the algorithm is demonstrated in simulation.

Rebecca Willett - One of the best experts on this subject based on the ideXlab platform.

  • Scalable Generalized Linear Bandits: Online Computation and Hashing
    arXiv: Machine Learning, 2017
    Co-Authors: Kwang-sung Jun, Aniruddha Bhargava, Robert Nowak, Rebecca Willett
    Abstract:

    Generalized Linear Bandits (GLBs), a natural extension of stochastic linear bandits, have been popular and successful in recent years. However, existing GLBs scale poorly with the number of rounds and the number of arms, limiting their utility in practice. This paper proposes new, scalable solutions to the GLB problem in two respects. First, unlike existing GLBs, whose per-time-step space and time complexity grow at least linearly with time $t$, we propose a new algorithm that performs Online Computations with constant space and time complexity. At its heart is a novel Generalized Linear extension of the Online-to-confidence-set Conversion (GLOC) that takes \emph{any} Online learning algorithm and turns it into a GLB algorithm. As a special case, we apply GLOC to the Online Newton step algorithm, which results in a low-regret GLB algorithm with much lower time and memory complexity than prior work. Second, for the case where the number $N$ of arms is very large, we propose new algorithms in which each next arm is selected via an inner product search. Such methods can be implemented via hashing algorithms (i.e., "hash-amenable") and result in a time complexity sublinear in $N$. While a Thompson sampling extension of GLOC is hash-amenable, its regret bound for $d$-dimensional arm sets scales with $d^{3/2}$, whereas GLOC's regret bound scales with $d$. Towards closing this gap, we propose a new hash-amenable algorithm whose regret bound scales with $d^{5/4}$. Finally, we propose a fast approximate hash-key Computation (inner product) with better accuracy than the state of the art, which may be of independent interest. We conclude with preliminary experimental results confirming the merits of our methods.
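As a rough illustration of the first contribution (per-round cost independent of $t$), the sketch below keeps only an $O(d)$ parameter estimate and performs a single online gradient step per round instead of refitting a GLM on all past data, then selects the next arm by an inner-product argmax, echoing the paper's second theme. This is a heavily simplified stand-in, not GLOC or the Online Newton step: the epsilon-greedy exploration, the step size, and all names are illustrative.

```python
# Simplified sketch of a constant-per-round generalized linear bandit loop.
# Memory is O(d) (just theta), and each round costs O(d * num_arms) for the
# inner-product argmax plus O(d) for one gradient step -- no dependence on t.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def online_glb(arms, reward_fn, rounds=2000, eta=0.5, eps=0.1, seed=0):
    rng = random.Random(seed)
    d = len(arms[0])
    theta = [0.0] * d                        # O(d) state, independent of t
    for _ in range(rounds):
        if rng.random() < eps:               # occasional exploration
            x = rng.choice(arms)
        else:                                # exploit: inner-product argmax
            x = max(arms, key=lambda a: sum(ai * ti for ai, ti in zip(a, theta)))
        r = reward_fn(x, rng)
        p = sigmoid(sum(ai * ti for ai, ti in zip(x, theta)))
        for i in range(d):                   # one online gradient step, O(d)
            theta[i] += eta * (r - p) * x[i]
    return theta

# Example: two arms, Bernoulli rewards from a true parameter (2, -2)
def bernoulli_reward(x, rng):
    p = sigmoid(2.0 * x[0] - 2.0 * x[1])
    return 1.0 if rng.random() < p else 0.0

theta = online_glb([(1.0, 0.0), (0.0, 1.0)], bernoulli_reward)
```

The paper's actual algorithms replace the naive gradient step with an online learner wrapped in a confidence set (GLOC), and replace the exact argmax with hashing-based approximate inner-product search when $N$ is large.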

  • NIPS - Scalable Generalized Linear Bandits: Online Computation and Hashing
    2017
    Co-Authors: Kwang-sung Jun, Aniruddha Bhargava, Robert Nowak, Rebecca Willett
    Abstract:

    Generalized Linear Bandits (GLBs), a natural extension of stochastic linear bandits, have been popular and successful in recent years. However, existing GLBs scale poorly with the number of rounds and the number of arms, limiting their utility in practice. This paper proposes new, scalable solutions to the GLB problem in two respects. First, unlike existing GLBs, whose per-time-step space and time complexity grow at least linearly with time $t$, we propose a new algorithm that performs Online Computations with constant space and time complexity. At its heart is a novel Generalized Linear extension of the Online-to-confidence-set Conversion (GLOC) that takes \emph{any} Online learning algorithm and turns it into a GLB algorithm. As a special case, we apply GLOC to the Online Newton step algorithm, which results in a low-regret GLB algorithm with much lower time and memory complexity than prior work. Second, for the case where the number $N$ of arms is very large, we propose new algorithms in which each next arm is selected via an inner product search. Such methods can be implemented via hashing algorithms (i.e., "hash-amenable") and result in a time complexity sublinear in $N$. While a Thompson sampling extension of GLOC is hash-amenable, its regret bound for $d$-dimensional arm sets scales with $d^{3/2}$, whereas GLOC's regret bound scales with $d$. Towards closing this gap, we propose a new hash-amenable algorithm whose regret bound scales with $d^{5/4}$. Finally, we propose a fast approximate hash-key Computation (inner product) with better accuracy than the state of the art, which may be of independent interest. We conclude with preliminary experimental results confirming the merits of our methods.

J.k. Hedrick - One of the best experts on this subject based on the ideXlab platform.

  • Air-to-fuel ratio control of spark ignition engines using Gaussian network sliding control
    IEEE Transactions on Control Systems Technology, 1998
    Co-Authors: Mooncheol Won, Seibum B. Choi, J.k. Hedrick
    Abstract:

    This paper treats air-to-fuel ratio control of a spark ignition engine. A direct adaptive control method using Gaussian neural networks is developed to compensate for transient fueling dynamics and for measurement bias in the mass air flow rate into the manifold. The transient fueling compensation is coupled with a dynamic sliding mode control technique that governs the fueling rate when the throttle change is not rapid. The proposed controller is simple enough for Online Computation and has been successfully implemented on an automotive engine with a multiport fuel injection system.
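To see why this family of controllers is "simple enough for Online Computation", the sketch below implements a generic scalar sliding-mode law with a boundary layer: a few arithmetic operations per sample. It is not the paper's engine controller; the first-order plant, the gains, and the setpoint (14.7, the stoichiometric air-to-fuel ratio for gasoline) are illustrative stand-ins.

```python
# Illustrative sketch of a scalar sliding-mode control law with a boundary
# layer. The per-sample cost is a subtraction, a clamp, and a multiply, which
# is what makes sliding mode control practical to compute online at engine
# sampling rates.

def sliding_mode_step(x, x_ref, k=50.0, phi=0.5):
    """Saturated sign law u = -k * sat(s/phi), with sliding variable s = x - x_ref."""
    s = x - x_ref
    sat = max(-1.0, min(1.0, s / phi))   # boundary layer to reduce chattering
    return -k * sat

def simulate(x0=12.0, x_ref=14.7, a=1.0, dt=0.001, steps=5000):
    """Drive a first-order plant x' = -a*x + u toward the setpoint."""
    x = x0
    for _ in range(steps):
        u = sliding_mode_step(x, x_ref)
        x += (-a * x + u) * dt
    return x
```

Inside the boundary layer the law degrades to a high-gain proportional term, so the state settles near (not exactly at) the setpoint; the papers above pair such a law with adaptive neural compensation to remove the remaining model error.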

  • Air to fuel ratio control of spark ignition engines using dynamic sliding mode control and Gaussian neural network
    Proceedings of the 1995 American Control Conference (ACC '95), 1995
    Co-Authors: Mooncheol Won, Seibum B. Choi, J.k. Hedrick
    Abstract:

    This paper deals with air-to-fuel ratio control of a spark ignition engine, whose pollutant emissions are a major cause of air pollution. A direct adaptive control method using Gaussian neural networks is developed to compensate for transient fueling dynamics and for measurement error in the mass air flow rate into the cylinder. The transient fueling compensation is coupled with a dynamic sliding mode control technique that governs the steady-state fueling rate. The proposed controller is simple enough for Online Computation and has been implemented on an automotive engine using a PC-386.

Konstantin Kondak - One of the best experts on this subject based on the ideXlab platform.

  • Control and Online Computation of stable movement for biped robots
    Intelligent Robots and Systems, 2003
    Co-Authors: Konstantin Kondak, Gunter Hommel
    Abstract:

    The presented algorithm enables a biped to perform a stable movement to a defined goal without using any precomputed trajectories. The algorithm merges trajectory generation with control along the generated trajectory, and can be used for global control, for local control along an existing trajectory, and for Online Computation of trajectories for stable movement. It is based on decoupling the non-linear model and adjusting the steering torques to account both for the overall stability of the biped and for reaching the goal. The algorithm is applied to the model of a biped moving in the sagittal plane. By specifying the goal as an arbitrary pelvis position, the biped can perform movements such as sitting down or standing up. The performance of the algorithm is demonstrated in simulation.

  • IROS - Control and Online Computation of stable movement for biped robots
    Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), 2003
    Co-Authors: Konstantin Kondak, Gunter Hommel
    Abstract:

    The presented algorithm enables a biped to perform a stable movement to a defined goal without using any precomputed trajectories. The algorithm merges trajectory generation with control along the generated trajectory, and can be used for global control, for local control along an existing trajectory, and for Online Computation of trajectories for stable movement. It is based on decoupling the non-linear model and adjusting the steering torques to account both for the overall stability of the biped and for reaching the goal. The algorithm is applied to the model of a biped moving in the sagittal plane. By specifying the goal as an arbitrary pelvis position, the biped can perform movements such as sitting down or standing up. The performance of the algorithm is demonstrated in simulation.

Kwang-sung Jun - One of the best experts on this subject based on the ideXlab platform.

  • Scalable Generalized Linear Bandits: Online Computation and Hashing
    arXiv: Machine Learning, 2017
    Co-Authors: Kwang-sung Jun, Aniruddha Bhargava, Robert Nowak, Rebecca Willett
    Abstract:

    Generalized Linear Bandits (GLBs), a natural extension of stochastic linear bandits, have been popular and successful in recent years. However, existing GLBs scale poorly with the number of rounds and the number of arms, limiting their utility in practice. This paper proposes new, scalable solutions to the GLB problem in two respects. First, unlike existing GLBs, whose per-time-step space and time complexity grow at least linearly with time $t$, we propose a new algorithm that performs Online Computations with constant space and time complexity. At its heart is a novel Generalized Linear extension of the Online-to-confidence-set Conversion (GLOC) that takes \emph{any} Online learning algorithm and turns it into a GLB algorithm. As a special case, we apply GLOC to the Online Newton step algorithm, which results in a low-regret GLB algorithm with much lower time and memory complexity than prior work. Second, for the case where the number $N$ of arms is very large, we propose new algorithms in which each next arm is selected via an inner product search. Such methods can be implemented via hashing algorithms (i.e., "hash-amenable") and result in a time complexity sublinear in $N$. While a Thompson sampling extension of GLOC is hash-amenable, its regret bound for $d$-dimensional arm sets scales with $d^{3/2}$, whereas GLOC's regret bound scales with $d$. Towards closing this gap, we propose a new hash-amenable algorithm whose regret bound scales with $d^{5/4}$. Finally, we propose a fast approximate hash-key Computation (inner product) with better accuracy than the state of the art, which may be of independent interest. We conclude with preliminary experimental results confirming the merits of our methods.

  • NIPS - Scalable Generalized Linear Bandits: Online Computation and Hashing
    2017
    Co-Authors: Kwang-sung Jun, Aniruddha Bhargava, Robert Nowak, Rebecca Willett
    Abstract:

    Generalized Linear Bandits (GLBs), a natural extension of stochastic linear bandits, have been popular and successful in recent years. However, existing GLBs scale poorly with the number of rounds and the number of arms, limiting their utility in practice. This paper proposes new, scalable solutions to the GLB problem in two respects. First, unlike existing GLBs, whose per-time-step space and time complexity grow at least linearly with time $t$, we propose a new algorithm that performs Online Computations with constant space and time complexity. At its heart is a novel Generalized Linear extension of the Online-to-confidence-set Conversion (GLOC) that takes \emph{any} Online learning algorithm and turns it into a GLB algorithm. As a special case, we apply GLOC to the Online Newton step algorithm, which results in a low-regret GLB algorithm with much lower time and memory complexity than prior work. Second, for the case where the number $N$ of arms is very large, we propose new algorithms in which each next arm is selected via an inner product search. Such methods can be implemented via hashing algorithms (i.e., "hash-amenable") and result in a time complexity sublinear in $N$. While a Thompson sampling extension of GLOC is hash-amenable, its regret bound for $d$-dimensional arm sets scales with $d^{3/2}$, whereas GLOC's regret bound scales with $d$. Towards closing this gap, we propose a new hash-amenable algorithm whose regret bound scales with $d^{5/4}$. Finally, we propose a fast approximate hash-key Computation (inner product) with better accuracy than the state of the art, which may be of independent interest. We conclude with preliminary experimental results confirming the merits of our methods.