The Experts below are selected from a list of 115,377 Experts worldwide, ranked by the ideXlab platform

Kenji Doya - One of the best experts on this subject based on the ideXlab platform.

Liming Xiang - One of the best experts on this subject based on the ideXlab platform.

  • ICIC (1) - Kernel-Based reinforcement learning
    Lecture Notes in Computer Science, 2006
    Co-Authors: Guanghua Hu, Liming Xiang
    Abstract:

    We consider the problem of approximating the cost-to-go functions in reinforcement learning. By mapping the state implicitly into a feature space, we perform a simple algorithm in the feature space, which corresponds to a complex algorithm in the original state space. Two kernel-based reinforcement learning algorithms, the ε-insensitive kernel-based reinforcement learning (ε-KRL) and the least-squares kernel-based reinforcement learning (LS-KRL), are proposed. An example shows that the proposed methods can deal effectively with the reinforcement learning problem without having to explore many states.
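The least-squares variant can be illustrated with a short sketch: value estimates are fit by kernel ridge regression, so all computation in the implicit feature space reduces to pairwise kernel evaluations. The Gaussian kernel, the regularization constant, and the function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Implicit feature-space mapping via a Gaussian (RBF) kernel.
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)) ** 2 / (2 * sigma ** 2))

def fit_kernel_values(states, targets, reg=1e-3, sigma=1.0):
    # Least-squares fit of value estimates in the kernel-induced feature
    # space (in the spirit of LS-KRL; details here are assumptions).
    n = len(states)
    K = np.array([[gaussian_kernel(states[i], states[j], sigma)
                   for j in range(n)] for i in range(n)])
    # Regularized linear system: (K + reg*I) alpha = targets
    return np.linalg.solve(K + reg * np.eye(n), np.asarray(targets, dtype=float))

def predict_value(x, states, alpha, sigma=1.0):
    # The value at a new state is a kernel-weighted combination of the
    # sampled states, so no explicit feature vectors are ever formed.
    return sum(a * gaussian_kernel(x, s, sigma) for a, s in zip(alpha, states))
```

Because prediction only touches kernel values between the query and the sampled states, the agent never needs an explicit enumeration of the state space, which matches the abstract's point about not exploring many states.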

Csaba Szepesvári - One of the best experts on this subject based on the ideXlab platform.

  • Algorithms for reinforcement learning
    Synthesis Lectures on Artificial Intelligence and Machine Learning, 2010
    Co-Authors: Csaba Szepesvári
    Abstract:

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long-term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms' merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems, describe the core ideas, note a large number of state-of-the-art algorithms, and follow with a discussion of their theoretical properties and limitations.
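A minimal tabular Q-learning loop illustrates the dynamic-programming flavor these algorithms build on: values are improved by repeated Bellman backups computed from sampled transitions, using only the partial, delayed feedback the abstract describes. The environment interface `step(s, a)` and all parameter values are assumptions for illustration, not from the book.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1):
    # Tabular Q-learning: a core reinforcement-learning algorithm built on
    # the Bellman (dynamic-programming) update. `step(s, a)` is an assumed
    # environment interface returning (next_state, reward, done).
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection: occasional exploration.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda act: Q[s][act])
            s2, r, done = step(s, a)
            # Bellman backup toward the sampled one-step target.
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

On a small chain environment where one action moves toward a rewarding terminal state, the learned Q-table comes to prefer the advancing action in every state, even though each individual update sees only one transition's reward.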

Kurt Driessens - One of the best experts on this subject based on the ideXlab platform.

  • Relational reinforcement learning
    Multi-Agent Systems and Applications, 2001
    Co-Authors: Kurt Driessens
    Abstract:

    This paper presents an introduction to reinforcement learning and relational reinforcement learning at a level to be understood by students and researchers with different backgrounds. It gives an overview of the fundamental principles and techniques of reinforcement learning without involving a rigorous deduction of the mathematics involved through the use of an example application. Then, relational reinforcement learning is presented as a combination of reinforcement learning with relational learning. Its advantages — such as the possibility of using structural representations, making abstraction from specific goals pursued and exploiting the results of previous learning phases — are discussed.

  • EASSS - Relational reinforcement learning
    Multi-Agent Systems and Applications, 2001
    Co-Authors: Kurt Driessens
    Abstract:

    This paper presents an introduction to reinforcement learning and relational reinforcement learning at a level to be understood by students and researchers with different backgrounds. It gives an overview of the fundamental principles and techniques of reinforcement learning without involving a rigorous deduction of the mathematics involved through the use of an example application. Then, relational reinforcement learning is presented as a combination of reinforcement learning with relational learning. Its advantages -- such as the possibility of using structural representations, making abstraction from specific goals pursued and exploiting the results of previous learning phases -- are discussed.

  • Relational reinforcement learning
    Machine Learning, 2001
    Co-Authors: Sašo Džeroski, Luc De Raedt, Kurt Driessens
    Abstract:

    Relational reinforcement learning is presented: a learning technique that combines reinforcement learning with relational learning or inductive logic programming. Due to the use of a more expressive representation language to represent states, actions and Q-functions, relational reinforcement learning can potentially be applied to a new range of learning tasks. One such task that we investigate is planning in the blocks world, where it is assumed that the effects of the actions are unknown to the agent and the agent has to learn a policy. Within this simple domain we show that relational reinforcement learning solves some existing problems with reinforcement learning. In particular, relational reinforcement learning allows us to employ structural representations, to abstract from specific goals pursued and to exploit the results of previous learning phases when addressing new (more complex) situations.
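The goal-abstraction idea can be sketched in a few lines: if concrete block names are replaced by their roles relative to the current goal, values learned for one goal carry over to structurally identical goals. The predicate encoding and the `abstract_state` helper below are hypothetical illustrations of this idea, not the paper's actual representation language.

```python
def abstract_state(facts, goal_blocks):
    # Relational abstraction (a sketch of the idea, not the paper's exact
    # algorithm): rename the blocks mentioned in the goal to positional
    # variables ?g0, ?g1, ... and collapse all other objects to ?other,
    # so states that are structurally alike map to the same key.
    rename = {b: f"?g{i}" for i, b in enumerate(goal_blocks)}
    return frozenset((pred, rename.get(x, "?other"), rename.get(y, "?other"))
                     for pred, x, y in facts)
```

Under this abstraction, a blocks-world state pursuing the goal on(a, b) and one pursuing on(c, d) reduce to the same relational key, so a Q-function indexed by abstract states reuses what it learned for one goal when addressing the other.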

Guanghua Hu - One of the best experts on this subject based on the ideXlab platform.

  • ICIC (1) - Kernel-Based reinforcement learning
    Lecture Notes in Computer Science, 2006
    Co-Authors: Guanghua Hu, Liming Xiang
    Abstract:

    We consider the problem of approximating the cost-to-go functions in reinforcement learning. By mapping the state implicitly into a feature space, we perform a simple algorithm in the feature space, which corresponds to a complex algorithm in the original state space. Two kernel-based reinforcement learning algorithms, the ε-insensitive kernel-based reinforcement learning (ε-KRL) and the least-squares kernel-based reinforcement learning (LS-KRL), are proposed. An example shows that the proposed methods can deal effectively with the reinforcement learning problem without having to explore many states.