Q-learning

Q-learning is a reinforcement learning technique that works by learning an action-value function that gives the expected utility of taking a given action in a given state and following a fixed policy thereafter. One of the strengths of Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment. A recent variation called delayed Q-learning has shown substantial improvements, bringing probably approximately correct (PAC) learning bounds to Markov decision processes.[1]

Algorithm

The problem model consists of an agent, a set of states $S$ and a set of actions per state $A$. By performing an action $a \in A$, the agent can move from state to state. Each state provides the agent a reward (a real or natural number) or punishment (a negative reward). The goal of the agent is to maximize its total reward. It does this by learning which action is optimal for each state.
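As a purely illustrative example (not from the article), such a problem model can be written down explicitly in Python as a table of transitions and rewards; the state names, action names and reward values below are hypothetical.

# Hypothetical problem model: states S, per-state actions A, transitions and rewards.
S = ["home", "work"]
A = {"home": ["stay", "commute"], "work": ["stay", "go_home"]}

# next_state[s][a] and reward[s][a] fully describe this (deterministic) environment.
next_state = {
    "home": {"stay": "home", "commute": "work"},
    "work": {"stay": "work", "go_home": "home"},
}
reward = {
    "home": {"stay": 0.0, "commute": -1.0},   # commuting costs effort
    "work": {"stay": 2.0, "go_home": 0.0},    # being at work pays off
}

The agent's task is then to learn, for every state in S, which of its available actions maximizes the total reward over time.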

The algorithm therefore has a function which calculates the Quality of a state-action combination:

$Q: S \times A \to \mathbb{R}$

Before learning has started, Q returns a fixed value, chosen by the designer. Then, each time the agent is given a reward (the state has changed), new values are calculated for each combination of a state s from S and an action a from A. The core of the algorithm is a simple value iteration update. It assumes the old value and makes a correction based on the new information:

$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha_t(s_t, a_t) \left[ r_t + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]$

where $r_t$ is the reward observed from $s_t$, and $\alpha_t(s, a)$ ($0 < \alpha \le 1$) is the learning rate (which may be the same for all pairs). The discount factor $\gamma$ is such that $0 \le \gamma < 1$.

The above formula is equivalent to:

$Q(s_t, a_t) \leftarrow \left(1 - \alpha_t(s_t, a_t)\right) Q(s_t, a_t) + \alpha_t(s_t, a_t) \left[ r_t + \gamma \max_{a} Q(s_{t+1}, a) \right]$

An episode of the algorithm ends when state $s_{t+1}$ is a final state (or, "absorbing state").

Note that for all final states $s_f$, $Q(s_f, a)$ is never updated and thus retains its initial value.
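A minimal tabular sketch of this update rule in Python is shown below. The corridor environment, the constants ALPHA, GAMMA and EPSILON, and the epsilon-greedy behaviour policy are illustrative assumptions, not part of the algorithm description above.

import random

# Hypothetical environment: a corridor of states 0-1-2-3, where 3 is absorbing.
N_STATES = 4
ACTIONS = ["left", "right"]
TERMINAL = 3

def step(state, action):
    """Return (next_state, reward) for taking `action` in `state`."""
    if action == "right":
        next_state = min(state + 1, N_STATES - 1)
    else:
        next_state = max(state - 1, 0)
    reward = 1.0 if next_state == TERMINAL else 0.0
    return next_state, reward

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate of the epsilon-greedy behaviour policy

# Q table, initialised to a fixed value chosen by the designer (here 0).
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}

for episode in range(500):
    state = 0
    while state != TERMINAL:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(Q[state], key=Q[state].get)

        next_state, reward = step(state, action)

        # Core update: Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
        best_next = max(Q[next_state].values())
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

        state = next_state

print(Q)  # the entries for the terminal state 3 keep their initial value of 0

The episode loop ends exactly when the absorbing state is reached, and the terminal state's Q values are never written to, matching the note above.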

Influence of variables on the algorithm

Learning rate

The learning rate determines to what extent the newly acquired information overrides the old information. A factor of 0 makes the agent learn nothing, while a factor of 1 makes the agent consider only the most recent information.

Discount factor

The discount factor determines the importance of future rewards. A factor of 0 makes the agent "opportunistic" by considering only current rewards, while a factor approaching 1 makes it strive for a long-term high reward. If the discount factor meets or exceeds 1, the Q values may diverge.
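As a concrete (hypothetical) numerical illustration of how both factors enter the update, suppose $Q(s_t, a_t) = 2$, the observed reward is $r_t = 1$, the best value available in the next state is $\max_a Q(s_{t+1}, a) = 4$, the learning rate is $\alpha = 0.5$ and the discount factor is $\gamma = 0.9$. The update then gives

$Q(s_t, a_t) \leftarrow 2 + 0.5 \left[ 1 + 0.9 \cdot 4 - 2 \right] = 2 + 0.5 \cdot 2.6 = 3.3$

With $\alpha = 0$ the value would remain 2, with $\alpha = 1$ it would jump directly to the target $1 + 0.9 \cdot 4 = 4.6$, and with $\gamma = 0$ the future term $0.9 \cdot 4$ would disappear from the target entirely.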

Implementation

Q-learning at its simplest uses tables to store data. This quickly becomes impractical as the complexity of the system being monitored or controlled increases. One answer to this problem is to use an (adapted) artificial neural network as a function approximator, as demonstrated by Tesauro in his temporal difference learning research on Backgammon. An adaptation of the standard neural network is required because the required result (from which the error signal is generated) is itself generated at run-time.
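As an illustration of this idea (not Tesauro's actual setup), the sketch below approximates the Q table with a small one-hidden-layer network trained by Q-learning; the corridor environment, the one-hot state features, the network size and the step sizes are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: a corridor of 6 states, state 5 is absorbing.
N_STATES, N_ACTIONS, TERMINAL = 6, 2, 5   # actions: 0 = left, 1 = right

def step(state, action):
    next_state = min(state + 1, TERMINAL) if action == 1 else max(state - 1, 0)
    return next_state, (1.0 if next_state == TERMINAL else 0.0)

def features(state):
    # One-hot state encoding; a richer feature vector could be used instead.
    x = np.zeros(N_STATES)
    x[state] = 1.0
    return x

# Tiny network: Q(s, .) = W2 @ relu(W1 @ x + b1) + b2, one output per action.
HIDDEN = 16
W1 = rng.normal(scale=0.1, size=(HIDDEN, N_STATES)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_ACTIONS, HIDDEN)); b2 = np.zeros(N_ACTIONS)

def forward(x):
    pre = W1 @ x + b1
    h = np.maximum(pre, 0.0)          # ReLU hidden layer
    return pre, h, W2 @ h + b2        # pre-activations, hidden activations, Q-values

ALPHA, GAMMA, EPSILON = 0.05, 0.9, 0.1

for episode in range(2000):
    state = int(rng.integers(TERMINAL))       # random non-terminal starting state
    for _ in range(100):                      # cap episode length
        x = features(state)
        pre, h, q = forward(x)
        action = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else int(np.argmax(q))

        next_state, reward = step(state, action)

        # The training target is generated at run-time from the network's own
        # estimate of the next state's value (the adaptation mentioned above).
        if next_state == TERMINAL:
            target = reward
        else:
            target = reward + GAMMA * np.max(forward(features(next_state))[2])
        delta = q[action] - target            # TD error, target held fixed

        # Manual backpropagation of the squared error for the chosen action only.
        grad_W2 = np.zeros_like(W2); grad_b2 = np.zeros_like(b2)
        grad_W2[action] = delta * h
        grad_b2[action] = delta
        dh = delta * W2[action] * (pre > 0)   # back through the ReLU
        W1 -= ALPHA * np.outer(dh, x); b1 -= ALPHA * dh
        W2 -= ALPHA * grad_W2;         b2 -= ALPHA * grad_b2

        state = next_state
        if state == TERMINAL:
            break

print([round(float(np.max(forward(features(s))[2])), 2) for s in range(N_STATES)])

In place of the manual gradients, any standard neural-network library could be used; the essential point is that the target value r + γ max_a Q(s', a) is recomputed from the current network at every step rather than being a fixed training label.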

Early study

Q-learning was first introduced by Watkins[2] in 1989.

The convergence proof was presented later by Watkins and Dayan[3] in 1992.

References

  1. ^ Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman (2006). "PAC model-free reinforcement learning". In Proceedings of the 23rd ICML, pages 881–888.
  2. ^ Watkins, C.J.C.H. (1989). Learning from Delayed Rewards. Ph.D. thesis, Cambridge University.
  3. ^ Watkins, C.J.C.H. and Dayan, P. (1992). "Q-learning". Machine Learning, 8:279–292.