Talk:State–action–reward–state–action
Revision as of 00:30, 17 November 2011
WikiProject Robotics (Stub-class, Low-importance)
Date
When did this algorithm get invented? XApple 19:46, 7 May 2007 (UTC)
- First published 1994, added info. 220.253.135.178 16:50, 21 May 2007 (UTC)
- Hey, thanks a lot for contributing to wikipedia ! XApple 23:05, 27 May 2007 (UTC)
Updates
For updates, SARSA uses the next action actually chosen, not the best next action, so the update reflects the value of the last state–action pair under the current policy. If you use the best next action instead, you end up with Watkins's Q-learning, to which SARSA was proposed as an alternative. By updating with the value of the best next action (Watkins's Q-learning), the update can over-estimate values, because the control method will not pick that action every time (it must balance exploration and exploitation). A comparison between Q-learning and SARSA, perhaps the Cliff Walking example from Sutton & Barto's 'Reinforcement Learning: An Introduction' (1998), may be useful to clarify the differences and the resulting behaviour --131.217.6.6 08:17, 29 May 2007 (UTC)
this is the algorithm presented in Q-Learning:

 Q(s_t, a_t) ← Q(s_t, a_t) + α[r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t)]

SARSA:

 Q(s_t, a_t) ← Q(s_t, a_t) + α[r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t)]
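The difference between the two update rules can be sketched in Python. This is a minimal illustration (function and variable names are my own, not from the article): both updates are identical except for the bootstrap term, which Q-learning takes from the best next action and SARSA from the next action actually chosen.

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Watkins's Q-learning: bootstrap from the *best* next action."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """SARSA: bootstrap from the action actually *chosen* next."""
    target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (target - Q[s][a])
```

If the exploring policy happens to pick a sub-optimal `a_next`, the two targets diverge: Q-learning still backs up the maximum, which is why it can over-estimate under an ε-greedy control policy, while SARSA backs up the value of the behaviour it will actually follow.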
Does it use "backpropagation"? Does it update the previous Q entry with the future reward? Dspattison (talk) 19:20, 19 March 2008 (UTC)
Correct Algorithm?
Is the algorithm given correct? Should it not be R(t) rather than R(t+1)? I've looked at [1] and that seems to support what Thrun & Norvig teach in their Stanford ai-class wheeliebin (talk) 04:58, 12 November 2011 (UTC)
Note also that in section 1, where the page states "Taking every letter in the quintuple", it lists "R(t+1)". Shouldn't this be "R(t)" as well? -- RDK
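On the R(t) vs. R(t+1) question: under the convention used in Sutton & Barto, the reward received after taking a_t in s_t is indexed t+1, so the quintuple is (s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}). A minimal sketch of one SARSA step (environment and policy interfaces here are hypothetical, for illustration only) shows where each index lands:

```python
# Hypothetical sketch of one SARSA step. Taking a_t in s_t yields the
# reward r_{t+1} and the next state s_{t+1}; the next action a_{t+1} is
# then chosen by the current policy, completing the quintuple.

def sarsa_step(Q, env, policy, s_t, a_t, alpha=0.1, gamma=0.9):
    r_t1, s_t1 = env.step(s_t, a_t)   # reward indexed t+1: it follows a_t
    a_t1 = policy(Q, s_t1)            # next action under the current policy
    Q[(s_t, a_t)] += alpha * (r_t1 + gamma * Q[(s_t1, a_t1)] - Q[(s_t, a_t)])
    return s_t1, a_t1
```

So both conventions describe the same quantity; whether the page writes R(t) or R(t+1) depends only on whether the reward is stamped with the time of the action that caused it or the time at which it arrives.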