A gambler has $2. She is allowed to play a game of chance 4 times, and her goal is to maximize the probability of ending up with at least $6. If the gambler bets $b on a play of the game, then with probability 0.4 she wins the game, recoups the initial bet, and increases her capital position by $b; with probability 0.6, she loses the bet amount $b; all plays are pairwise independent. On any play of the game, the gambler may not bet more money than she has available at the beginning of that play.[1]
Stochastic dynamic programming can be employed to model this problem and determine a betting strategy that, for instance, maximizes the gambler's probability of attaining a wealth of at least $6 by the end of the betting horizon.
Note that if there is no limit to the number of games that can be played, the problem becomes a variant of the well known St. Petersburg paradox.
Formal background
Consider a discrete system defined on $n$ stages in which each stage $t = 1, \ldots, n$ is characterized by
an initial state $s_t \in S_t$, where $S_t$ is the set of feasible states at the beginning of stage $t$;
a decision variable $x_t \in X_t$, where $X_t$ is the set of feasible actions at stage $t$ – note that $X_t$ may be a function of the initial state $s_t$;
an immediate cost/reward function $p_t(s_t, x_t)$, representing the cost/reward at stage $t$ if $s_t$ is the initial state and $x_t$ the action selected;
a state transition function $g_t(s_t, x_t)$ that leads the system towards state $s_{t+1} = g_t(s_t, x_t)$.
Let $f_t(s_t)$ represent the optimal cost/reward obtained by following an optimal policy over stages $t, t+1, \ldots, n$. Without loss of generality, in what follows we will consider a reward maximisation setting. In deterministic dynamic programming one usually deals with functional equations taking the following structure
$f_t(s_t) = \max_{x_t \in X_t} \{ p_t(s_t, x_t) + f_{t+1}(s_{t+1}) \},$
where $s_{t+1} = g_t(s_t, x_t)$ and the boundary condition of the system is
$f_n(s_n) = \max_{x_n \in X_n} p_n(s_n, x_n).$
The aim is to determine the set of optimal actions that maximise $f_1(s_1)$. Given the current state $s_t$ and the current action $x_t$, we know with certainty the reward secured during the current stage and – thanks to the state transition function $g_t$ – the future state towards which the system transitions.
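A minimal Python sketch of this deterministic recursion is shown below; the helper functions actions, reward and transition, which stand for $X_t$, $p_t$ and $g_t$, are hypothetical placeholders that a concrete problem would have to supply.

def f(t: int, s, n: int, actions, reward, transition):
    """Deterministic backward recursion: value of an optimal policy over stages t..n."""
    if t == n:  # boundary condition: last stage, no future value
        return max(reward(t, s, x) for x in actions(t, s))
    # functional equation: immediate reward plus value of the successor state
    return max(reward(t, s, x) + f(t + 1, transition(t, s, x), n, actions, reward, transition)
               for x in actions(t, s))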
In practice, however, even if we know the state of the system at the beginning of the current stage as well as the decision taken, the state of the system at the beginning of the next stage and the current period reward are often random variables that can be observed only at the end of the current stage.
Stochastic dynamic programming deals with problems in which the current period reward and/or the next period state are random, i.e. with multi-stage stochastic systems. The decision maker's goal is to maximise expected (discounted) reward over a given planning horizon.
In their most general form, stochastic dynamic programs deal with functional equations taking the following structure
$f_t(s_t) = \max_{x_t \in X_t(s_t)} \mathbb{E}\{ p_t(s_t, x_t) + \alpha f_{t+1}(s_{t+1}) \},$
where
$f_t(s_t)$ is the maximum expected reward that can be attained during stages $t, t+1, \ldots, n$, given state $s_t$ at the beginning of stage $t$;
$x_t$ belongs to the set $X_t(s_t)$ of feasible actions at stage $t$ given initial state $s_t$;
$\alpha$ is the discount factor;
$s_{t+1}$ is the random state at the beginning of stage $t+1$, whose distribution is determined by the current state $s_t$ and the selected action $x_t$.
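The deterministic sketch above can be adapted to this stochastic setting, assuming a hypothetical function outcomes that enumerates triples (probability, next state, reward) describing the random evolution of the system during a stage; the expectation in the functional equation then becomes a probability-weighted sum.

def f(t: int, s, n: int, actions, outcomes, alpha: float = 1.0):
    """Stochastic backward recursion: maximum expected (discounted) reward over stages t..n."""
    best = float("-inf")
    for x in actions(t, s):
        # expectation over the random next state and period reward
        value = sum(p * (r + (alpha * f(t + 1, s_next, n, actions, outcomes, alpha)
                              if t < n else 0.0))
                    for p, s_next, r in outcomes(t, s, x))
        best = max(best, value)
    return best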
The gambling game can be formulated as a stochastic dynamic program as follows: there are $n = 4$ games (i.e. stages) in the planning horizon;
the state $s_t$ in period $t$ represents the initial wealth at the beginning of period $t$;
the action given state $s_t$ in period $t$ is the bet amount $b_t$;
the transition probability from state $s_t$ to state $s_{t+1}$ when action $b_t$ is taken in state $s_t$ is easily derived from the probability of winning (0.4) or losing (0.6) a game: with probability 0.4 the gambler moves to $s_{t+1} = s_t + b_t$, and with probability 0.6 to $s_{t+1} = s_t - b_t$.
Let $f_t(s_t)$ be the probability that, by the end of game 4, the gambler has at least $6, given that she has wealth $s_t$ at the beginning of game $t$;
the immediate profit incurred if action $b_t$ is taken in state $s_t$ is given by the expected value $0.4\, f_{t+1}(s_t + b_t) + 0.6\, f_{t+1}(s_t - b_t)$.
To derive the functional equation, define $b_t$ as a bet that attains $f_t(s_t)$; then at the beginning of game $t = 4$
$f_4(s_4) = 0$ if it is impossible to attain the goal, i.e. for $s_4 < 3$ (even by betting everything she cannot reach the $6 target);
$f_4(s_4) = 1$ if the goal is attained, i.e. for $s_4 \geq 6$;
$f_4(s_4) = 0.4$ if the gambler should bet enough to attain the goal, i.e. for $3 \leq s_4 < 6$ (she bets $6 - s_4$ or more and wins with probability 0.4).
For $t < 4$, the functional equation is $f_t(s_t) = \max_{b_t} \{ 0.4\, f_{t+1}(s_t + b_t) + 0.6\, f_{t+1}(s_t - b_t) \}$, where $b_t$ ranges in $\{0, 1, \ldots, s_t\}$; the aim is to find $f_1(2)$.
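As an illustration of how the recursion unfolds, consider a wealth of $2 at the beginning of game 3; substituting the boundary values of $f_4$ into the functional equation gives
$f_3(2) = \max_{b_3 \in \{0,1,2\}} \{\, 0.4\, f_4(2 + b_3) + 0.6\, f_4(2 - b_3) \,\} = \max\{\, f_4(2),\; 0.4 f_4(3) + 0.6 f_4(1),\; 0.4 f_4(4) + 0.6 f_4(0) \,\} = \max\{0,\; 0.16,\; 0.16\} = 0.16,$
so with $2 at the beginning of game 3 the gambler reaches the target with probability 0.16, and betting either $1 or $2 is optimal in that state.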
Given the functional equation, an optimal betting policy can be obtained via forward recursion or backward recursion algorithms, as outlined below.
Backward recursion
Given a bounded state space, backward recursion (Bertsekas 2000) begins by tabulating $f_n(s_n)$ for every possible state $s_n$ belonging to the final stage $n$. Once these values are tabulated, together with the associated optimal state-dependent actions $x_n(s_n)$, it is possible to move to stage $n-1$ and tabulate $f_{n-1}(s_{n-1})$ for all possible states belonging to stage $n-1$. The process continues by considering in a backward fashion all remaining stages up to the first one. Once this tabulation process is complete, $f_1(s_1)$ – the value of an optimal policy given initial state $s_1$ – as well as the associated optimal action $x_1(s_1)$, can be easily retrieved from the table. Since the computation proceeds in a backward fashion, it is clear that backward recursion may lead to computation of a large number of states that are not necessary for the computation of $f_1(s_1)$.
Example: Gambling game
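The following is a minimal Python sketch of backward recursion applied to this instance. It tabulates $f_t(s)$ for every wealth level from the last stage backwards, under the win/loss probabilities and the $6 target introduced above; the restriction of the table to wealth levels below the target (possible because $f_t(s) = 1$ whenever $s \geq 6$), as well as all function and variable names, are illustrative choices rather than part of the original formulation.

def backward_recursion(horizon: int = 4, target: int = 6, p_win: float = 0.4):
    """Tabulate f_t(s) for the gambling game via backward recursion."""
    # f_t(s) = 1 whenever s >= target (the gambler can simply stop betting),
    # so only wealth levels 0..target-1 need to be tabulated.
    value = {s: 0.0 for s in range(target)}   # terminal values beyond the last game, for s < target
    policy = {}                               # an optimal bet for each (stage, wealth) pair
    lookup = lambda f, s: 1.0 if s >= target else f[s]
    for t in range(horizon, 0, -1):           # stages n, n-1, ..., 1
        new_value = {}
        for s in range(target):
            # functional equation: f_t(s) = max_b { 0.4 f_{t+1}(s+b) + 0.6 f_{t+1}(s-b) }
            candidates = {b: p_win * lookup(value, s + b) + (1 - p_win) * lookup(value, s - b)
                          for b in range(0, s + 1)}
            best_bet = max(candidates, key=candidates.get)
            new_value[s], policy[(t, s)] = candidates[best_bet], best_bet
        value = new_value
    return value, policy

f1, policy = backward_recursion()
print(f1[2])           # probability of reaching $6 starting game 1 with $2, approximately 0.1984
print(policy[(1, 2)])  # an optimal first bet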
Forward recursion
Given the initial state $s_1$ of the system at the beginning of period 1, forward recursion (Bertsekas 2000) computes $f_1(s_1)$ by progressively expanding the functional equation (forward pass). This involves recursive calls for all $f_{t+1}(s_{t+1})$ that are necessary for computing a given $f_t(s_t)$. The value of an optimal policy and its structure are then retrieved via a backward pass in which these suspended recursive calls are resolved. A key difference from backward recursion is the fact that $f_t$ is computed only for states $s_t$ that are relevant for the computation of $f_1(s_1)$. Memoization is employed to avoid recomputation of states that have been already considered.
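As a compact illustration of this idea (separate from the complete implementation given further below), the forward recursion for the gambling game can be written with memoization provided by functools.lru_cache; the horizon, target and probabilities are those of the instance above, while the function and constant names are illustrative.

from functools import lru_cache

HORIZON, TARGET, P_WIN = 4, 6, 0.4

@lru_cache(maxsize=None)            # memoization: each state (t, wealth) is evaluated once
def f(t: int, wealth: int) -> float:
    """Probability of reaching TARGET by the end of game HORIZON, starting game t with `wealth`."""
    if t > HORIZON:                 # end of the betting horizon: boundary condition
        return 1.0 if wealth >= TARGET else 0.0
    # functional equation: expand f_t in terms of f_{t+1}; suspended calls resolve on return
    return max(P_WIN * f(t + 1, wealth + b) + (1 - P_WIN) * f(t + 1, wealth - b)
               for b in range(0, wealth + 1))

print(f(1, 2))   # approximately 0.1984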
Example: Gambling game
We shall illustrate forward recursion in the context of the Gambling game instance previously discussed. We begin the forward pass by considering
$f_1(2) = \max_{b_1 \in \{0, 1, 2\}} \{\, 0.4\, f_2(2 + b_1) + 0.6\, f_2(2 - b_1) \,\}.$
At this point we have not yet computed $f_2(4), f_2(3), f_2(2), f_2(1), f_2(0)$, which are needed to compute $f_1(2)$; we proceed and compute these items. Note that $f_2(2 + 0) = f_2(2 - 0) = f_2(2)$, therefore one can leverage memoization and perform the necessary computations only once.
Computation of $f_2(s_2)$ for $s_2 \in \{0, 1, 2, 3, 4\}$, each expressed in terms of values of $f_3$ that are not yet available:
$f_2(s_2) = \max_{b_2 \in \{0, \ldots, s_2\}} \{\, 0.4\, f_3(s_2 + b_2) + 0.6\, f_3(s_2 - b_2) \,\}.$
We have now computed $f_2(s_2)$ for all $s_2$ that are needed to compute $f_1(2)$. However, this has led to additional suspended recursions involving values of $f_3$. We proceed and compute these values.
Computation of $f_3(s_3)$ for the wealth levels $s_3$ appearing in the expansions above:
$f_3(s_3) = \max_{b_3 \in \{0, \ldots, s_3\}} \{\, 0.4\, f_4(s_3 + b_3) + 0.6\, f_4(s_3 - b_3) \,\}.$
Since stage 4 is the last stage in our system, the values $f_4(\cdot)$ represent boundary conditions that are easily computed as follows.
Boundary conditions
$f_4(s_4) = 0$ for $s_4 < 3$, $\quad f_4(s_4) = 0.4$ for $3 \leq s_4 < 6$, $\quad f_4(s_4) = 1$ for $s_4 \geq 6$.
At this point it is possible to proceed and recover the optimal policy and its value via a backward pass involving, at first, stage 3
Backward pass involving $f_3(\cdot)$:
$f_3(0) = 0, \quad f_3(1) = 0, \quad f_3(2) = 0.16, \quad f_3(3) = 0.4, \quad f_3(4) = 0.4, \quad f_3(5) = 0.64, \quad f_3(s_3) = 1 \text{ for } s_3 \geq 6,$
and, then, stage 2.
Backward pass involving $f_2(\cdot)$:
$f_2(0) = 0, \quad f_2(1) = 0.064, \quad f_2(2) = 0.16, \quad f_2(3) = 0.4, \quad f_2(4) = 0.496.$
We finally recover the value $f_1(2)$ of an optimal policy:
$f_1(2) = \max_{b_1 \in \{0, 1, 2\}} \{\, 0.4\, f_2(2 + b_1) + 0.6\, f_2(2 - b_1) \,\} = \max\{0.16,\; 0.1984,\; 0.1984\} = 0.1984.$
This is the optimal policy that has been previously illustrated. Note that there are multiple optimal policies leading to the same optimal value $f_1(2) = 0.1984$; for instance, in the first game one may either bet $1 or $2.
Python implementation. What follows is a complete Python implementation of this example.
from typing import List, Tuple
import functools


class memoize:
    def __init__(self, func):
        self.func = func
        self.memoized = {}
        self.method_cache = {}

    def __call__(self, *args):
        return self.cache_get(self.memoized, args, lambda: self.func(*args))

    def __get__(self, obj, objtype):
        return self.cache_get(self.method_cache, obj,
                              lambda: self.__class__(functools.partial(self.func, obj)))

    def cache_get(self, cache, key, func):
        try:
            return cache[key]
        except KeyError:
            cache[key] = func()
            return cache[key]

    def reset(self):
        self.memoized = {}
        self.method_cache = {}


class State:
    '''the state of the gambler's ruin problem'''

    def __init__(self, t: int, wealth: float):
        '''state constructor

        Arguments:
            t {int} -- time period
            wealth {float} -- initial wealth
        '''
        self.t, self.wealth = t, wealth

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

    def __str__(self):
        return str(self.t) + " " + str(self.wealth)

    def __hash__(self):
        return hash(str(self))


class GamblersRuin:
    def __init__(self, bettingHorizon: int, targetWealth: float, pmf: List[List[Tuple[int, float]]]):
        '''the gambler's ruin problem

        Arguments:
            bettingHorizon {int} -- betting horizon
            targetWealth {float} -- target wealth
            pmf {List[List[Tuple[int, float]]]} -- probability mass function
        '''
        # initialize instance variables
        self.bettingHorizon, self.targetWealth, self.pmf = bettingHorizon, targetWealth, pmf

        # lambdas
        self.ag = lambda s: [i for i in range(0, min(self.targetWealth // 2, s.wealth) + 1)]  # action generator
        self.st = lambda s, a, r: State(s.t + 1, s.wealth - a + a * r)                        # state transition
        self.iv = lambda s, a, r: 1 if s.wealth - a + a * r >= self.targetWealth else 0       # immediate value function

        self.cache_actions = {}  # cache with optimal state/action pairs

    def f(self, wealth: float) -> float:
        s = State(0, wealth)
        return self._f(s)

    def q(self, t: int, wealth: float) -> float:
        s = State(t, wealth)
        return self.cache_actions[str(s)]

    @memoize
    def _f(self, s: State) -> float:
        # Forward recursion
        v = max([sum([p[1] * (self._f(self.st(s, a, p[0]))
                              if s.t < self.bettingHorizon - 1
                              else self.iv(s, a, p[0]))      # future value
                      for p in self.pmf[s.t]])               # random variable realisations
                 for a in self.ag(s)])                       # actions

        opt_a = lambda a: sum([p[1] * (self._f(self.st(s, a, p[0]))
                                       if s.t < self.bettingHorizon - 1
                                       else self.iv(s, a, p[0]))
                               for p in self.pmf[s.t]]) == v
        q = [k for k in filter(opt_a, self.ag(s))]              # retrieve best action list
        self.cache_actions[str(s)] = q[0] if bool(q) else None  # store an action in dictionary
        return v                                                # return value


instance = {"bettingHorizon": 4, "targetWealth": 6, "pmf": [[(0, 0.6), (2, 0.4)] for i in range(0, 4)]}
gr, initial_wealth = GamblersRuin(**instance), 2

# f_1(x) is gambler's probability of attaining $targetWealth at the end of bettingHorizon
print("f_1(" + str(initial_wealth) + "): " + str(gr.f(initial_wealth)))

# Recover optimal action for period 2 when initial wealth at the beginning of period 2 is $1.
t, initial_wealth = 1, 1
print("b_" + str(t + 1) + "(" + str(initial_wealth) + "): " + str(gr.q(t, initial_wealth)))
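For the instance defined above, the script should print approximately 0.1984 for $f_1(2)$ – matching the optimal value recovered analytically in the forward recursion example – and an optimal bet of 1 for $b_2(1)$, i.e. bet $1 in the second game when the wealth at the beginning of that game is $1.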
Java implementation. GamblersRuin.java is a standalone Java 8 implementation of the above example.
Approximate dynamic programming