Decentralized partially observable Markov decision process
The decentralized partially observable Markov decision process (Dec-POMDP)[1][2] is a model for coordination and decision-making among multiple agents. It is a probabilistic model that can account for uncertainty in outcomes, sensors, and communication (i.e., costly, delayed, noisy or nonexistent communication). It is a generalization of the Markov decision process (MDP) and the partially observable Markov decision process (POMDP) to settings with multiple decentralized agents.
Definition
Formal definition
A Dec-POMDP is a 7-tuple $(S, \{A_i\}, T, R, \{\Omega_i\}, O, \gamma)$ (a minimal encoding of this tuple is sketched after the list), where
- $S$ is a set of states,
- $A_i$ is a set of actions for agent $i$, with $A = \times_i A_i$ the set of joint actions,
- $T$ is a set of conditional transition probabilities between states, $T(s' \mid s, a)$,
- $R: S \times A \to \mathbb{R}$ is the reward function,
- $\Omega_i$ is a set of observations for agent $i$, with $\Omega = \times_i \Omega_i$ the set of joint observations,
- $O$ is a set of conditional observation probabilities $O(o \mid s', a)$, and
- $\gamma \in [0, 1]$ is the discount factor.
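The following is a minimal sketch of the 7-tuple as a data structure, assuming states, actions, and observations are enumerated by integer indices; the class name `DecPOMDP` and its field names are hypothetical, not part of any standard library.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = int                    # index into S
JointAction = Tuple[int, ...]  # one action index per agent
JointObs = Tuple[int, ...]     # one observation index per agent

@dataclass
class DecPOMDP:
    """Hypothetical container for the 7-tuple (S, {A_i}, T, R, {Omega_i}, O, gamma)."""
    n_states: int                                               # |S|
    n_actions: List[int]                                        # |A_i| for each agent i
    n_observations: List[int]                                   # |Omega_i| for each agent i
    T: Dict[Tuple[State, JointAction], Dict[State, float]]      # T(s' | s, a)
    R: Callable[[State, JointAction], float]                    # team reward R(s, a)
    O: Dict[Tuple[State, JointAction], Dict[JointObs, float]]   # O(o | s', a), keyed by (s', a)
    gamma: float                                                # discount factor in [0, 1]
```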
At each time step, each agent takes an action $a_i$, the state updates based on the transition function $T(s' \mid s, a)$ (using the current state and the joint action), each agent receives an observation $o_i$ based on the observation function $O(o \mid s', a)$ (using the next state and the joint action), and a reward $R(s, a)$ is generated for the whole team. These steps repeat either up to some given horizon (the finite-horizon case) or forever (the infinite-horizon case). The goal is to maximize the expected cumulative reward over this horizon; in the infinite-horizon case, the discount factor $\gamma < 1$ keeps the sum finite.
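As a concrete illustration of these dynamics, the sketch below simulates one finite-horizon episode by sampling successor states from $T$ and joint observations from $O$ while accumulating discounted team reward. It assumes the hypothetical `DecPOMDP` container above; `policies[i]` stands for any mapping from agent $i$'s private observation history to its next action.

```python
import random

def sample(dist):
    """Sample a key from a {outcome: probability} dictionary."""
    r, cum = random.random(), 0.0
    for outcome, p in dist.items():
        cum += p
        if r <= cum:
            return outcome
    return outcome  # guard against floating-point rounding

def simulate(model, policies, s0, horizon):
    """Roll out one episode and return the discounted cumulative team reward."""
    s = s0
    histories = [[] for _ in model.n_actions]  # one observation history per agent
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        # Each agent chooses an action from its own observation history only.
        a = tuple(policies[i](tuple(h)) for i, h in enumerate(histories))
        ret += discount * model.R(s, a)       # team reward R(s, a)
        s_next = sample(model.T[(s, a)])      # next state ~ T(. | s, a)
        o = sample(model.O[(s_next, a)])      # joint observation ~ O(. | s', a)
        for i, h in enumerate(histories):
            h.append(o[i])                    # agent i receives only its component o_i
        s = s_next
        discount *= model.gamma
    return ret
```

Because each agent conditions only on its own observation history, no agent sees the full state or the other agents' observations; this decentralization of information is what separates the model from a single-agent POMDP over the joint system.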
References
- ^ Bernstein, Daniel S.; Givan, Robert; Immerman, Neil; Zilberstein, Shlomo (November 2002). "The Complexity of Decentralized Control of Markov Decision Processes". Math. Oper. Res. 27 (4): 819–840. doi:10.1287/moor.27.4.819.297. ISSN 0364-765X.
- ^ Oliehoek, Frans A.; Amato, Christopher (2016). A Concise Introduction to Decentralized POMDPs. Springer. doi:10.1007/978-3-319-28929-8.