Promise theory


Promise theory is a model of voluntary cooperation between individual, autonomous actors or agents who publish their intentions to one another in the form of promises.

A promise is a declaration of intent whose purpose is to increase the recipient's certainty about a claim of past, present or future behaviour.[1] For a promise to increase certainty, the recipient needs to trust the promiser, but trust can also be built on the verification that previous promises have been kept, thus trust plays a symbiotic relationship with promises.

History

Promise Theory was proposed by Mark Burgess in 2004, in the context of computer science, to solve problems present in obligation-based schemes for policy-based computer management.[2] Its usefulness was quickly seen to extend far beyond computing: the simple model of a promise used in Promise Theory (now called 'micropromises') can readily address matters of economics and organization. Promise Theory has since been developed by Burgess in collaboration with Jan Bergstra, resulting in a book, Promise Theory: Principles and Applications.[3]

Autonomy

Obligations, rather than promises, have been the traditional way of guiding behaviour.[4] Promise Theory's point of departure from obligation logics is the idea that all agents in a system have autonomy of control: they cannot be coerced or forced into a specific behaviour. Obligation theories in computer science often view an obligation as a deterministic command that causes its proposed outcome. In Promise Theory, by contrast, an agent may only make promises about its own behaviour; for autonomous agents, it is meaningless to make promises about another's behaviour.
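The autonomy principle can be sketched in a few lines of code. This is an illustrative model only, not the formal calculus; all names (Promise, is_valid, the agent labels) are hypothetical.

```python
# A minimal sketch of the autonomy principle: a promise is only
# valid when the promiser constrains its OWN behaviour.

from dataclasses import dataclass

@dataclass(frozen=True)
class Promise:
    promiser: str   # agent making the promise
    promisee: str   # agent receiving it
    subject: str    # whose behaviour the promise constrains
    body: str       # the promised behaviour

def is_valid(p: Promise) -> bool:
    """An autonomous agent may only promise its own behaviour."""
    return p.subject == p.promiser

# Valid: the server promises something about its own behaviour.
ok = Promise("server", "client", "server", "respond within 100 ms")
# Invalid: the server cannot promise what the client will do.
bad = Promise("server", "client", "client", "send well-formed requests")

assert is_valid(ok)
assert not is_valid(bad)
```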

Although this assumption could be interpreted morally or ethically, in Promise Theory it is simply a pragmatic 'engineering' principle that leads to a more complete documentation of the intended roles of the actors or agents within the whole. When one is not allowed to make assumptions about others' behaviour, one is forced to document every promise more completely in order to make predictions; this fuller documentation, in turn, exposes the possible modes by which cooperative behaviour could fail.

Command and control systems like those that motivate obligation theories can easily be reproduced by having agents voluntarily promise to follow the instructions of another agent (this is also viewed as a more realistic model of behaviour). Since a promise can always be withdrawn, there is no contradiction between voluntary cooperation and command and control.
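This reconstruction of command and control can be sketched as follows. The sketch is illustrative and all names are hypothetical: the subordinate agent voluntarily promises to follow instructions, and the promise remains withdrawable at any time.

```python
# Sketch: command-and-control recovered from voluntary cooperation.
# Instructions only take effect while a self-imposed promise to
# follow the commander is in force.

class Subordinate:
    def __init__(self):
        self.follows = set()          # commanders this agent has promised to obey

    def promise_to_follow(self, commander: str):
        self.follows.add(commander)   # voluntary: only ever self-imposed

    def withdraw(self, commander: str):
        self.follows.discard(commander)

    def receive(self, commander: str, instruction: str) -> bool:
        """An instruction is obeyed only if a promise to follow exists."""
        return commander in self.follows

agent = Subordinate()
assert not agent.receive("hq", "restart")   # no promise yet: ignored
agent.promise_to_follow("hq")
assert agent.receive("hq", "restart")       # obeyed, voluntarily
agent.withdraw("hq")
assert not agent.receive("hq", "restart")   # promise withdrawn
```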

In philosophy and law, a promise is often viewed as something that leads to an obligation. Promise Theory rejects that view: Bergstra and Burgess have shown that the concept of a promise is independent of that of obligation, and indeed simpler.[5]

The role of obligations in increasing certainty is unclear: obligations can come from anywhere, and an aggregation of non-local constraints cannot be resolved by a local agent, so obligations can actually increase uncertainty. In a world of promises, all constraints on an agent are self-imposed and local (even if suggested by outside agents), so all contradictions can be resolved locally.

Multi-agent systems and commitments

The theory of commitments in multi-agent systems has some similarities with Promise Theory, but there are key differences. In Promise Theory, commitments form a subset of intentions; since a promise is a published intention, a commitment may or may not be a promise. A detailed comparison of promises and commitments, in the senses intended in their respective fields, is forthcoming and is not a trivial matter.

Economics

Promises can be valuable to the promisee or even to the promiser, but they may also incur costs; there is thus an economic story to tell about promises. The economics of promises naturally motivates 'selfish agent' behaviour, and Promise Theory can be seen as a motivation for game-theoretic decision making, in which multiple promises play the role of strategies in a game.[6]

The theory of promises as applied to organization[7] bears some resemblance to the theory of institutional diversity by Elinor Ostrom.[8] Several of the same themes and considerations appear; the main difference is that Ostrom, like many authors, focuses on the role of external rules and obligations. Promise Theory takes the opposite viewpoint: obedience to rules is a voluntary act, so it makes sense to focus on the voluntary promises themselves, and an attempt to force obedience without a promise is considered an attack. One benefit of the Promise Theory approach is that it does not require special structural elements (e.g. Ostrom's institutional "Positions") to describe different roles in a collaborative network; these too may be viewed as promises. This parsimony helps to avoid an explosion of concepts and, perhaps more importantly, admits mathematical formalization: the algebra and calculus of promises allows simple reasoning in a mathematical framework.

CFEngine

Despite its generality, Promise Theory was originally proposed by Burgess as a way of modelling the computer-management software CFEngine and its autonomous behaviour, for which existing theories based on obligation were unsuitable. CFEngine uses a model of autonomy both as a way of avoiding distributed inconsistency in policy and as a security principle against external attack: no agent can be forced to receive information or instructions from another agent, so all cooperation is voluntary. For many users of the software, this property has been instrumental both in keeping their systems safe and in adapting to local requirements.

Emergent behaviour

In computer science, Promise Theory describes policy-governed services in a framework of completely autonomous agents that assist one another by voluntary cooperation alone. It provides a framework for analyzing realistic models of modern networking and a formal model of swarm intelligence.[9]

Promise Theory may be viewed as a logical and graph-theoretical framework for understanding complex relationships in networks where many constraints have to be met. It was developed at Oslo University College, drawing on several lines of research conducted there, including policy-based management, graph theory, logic and configuration management. It uses a constructivist approach that builds conventional management structures from graphs of interacting, autonomous agents. A promise can be asserted either from an agent to itself or from one agent to another, and each promise implies a constraint on the behaviour of the promising agent. The atomicity of promises makes them a tool for finding contradictions and inconsistencies.[10]
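The graph view and the use of atomic promises to find inconsistencies can be sketched as follows. This is an illustrative simplification, not the formal calculus; the agent names, properties and the tuple encoding are all hypothetical.

```python
# Sketch: promises as labelled edges of a directed graph, with a
# simple check for contradictions, i.e. one agent promising
# incompatible values for the same property.

from collections import defaultdict

promises = [
    # (promiser, promisee, property, value)
    ("web1", "lb",      "port", 80),
    ("web1", "monitor", "port", 80),
    ("web2", "lb",      "port", 80),
    ("web2", "monitor", "port", 8080),   # inconsistent with the edge above
]

def contradictions(promises):
    seen = defaultdict(set)
    for promiser, _, prop, value in promises:
        seen[(promiser, prop)].add(value)
    # Each contradiction is local to one agent, so it can be
    # resolved locally, as the autonomy principle requires.
    return [key for key, values in seen.items() if len(values) > 1]

assert contradictions(promises) == [("web2", "port")]
```

Because every promise constrains only its promiser, the check never needs global information: scanning one agent's outgoing edges suffices.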

References

  1. ^ http://project.iu.hio.no/papers/origin2.pdf
  2. ^ http://project.iu.hio.no/papers/dsom2005.pdf
  3. ^ http://www.amazon.com/Promise-Theory-Principles-Applications-Volume/dp/1495437779
  4. ^ http://arxiv.org/abs/0810.3294
  5. ^ http://arxiv.org/abs/0810.3294
  6. ^ http://project.iu.hio.no/papers/pcm.2.pdf
  7. ^ Laws of Human-Computer Behaviour and Collective Organization.  http://research.iu.hio.no/papers/organization.pdf
  8. ^ Ostrom, Elinor (2005). Understanding Institutional Diversity. Princeton University Press. ISBN 0-691-12238-5. 
  9. ^ M. Burgess, S. Fagernes (2006), Promise theory - a model of autonomous objects for pervasive computing and swarms, Oslo University College, ISBN 0-7695-2622-5
  10. ^ Promise Theory Website, Oslo University College Computing Repository

External links

Promise Theory website