Promise theory
Promise Theory, in the context of information science, is a model of voluntary cooperation between individual, autonomous actors or agents who publish their intentions to one another in the form of promises. It is a form of labelled graph theory, describing discrete networks of agents joined by the unilateral promises they make.
A promise is a declaration of intent whose purpose is to increase the recipient's certainty about a claim of past, present or future behaviour.[1] For a promise to increase certainty, the recipient needs to trust the promiser, but trust can also be built on the verification (or assessment) that previous promises have been kept; trust thus has a symbiotic relationship with promises. Each agent assesses its belief in the promise's outcome or intent. Promise Theory is therefore about the relativity of autonomous agents.
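As a concrete illustration of the labelled-graph view, consider the following minimal Python sketch (not drawn from the Promise Theory literature; the class and field names are invented for this example). Agents are nodes, and each unilateral promise is a directed, labelled edge from promiser to promisee:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Promise:
    promiser: str   # the agent making the promise (only about itself)
    promisee: str   # the agent to whom the promise is directed
    body: str       # the promised behaviour, e.g. "+serve pages"

class PromiseGraph:
    """A labelled directed graph: nodes are agents, edges are promises."""
    def __init__(self):
        self.promises = set()

    def publish(self, promiser, promisee, body):
        # Each promise is unilateral: it constrains only the promiser.
        p = Promise(promiser, promisee, body)
        self.promises.add(p)
        return p

    def promises_from(self, agent):
        return {p for p in self.promises if p.promiser == agent}

g = PromiseGraph()
g.publish("server", "client", "+serve pages")    # an offer ('+') promise
g.publish("client", "server", "-accept pages")   # a use ('-') promise
print(g.promises_from("server"))
```

The '+'/'-' prefixes follow Promise Theory's convention of offer and use (acceptance) promises; the rest of the encoding is illustrative.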
One of the goals of Promise Theory is to offer a model that unifies the physical (or dynamical) description of an information system with its intended meaning, i.e. its semantics. This has been used to describe configuration management of resources in information systems, amongst other things.
History
Promise Theory was proposed by Mark Burgess in 2004, in the context of computer science, to address problems in obligation-based schemes for policy-based management of computer systems.[1] However, its usefulness was quickly seen to extend far beyond computing. The simple model of a promise used in Promise Theory (now called 'micro-promises') can readily address matters of economics and organization. Promise Theory has since been developed by Burgess in collaboration with Dutch computer scientist Jan Bergstra, resulting in the book Promise Theory: Principles and Applications, published in 2013.[2]
Interest in promise theory has grown in the IT industry, with several products citing it.[3][4][5][6][7][8]
Autonomy
Obligations, rather than promises, have been the traditional way of guiding behaviour.[9] Promise Theory's point of departure from obligation logics is the idea that all agents in a system should have autonomy of control, i.e. that they cannot be coerced or forced into a specific behaviour. Obligation theories in computer science often view an obligation as a deterministic command that causes its proposed outcome. In Promise Theory, an agent may only make promises about its own behaviour; for autonomous agents, it is meaningless to make promises about another's behaviour.
Although this assumption could be interpreted morally or ethically, in Promise Theory it is simply a pragmatic 'engineering' principle that leads to a more complete documentation of the intended roles of the actors or agents within the whole. When one is not allowed to make assumptions about others' behaviour, one is forced to document every promise more completely in order to make predictions; this fuller documentation in turn exposes the ways in which cooperative behaviour could fail.
Command and control systems like those that motivate obligation theories can easily be reproduced by having agents voluntarily promise to follow the instructions of another agent (this is also viewed as a more realistic model of behaviour). Since a promise can always be withdrawn, there is no contradiction between voluntary cooperation and command and control.
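The following hypothetical Python sketch (names and structure invented here for illustration) shows how a command hierarchy can be recovered from purely voluntary promises: an agent acts on instructions only while its own promise to follow them stands, and may withdraw that promise at any time:

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.follows = set()   # agents whose instructions this agent has promised to use

    def promise_to_follow(self, other):
        self.follows.add(other.name)      # a voluntary 'use' promise

    def withdraw(self, other):
        self.follows.discard(other.name)  # promises can always be withdrawn

    def receive(self, sender, instruction):
        # Cooperation is voluntary: an instruction is acted on only while
        # the receiving agent's own promise to follow the sender stands.
        if sender.name in self.follows:
            print(f"{self.name} carries out: {instruction}")
        else:
            print(f"{self.name} ignores instruction from {sender.name}")

boss, worker = Agent("boss"), Agent("worker")
worker.promise_to_follow(boss)
worker.receive(boss, "restart the service")   # acted on voluntarily
worker.withdraw(boss)
worker.receive(boss, "restart the service")   # now ignored
```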
In philosophy and law, a promise is often viewed as something that leads to an obligation. Promise Theory rejects that point of view; Bergstra and Burgess have shown that the concept of a promise is quite independent of that of obligation, and indeed simpler.[9]
The role of obligations in increasing certainty is unclear: obligations can come from anywhere, and an aggregation of non-local constraints cannot be resolved by a local agent, so obligations can actually increase uncertainty. In a world of promises, all constraints on an agent are self-imposed and local (even if they are suggested by outside agents), and thus all contradictions can be resolved locally.
Multi-agent systems and commitments
The theory of commitments in multi-agent systems has some similarities with aspects of promise theory, but there are key differences. In Promise Theory, a commitment is a subset of intentions; since a promise is a published intention, a commitment may or may not be a promise. A detailed comparison of promises and commitments, in the senses intended in their respective fields, is not a trivial matter.
Economics
Promises can be valuable to the promisee or even to the promiser, and they may also incur costs; there is thus an economic story to tell about promises. The economics of promises naturally motivate 'selfish agent' behaviour, and Promise Theory can be seen as a motivation for game-theoretical decision making, in which multiple promises play the role of strategies in a game.[10]
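As a toy illustration of promises as strategies (a hedged sketch, not a published model; the payoff numbers are invented), suppose each of two agents chooses whether to keep a promise, where keeping carries a cost to the promiser and a value to the promisee. Best responses can then be computed as in an ordinary matrix game:

```python
# Invented payoffs: keeping a promise costs the keeper but benefits the
# other party; both keeping is assumed mutually profitable.
payoff = {
    # (agent1 keeps?, agent2 keeps?) -> (payoff to agent1, payoff to agent2)
    (True,  True):  (3, 3),
    (True,  False): (-1, 4),
    (False, True):  (4, -1),
    (False, False): (0, 0),
}

def best_response(agent2_keeps):
    """Agent 1's best choice, given what agent 2 does."""
    keep = payoff[(True, agent2_keeps)][0]
    renege = payoff[(False, agent2_keeps)][0]
    return keep > renege

print(best_response(True))    # False: with these numbers, reneging pays more
print(best_response(False))   # False
```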
The theory of promises as applied to organization[11] bears some resemblance to the theory of institutional diversity by Elinor Ostrom.[12] Several of the same themes and considerations appear; the main difference is that Ostrom focuses, like many authors, on the role of external rules and obligations. Promise Theory takes the opposite viewpoint: obedience to rules is a voluntary act, and hence it makes sense to focus on those voluntary promises. An attempt to force obedience without a promise is considered to constitute an attack. One benefit of a Promise Theory approach is that it does not require special structural elements (e.g. Ostrom's institutional "Positions") to describe different roles in a collaborative network, since these may also be viewed as promises; this parsimony helps to avoid an explosion of concepts and, perhaps more importantly, admits mathematical formalization. The algebra and calculus of promises allows simple reasoning in a mathematical framework.
CFEngine
In spite of the generality of Promise Theory, it was originally proposed by Burgess as a way of modelling the computer management software CFEngine and its autonomous behaviour, for which existing theories based on obligation were unsuitable. CFEngine uses a model of autonomy both as a way of avoiding distributed inconsistency in policy and as a security principle against external attack: no agent can be forced to receive information or instructions from another agent, so all cooperation is voluntary. For many users of the software, this property has been instrumental both in keeping their systems safe and in adapting them to local requirements.
Emergent behaviour
In computer science, promise theory describes policy-governed services in a framework of completely autonomous agents, which assist one another by voluntary cooperation alone. It is a framework for analysing realistic models of modern networking, and a formal model for swarm intelligence.[13]
Promise theory may be viewed as a logical and graph-theoretical framework for understanding complex relationships in networks where many constraints have to be met. It was developed at Oslo University College, drawing on several lines of research conducted there, including policy-based management, graph theory, logic and configuration management. It uses a constructivist approach that builds conventional management structures from graphs of interacting, autonomous agents. Promises can be asserted either from an agent to itself or from one agent to another, and each promise implies a constraint on the behaviour of the promising agent. The atomicity of the promises makes them a tool for finding contradictions and inconsistencies.
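As an illustration of how atomic promises expose inconsistencies (a minimal sketch; the tuple encoding and the 'not' convention for contradictory bodies are invented here, not part of the formal theory), one can scan a set of promises for pairs that bind the same promiser to incompatible behaviours toward the same promisee:

```python
from itertools import combinations

def conflicts(promises):
    """Yield pairs of atomic promises whose bodies directly contradict:
    same promiser, same promisee, and one body is the negation of the
    other (the 'not ' prefix is an ad hoc convention for this example)."""
    for p, q in combinations(promises, 2):
        same_edge = (p[0], p[1]) == (q[0], q[1])
        negated = p[2] == "not " + q[2] or q[2] == "not " + p[2]
        if same_edge and negated:
            yield p, q

promises = [
    ("web", "lb",  "listen on port 80"),
    ("web", "lb",  "not listen on port 80"),   # contradicts the first
    ("db",  "web", "accept queries"),          # consistent with both
]
print(list(conflicts(promises)))
```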
Agency as a model of systems in space and time
The promises made by autonomous agents lead to a mutually approved graph structure, which in turn leads to spatial structures in which the agents represent point-like locations. This allows models of smart spaces, i.e. semantically labelled or even functional spaces, such as databases, knowledge maps, warehouses, hotels, etc., to be unified with other more conventional descriptions of space and time. The model of semantic spacetime uses promise theory to discuss these spacetime concepts.
Promises are more mathematically primitive than graph adjacencies: since a link requires the mutual consent of two autonomous agents, the concept of a connected space requires more work to build. This makes promises mathematically interesting as a notion of space, and offers a useful way of modelling physical and virtual information systems.[14]
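A hedged sketch of this mutual-consent idea follows (the '+'/'-' body notation echoes Promise Theory's offer/use convention, but the encoding is invented for illustration): an undirected link exists only when an offer promise from one agent is matched by an acceptance promise from the other:

```python
def adjacencies(promises):
    """Return undirected links: an offer '+b' from a to b matched by an
    acceptance '-b' from b back to a."""
    links = set()
    for (a, b, body) in promises:
        if body.startswith("+") and (b, a, "-" + body[1:]) in promises:
            links.add(frozenset((a, b)))
    return links

promises = {
    ("a", "b", "+data"),   # a offers data to b
    ("b", "a", "-data"),   # b promises to accept it: a link can form
    ("a", "c", "+data"),   # unreciprocated offer: no link to c
}
print(adjacencies(promises))   # {frozenset({'a', 'b'})}
```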
References
- ^ a b M. Burgess, An Approach to Understanding Policy Based on Autonomy and Voluntary Cooperation
- ^ J. Bergstra and M. Burgess, Promise Theory: Principles and Applications (2013)
- ^ M. Burgess, Thinking in Promises, O'Reilly, 2015
- ^ Promise Theory: Can you really trust the network to keep promises?
- ^ ACI Policy Model: Introduction to some of the fundamentals of an ACI Policy and how it’s enforced
- ^ Why you need to know about promise theory
- ^ OpFlex-ing Your Cisco Application Centric Infrastructure
- ^ The Quest to Make Code Work Like Biology Just Took A Big Step (Wired 2016)
- ^ a b J. Bergstra and M. Burgess, A Static Theory of Promises, arXiv:0810.3294
- ^ http://project.iu.hio.no/papers/pcm.2.pdf
- ^ "Laws of Human-Computer Behaviour and Collective Organization".
{{cite journal}}
: Cite journal requires|journal=
(help) http://research.iu.hio.no/papers/organization.pdf - ^ Ostrom, Elinor (2005). Understanding Institutional Diversity. Princeton University Press. ISBN 0-691-12238-5.
- ^ M. Burgess, S. Fagernes (2006), Promise theory - a model of autonomous objects for pervasive computing and swarms, Oslo University College, ISBN 0-7695-2622-5
- ^ M. Burgess, Spacetimes with Semantics