{{distinguish|Futures and promises}}

[[File:PromiseTheoryPartialOrder.png|alt=Promise graph example|thumb|230x230px|An example Promise Theory diagram illustrating partial ordering of agents by promise.]]

'''Promise Theory''', in the context of [[information science]], is a model of voluntary cooperation between individual [[autonomous agent|actors or agents]], which make public their 'intentions' to one another in the form of promises. Promise Theory has a clear philosophical grounding<ref name="book">{{cite web | url=https://www.amazon.com/dp/1696578558?ref_=pe_3052080_397514860 | title=Promise Theory: Principles and Applications}}</ref> and a mathematical formulation rooted in [[graph theory]] and [[set theory]]. It may be expressed in the form of a labelled [[graph theory|graph]], with [[set theory|set]]-valued edges in order to describe discrete networks of agents joined by the unilateral promises they make.

== Summary of key ideas ==


Promise Theory is a <em>method of analysis</em>, suitable for studying any system of interacting components. It is not a technology or design methodology, and it advocates no position or design principle of its own.

Agents in Promise Theory are said to be <em>autonomous</em>, meaning that they are [[causality|causally]] independent of one another. Agents cannot be controlled from without; they originate their own behaviours entirely from within, yet they can rely on one another's services through the making of promises to signal cooperation. Agents are thus self-determined until such time as they partially or completely give up their independence by promising to accept guidance from other agents.

Agents may embody mechanisms for keeping promises, ranging from simple processes, such as a spinning loop, to far more sophisticated systems, such as an organism with advanced cognitive or reasoning abilities. These differences in internal <em>process complexity</em> lead to a definition of so-called '[[Semantic spacetime|semantic scaling]]' of agent complexity.

=== Intentions and outcomes ===

Promise Theory describes agents and their 'intentions'. An intention may be realized by a behaviour or a target outcome. Intentions are thus made concrete by defining a set of <em>acceptable outcomes</em> associated with each intention. An outcome is most useful when it describes an [[invariant]] or [[Fixed point (mathematics)|mathematical fixed point]] in some description of states, because this can be both dynamically and semantically [[stability|stable]].
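As a schematic illustration (the operator notation here is illustrative, not drawn from the cited sources), suppose an agent promises to maintain a configuration by repeatedly applying a repair operation <math>O</math> to its state <math>s</math>. A desired outcome <math>s^*</math> that is a fixed point satisfies
:<math>
O(s^*) = s^*,
</math>
so once the outcome has been reached, further applications of <math>O</math> change nothing: the promised outcome can be verified repeatedly and is stable under repetition.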

Each intention expresses a quantifiable outcome, which may be described as a [[State (computer science)|state]] of an agent. Intentions are sometimes described as targets, goals, or desired states.<ref name="book"/> The selection of intentions by an agent is left unexplained so as to avoid questions about free will.<ref name="book"/>

Agents express their intentions to one another by 'promise' or by 'imposition'. This provides a measure by which they can 'assess' whether intentions are fulfilled or not (i.e. whether promises are kept).

=== Promises ===

Promises arise when an agent shares one of its intentions with another agent voluntarily, e.g. by publishing its intent. The method of sharing is left to the modeller to explain.

For example, an object, such as a door handle, is an agent that promises to be suitable for opening a door, although it could be used for something else, e.g. for digging a hole in the ground. We cannot assume that agents will accept promises in the spirit in which they were intended, because every agent has its own context and capabilities. The promise of 'door handleness' could be expressed by virtue of the object's physical form or by a written label attached in some language. An agent that uses this promise can <em>assess</em> whether the promiser keeps its promise, i.e. whether it is 'fit for purpose'. Any agent can decide this for itself.

A promise may be used voluntarily by another agent to guide its use of the promising agent. Promises facilitate interaction and cooperation, and tend to maximize an intended outcome. Promises are not commands or deterministic controls.

=== Impositions ===

Impositions, by contrast with voluntary promises, are attempts to induce cooperation in a recipient <em>involuntarily</em>. An imposition is usually unexpected, and Promise Theory considers impositions to be generally ineffective as communications of intent. The axiom of autonomy implies that no imposition can automatically affect another agent, since an agent's intentions can only affect its own behaviour. Impositions may nonetheless succeed if the recipient (imposee) already independently intends or promises to accept such impositions, implicitly or explicitly. Promise Theory is thus a [[determinism|non-deterministic]] theory and is capable of dealing with incomplete information.<ref name="book"/>

For example, someone unexpectedly throwing a ball at another person is an imposition, which will typically fail to lead to a catch. However, if someone has promised to catch balls in advance, then they are more likely to accept the imposition of a random throw. If they have planned the throw in advance, then both sides have made a promise. The final case is most likely to lead to a successful outcome.

=== Assessments ===

Agents act as independent observers. Each agent 'assesses' the promises of other agents according to its own internal capabilities. More sophisticated agents can perform more sophisticated assessments.

In physics, measurements are assumed to be repeatable and reliable assessments of the state of a process. In a sociological setting, agents typically have very different, non-repeatable characteristics and methods: a person may judge the keeping of a promise in one way, a company in another, and a nation state through its own process of deliberation. Assessment is intrinsically subjective: to establish impersonal judgements, agents need to cooperate by promising to coordinate and calibrate their assessments.

The formalization of subjectivity in Promise Theory makes it unlike most classical approaches to analysis. It embodies the notions of [[Theory of relativity|relativity]].

=== Processes ===

Ultimately, Promise Theory has been described as a theory of generalized [[process theory|processes]], arising from the <em>intentional interactions</em> of <em>independent</em> agents.<ref name="treatise2">{{cite web |url=https://www.amazon.com/dp/B084QKMXCK?ref_=pe_3052080_397514860 | title=A Treatise on Systems (volume 2): The scaling of intentional systems with faults, errors, and flaws}}</ref> Most languages for describing processes describe [[Determinism|deterministic]] [[flow diagram|flow diagrams]] or [[process calculus|algebraic languages]] that are designed to be semi-deterministic or [[control theory|control oriented]]. Promise Theory distinguishes itself from other process theories and algebras by focusing on the autonomy of agents. It is therefore sometimes referred to as a [[Top-down and bottom-up design|bottom-up]] theory.

=== Notation and representation ===

Promises and impositions amount to [[tuple|tuples]] of information, without a fixed notation. Each promise has a promiser, one or more promisee agents, and a promise body, which describes the intention of the promise. Note that the spelling 'promiser' (rather than the more classical 'promisor') is used consistently in Promise Theory. Some applications express promises in natural language for convenience, especially when relating to human intent.<ref name="brexit">{{cite web|url=https://www.amazon.co.uk/Promise-Theory-Case-Study-Brexit/dp/1974545334/ref=sr_1_6?keywords=promise+theory+brexit&link_code=qs&qid=1572689629&sourceid=Mozilla-search&sr=8-6|title=Promise Theory: Case Study on the 2016 Brexit Vote}}</ref><ref name="p737">{{cite web|url=https://arxiv.org/abs/2001.01543|title=A Promise Theoretic Account of the Boeing 737 Max MCAS Algorithm Affair}}</ref> Most scientific and technological works use a set-theoretic and graphical representation to express networks of cooperation.<ref name="book"/>

Each 'promise' is a directed edge in a 'promise graph', representing a declaration of intent by one agent to others. The purpose of a promise is to increase the recipient agents' information about a claim of past, present or future behaviour,<ref name="book"/> perhaps to prepare them for the outcome. Each agent assesses its belief in the promise's outcome or intention. Thus Promise Theory describes the [[wikt:relativity|relativity]] of autonomous agents' beliefs and intentions.

For a promise to increase certainty, the recipient needs to [[Trust (social science)|trust]] the promiser, so trust becomes a central ingredient in the offering and accepting of promises.<ref name="trust">{{cite web| url=https://arxiv.org/abs/0912.4637 | title=Local and Global Trust Based on the Concept of Promises}}</ref> Trust begins as an arbitrary assessment by agents, and is typically increased as a result of an agent keeping promises. Conversely, trust tends to be reduced by impositions and unkept promises. This leads to a mathematization of social questions, compatible with network theory, which can deal with questions of trust and authority.<ref name="social"/><ref name="authority">{{cite web| url=https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3855352 | title=Authority (I): A Promise Theoretic Formalization}}</ref> In this model, trust takes on the role of an accounting value,<ref name="social"/> analogous to energy, with both kinetic and potential components. Since assessments are completely local, each agent maintains its own account of trust for all other agents. There is therefore a [[memorylessness|memory]] aspect associated with autonomy and assessment.<ref name="social"/>
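A minimal bookkeeping sketch of this idea (illustrative only; the notation <math>T_A(B)</math> is not taken from the cited works) is a private ledger in which each agent <math>A</math> records a trust value <math>T_A(B)</math> for every other agent <math>B</math> and updates it after each assessment:
:<math>
T_A(B) \mapsto T_A(B) + \delta, \qquad \delta > 0 \text{ for a promise assessed as kept}, \quad \delta < 0 \text{ for an imposition or an unkept promise}.
</math>
Because the update depends only on <math>A</math>'s own assessments, two agents may legitimately hold different trust values for the same third party.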

== Discussion ==

Promise Theory may be described as an [[agent-based model]] of general cooperative processes. It is, however, not related to the theory of [[Multi-agent system|multi-agent systems]] in Computer Science or Artificial Intelligence, which focuses on the movements of remotely controlled robotic agents with top-down planning. Promise Theory takes the opposite view: agents are not externally controlled; rather, they have internal processes of self-regulation whose collective result leads to a [[Top-down and bottom-up design|bottom-up]] emergence of desired behaviours without external control.

Agents may be humans, companies, countries, machinery, computer programs, apps, etc., which allows Promise Theory to be applied with far greater range than other, more specific models.<ref name="social">{{cite web| url=https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4252501 | title=Notes on Trust as a Causal Basis for Social Science}}</ref><ref name="swarm">{{cite web| url=https://www.researchgate.net/publication/224646884_Promise_theory_-_A_model_of_autonomous_objects_for_pervasive_computing_and_swarms | title=Promise theory - A model of autonomous objects for pervasive computing and swarms}}</ref>

Promise theoretic agents are fundamentally autonomous, or causally independent, but they can choose to collaborate in order to achieve successful outcomes, only ceding part of their independence by choice when appropriate. Promise Theory is mainly about the cooperative aspects of the agents.

== Relationship to technology ==

Certain software technologies, such as [[Service-oriented architecture]] (SOA) and the [[Actor model]], implement software systems using ideas that resemble some aspects of promise-theoretic thinking. Promise Theory does not tie itself to such specific implementations; its general methods might be used to analyze and compare any scenario. One of the first uses of Promise Theory was to analyze the [[configuration management]] of resources in information systems, especially in the implementation of the software technology [[cfengine|CFEngine]].

One of the goals of Promise Theory is to offer a model with both [[dynamics]] and [[semantics]], i.e. one that unifies the description of a [[dynamical system]] with its [[intention]] or [[Function (computer programming)|functional purpose]],<ref name="book"/> something that is typically difficult to formalize in technological or engineering subjects.

== History, context, and reception ==

An early form of Promise Theory was proposed by physicist and computer scientist [[Mark Burgess (computer scientist)|Mark Burgess]] in 2004, initially in the context of information science, in order to solve perceived problems with the use of obligation-based logics in computer management schemes, in particular for [[policy-based management]]<ref name="dsom2005">{{cite web | url=https://link.springer.com/chapter/10.1007%2F11568285_9 | title=M. Burgess, An Approach to Understanding Policy Based on Autonomy and Voluntary Cooperation}}</ref>.

A collaboration between Burgess and computer scientist [[Jan Bergstra]] led to a deeper and more consistent model of a promise, which included the notion of impositions and the role of trust. The usefulness of the concept was quickly seen to go beyond computing.

Promise Theory has subsequently been developed in a variety of directions by Burgess and Bergstra and applied to many areas, from Information Technology to Economics and Organization,<ref name="podcast"/> resulting in several books and many scientific papers covering different applications.<ref name="book"/><ref name="money"/><ref name="treatise1">{{cite web| url=https://www.amazon.com/dp/B084T2KNM5?ref_=pe_3052080_397514860 | title=A Treatise on Systems (volume 1): Analytical Descriptions of Human-Information Networks}}</ref><ref name="treatise2"/><ref name="brexit"/><ref name="nuclear">{{cite web|url=https://www.amazon.co.uk/Promises-Threats-Asymmetric-Nuclear-Weapon-Promise/dp/1673128211/|title=Promises and Threats by Asymmetric Nuclear-Weapon States}}</ref><ref name="p737"/><ref name="jan2">{{cite web|url=https://transmathematica.org/index.php/journal/article/view/35|title=Promise Theory as a Tool for Informaticians, Transmathematica}}</ref>

In spite of the later generality of Promise Theory, it was originally proposed by Burgess as a way of modelling the computer management software [[CFEngine]] and its autonomous behaviour. Existing theories based on obligations were deemed unsuitable because, according to Burgess, they amount to wishful thinking.<ref name="rosegarden">{{cite web |url=http://markburgess.org/rosegarden.pdf|title=Promise You A Rose Garden (An Essay About System Management)}}</ref> CFEngine uses a model of autonomy both as a way of avoiding distributed inconsistency in policy and as a security principle against external attack: no agent can be forced to receive information or instructions from another agent, so all cooperation is voluntary. For many users of the software, this property has been instrumental both in keeping their systems safe and in adapting to local requirements.

The theory of [[commit|commitments]] in [[multi-agent system]]s has some superficial similarities with aspects of Promise Theory, but there are key differences. In Promise Theory a commitment is simply a promise to which an agent is committed (i.e. it has engaged in irreversible steps towards keeping the promise). A commitment is thus potentially stronger than a promise. In other areas of Computer Science, the term commitment is used as a form of [[Deontic logic|deontic obligation]]. An obligation is typically an imposition, which is the opposite of a promise. A detailed comparison of promises and commitments, in the senses intended in their respective fields, is not a trivial matter.<ref name="book"/>

Interest in Promise Theory grew in the IT industry following the publication of the essay <em>Promise You A Rose Garden</em> by Burgess in 2007<ref name="rosegarden"/><ref name="agile">{{cite web |url=https://agileuprising.libsyn.com/promise-theory-with-mark-burgess?tdest_id=478606|title=Promise Theory with Mark Burgess}}</ref>. Using a less academic, more popular style, the essay was cited by several software and networking publications and vendors<ref name="popbook">{{cite web | url=http://markburgess.org/TIpromises.html|title=Thinking in Promises, O'Reilly, 2015}}</ref><ref name="networkworld">{{cite web|url=http://www.networkworld.com/article/2449562/sdn/promise-theory-mark-burgess-cfengine-sdn-cisco-aci-apic-opflex.html|title=Promise Theory: Can you really trust the network to keep promises?}}</ref><ref name="ACI">{{cite web|url=https://jmplank.wordpress.com/tag/promise-theory/ |title=ACI Policy Model: Introduction to some of the fundamentals of an ACI Policy and how it's enforced}}</ref><ref name="opflex">{{cite web | url=http://www.nojitter.com/post/240169790/why-you-need-to-know-about-promise-theory |title=Why you need to know about promise theory}}</ref><ref name="cisco">{{cite web| url=http://vmiss.net/infrastructure/opflex-ing-your-cisco-application-centric-infrastructure/|title= OpFlex-ing Your Cisco Application Centric Infrastructure}}</ref><ref name="biology">{{cite web | url=https://www.wired.com/2016/06/chef-just-took-big-step-quest-make-code-work-like-biology/|title=The Quest to Make Code Work Like Biology Just Took A Big Step (Wired 2016)}}</ref><ref name="podcast">{{cite web |url=https://www.jimruttshow.com/mark-burgess/|title=Jim Rutt Show EP28 Mark Burgess on Promise Theory, AI & Spacetime}}</ref>. Visibility increased further after Burgess gave a talk on Promise Theory at Google, Santa Monica in 2008<ref name="googletalk">{{cite web |url=https://www.youtube.com/watch?v=4CCXs4Om5pY | title=The Promise of System Configuration (Google talk 2008)}}</ref>.

According to Burgess, some interpreters of Promise Theory have misunderstood its significance, viewing it as a political manifesto for greater social or technical decentralization or 'democracy', rather than as a robust scientific model for analysis.<ref name="agile"/> In particular, in the business world, the notion of autonomy is often sought on humanitarian or political grounds, with notions such as [[adhocracy]] and [[holacracy]]. However, Burgess maintains that the promise model makes no assumptions about the virtues of one configuration or another; rather, it takes autonomy as an axiom, akin to locality in physics, and seeks to work out the consequences of that assumption. Another misconception lies in associating Promise Theory exclusively with desired-end-state computing, as in [[cfengine|CFEngine]] or [[Effects-based operations]].

== Principles of Promise Theory ==

[[File:PromiseTheory.png|alt=Promise theoretic Notation|thumb|230x230px|Some simple examples of notation to represent promises between autonomous agents. The symbol <math>A</math> is normally used for agents, and arrows indicate promises.]]

=== Autonomy ===

In classical Computer Science and Philosophy, [[obligation|obligations]] and their associated formalization in [[Deontic logic]] are the traditional way of describing and attempting to guide behaviour.<ref name="ArXiv 3294">{{cite web|url=https://arxiv.org/abs/0810.3294 |title=A static theory of promises}}</ref> Such logics assume that all behaviour originates from outside agents, governed by [[Control theory|control loops]]. Promise Theory's point of departure from [[Deontic logic|obligation logics]] is the idea that all agents in a system ultimately have autonomy of control, i.e. that they cannot be coerced or forced into a specific behaviour. This is a restatement of the [[principle of locality]] as used in physics. Obligation theories in Computer Science often implicitly view each obligation as a deterministic command, indeed as something which <em>[[modal logic|must]]</em> cause its proposed outcome. This is an idealization which is empirically false in general. In Promise Theory, an agent can only make promises about its own behaviour.

=== Promises of the first kind ===

Promises can be defined in different [[Categorization|kinds]]. The fundamental type of promise, satisfying full autonomy, is called a '<em>promise of the first kind</em>'.<ref name="book"/> For promises of the first kind, the first law is that <em>no agent may make promises about another's behaviour</em>.

Although this assumption of independence could be interpreted morally or ethically, in Promise Theory it is simply a pragmatic engineering principle, which leads to a more complete documentation of the intended roles of the agents within the whole. When one is not allowed to make assumptions about other agents' behaviours, one is forced to document every promise more completely in order to make predictions; this documentation in turn points out the possible modes by which cooperative behaviour could fail.<ref name="book"/>

[[Command and control]] systems like those that motivate obligation theories can easily be reproduced in Promise Theory by having agents voluntarily promise to follow the instructions of another agent (this is also viewed as a more realistic model of behaviour). Since a promise can always be withdrawn, there is no contradiction between voluntary cooperation and command and control.


In Philosophy and Law a promise is often viewed as something that leads to an obligation. Promise Theory rejects that point of view. Bergstra and Burgess have shown that the concept of a promise is quite independent of that of obligation and indeed is simpler.<ref name="ArXiv 3294" />

The role of obligations in increasing certainty is unclear, since obligations can come from anywhere, without knowledge of an agent's capabilities. An aggregation of non-local constraints cannot be resolved by a local agent, which means that obligations can actually increase uncertainty.<ref name="googletalk"/> In a world of promises, all constraints on an agent are self-imposed and local (even if they are suggested by outside agents), thus all contradictions can be resolved locally.


=== Promise notation, polarity, and directedness of promises ===

All promises are directed from promiser (intender) to promisee (intendee).
The polarity of a promise (denoted either + or -) is the type of intention communicated by the promise. The rule that an agent can only promise its own behaviour means that links or bindings between agents must be mutually agreed. Promises thus come in two types. There is always a dual interpretation for a promise graph, in which offer (+) and acceptance (-) promises are interchanged.


An offer promise, denoted (+), from an agent <math>A</math> to an agent <math>A'</math> is written:
:<math>
A \stackrel{+b}{\longrightarrow} A'
</math>
where <math>b</math> is called the 'body' of the promise and has two components: a type or label (expressing the subject of the promise) and a constraint expressing the intended subset of possible outcomes. The whole expression is often denoted <math>\pi</math> for the complete promise. Agents can also make conditional promises:
:<math>
A \stackrel{+b|c}{\longrightarrow} A'
</math>
where this is to be read as a promise of <math>+b</math> if promise <math>c</math> is assessed to be kept.


A promise offered is not necessarily accepted, since it cannot be imposed for promises of the first kind. In order for a promise offer to be accepted, an acceptance promise must be given explicitly by the intended recipient. This is denoted by (-):
:<math>
A' \stackrel{-b'}{\longrightarrow} A
</math>
The resulting amount of voluntary interaction is the overlap <math>b \cap b'</math>.
In sketches or informal drawings a ([[Gender of connectors and fasteners|female]]) cup symbol is sometimes used for (-) promises instead of a ([[Gender of connectors and fasteners|male]]) arrowhead in order to emphasise the donor and receptor status of the polarities.
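As a worked illustration (the body values are hypothetical), suppose <math>A</math> offers a backup service with constraint set <math>b = \{\text{hourly}, \text{daily}\}</math>, while <math>A'</math> promises to accept only <math>b' = \{\text{daily}\}</math>. The effective binding is then
:<math>
b \cap b' = \{\text{daily}\},
</math>
i.e. the only cooperation that takes place is the part that both agents have promised.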


The combination of a (+) and a (-) promise leads to a binding, and these two parts correspond to a single directed edge in graph theory.
The promise binding matrix is the closest thing to an [[adjacency matrix]] in plain [[graph theory]].
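As a minimal two-agent illustration (the entries shown are hypothetical), the single binding above corresponds to a matrix indexed by ordered pairs of agents, whose set-valued entries hold the promise bodies given from the row agent to the column agent:
:<math>
\Pi = \begin{pmatrix} \emptyset & \{+b\} \\ \{-b'\} & \emptyset \end{pmatrix}.
</math>
Unlike a Boolean adjacency matrix, each entry is a set of labelled promises rather than a 0 or 1.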


[[File:PromiseMatrix.png|alt=The promise binding matrix|thumb|230x230px|The promise binding matrix is the closest thing to an adjacency matrix in plain graph theory.]]

To distinguish involuntary impositions from promises, a block arrow is used (intended to resemble a 'fist'<ref name="book"/>).
:<math>
A \stackrel{+b}{\;\;\mbox{---}\blacksquare \;\;} A'
</math>
Impositions are generally ineffective, since an autonomous agent cannot (formally) be forced by outside influence.


=== Assessments and valuations of promises ===

The principle of autonomy requires that each agent assesses the validity and outcome of each promise independently. Agents may disagree on whether they assess a promise to be kept or not kept at any moment.
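Schematically, using the notation <math>\alpha()</math> from the adjacent figure, each observing agent <math>A''</math> forms its own local assessment
:<math>
\alpha_{A''}\left( A \stackrel{+b}{\longrightarrow} A' \right)
</math>
of any promise it can observe, and nothing in the theory requires the assessments of two different observers to agree.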

[[File:AssessmentOfPromiseBinding.png|alt=The assessment of a promise binding|thumb|230x230px|A promise assessment is a local scalar function of the assessor's information about a promise binding, denoted by <math>\alpha()</math>.]]

The value of a promise is an assessment of private utility to the assessing agent. Promises might be valuable to the promisee, the promiser, or neither. They might also lead to costs, in the form of negative values. Promise Theory can be seen as part of the scaffolding for establishing game matrices in game-theoretical decision making, in which multiple promises play the role of strategies in a game.<ref name="book"/> An example of this was used in an analysis of [[Elinor Ostrom]]'s work on the theory of institutional diversity.<ref name="ostrom">{{cite book |title=Understanding Institutional Diversity |last=Ostrom |first=Elinor |year=2005 |publisher=[[Princeton University Press]] |isbn=978-0-691-12238-0}}</ref>

Several themes and considerations that Ostrom deals with appear to be predicted by promise-theoretic considerations; the main difference found was that Ostrom, like many authors, focuses on the role of external rules and obligations in a top-down way. Promise Theory takes the opposite (bottom-up) viewpoint, namely that obeying rules is a voluntary act.<ref name="siri">{{cite web|url=https://www.researchgate.net/publication/338749338_Autonomic_Pervasive_Computing_A_Smart_Mall_Scenario_Using_Promise_Theory|title=Autonomic Pervasive Computing: A Smart Mall Scenario Using Promise Theory (Fagernes 2006)}}</ref><ref name="org">{{cite web | url=https://www.researchgate.net/publication/251269852_Laws_of_Human-Computer_Behaviour_and_Collective_Organization |title=Laws of Human-Computer Behaviour and Collective Organization}}</ref>

== Emergent network and swarm behaviour ==

Promise Theory is a natural framework for analyzing realistic models of modern [[network science]], and serves as a formal model for [[swarm intelligence]].

A model for the latter was developed at [[Oslo University College]] by drawing on ideas from several different lines of research conducted there, including [[policy-based management]], [[graph theory]], logic and [[configuration management]].<ref name="siri2">{{cite web | url=https://www.researchgate.net/publication/224646884_Promise_theory_-_A_model_of_autonomous_objects_for_pervasive_computing_and_swarms | title=Promise theory - A model of autonomous objects for pervasive computing and swarms}}</ref>


== Agency as a model of systems in space and time ==


The promises made by autonomous agents lead to a mutually approved [[graph (data structure)|graph]] structure, which in turn leads to spatial structures in which the agents represent point-like locations. This allows models of '''smart spaces''', i.e. semantically labeled or even functional spaces, such as databases, knowledge maps, warehouses, hotels, etc., to be unified with other more conventional descriptions of space and time. The model of [[Semantic spacetime]] uses promise theory to discuss these spacetime concepts.

Promises are more elementary than graph adjacencies, since a link requires the mutual consent of two autonomous agents; thus the concept of a connected space requires more work to build structure. This makes promises mathematically interesting as a notion of space, and offers a useful way of modelling physical and virtual information systems.<ref name="ref1">{{cite web | url=http://arxiv.org/abs/1411.5563 | title=Spacetimes with Semantics I, Notes on Theory and Formalism (2014)}}</ref>
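Schematically, in the notation introduced above, two agents <math>A</math> and <math>A'</math> are adjacent in such a space only when a matching pair of promises exists:
:<math>
A \stackrel{+b}{\longrightarrow} A', \qquad A' \stackrel{-b'}{\longrightarrow} A, \qquad b \cap b' \neq \emptyset,
</math>
so a single unreciprocated offer does not, by itself, create a link.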

== Promise Theory of money and economics ==

According to interviews given in 2016, the application of Promise Theory to socio-technical systems began to be developed more formally following an encounter between Promise Theory originator [[Mark Burgess (computer scientist)|Mark Burgess]] and fellow physicist [[Geoffrey West]] at the Percolate conference in New York in 2015.<ref name="podcast"/> Co-author [[Jan Bergstra]] had been writing independently about the logic of financial mechanisms. The language of Promise Theory could model processes in cities and other socio-technical systems.<ref name="transient">{{cite web|url=https://www.youtube.com/watch?v=kjh2kcGCKas|title=Thinking In Promises For The Cyborg Age}}</ref><ref name="cities">{{cite web|url=https://arxiv.org/abs/1602.06091|title=On the scaling of functional spaces, from smart cities to cloud computing}}</ref> Further interest from [[Federal Reserve]] researchers in San Francisco in the topic of [[Semantic spacetime]] prompted Burgess and Bergstra to expand and publish a set of their pre-existing notes on the Promise Theory of money<ref name="money">{{cite web |url=https://www.amazon.com/dp/1696588375?ref_=pe_3052080_397514860 | title=Money, Ownership, and Agency: As an Application of Promise Theory}}</ref> in 2019.

Money takes the natural role of a communication network, according to the authors, and the principles of autonomy and independence of agents naturally lead to the identification of [[ownership]] and [[tenancy]] as inseparable and complementary elements, not widely considered in the literature of money or economics.<ref name="rutt2">{{cite web| url=https://jimruttshow.blubrry.net/mark-burgess-2/ | title=Jim Rutt Show EP47 Mark Burgess on the Physics of Money}}</ref> Ownership and tenancy recur in technological systems too, e.g. in cloud computing.

== Promise Theory, Agile Transformation and Social Science ==

The [https://openleadershipnetwork.com/ Open Leadership Network] and
[[Open Space Technology]] organizers Daniel Mezick and Mark Sheffield
invited Promise Theory originator [[Mark Burgess (computer scientist)|Mark Burgess]] to keynote at the Open Leadership Network's Boston conference in 2019. This led to the formal machinery of Promise Theory being applied to the teaching of agile concepts. Burgess later extended the lecture notes into an online study course,<ref name="oln">{{cite web|url=https://www.youtube.com/watch?v=VU23Z5nsr9A&list=PL6wAWDeKgxZcZlcsJ9XbI5wBRBJM0iUgR|title=Promise Theory And Applications}}</ref> which he claims prompted an even deeper study of the concepts of social systems, including trust and authority.<ref name="social"/><ref name="authority"/> Promise Theory thus offers an agent-based model of social phenomena in a way that differs from [[Social physics|socio-physics]], i.e. it begins from the principle of autonomy rather than by trying to map social systems onto already understood physical models.

== Promise Theory and Category Theory ==

Promise Theory is sometimes compared to [[Category Theory]], since both use relationships between real or virtual entities to describe systems formally.<ref name="category">{{cite web|url=https://transmathematica.org/index.php/journal/article/view/43|title=Promise Theory and the Alignment of Context, Processes, Types, and Transforms}}</ref> However, Promise Theory is a model of agents and intentions expressed as set-valued relations, with rigid principles and loose formal requirements, whereas Category Theory is a model of [[Category (mathematics)|categories]] joined by morphisms or mappings, with fewer defined principles but more rigid formal requirements.<ref name="category"/>

== Application to Knowledge Management ==


The application of Promise Theory as a representation for Knowledge Management has built on the notion of [[Semantic spacetime]] as an organizational principle for generalized semantics<ref name="sst">{{cite web | url=https://mark-burgess-oslo-mb.medium.com/list/semantic-spacetime-and-data-analytics-28e9649c0ade | title=Semantic Spacetime and Data Analytics}}</ref>.

== References ==
{{Reflist|30em}}


[[Category:Formal methods]]
[[Category:Theoretical computer science]]
[[Category:Economics]]
[[Category:Sociological theories]]

Revision as of 12:31, 12 January 2023

Promise graph example
An example Promise Theory diagram illustrating partial ordering of agents by promise.

Promise Theory, in the context of information science, is a model of voluntary cooperation between individual actors or agents, which make public their 'intentions' to one another in the form of promises. Promise Theory has a clear philosophical grounding[1] and a mathematical formulation rooted in graph theory and set theory. It may be expressed in the form of a labelled graph, with set-valued edges in order to describe discrete networks of agents joined by the unilateral promises they make.

Summary of key ideas

Promise Theory is a method of analysis, suitable for studying any system of interacting components. It is not a technology or design methodology. It doesn't advocate any position or design principle, except as a method of analysis.

Agents in Promise Theory are said to be autonomous, meaning that they are causally independent of one another. Agents cannot be controlled from without, they originate their own behaviours entirely from within, yet they can rely on one another's services through the making of promises to signal cooperation. Agents are thus self-determined until such a time as they partially or completely give up their independence by promising to accept guidance from other agents.

Agents may embody mechanisms in order to keep promises, from simple processes like a spinning loop, while others can be much more sophisticated such as an organism with advanced cognitive or reasoning abilities. These differences in internal process complexity lead to a definition of so-called ‘semantic scaling' of agent complexity.

Intentions and outcomes

Promise Theory describes agents and their 'intentions'. An intention may be realized by a behaviour or a target outcome. Intentions are thus made concrete by defining a set of acceptable outcomes associated with each intention. An outcome is most useful when it describes an invariant or mathematical fixed point in some description of states, because this can be both dynamically and semantically stable.

Each intention expresses a quantifiable outcome, which may be described as a state of an agent. Intentions are sometimes described as targets, goals, or desired states[1]. The selection of intentions by an agent is left unexplained so as to avoid questions about free will[1].

Agents express their intentions to one another by 'promise' or by 'imposition'. This provides a measure by which they can 'assess' whether intentions are fulfilled or not (i.e. whether promises are kept).

Promises

Promises arise when an agent shares one of its intentions with another agent voluntarily, e.g. by publishing its intent. The method of sharing is left to the modeller to explain.

For example, an object, such as a door handle, is an agent that promises to be suitable for opening a door, although it could be used for something else, e.g. for digging a hole in the ground. We cannot assume that agents will accept the promises given in the spirit in which they were intended, because every agent has its own context and capabilities. The promise of 'door handleness' could be expressed by virtue of its physical form or by having a written label attached in some language. An agent that uses this promise can assess whether the agent keeps its promise, or is 'fit for purpose'. Any agent can decide this for itself.

A promise may be used voluntarily by another agent in order to influence its usage of the other agent. Promises facilitate interaction, cooperation, and tend to maximize an intended outcome. Promises are not commands or deterministic controls.

Impositions

Impositions, by contrast with voluntary promises, are attempts to induce cooperation in a recipient involuntarily. An imposition is usually unexpected and Promise Theory considers impositions to be generally ineffective as communications of intent. The axiom and meaning of autonomy imply that no imposition can automatically affect another agent, since agents intentions can only affect their own behaviour. Impositions may nonetheless succeed if the recipient (imposee) already independently intends or promises to accept such impositions, implicitly or explicitly. Promise theory is thus a [[determinism|non-deterministic] theory and is capable of dealing with incomplete information[1].

For example, someone unexpectedly throwing a ball at another person is an imposition, which will typically fail to lead to a catch. However, if someone has promised to catch balls in advance, then they are more likely to accept the imposition of a random throw. If they have planned the throw in advance, then both sides have made a promise. The final case is most likely to lead to a successful outcome.

Assessments

Agents act as independent observers. Each agent ‘assesses' the promises of other agents according to its own internal capabilities. More sophisticated agents can perform more sophisticated assessments.

In physics, measurements are assumed to be repeatable and reliable assessments of the state of a process. In a sociological setting, agents typically have very different non-repeatable characteristics and methods. A person may judge the keeping of a promise in one way, a company in another, a nation state has its own process of deliberation. Assessment is intrinsically subjective: to establish impersonal judgements, agents need to cooperate by promising to coordinate and calibrate their assessments.

The formalization of subjectivity in Promise Theory makes it unlike most classical approaches to analysis. It embodies the notions of relativity.

Processes

Ultimately, Promise Theory has been described as a theory of generalized processes, arising from the intentional interactions of independent agents[2]. Most languages for describing processes describe deterministic flow diagrams or algebraic languages that are designed to be semi-deterministic or control oriented. Promise Theory distinguishes itself from other process theories and algebras by focusing on the autonomy of agents. This is sometimes referred to as a bottom up theory.

Notation and representation

Promises and impositions amount to tuples of information, without a defined notation. Each promise has a promiser, one or more promisee agents, and a promise body, which describes the intention of the promise. Note that the spelling 'promiser' (rather than more classical 'promisor') is used consistently in Promise Theory. Some uses express promises in natural language for convenience, especially when relating to human intent[3][4]. Most scientific and technological works use a set theoretic and graphical representation to express networks of cooperation[1].

Each 'promise' is a directed edge in a 'promise graph', whose status is a declaration of intent by one agent to others. The purpose of a promise is to increase the recipient agents' information about a claim of past, present or future behaviour[1], perhaps to prepare it for the outcome. Each agent assesses its belief in the promise's outcome or intention. Thus Promise Theory describes the relativity of autonomous agent's beliefs and intentions.

For a promise to increase certainty, the recipient needs to trust the promiser, so trust becomes a central ingredient in the offering and accepting of promises[5]. Trust begins as an arbitrary assessment by agents, and is typically increased as a result of an agent keeping promises. Conversely trust tends to be reduced by impositions and unkept promises. This leads to a unique mathematization of social science and questions, compatible with network theory, which can deal with questions of trust and authority[6][7]. In this model, trust takes on the role of an accounting value [6], analogous to energy, with both kinetic and potential components. Since assessments are completely local, each agent maintains its own account of trust for all other agents. There is therefore a memory aspect associated with autonomy and assessment[6].

Discussion

Promise Theory may be described as an agent-based model of general cooperative processes. It is, however, not related to the theory of [[Multi-agent system|multi-agent systems] in Computer Science or Artificial Intelligence, which focuses on movements of remotely controlled robotic agents, with top down planning. Promise Theory takes the opposite view that agents are not externally controlled. Rather, they have internal processes of self-regulation whose collective result leads to a bottom up emergence of desired behaviours without external control.

Agents may be humans, companies, countries, machinery, computer programs, apps, etc, which allows Promise Theory to be applied with far greater range than other more specific models[6][8].

Promise theoretic agents are fundamentally autonomous, or causally independent, but they can choose to collaborate in order to achieve successful outcomes, only ceding part of their independence by choice when appropriate. Promise Theory is mainly about the cooperative aspects of the agents.

Relationship to technology

Certain software technologies, such as Service-oriented architecture (SOA) and the Actor model implement software systems using ideas that resemble some aspects of promise theoretic thinking. Promise Theory does not confuse such specific implementations with its general methods, which might be used to analyze and compare any scenario. One of the first uses of Promise Theory was to analyze configuration management of resources in information systems, especially in the implementation of the software technology CFEngine.

One of the goals of Promise Theory is to offer a model with both dynamics and semantics, i.e. which unifies the description of the Dynamical system) with its intention or functional purpose[1], which is typically difficult to formalize in technological or engineering subjects.

History, context, and reception

An early form of Promise Theory was proposed by physicist and computer scientist Mark Burgess in 2004, initially in the context of information science, in order to solve perceived problems with the use of obligation-based logics in computer management schemes, in particular for policy-based management[9].

A collaboration between Burgess and computer scientist Jan Bergstra led to a much deeper and consistent model of a promise, which included the notion of impositions and the role of trust. The usefulness of the concept was quickly seen to go beyond computing.

Promise Theory has subsequently been developed and applied to many areas from Information Technology to Economics and Organization[10]. Promise Theory has since been developed in a variety of directions by Burgess and Bergstra, resulting in several books and many scientific papers covering different applications[1][11][12][2] [3][13][4][14].

In spite of the later generality of Promise Theory, it was originally proposed by Burgess as a way of modelling the computer management software CFEngine and its autonomous behaviour. Existing theories based on obligations were deemed unsuitable as, according to Burgess, they amount to wishful thinking[15]. CFEngine uses a model of autonomy both as a way of avoiding distributed inconsistency in policy and as a security principle against external attack: no agent can be forced to receive information or instructions from another agent, thus all cooperation is voluntary. For many users of the software, this property has been instrumental in both keeping their systems safe and adapting to the local requirements.

The theory of commitments in multi-agent systems has some superficial similarities with aspects of Promise Theory, but there are key differences. In Promise Theory a commitment is simply as a promise to which an agent is committed (i.e. it has engaged in irreversible steps towards keeping the promise). A commitment is thus potentially stronger than a promise. In other areas of Computer Science, the term commitment is used as a form of deontic obligation. An obligation is typically an imposition, which is the opposite of a promise. A detailed comparison of Promises and Commitments in the senses intended in their respective fields is not a trivial matter[1].

Interest in Promise Theory grew in the IT industry following the publication of Burgess's 2007 essay Promise You A Rose Garden[15][16]. Written in a less academic, more popular style, the essay was cited by several software and networking publications and vendors[17][18][19][20][21][22][10]. Visibility increased further after Burgess gave a talk on Promise Theory at Google, Santa Monica, in 2008[23].

According to Burgess, some interpreters of Promise Theory have misunderstood its significance, viewing it as a political manifesto for greater social or technical decentralization or 'democracy', rather than as a robust scientific model for analysis[16]. In the business world in particular, autonomy is often sought on humanitarian or political grounds, under notions such as adhocracy and holacracy. However, Burgess maintains that the promise model makes no assumptions about the virtues of one configuration or another; rather, it takes autonomy as an axiom, akin to locality in physics, and seeks to work out the consequences of that assumption. Another misconception lies in associating Promise Theory exclusively with desired end-state computing, as in CFEngine or effects-based operations.

Principles of Promise Theory

[Figure: Promise-theoretic notation. Simple examples of notation representing promises between autonomous agents; nodes denote agents and arrows indicate promises.]

Autonomy

In classical computer science and philosophy, obligations and their associated formalization in deontic logic are the traditional way of describing and attempting to guide behaviour[24]. Such logics assume that behaviour is imposed on agents from outside, governed by control loops. Promise Theory's point of departure from obligation logics is the idea that all agents in a system ultimately have autonomy of control, i.e. that they cannot be coerced or forced into a specific behaviour. This is a restatement of the principle of locality as used in physics. Obligation theories in computer science often implicitly treat each obligation as a deterministic command, i.e. as something which must cause its proposed outcome; this is an idealization which is empirically false in general. In Promise Theory, an agent can only make promises about its own behaviour.

Promises of the first kind

Promises can be defined in different kinds. The fundamental kind of promise, which satisfies full autonomy, is called a 'promise of the first kind'[1]. For promises of the first kind, the first law is that no agent may make promises about another agent's behaviour.

Although this assumption of independence could be interpreted morally or ethically, in Promise Theory it is simply a pragmatic engineering principle that leads to a more complete documentation of the intended roles of the agents within the whole. When one is not allowed to make assumptions about other agents' behaviours, one is forced to document every promise more completely in order to make predictions; this fuller documentation in turn exposes the possible modes by which cooperative behaviour could fail[1].

Command and control systems like those that motivate obligation theories can easily be reproduced in Promise Theory by having agents voluntarily promise to follow the instructions of another agent (this is also viewed as a more realistic model of behaviour). Since a promise can always be withdrawn, there is no contradiction between voluntary cooperation and command and control.
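As a hedged sketch (the agents and the promise body here are invented for illustration, not taken from the literature), a command-and-control relation between a controller c and a subordinate a can be written as a pair of complementary voluntary promises rather than as an obligation:

    % Hypothetical example: command and control as voluntary cooperation.
    % The controller offers instructions; the subordinate promises to use them.
    c \xrightarrow{\,+\,\text{instructions}\,} a   % c offers instructions to a
    a \xrightarrow{\,-\,\text{instructions}\,} c   % a accepts, i.e. promises to follow, them

Since either agent may withdraw its promise at any time, the hierarchy remains voluntary in the promise-theoretic sense.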

In Philosophy and Law a promise is often viewed as something that leads to an obligation. Promise Theory rejects that point of view. Bergstra and Burgess have shown that the concept of a promise is quite independent of that of obligation and indeed is simpler.[24]

The role of obligations in increasing certainty is unclear, since obligations can come from anywhere, without knowledge of an agent's capabilities. An aggregation of non-local constraints cannot be resolved by a local agent, which means that obligations can actually increase uncertainty[23]. In a world of promises, all constraints on an agent are self-imposed and local (even if they are suggested by outside agents), so all contradictions can be resolved locally.

Promise notation, polarity, and directedness of promises

All promises are directed from a promiser (intender) to a promisee (intendee). The polarity of a promise (denoted + or -) is the type of intention communicated by the promise: an offer (+) or an acceptance (-). The rule that an agent can only promise its own behaviour means that links or bindings between agents must be mutually agreed, so promises come in these two complementary types. There is always a dual interpretation of a promise graph, in which offer (+) and acceptance (-) promises are interchanged.

An offer promise from an agent $a$ to another agent $a'$ is denoted (+) and is written:

    a \xrightarrow{\,+b\,} a'

where $b$ is called the 'body' of the promise and has two components: a type or label (expressing the subject of the promise) and a constraint which expresses the intended subset of possible outcomes for the nature of the promise. The whole expression is often denoted $\pi$ for the complete promise. Agents can also make conditional promises

    a \xrightarrow{\,b_1 | b_2\,} a'

where this is to be read as a promise of $b_1$ if promise $b_2$ is assessed to be kept.

A promise offered is not necessarily accepted, since it cannot be imposed for promises of the first kind. In order for a promise offer to be accepted, an acceptance promise must be given explicitly by the intended recipient. This is denoted by (-):

    a' \xrightarrow{\,-b'\,} a

The resulting amount of voluntary interaction is the overlap $b \cap b'$. In sketches or informal drawings, a (female) cup symbol is sometimes used for (-) promises instead of a (male) arrowhead, in order to emphasise the donor and receptor status of the polarities.
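As a concrete, hypothetical illustration (the agents, bodies, and constraint values are invented for the example), a server s offering a service and a client c accepting only part of it form a binding whose effective scope is the overlap of the two bodies:

    % Hypothetical client-server binding; the effective interaction
    % is the intersection of what is offered and what is accepted.
    s \xrightarrow{\,+b\,} c,  \quad b  = \langle \text{http}, \{\text{ports } 80, 443\} \rangle
    c \xrightarrow{\,-b'\,} s, \quad b' = \langle \text{http}, \{\text{port } 443\} \rangle
    b \cap b' = \langle \text{http}, \{\text{port } 443\} \rangle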

The combination of a (+) and a (-) promise leads to a binding, and these two parts correspond to a single directed edge in graph theory. The promise binding matrix is the closest thing to an adjacency matrix in plain graph theory.
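As a minimal sketch (the two-agent layout is hypothetical), the promises of a system can be collected into such a matrix, indexed by promiser and promisee, with set-valued entries in place of the 0/1 entries of an ordinary adjacency matrix:

    % Hypothetical two-agent promise binding matrix Pi:
    % agent 1 offers body b to agent 2, which accepts with body b'.
    \Pi =
    \begin{pmatrix}
      \varnothing & \{+b\} \\
      \{-b'\}     & \varnothing
    \end{pmatrix}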


To distinguish involuntary impositions from promises, a block arrow is used in place of the plain arrow (chosen to resemble a 'fist'[1]).

Impositions are generally ineffective, since an autonomous agent cannot (formally) be forced by outside influence.

Assessments and valuations of promises

The principle of autonomy requires each agent to assess the validity and outcome of every promise independently. Agents may disagree, at any moment, about whether they assess a promise to be kept or not kept.

A promise assessment is a local scalar function of the assessor's information about a promise binding, denoted $\alpha(\pi)$.

The value of a promise is an assessment of private utility to the assessing agent. Promises might be valuable to the promisee, the promiser, or neither; they might also incur costs, in the form of negative values. Promise Theory can be seen as part of the scaffolding for establishing game matrices in game-theoretic decision making, in which multiple promises play the role of strategies in a game[1]. An example of this was used in an analysis of Elinor Ostrom's work on institutional diversity[25].
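As a hedged illustration of this game-theoretic role (the values v, g, and c are invented for the example), a mutual promise binding between two agents, each of which may keep or break its promise, induces an ordinary payoff matrix:

    % Hypothetical 2x2 game induced by a mutual promise binding.
    % v: value of mutual keeping; g: gain from defecting; c: cost of being let down.
    \begin{array}{c|cc}
                 & \text{keep} & \text{break} \\ \hline
    \text{keep}  & (v, v)      & (-c, g) \\
    \text{break} & (g, -c)     & (0, 0)
    \end{array}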

Several themes and considerations that Ostrom deals with appear to be predicted by promise-theoretic considerations; the main difference found was that Ostrom, like many authors, focuses on the role of external rules and obligations in a top-down way. Promise Theory takes the opposite, bottom-up viewpoint, namely that obeying rules is a voluntary act[26][27].

Emergent network and swarm behaviour

Promise Theory is a natural framework for analyzing realistic models in modern network science, and it serves as a formal model for swarm intelligence.

A model for the latter was developed at Oslo University College by drawing on several different lines of research conducted there, including policy-based management, graph theory, logic, and configuration management[28].

Agency as a model of systems in space and time

The promises made by autonomous agents lead to a mutually approved graph structure, which in turn leads to spatial structures in which the agents represent point-like locations. This allows models of smart spaces, i.e. semantically labeled or even functional spaces, such as databases, knowledge maps, warehouses, hotels, etc., to be unified with other more conventional descriptions of space and time. The model of Semantic spacetime uses promise theory to discuss these spacetime concepts.

Promises are more elementary than graph adjacencies, since a link requires the mutual consent of two autonomous agents; the concept of a connected space therefore requires more work to build. This makes promises mathematically interesting as a notion of space, and offers a useful way of modelling physical and virtual information systems.[29]
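A minimal Python sketch of this point (the tuple encoding of promises is an assumption of the example, not the formal notation of the theory): adjacency between two agents is derived, existing only where a (+) offer is matched by a (-) acceptance of the same body.

    # Minimal sketch: graph adjacency emerges only from matched (+)/(-) promises.
    # Promises are modelled as (promiser, promisee, polarity, body) tuples;
    # this encoding is illustrative, not the formal notation of Promise Theory.
    promises = [
        ("server", "client", "+", "http"),  # server offers http to client
        ("client", "server", "-", "http"),  # client accepts the offer
        ("server", "cache", "+", "http"),   # one-sided offer: no acceptance
    ]

    def adjacent(a, b, promises):
        """a and b are adjacent only if an offer from a to b is matched
        by an acceptance from b back to a, on the same promise body."""
        offers = {(p, q, body) for (p, q, pol, body) in promises if pol == "+"}
        accepts = {(p, q, body) for (p, q, pol, body) in promises if pol == "-"}
        return any((a, b, body) in offers and (b, a, body) in accepts
                   for (_, _, body) in offers)

    print(adjacent("server", "client", promises))  # True: mutual consent
    print(adjacent("server", "cache", promises))   # False: offer alone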

Promise Theory of money and economics

According to interviews given in 2016, the application of Promise Theory to socio-technical systems began to be developed more formally following an encounter between Promise Theory originator Mark Burgess and fellow physicist Geoffrey West at the Percolate conference in New York in 2015[10]. Co-author Jan Bergstra had been writing independently about the logic of financial mechanisms, and the language of Promise Theory proved able to model processes in cities and other socio-technical systems[30][31]. Further interest from Federal Reserve researchers in San Francisco in the topic of semantic spacetime prompted Burgess and Bergstra to expand and publish a set of their pre-existing notes on the Promise Theory of money[11] in 2019.

Money takes on the natural role of a communication network, according to the authors, and the principles of autonomy and independence of agents naturally lead to the identification of ownership and tenancy as inseparable and complementary elements, a pairing not widely considered in the literature of money or economics[32]. Ownership and tenancy recur in technological systems too, e.g. in cloud computing.

Promise Theory, Agile Transformation and Social Science

The Open Leadership Network and Open Space Technology organizers Daniel Mezick and Mark Sheffield invited Promise Theory originator Mark Burgess to give a keynote at the Open Leadership Network's Boston conference in 2019. This led to the formal apparatus of Promise Theory being applied to the teaching of agile concepts. Burgess later extended the lecture notes into an online study course[33], which he says prompted an even deeper study of the concepts of social systems, including trust and authority[6][7]. Promise Theory thus offers an agent-based model of social phenomena that differs from socio-physics: it begins from the principle of autonomy rather than from an attempt to map social systems onto already-understood physical models.

Promise Theory and Category Theory

Promise Theory is sometimes compared to Category Theory, since both attempt to use relationships between real or virtual concepts to describe formal and practical systems in a formal way[34]. However, Promise Theory is a model of agents and intentions expressed as set-valued relations with rigid principles and loose formal requirements, whereas Category Theory is a model of categories joined by morphisms or mappings, with fewer defined principles but more rigid formal requirements[34].

Application to Knowledge Management

The application of Promise Theory as a representation for Knowledge Management has built on the notion of Semantic spacetime as an organizational principle for generalized semantics[35].

References

  1. "Promise Theory: Principles and Applications".
  2. "A Treatise on Systems (volume 2): The scaling of intentional systems with faults, errors, and flaws".
  3. "Promise Theory: Case Study on the 2016 Brexit Vote".
  4. "A Promise Theoretic Account of the Boeing 737 Max MCAS Algorithm Affair".
  5. "Local and Global Trust Based on the Concept of Promises".
  6. "Notes on Trust as a Causal Basis for Social Science".
  7. "Authority (I): A Promise Theoretic Formalization".
  8. "Promise theory - A model of autonomous objects for pervasive computing and swarms".
  9. "M. Burgess, An Approach to Understanding Policy Based on Autonomy and Voluntary Cooperation".
  10. "Jim Rutt Show EP28 Mark Burgess on Promise Theory, AI & Spacetime".
  11. "Money, Ownership, and Agency: As an Application of Promise Theory".
  12. "A Treatise on Systems (volume 1): Analytical Descriptions of Human-Information Networks".
  13. "Promises and Threats by Asymmetric Nuclear-Weapon States".
  14. "Promise Theory as a Tool for Informaticians, Transmathematica".
  15. "Promise You A Rose Garden (An Essay About System Management)" (PDF).
  16. "Promise Theory with Mark Burgess".
  17. "Thinking in Promises, O'Reilly, 2015".
  18. "Promise Theory: Can you really trust the network to keep promises?".
  19. "ACI Policy Model: Introduction to some of the fundamentals of an ACI Policy and how it's enforced".
  20. "Why you need to know about promise theory".
  21. "OpFlex-ing Your Cisco Application Centric Infrastructure".
  22. "The Quest to Make Code Work Like Biology Just Took A Big Step (Wired 2016)".
  23. "The Promise of System Configuration (Google talk 2008)".
  24. "A static theory of promises".
  25. Ostrom, Elinor (2005). Understanding Institutional Diversity. Princeton University Press. ISBN 978-0-691-12238-0.
  26. "Autonomic Pervasive Computing: A Smart Mall Scenario Using Promise Theory (Fagernes 2006)".
  27. "Laws of Human-Computer Behaviour and Collective Organization".
  28. "Promise theory - A model of autonomous objects for pervasive computing and swarms".
  29. "Spacetimes with Semantics I, Notes on Theory and Formalism (2014)".
  30. "Thinking In Promises For The Cyborg Age".
  31. "On the scaling of functional spaces, from smart cities to cloud computing".
  32. "Jim Rutt Show EP47 Mark Burgess on the Physics of Money".
  33. "Promise Theory And Applications".
  34. "Promise Theory and the Alignment of Context, Processes, Types, and Transforms".
  35. "Semantic Spacetime and Data Analytics".