Decentralised system


A decentralized system in systems theory is a system in which lower level components operate on local information to accomplish global goals. The global pattern of behavior is an emergent property of dynamical mechanisms that act upon local components, such as indirect communication, rather than the result of a central ordering influence (see centralized system).

Centralized versus Decentralized Systems

A centralized system is one in which a central controller exercises control over the lower-level components of the system directly or through the use of a power hierarchy (such as instructing a middle level component to instruct a lower level component).[1] The complex behavior exhibited by this system is thus the result of the central controller's "control" over lower level components in the system, including the active supervision of the lower level components.

A decentralized system, on the other hand, is one in which complex behavior emerges through the work of lower level components operating on local information, not the instructions of any commanding influence. This form of control is known as distributed control, or control in which each component of the system is equally responsible for contributing to the global, complex behavior by acting on local information in the appropriate manner. The lower level components are implicitly aware of these appropriate responses through mechanisms that are based on the component's interaction with the environment, including other components in that environment.

Self-Organization

Decentralized systems are intricately linked to the idea of self-organization, a phenomenon in which local interactions between components of a system establish order and coordination to achieve global goals without a central commanding influence. The rules specifying these interactions emerge from local information and, in the case of biological (or biologically inspired) agents, from the closely linked perception and action systems of the agents.[2] These interactions continually form and depend on spatio-temporal patterns, which are created through the positive and negative feedback that the interactions provide. For example, recruitment in the foraging behavior of ants relies on the positive feedback of ants finding food at the end of a pheromone trail, while ants' task-switching behavior relies on the negative feedback of antennal contact with a certain number of ants (for example, a sufficiently low encounter rate with successful foragers can cause a midden worker to switch to foraging, although other factors such as food availability can affect the threshold for switching).
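
The threshold-style task switching described above can be sketched as a few lines of code. This is a minimal illustrative model, not a published algorithm; the function name, window length, and threshold value are all hypothetical.

```python
# Minimal sketch (hypothetical parameters) of threshold-based task switching:
# a midden worker switches to foraging when its recent encounter rate with
# successful foragers drops below some threshold -- a form of feedback based
# purely on local antennal contacts, with no central controller involved.

def should_switch_to_foraging(encounters_with_foragers, window_seconds,
                              rate_threshold=0.2):
    """Switch tasks when encounters per second fall below the threshold."""
    rate = encounters_with_foragers / window_seconds
    return rate < rate_threshold

# Example: 3 encounters in a 30-second window gives a rate of 0.1,
# below the 0.2 threshold, so this worker would switch to foraging.
print(should_switch_to_foraging(3, 30))  # True
```

In a real colony the threshold itself shifts with conditions such as food availability; a fuller model would make `rate_threshold` a function of those factors rather than a constant.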

Naturally Occurring Decentralized Systems

While decentralized systems can easily be found in nature, they are also evident in aspects of human society such as governmental and economic systems.

Biological: Insect Colonies

One of the most well known examples of a "natural" decentralized system is one used by certain insect colonies. In these insect colonies, control is distributed among the homogeneous biological agents who act upon local information and local interactions to collectively create complex, global behavior. While individually exhibiting simple behaviors, these agents achieve global goals such as feeding the colony or raising the brood by using dynamical mechanisms like non-explicit communication and exploiting their closely coupled action and perception systems. Without any form of central control, these insect colonies achieve global goals by performing required tasks, responding to changing conditions in the colony environment in terms of task-activity, and subsequently adjusting the number of workers performing each task to ensure that all tasks are completed.[3] For example, ant colonies guide their global behavior (in terms of foraging, patrolling, brood care, and nest maintenance) using a pulsing, shifting web of spatio-temporal patterned interactions that rely on antennal contact rate and olfactory sensing. While these interactions consist of both interactions with the environment and each other, ants do not direct the behavior of other ants and thus never have a "central controller" dictating what is to be done to achieve global goals.

Instead, ants use a flexible task-allocation system that allows the colony to respond rapidly to changing needs in achieving these goals. This task-allocation system, similar to a division of labor, is flexible in that all tasks rely on either the rate of ant encounters (which take the form of antennal contact) or the sensing of chemical gradients (using olfactory sensing for pheromone trails), and can thus be applied across the entire ant population. While recent research has shown that certain tasks may have physiological and age-based response thresholds,[4] any ant in the colony can in principle complete any task.

For example, in foraging behavior, red harvester ants (Pogonomyrmex barbatus) communicate to other ants where food is, how much food there is, and whether they should switch tasks to forage, using cuticular hydrocarbon scents and the rate of ant interaction. By combining the odors of forager cuticular hydrocarbons and of seeds[5] with the interaction rate sensed through brief antennal contact, the colony captures precise information about the current availability of food and thus whether workers should switch to foraging, all without direction from a central controller or even another ant. The rate at which foragers return with seeds sets the rate at which outgoing foragers leave the nest on foraging trips: faster rates of return indicate greater food availability, and fewer interactions indicate a greater need for foragers. A combination of these two factors, based solely on local information from the environment, leads to decisions about switching to the foraging task and, ultimately, to achieving the global goal of feeding the colony.

In short, the use of a combination of simple cues makes it possible for red harvester ant colonies to make an accurate and rapid adjustment of foraging activity that corresponds to the current availability of food,[6] while using positive feedback to regulate the process: the faster outgoing foragers meet ants returning with seeds, the more ants go out to forage.[7] Ants then continue to use these local cues in finding food, as they use their olfactory senses to pick up pheromone trails laid by other ants and follow a trail to its food source. Instead of being directed by other ants or told where the food is, ants rely on their closely coupled action and perception systems to collectively complete the global task.[3]
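
The positive-feedback loop above, in which the return rate of successful foragers drives the departure rate of new ones, can be sketched numerically. This is an illustrative toy model under assumed constants (`gain`, `baseline`, `cap` are all hypothetical), not a fitted model of real colonies.

```python
# Hypothetical sketch of the positive-feedback loop: the rate of outgoing
# foragers rises with recent encounters with returning, seed-carrying
# foragers, from a small baseline up to a physical cap on nest traffic.

def outgoing_rate(returns_per_minute, gain=0.8, baseline=0.5, cap=20.0):
    """More returning foragers per minute -> more outgoing foragers per minute."""
    return min(cap, baseline + gain * returns_per_minute)

# With abundant food, returns rise and departures follow; when returns
# fall, departures decay back toward the baseline with no central signal.
for r in (0, 5, 15):
    print(r, outgoing_rate(r))
```

The key property is that each ant's decision depends only on locally observed encounters, yet the colony-level foraging effort tracks food availability.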

While red harvester ant colonies achieve their global goals using a decentralized system, not all insect colonies function this way. For example, the foraging behavior of wasps is under the constant regulation and control of the queen.[8]

Human Society: Market Economy

A market economy is an economy in which decisions on investment and the allocation of producer goods are mainly made through markets and not by a plan of production (see planned economy). A market economy is a decentralized economic system because it does not function via a central, economic plan (which is usually headed by a governmental body) but instead, acts through the distributed, local interactions in the market (e.g. individual investments). While a "market economy" is a broad term and can differ greatly in terms of state or governmental control (and thus central control), the final "behavior" of any market economy emerges from these local interactions and is not directly the result of a central body's set of instructions or regulation.

Application

Artificial Intelligence (AI) and Robotics

While classic artificial intelligence research in the 1970s focused on knowledge-based systems and planning robots, the success of Rodney Brooks' behavior-based robots in acting in a real, unpredictably changing world has led many AI researchers to shift from a planned, centralized symbolic architecture to studying intelligence as an emergent product of simple interactions.[9] This reflects a general shift in robotics from centralized systems toward more decentralized systems based on local interactions at various levels of abstraction.

For example, largely influenced by Newell and Simon's physical symbol system hypothesis, researchers in the 1970s designed robots with a planned course of action that, when executed, would achieve some desired goal; a robot was thus seen as "intelligent" if it could follow the directions of its central controller (the program or the programmer) (for an example, see STRIPS). However, since Rodney Brooks' introduction of subsumption architecture, which enabled robots to exhibit "intelligent" behavior without using symbolic knowledge or explicit reasoning, more and more researchers have viewed intelligent behavior as an emergent property that arises from an agent's interaction with the environment, including other agents in that environment.
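
The core idea of subsumption architecture can be sketched as a priority-ordered stack of simple behaviors, where a higher layer suppresses the layers below it whenever it applies. This is a toy illustration of the layering principle, not Brooks' original implementation; the behaviors and sensor keys are hypothetical.

```python
# Toy sketch of Brooks-style subsumption control: layers are checked from
# highest priority down, and the first applicable layer's action suppresses
# all layers below it. No symbolic world model or planner is involved.

def avoid_obstacle(sensors):
    if sensors.get("obstacle"):
        return "turn-away"
    return None  # not applicable; defer to lower layers

def wander(sensors):
    return "move-forward"  # always-applicable fallback behavior

LAYERS = [avoid_obstacle, wander]  # highest priority first

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle": True}))   # turn-away
print(act({"obstacle": False}))  # move-forward
```

Each layer reads raw sensor data directly and couples perception to action; "intelligent" obstacle avoidance emerges from the layering rather than from any central plan.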

While certain researchers have begun to design their robots with closely coupled perception and action systems, embodying and situating their agents à la Brooks, other researchers have attempted to simulate multi-agent behavior and thus further dissect the phenomenon of decentralized systems achieving global goals. For example, in 1996, Minar, Burkhart, Langton and Askenazi created "Swarm", a multi-agent software platform for the simulation of interacting agents and their emergent collective behavior. While the basic unit in Swarm is the "swarm", a collection of agents executing a schedule of actions, agents can themselves be composed of swarms of other agents in nested structures. As the software also provides object-oriented libraries of reusable components for building models and for analyzing, displaying and controlling experiments on those models, it ultimately attempts not only to simulate multi-agent behavior but to serve as a basis for further exploration of how collective groups of agents can achieve global goals through careful, yet implicit, coordination.[10]
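
Swarm's nested structure, in which a swarm is itself an agent made of other agents executing a schedule, can be sketched in a few lines. The class names below are hypothetical illustrations of the concept, not the Swarm toolkit's actual API.

```python
# Illustrative sketch of Swarm's core idea: a "swarm" is a collection of
# agents plus a schedule of actions, and a swarm can itself act as a single
# agent inside a larger swarm, giving arbitrarily nested structures.

class Agent:
    def __init__(self, name):
        self.name = name

    def step(self, log):
        log.append(self.name)  # a trivial scheduled action

class SwarmGroup:
    """An agent that is itself a collection of agents with a schedule."""
    def __init__(self, name, members):
        self.name = name
        self.members = members

    def step(self, log):
        for member in self.members:  # the schedule: run members in order
            member.step(log)

# Two ants nested inside a colony, which is itself one agent in a world.
colony = SwarmGroup("colony", [Agent("ant-1"), Agent("ant-2")])
world = SwarmGroup("world", [colony, Agent("observer")])
log = []
world.step(log)
print(log)  # ['ant-1', 'ant-2', 'observer']
```

Stepping the outer swarm recursively steps every nested agent, so global behavior emerges from the composition of local schedules rather than from one flat controller.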

References

  1. ^ Bekey, G. A. (2005). Autonomous Robots: From Biological Inspiration to Implementation and Control. Cambridge, MA: MIT Press.[page needed]
  2. ^ Bonabeau, Eric; Theraulaz, Guy; Deneubourg, Jean-Louis; Aron, Serge; Camazine, Scott (1997). "Self-organization in social insects". Trends in Ecology & Evolution 12 (5): 188. doi:10.1016/S0169-5347(97)01048-3. 
  3. ^ a b Gordon, D. (2010). Ant Encounters: Interaction Networks and Colony Behavior. Princeton, NJ: Princeton U Press.[page needed]
  4. ^ Robinson, EJ; Feinerman, O; Franks, NR (2009). "Flexible task allocation and the organization of work in ants". Proceedings of the Royal Society B: Biological Sciences 276 (1677): 4373–80. doi:10.1098/rspb.2009.1244. PMC 2817103. PMID 19776072. 
  5. ^ Greene, Michael J.; Gordon, Deborah M. (2003). "Social insects: Cuticular hydrocarbons inform task decisions". Nature 423 (6935): 32. doi:10.1038/423032a. PMID 12721617. 
  6. ^ Greene, Michael J.; Pinter-Wollman, Noa; Gordon, Deborah M. (2013). "Interactions with Combined Chemical Cues Inform Harvester Ant Foragers' Decisions to Leave the Nest in Search of Food". In Fenton, Brock. PLoS ONE 8 (1): e52219. doi:10.1371/journal.pone.0052219. PMC 3540075. PMID 23308106. 
  7. ^ Carey, Bjorn (May 15, 2013). "Evolution shapes new rules for ant behavior, Stanford research finds". Stanford Report. Retrieved November 21, 2013. 
  8. ^ Reeve, Hudson K.; Gamboa, George J. (1987). "Queen Regulation of Worker Foraging in Paper Wasps: A Social Feedback Control System (Polistes Fuscatus, Hymenoptera: Vespidae)". Behaviour 102 (3): 147. doi:10.1163/156853986X00090. 
  9. ^ Brooks, R. (1986). "A robust layered control system for a mobile robot". IEEE Journal on Robotics and Automation 2: 14. doi:10.1109/JRA.1986.1087032. 
  10. ^ Minar, N.; Burkhart, R.; Langton, C.; Askenazi, M. (1996). "The Swarm Simulation System: A Toolkit for Building Multi-Agent Simulations". SFI Working Papers. Santa Fe Institute. 

Further reading

  • Camazine, Scott; Sneyd, James (1991). "A model of collective nectar source selection by honey bees: Self-organization through simple rules". Journal of Theoretical Biology 149 (4): 547. doi:10.1016/S0022-5193(05)80098-0. 
  • Miller, Peter (July 2007). "Swarm Theory". National Geographic. Retrieved November 21, 2013.