
Human-agent team

From Wikipedia, the free encyclopedia

A human-agent team is a system composed of multiple interacting humans and artificial intelligence systems. The artificial intelligence system may be a robotic system, a decision support system, or a virtual agent. Human-agent teaming provides an interaction paradigm that differs from traditional approaches such as supervisory control or user interface design by granting the computer a degree of autonomy. The paradigm draws on various scientific research fields: it is strongly inspired by the way humans work together in teams, and it constitutes a special type of multi-agent system.

Concept


Software agents that behave as artificial team players satisfy the following general requirements:[1]

  • Observability: agents must make their status, intentions, and knowledge observable to others.
  • Predictability: agents must behave predictably enough that others can rely on them when considering their own actions.
  • Directability: agents must be capable of directing the behavior of others, as well as being directed by others.

To satisfy these OPD requirements, agents exhibit various behaviors such as:

  • Proactively communicating information to other agents to establish shared situation awareness within the team.
  • Explaining their decisions and recommendations to other teammates to establish appropriate levels of trust (also known as Explainable artificial intelligence).
  • Receiving instructions at a high level of abstraction.
  • Choosing the right moment of interaction to prevent inconvenient interruptions of other team members.
  • Notifying others when they believe they can no longer contribute their part of the work required to fulfill the team goal.

The engineering efforts to develop artificial team members include user interface design, but also the design of specialized social artificial intelligence that enables agents to reason about, for example, whether a piece of information is worth sharing.
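The OPD requirements and behaviors above can be illustrated with a minimal sketch. All names here (TeamAgent, announce, direct, report_failure) are hypothetical and do not come from any cited framework; the sketch only shows how an agent might expose its status and plan (observability and predictability), accept high-level instructions (directability), and notify teammates when it can no longer contribute:

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the OPD requirements; class and method
# names are invented for this sketch, not taken from KAoS or SAIL.

@dataclass
class TeamAgent:
    name: str
    status: str = "idle"                       # Observability: state exposed to teammates
    plan: list = field(default_factory=list)   # Predictability: declared future actions
    inbox: list = field(default_factory=list)  # Messages received from teammates

    def announce(self, teammates):
        """Proactively share status and plan to build shared situation awareness."""
        for mate in teammates:
            mate.inbox.append((self.name, self.status, tuple(self.plan)))

    def direct(self, instruction):
        """Directability: accept a high-level instruction and expand it into steps."""
        # A real agent would plan autonomously; here we just decompose the goal.
        self.plan = [f"{instruction}: step {i}" for i in (1, 2)]
        self.status = "working"

    def report_failure(self, teammates, reason):
        """Notify teammates when the agent can no longer contribute its part."""
        self.status = f"blocked ({reason})"
        self.plan = []
        self.announce(teammates)

# Usage: one agent is directed at a high level, announces its plan,
# then later reports that it is blocked.
a, b = TeamAgent("scout"), TeamAgent("base")
a.direct("survey area")
a.announce([b])
a.report_failure([b], "low battery")
```

In this sketch the "social" reasoning is reduced to unconditional announcements; a fuller implementation would also decide when and what to share to avoid inconvenient interruptions.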

Frameworks


Various frameworks have been developed that support the software engineering effort of building human-agent teams, such as KAoS[2] and SAIL.[3] Engineering methodologies for human-agent teaming include Coactive Design.[4]

Applications


Human-agent teaming is a popular paradigm for approaching the interaction between humans and AI technologies in domains such as defense, healthcare, space, and disaster response.

References

  1. ^ Johnson, Matthew; Bradshaw, Jeffrey M.; Feltovich, Paul J.; Jonker, Catholijn M.; Van Riemsdijk, M. Birna; Sierhuis, Maarten (2014-03-01). "Coactive Design: Designing Support for Interdependence in Joint Activity". Journal of Human-Robot Interaction. 3 (1): 43. doi:10.5898/jhri.3.1.johnson. ISSN 2163-0364.
  2. ^ Bradshaw, Jeffrey M.; Sierhuis, Maarten; Acquisti, Alessandro; Feltovich, Paul; Hoffman, Robert; Jeffers, Renia; Prescott, Debbie; Suri, Niranjan; Uszok, Andrzej (2003), "Adjustable Autonomy and Human-Agent Teamwork in Practice: An Interim Report on Space Applications", Multiagent Systems, Artificial Societies, and Simulated Organizations, Springer US, pp. 243–280, doi:10.1007/978-1-4419-9198-0_11, ISBN 9781461348337
  3. ^ van der Vecht, Bob; van Diggelen, Jurriaan; Peeters, Marieke; Barnhoorn, Jonathan; van der Waa, Jasper (2018), "SAIL: A Social Artificial Intelligence Layer for Human-Machine Teaming", Advances in Practical Applications of Agents, Multi-Agent Systems, and Complexity: The PAAMS Collection, Springer International Publishing, pp. 262–274, doi:10.1007/978-3-319-94580-4_21, ISBN 9783319945798
  4. ^ Klein, G.; Woods, D.D.; Bradshaw, J.M.; Hoffman, R.R.; Feltovich, P.J. (2004). "Ten Challenges for Making Automation a "Team Player" in Joint Human-Agent Activity". IEEE Intelligent Systems. 19 (6): 91–95. doi:10.1109/mis.2004.74. ISSN 1541-1672. S2CID 27049933.