In economics, game theory, decision theory, and artificial intelligence, a rational agent is an agent that has clear preferences, models uncertainty via expected values, and always chooses the feasible action with the best expected outcome for itself. Rational agents are also studied in the fields of cognitive science, ethics, and philosophy, including the philosophy of practical reason.
A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.
The action a rational agent takes depends on:
- the preferences of the agent
- the agent's information about its environment, which may come from past experiences
- the actions, duties and obligations available to the agent
- the estimated or actual benefits of the actions and their chances of success.
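Choosing the action with the best expected outcome can be made concrete with a minimal sketch. Here a risk-neutral agent weighs each feasible action's possible outcomes by their probabilities and picks the action with the highest expected utility; the action names, payoffs, and probabilities are purely illustrative.

```python
def expected_utility(outcomes, utility):
    """Expected utility of an action given (outcome, probability) pairs."""
    return sum(p * utility(outcome) for outcome, p in outcomes)

def choose(actions, utility):
    """Return the name of the feasible action with the highest expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name], utility))

# Hypothetical actions: outcomes are monetary payoffs, probabilities sum to 1.
actions = {
    "safe":  [(50, 1.0)],                # certain payoff of 50
    "risky": [(100, 0.6), (0, 0.4)],     # expected payoff 60
}

# A risk-neutral agent values money linearly, so it prefers the risky action.
best = choose(actions, utility=lambda x: x)
```

A risk-averse agent would simply supply a concave utility function (for example a square root), which can reverse the choice; the decision rule itself stays the same.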
In game theory and classical economics, the actors (people and firms) are often assumed to be rational. However, the extent to which people and firms actually behave rationally is subject to debate. Economists often use the models of rational choice theory and bounded rationality to formalize and predict the behavior of individuals and firms. Rational agents sometimes behave in ways that are counter-intuitive to many people, as in the Traveler's dilemma.
Artificial intelligence has borrowed the term "rational agent" from economics to describe autonomous programs that are capable of goal-directed behavior. Today there is considerable overlap between AI research, game theory, and decision theory. Rational agents in AI are closely related to "intelligent agents", autonomous software programs that display intelligence.
See also 
- Economics and game theory
- Artificial intelligence