Talk:Bounded rationality

From Wikipedia, the free encyclopedia
WikiProject Game theory (Rated Start-class, High-importance)
WikiProject Economics (Rated C-class, Mid-importance)

Is a system[edit]

is a system in which decisions are driven by the desire to identify and select the first acceptable alternative (satisficing).

I believe satisficing is a more specific term. Perhaps it should be mentioned, but I don't think the concept is synonymous with "bounded rationality"

The term was coined by Prof. Herbert Simon in 1981.

Its use in Simon 1957 suggests otherwise. Are we even sure Simon coined the term at all?—Preceding unsigned comment added by (talk) 19:49, 9 September 2002
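
For what it's worth, the satisficing rule quoted at the top of this section (select the first acceptable alternative) can be sketched in a few lines of Python; the option list, payoff function, and aspiration level below are invented purely for illustration:

```python
def satisfice(options, payoff, aspiration):
    """Return the first option whose payoff meets the aspiration level."""
    for s in options:
        if payoff(s) >= aspiration:
            return s  # stop at the first acceptable alternative
    return None       # no option clears the aspiration level

# Toy example: payoffs are the values themselves, aspiration level is 6.
print(satisfice([2, 5, 7, 9], lambda s: s, aspiration=6))  # → 7
```

Note that the satisficer settles for 7 even though 9 is available; an optimizer would keep searching. That gap is one reason satisficing is a more specific notion than bounded rationality in general.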

Albert Einstein[edit]

He gives Albert Einstein as an example of bounded rationality. How is Einstein a better example of bounded rationality than, say, me? In fact, how can Einstein be an example of bounded rationality? He lived his life within the limitations of bounded rationality, but Einstein was not a concept. That Einstein sentence makes no sense.

I agree, need an example.—Preceding unsigned comment added by (talk) 20:33, 3 March 2006

References before See Also[edit]

Do references go before or after see also? The article looks a bit weird. —The preceding unsigned comment was added by Forwardmeasure (talkcontribs) 22:22, 29 March 2007 (UTC).

See also goes before References; see the instructions at Wikipedia:Guide_to_layout#Standard_appendices_and_descriptions for more information.
Trade2tradewell 08:37, 1 April 2007 (UTC)

Bounded Emotionality[edit]

Yeah, I totally agree. I have no idea why it's even here. I think someone is trying to put in their personal say. I am going to remove it. If anyone has any objections, please discuss. —Preceding unsigned comment added by (talk) 03:57, 6 December 2008 (UTC)

This seems to have snuck in here. It should be in its own article, not in this article as a section. Kevin Purcell (talk) 22:36, 19 November 2008 (UTC)

Frankly this section is unreadable. I sincerely hope it's intended as a parody of feminism. (talk) 23:14, 4 December 2008 (UTC)


Why does "hyperrational" redirect here when the term "hyper" doesn't even occur in the article? (talk)

Ariel Rubinstein[edit]

Note as written:

This puts the study of decision procedures on the research agenda.

This sentence draws my curiosity, but I don't fully understand it... is it clear?

Does it mean something like the following:

Rubinstein's way of approaching bounded rationality provides a model through which it can be researched empirically.

--Ihaveabutt (talk) 03:20, 5 May 2009 (UTC)

On the three statements about Rubinstein, Kahneman and the comment below, there is not enough information on the current page. What is needed is a paragraph explaining why, if you do not have perfect rationality, the way you decide what to do matters. I also think that we need to explain why "bounded rationality" should not be understood as some "super-rationality" of optimizing subject to lots of information and processing constraints. Will do this when I have the time. Byronmercury (talk) 14:03, 24 October 2012 (UTC)

Heuristics vs theoretical optima[edit]

The article says:

Gigerenzer ... and his colleagues have shown that such simple heuristics frequently lead to better decisions than the theoretically optimal procedure.

That's ridiculous. The "theoretically optimal procedure", by definition, leads to the best possible decision. At best, the heuristics can equal it, but they can never lead to _better_ decisions. —Preceding unsigned comment added by (talk) 16:04, 30 October 2009 (UTC)

Pair-of-scissors analogy.[edit]

IMHO, this sentence:

[Herbert A.] Simon used the analogy of a pair of scissors, where one blade is the "cognitive limitations" of actual humans and the other the "structures of the environment"; minds with limited cognitive resources can thus be successful by exploiting pre-existing structure and regularity in the environment.[1]

is more confusing than enlightening. I think it should either be expanded to actually explain the analogy, or else trimmed to remove mention of it.

RuakhTALK 16:00, 27 November 2012 (UTC)

epsilon optimization[edit]

Added a brief summary of this idea. I hope it is clear to the general reader. Byronmercury (talk) 10:19, 24 April 2013 (UTC)

In response to the expert ideas review by Valente: the concepts of epsilon optimization and Epsilon-equilibrium are more of a game-theoretic (and information-theoretic) idea than simply an economic one. However, it is also used in economics, for example in macroeconomics: Akerlof and Yellen's work on menu costs uses the phrase "near rationality" in their 1985 QJE article. It would, however, be good to see some more material on bounded rationality in other disciplines such as economics, management and psychology. Byronmercury (talk) 10:28, 3 September 2016 (UTC)

Dr. Valente's comment on this article[edit]

Dr. Valente has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:

Huw Dixon later argues that it may not be necessary to analyze in detail the process of reasoning underlying bounded rationality.[4] If we believe that agents will choose an action that gets them "close" to the optimum, then we can use the notion of epsilon-optimization, that means you choose your actions so that the payoff is within epsilon of the optimum. If we define the optimum (best possible) payoff as

U*, then the set of epsilon-optimizing options S(ε) can be defined as all those options s such that:

U(s) ≥ U* − ε.

The notion of strict rationality is then a special case (ε=0). The advantage of this approach is that it avoids having to specify in detail the process of reasoning, but rather simply assumes that whatever the process is, it is good enough to get near to the optimum.

This paragraph seems a bit unwarranted. Even considering that the literature on bounded rationality is very heterogeneous, this definition of bounded rationality is, to my knowledge, rarely used in the economic literature, and actually contrasts with the very core idea of rationality-as-procedure.

The article could be improved by mentioning the fields where the bounded rational principles have been applied. Concerning Economics they could be: management (e.g. organizational theory), economics of innovation, theory of consumption.

We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

We believe Dr. Valente has expertise on the topic of this article, since he has published relevant scholarly research:

  • Reference : Marco Valente & Giorgio Fagiolo & Luigi Marengo, 2003. "Endogenous Networks in Random Population Games," Computing in Economics and Finance 2003 68, Society for Computational Economics.

ExpertIdeasBot (talk) 15:09, 31 August 2016 (UTC)
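
As a quick concreteness check on the epsilon-optimization passage quoted in the review above, the definition U(s) ≥ U* − ε translates directly into code; the option set and payoff function here are invented purely for illustration:

```python
def epsilon_optimal_set(options, payoff, epsilon):
    """All options whose payoff is within epsilon of the best payoff U*."""
    u_star = max(payoff(s) for s in options)  # optimum payoff U*
    return [s for s in options if payoff(s) >= u_star - epsilon]

# Toy payoff peaking near s = 3.2.
options = [1, 2, 3, 4, 5]
payoff = lambda s: -(s - 3.2) ** 2
print(epsilon_optimal_set(options, payoff, epsilon=1.0))  # → [3, 4]
print(epsilon_optimal_set(options, payoff, epsilon=0.0))  # → [3]
```

With ε = 0 the set collapses to the strict optimum, matching the article's remark that strict rationality is the special case ε = 0.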

HaeB edit on 29th Nov, 2016[edit]


This refers to your edit on the Bounded Rationality page. This article satisfies WP:RS requirements, as it is a published article in a peer-reviewed journal. Kindly let me know what the concern/issue is. Thanks!!! TANI0208 (talk) 19:07, 13 December 2016 (UTC)

Nietzschean utilitarianism[edit]

Some critics of Nudge have lodged attacks that modifying choice architectures will lead to people becoming worse decision-makers.

Also known as paternalistic Nietzschean utilitarianism.

Why does this kind of perverse sentiment slide under the radar so easily? The word "worse" conceals a multitude of problems; it is also a feasible observational correlate of "freeing people to invest their energy elsewhere". And whose idea of "worse" is it, anyway?

And if a person wanted to become a better decision maker, would filling the world with random decision challenges be the best strategy?

No, actually, to become better in an effective way, one would construct the challenge gradient with great thought and care.

It pains me to see this kind of sentence being given equal air time, but what can you do? — MaxEnt 23:46, 13 February 2017 (UTC)