User:Sadhwi Srinivas/sandbox

From Wikipedia, the free encyclopedia

An information (or informational) cascade occurs when people observe the actions of others and then make the same choice that the others have made, independently of their own private information signals. A cascade develops when people "abandon their own information in favor of inferences based on earlier people's actions"[1]. The theory of information cascades explains how such situations can occur, how likely they are to propagate incorrect information or actions, how such behavior may arise and desist rapidly, and how effective attempts to originate a cascade tend to be under different conditions [2]. In explaining these phenomena, the original Independent Cascade model sought to improve on earlier models, which could not account for cascades of irrational behavior or for the fragility and short-lived nature of certain cascades.

There are two key conditions in an information cascade model:

  1. Sequential decisions with subsequent actors observing decisions (not information) of previous actors.
  2. A limited action space (e.g. an adopt/reject decision).[3]

One assumption of information cascades that has been challenged is that agents always make rational decisions. More social perspectives on cascades, which suggest that agents may act irrationally (e.g., against what they think is optimal) when social pressures are great, exist as complements to the concept of information cascades[4]. While competing models exist, a more frequent problem is that the concept of an information cascade is conflated with ideas that do not match the two key conditions of the model, such as social proof, information diffusion[5], and social influence. Indeed, the term information cascade has even been used to refer to such processes[6].

Basic Model

Qualitative Example

An information cascade occurs when external information obtained from previous participants in an event overrides one's own private signal, irrespective of the correctness of the former over the latter. The experiment described in [7] is a useful example of this process. The experiment used two urns, labeled A and B. Urn A contains two balls labeled "a" and one labeled "b"; urn B contains one ball labeled "a" and two labeled "b". The urn from which a ball must be drawn during each run is determined randomly, with equal probabilities (by the roll of a die). The contents of the chosen urn are emptied into a neutral container. The participants are then asked, in random order, to draw a ball from this container. This entire process is termed a "run", and a number of such runs are performed.

Each time a participant draws a ball, he must decide which urn it came from. His decision is then announced for the benefit of the remaining participants in the room. Thus, the (n+1)th participant has information about the decisions of all n participants preceding him, as well as his private signal, namely the label on the ball he draws during his turn. The experimenters observed an information cascade in 41 of 56 such runs: in each of these runs, at least one participant gave precedence to earlier decisions over his own private signal. It is possible for such an occurrence to produce the wrong result; this phenomenon is known as a "reverse cascade".
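
This process is easy to simulate. Below is a minimal sketch in Python; the tie-breaking convention (follow one's own signal when indifferent) and q = 2/3, implied by the two-of-three urn composition, are modeling assumptions rather than details from the experiment:

    import random

    # Minimal sketch of one run of the urn experiment described above.
    # Q = 2/3: each urn holds two balls matching its own label and one
    # with the other label, so a draw points to the true urn w.p. 2/3.
    Q = 2 / 3

    def run_experiment(n_participants=10, seed=None):
        rng = random.Random(seed)
        true_urn = rng.choice("AB")   # chosen by a fair die roll
        a = b = 0                     # public counts of "A"/"B" guesses
        guesses = []
        for _ in range(n_participants):
            # Private signal: the urn favored by the drawn ball's label
            # ("a" favors urn A, "b" favors urn B).
            signal = true_urn if rng.random() < Q else ("B" if true_urn == "A" else "A")
            # Counting rule from the model: weigh prior guesses plus one's
            # own signal equally; follow the private signal when tied.
            votes_a = a + (signal == "A")
            votes_b = b + (signal == "B")
            guess = "A" if votes_a > votes_b else ("B" if votes_b > votes_a else signal)
            guesses.append(guess)
            a += guess == "A"
            b += guess == "B"
        return true_urn, guesses

    true_urn, guesses = run_experiment(seed=1)
    print("true urn:", true_urn, "public guesses:", "".join(guesses))

Running many such simulated runs reproduces the qualitative finding above: once a few early guesses agree, later participants rationally ignore their own draws, and some runs lock onto the wrong urn.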

Quantitative Description

The original Independent Cascade model [2] assumes a sequence of individuals, each of whom decides to accept or reject an idea, action, or behavior (hereafter referred to as an idea). The model also assumes that there is a single correct answer as to whether an agent should accept or reject the idea; it is therefore a binary choice. With probability p the correct action is to accept, and with probability 1-p the correct action is to reject. In the work below, we write p as P[Accept] (P[A] for short), the probability that the correct decision is to accept, and 1-p as P[Reject] (P[R] for short).

We start with a sequence of N individuals. Each individual in the sequence knows both his place in the sequence and the actions (accept or reject) of all previous individuals. In addition, each individual has a private signal which tells him whether to accept or reject the idea. This private signal is obtained before anything else occurs in the model.

An agent decides whether to accept or reject the idea based on the actions of agents ahead of him in the sequence and on his own private information, or "signal". Note that the conditions of an information cascade stipulate that individuals do not have access to the private information of any other individual. An important assumption of the model is that agents make rational choices: if they believe it will benefit them to accept the idea, they will; otherwise they will reject it. If we let V denote the value of accepting when accepting is correct, and 1-V the value of rejecting when rejecting is correct, then the agent will accept whenever he believes V·p > (1-V)·(1-p). If we set V = 0.5, agents will accept exactly when they believe the correct decision is more likely to be accept, and reject in the converse situation.
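
As a worked instance of this threshold (using the payoff comparison as reconstructed above), setting V = 0.5 reduces the rule to a simple majority-belief test:

V \cdot p > (1-V)(1-p) \quad\overset{V=0.5}{\Longrightarrow}\quad 0.5\,p > 0.5\,(1-p) \quad\Longleftrightarrow\quad p > 0.5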

In the literature [2][1], a person's signal telling him to accept is denoted "H" (a high signal, where high signifies he should accept), and a signal telling him not to accept is denoted "L" (a low signal). The model assumes that when the correct decision is to accept, individuals are more likely to see an "H", and conversely, when the correct decision is to reject, individuals are more likely to see an "L". This is a conditional probability: P[H|A] is the probability of an "H" signal when the correct action is accept, and similarly P[L|R] is the probability of an "L" signal when the correct action is reject. If we give both of these conditional probabilities the value q, then q > 0.5. This is summarized in the table below.[1]

Agent signal    True state: Reject    True state: Accept
L               q                     1-q
H               1-q                   q

The model proceeds as follows: the first agent decides whether to accept based solely on his own signal. Because the model assumes that all agents act rationally, the agent chooses whichever action (accept or reject) he believes is more likely to be correct. This decision can be explained using Bayes' rule:

P[A|H] = \frac{P[H|A]\,P[A]}{P[H]} = \frac{q\,p}{q\,p + (1-q)(1-p)}

If the agent receives an "H" signal, the probability that accepting is correct is obtained by calculating P[A|H]. Because q > 0.5, the first agent, acting only on his private signal, will always increase his estimate of p upon seeing an "H" signal. Similarly, it can be shown that an agent will always decrease his estimate of p when he receives a low signal. Recall that, if the value, V, of accepting equals the value of rejecting, an agent will accept when he believes p > 0.5 and reject otherwise. Because this agent starts from the assumption that accepting and rejecting are equally viable options (p = 0.5), observing an "H" signal raises his estimate of p to q > 0.5, allowing him to conclude that accepting is the rational choice.
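
A minimal sketch of this update in Python (the function posterior_accept and the illustrative value q = 2/3 are ours, not from the cited sources):

    def posterior_accept(p, q, signal):
        """P[A | signal] by Bayes' rule, for prior p = P[A] and
        signal quality q = P[H|A] = P[L|R]."""
        if signal == "H":
            return q * p / (q * p + (1 - q) * (1 - p))
        return (1 - q) * p / ((1 - q) * p + q * (1 - p))

    # With an uninformative prior p = 0.5, an "H" signal yields
    # P[A|H] = q > 0.5 (accept); an "L" signal yields 1 - q < 0.5 (reject).
    print(posterior_accept(0.5, 2/3, "H"))   # 0.666...
    print(posterior_accept(0.5, 2/3, "L"))   # 0.333...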

The second agent then considers both the first agent's decision and his own signal, again in a rational fashion. In general, the nth agent considers the decisions of the previous n-1 agents along with his own signal, and makes the analogous Bayesian calculation:

P[A \mid a, b] = \frac{p\,q^a (1-q)^b}{p\,q^a (1-q)^b + (1-p)(1-q)^a\,q^b}

where "a" is the number of accepts among the previous decisions plus the agent's own signal, and "b" is the number of rejects, so that a + b = n. As before, we are interested in how the value on the right-hand side compares with p. As explained in [1] (Chapter 16, p. 499), if there are more accepts than rejects (a > b) the agent will accept, and if there are more rejects (b > a) the agent will reject. The mathematics is as follows:

If we replace the second term in the denominator with (1-p)q^a(1-q)^b, the value of the whole expression becomes exactly p. Recall our assumption that q > 0.5. If a > b, this replacement increases the denominator and therefore decreases the value of the expression to p, which means the original value was greater than p. Since the agent starts from p = 0.5 and accepts whenever his posterior exceeds 0.5, he will choose to accept in this case. Similarly, if a < b, the replacement decreases the denominator and increases the value of the expression to p, so the original value was less than p, that is, less than 0.5; in this case the agent will choose to reject.
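
The same conclusion follows directly by comparing the posterior above with p; since 0 < p < 1, the prior cancels and only the signal counts matter:

\frac{p\,q^a(1-q)^b}{p\,q^a(1-q)^b + (1-p)(1-q)^a q^b} > p \;\Longleftrightarrow\; q^a(1-q)^b > (1-q)^a q^b \;\Longleftrightarrow\; \left(\tfrac{q}{1-q}\right)^{a-b} > 1 \;\Longleftrightarrow\; a > b \quad (\text{since } q > 0.5)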

Practically, then, because an agent's single signal is counted equally with the decisions of all previous agents, once |a-b| >= 2 the agent's own signal no longer matters. Furthermore, if this occurs at position n in the sequence, every subsequent agent simply imitates, so agents n+1, n+2, ... receive no additional information beyond what was available up through agent n: each one is making a decision based only on the information accumulated up to that point.
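
This lock-in can be verified directly with the counting rule; in the following minimal sketch, the helper decides_accept is a hypothetical name introduced for illustration:

    def decides_accept(a, b, signal):
        """Counting rule from the model: weigh the a prior accepts and
        b prior rejects equally with one's own signal; accept iff the
        accept votes win, following the private signal on a tie."""
        votes_accept = a + (signal == "H")
        votes_reject = b + (signal == "L")
        if votes_accept != votes_reject:
            return votes_accept > votes_reject
        return signal == "H"

    # Once the public history leans by two or more, the private signal
    # no longer changes the decision:
    for a, b in [(2, 0), (5, 3), (0, 2)]:
        print((a, b), decides_accept(a, b, "H"), decides_accept(a, b, "L"))
    # (2, 0) True True    (5, 3) True True    (0, 2) False False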

Explicit Model Assumptions

The original model makes several assumptions about human behavior and the world in which humans act [2], some of which are relaxed in later versions [1] or in alternate definitions of similar problems, such as the diffusion of innovations.

  1. Boundedly Rational Agents: The original Independent Cascade model assumes that humans are boundedly rational[8] – that is, they always make rational decisions based on the information they can observe, but the information they observe may be incomplete or incorrect. In other words, agents do not have complete knowledge of the world around them (which would allow them to make the correct decision in every situation). As a result, there is a point at which, even if a person's private information favors the correct view of the idea or action cascading, he can rationally be led by the observed actions of others to adopt an alternate, incorrect view of the world.
  2. Incomplete Knowledge of Others: The original Independent Cascade model assumes that agents have incomplete knowledge of the agents who precede them in the specified order. As opposed to definitions in which agents have some knowledge of the "private information" held by previous agents, the current agent makes a decision based only on the observable actions (whether or not to imitate) of those preceding him. The original authors argue that this assumption explains why real-world information cascades can be triggered by small shocks.
  3. Behavior of All Previous Agents Is Known: each agent observes the full sequence of accept/reject decisions made by everyone ahead of him in the queue.

Resulting Conditions

  1. Cascades will always occur: in the simple model, the likelihood of a cascade occurring approaches 1 as the number of people making decisions grows toward infinity.
  2. Cascades can be incorrect: because agents decide with both bounded rationality and only probabilistic knowledge of the initial truth (i.e., whether accepting or rejecting is correct), incorrect behavior may cascade through the system (see the Monte Carlo sketch after this list).
  3. Cascades can be based on little information: mathematically, a cascade of infinite length can occur based on the decisions of only two people. More generally, a small set of people who strongly promote an idea as being rational can rapidly influence a much larger subset of the general population.
  4. Cascades are fragile: because agents receive no extra information once the difference between a and b reaches 2, and because such differences can arise after only a few agents, agents who weigh the opinions of those deciding on actual information can be dissuaded from a choice rather easily. [2] thus suggests that cascades are susceptible to the release of public information, since the amount of information gleaned from previous decision-makers in the queue is low. [2] also discusses this result in the context of an underlying value p that changes over time, in which case a cascade can rapidly change course.
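
Conditions 1 and 2 can be checked numerically. The following Monte Carlo sketch (with illustrative parameters q = 2/3 and 20 agents, not values taken from the sources) estimates how often runs end in a correct cascade, an incorrect cascade, or no cascade:

    import random

    def decides_accept(a, b, signal):
        # Same counting rule as in the sketch above.
        votes_accept = a + (signal == "H")
        votes_reject = b + (signal == "L")
        if votes_accept != votes_reject:
            return votes_accept > votes_reject
        return signal == "H"

    def classify_run(n_agents, q, rng):
        accept_is_correct = rng.random() < 0.5    # prior p = 0.5
        a = b = 0
        for _ in range(n_agents):
            informative = rng.random() < q        # signal matches truth w.p. q
            signal = "H" if informative == accept_is_correct else "L"
            if decides_accept(a, b, signal):
                a += 1
            else:
                b += 1
        # Before a cascade starts |a - b| <= 1; afterwards the gap only
        # grows, so a final gap of 2 or more means a cascade occurred.
        if abs(a - b) < 2:
            return "no cascade"
        return "correct" if (a > b) == accept_is_correct else "incorrect"

    rng = random.Random(0)
    counts = {"correct": 0, "incorrect": 0, "no cascade": 0}
    for _ in range(10_000):
        counts[classify_run(20, 2 / 3, rng)] += 1
    print(counts)   # most runs cascade; a visible minority cascade incorrectly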

Examples and fields of application

Information cascades occur in situations where seeing many people make the same choice provides evidence that outweighs one's own judgment. That is, one thinks: "It's more likely that I'm wrong than that all those other people are wrong. Therefore, I will do as they do."

In what has been termed a "reputational cascade", late responders sometimes go along with the decisions of early responders, not just because the late responders think the early responders are right, but also because they perceive their reputation will be damaged if they dissent from the early responders.[9]

Market cascades

Information cascades have become one of the topics of behavioral economics, as they are often seen in financial markets, where they can feed speculation and create cumulative and excessive price moves, either for the whole market (e.g., a market bubble) or for a specific asset, such as a stock that becomes overly popular among investors.

Marketers also use the idea of cascades to attempt to start a buying cascade for a new product. If they can induce an initial set of people to adopt the new product, then those who make purchasing decisions later on may also adopt the product even if it is no better than, or perhaps even worse than, competing products. This is most effective if later consumers can observe the adoption decisions, but not how satisfied the early customers actually were with the choice. This is consistent with the idea that cascades arise naturally when people can see what others do but not what they know[10].

Information cascades are usually considered by economists:

  • as products of rational expectations at their start,
  • as irrational herd behavior if they persist for too long, which signals that collective emotions also come into play to feed the cascade.

Social Network Analysis

Dotey et al.[11] state that information flows in the form of cascades on social networks. According to the authors, analysis of the virality of information cascades on a social network may lead to many useful applications, such as determining the most influential individuals within a network. This information can be used for maximizing market effectiveness or influencing public opinion. Various structural and temporal features of a network affect cascade virality.

In contrast to work on information cascades in social networks, the social influence model of belief spread argues that people have some notion of the private beliefs of those in their network [12]. The social influence model thus relaxes the assumption of information cascades that people act only on the observable actions taken by others. In addition, the social influence model embeds people within a social network, as opposed to a queue. Finally, the social influence model relaxes the assumption of the information cascade model that people either complete an action or do not, by allowing a continuous scale for the "strength" of an agent's belief that an action should be completed.

Historical examples

  • Small protests began in Leipzig, Germany in 1989 with just a handful of activists challenging the German Democratic Republic.[13] For almost a year, protesters met every Monday, growing by a few people each time.[13] By the time the government attempted to address the protests in September 1989, they had grown too big to quash.[13] In October, the number of protesters reached 100,000, and on the first Monday in November, over 400,000 people marched in the streets of Leipzig. Two days later, the Berlin Wall was dismantled.[13]

Empirical Studies

In addition to the examples above, information cascades have been shown to exist in several empirical studies. Perhaps the best example is the one described above [7]. Participants stood in a line behind an urn containing balls of different colors. Sequentially, each participant would pick a ball out of the urn, look at it, and place it back. The participant would then voice an opinion, for the rest of the participants to hear, as to which color of ball (red or blue) formed the majority in the urn. Participants received a monetary reward for guessing correctly, giving them an incentive to act rationally.

Other examples include:

  • De Vany and Walls[14] construct a statistical model of information cascades in which an action is required. They apply this model to the actions people take in deciding to see a movie that has opened at the theater, and validate the model on this data, finding a Pareto distribution of revenue across different movies consistent with their model.
  • Walden and Browne also adopt the original Information Cascade model, recasting it as an operational model that is more practical for real-world studies and allows analysis based on observed variables. Walden and Browne test their model on data about the adoption of new technologies by businesses, finding support for their hypothesis that information cascades play a role in this adoption.[15]

References

  1. Easley, David; Kleinberg, Jon (2010). Networks, Crowds, and Markets: Reasoning about a Highly Connected World. Cambridge University Press. pp. 483–506.
  2. Bikhchandani, S.; Hirshleifer, D.; Welch, I. (1992). "A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades". Journal of Political Economy. 100 (5): 992–1026.
  3. Information Cascades and Rational Herding: An Annotated Bibliography and Resource Reference.
  4. Shiller, R.J. (1995). "Conversation, Information, and Herd Behavior". American Economic Review. 85 (3): 181–185.
  5. Gruhl, Daniel; Guha, R.; Liben-Nowell, David; Tomkins, Andrew (2004). "Information diffusion through blogspace". Proceedings of the 13th International Conference on World Wide Web (WWW): 491–501. doi:10.1145/988672.988739. ISBN 158113844X.
  6. Sadikov, E.; Medina, M.; Leskovec, J.; Garcia-Molina, H. (2011). "Correcting for Missing Data in Information Cascades" (PDF). WSDM. Retrieved March 23, 2012.
  7. Anderson, L.R.; Holt, C.A. (1997). "Information Cascades in the Laboratory". The American Economic Review. 87 (5): 847–862.
  8. Newell, A.; Simon, H.A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.
  9. Lemieux, Pierre (2003). "Following the Herd". Regulation. Cato Institute. 21. Retrieved 14 July 2010.
  10. http://research.ivo-welch.info/palgrave.pdf
  11. Dotey, A.; Rom, H.; Vaca, C. (2011). Information Diffusion in Social Media. Stanford University.
  12. Friedkin, N.E.; Johnsen, E.C. (2011). Social Influence Network Theory: A Sociological Examination of Small Group Dynamics. Cambridge University Press.
  13. Shirky, Clay (2008). Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press. pp. 161–164. ISBN 978-1594201530.
  14. De Vany, A.; Walls, W.D. (1999). "Uncertainty in the movie industry: does star power reduce the terror of the box office?". Journal of Cultural Economics. 23 (4): 285–318. doi:10.1023/A:1007608125988.
  15. Walden, Eric; Browne, Glenn (2002). "Information Cascades in the Adoption of New Technology". ICIS Proceedings.

Category:Group processes Category:Behavioral finance