
Human contingency learning

Human contingency learning (HCL) is the process by which people acquire knowledge of which outcome is most likely to follow particular stimuli. In other words, individuals learn associations between a certain behaviour and a specific consequence. It is a form of learning found in many organisms.

Stimulus pairings can affect responses in many ways, such as influencing response speed and accuracy, affective evaluations, and causal attributions.[1]

Research on human contingency learning has developed considerably over the past 20 years. Further development is needed because many of the models that have been proposed cannot account for all of the existing data.[2]

Description

Human contingency learning focuses on the acquisition and development of explicit or implicit knowledge of the relationships or statistical correlations between stimuli and responses.[1] It is similar to operant conditioning, a learning process in which a behaviour is encouraged or discouraged through reinforcement or punishment. However, human contingency learning has been recognised as a cognitive process and may be considered an extension of classical conditioning.[1] Human contingency learning also has its theoretical roots in classical conditioning, which focuses on the statistical correlations between two stimuli rather than between a stimulus and a response.[3]

The methods used in experiments and studies of human contingency learning tend to be quite similar.[2] Participants in many studies are given information about a number of situations in which certain stimuli and certain responses are either absent or present.[2] They are then asked to judge the extent to which the stimuli are related to the responses.[2] For example, in a trial, participants are shown a list of foods that a fictitious patient has eaten (the stimulus) along with details about whether the patient experienced an allergic reaction after eating them (the response).[2] According to the Quarterly Journal of Experimental Psychology, participants then apply this information to judge the probability that the same patient would experience an allergic reaction after consuming a different set of foods.
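The judgement at the end of such a task can be illustrated with a small sketch. The following Python example (with invented food names and trial records, not data from any cited study) tallies hypothetical trials into the relative frequency of the outcome given a cue, the simplest quantity a participant could track.

```python
# Hypothetical allergy-task records: each trial lists the foods eaten (the cues)
# and whether an allergic reaction followed (the outcome). All data are invented.
trials = [
    ({"peanuts", "bread"}, True),
    ({"bread", "cheese"}, False),
    ({"peanuts"}, True),
    ({"cheese"}, False),
]

def outcome_probability(cue, trials):
    """Estimate P(outcome | cue present) from the observed trials."""
    outcomes_with_cue = [outcome for cues, outcome in trials if cue in cues]
    if not outcomes_with_cue:
        return 0.0
    return sum(outcomes_with_cue) / len(outcomes_with_cue)

# A judgement about "peanuts" could, at its simplest, reflect this relative frequency.
print(outcome_probability("peanuts", trials))  # 1.0 for this toy data set
```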

Human contingency learning inherits most of its fundamental concepts from classical conditioning (and some from operant conditioning), research traditions that primarily studied animals. It expands upon these studies and applies them to human behaviour.[4]

Human contingency learning is recognised as an ability important to survival because it allows organisms to predict and control events in their environment based on previous experience.[2]

Theoretical roots

Ivan Pavlov, the Father of Classical Conditioning

Origins of classical (Pavlovian) conditioning

Human contingency learning has its roots in classical conditioning, also referred to as Pavlovian conditioning after the Russian physiologist Ivan Pavlov.[5] It is a type of learning through association in which two stimuli are linked to create a new response in an animal or person.[3] The best-known experiment is Pavlov's dogs, in which food was presented to the dogs along with the repeated sound of a bell; the food, the initial stimulus, would cause the dogs to salivate.[6] The pairing of the bell with the food resulted in the bell becoming a new stimulus even after the food was removed from the pairing.[6] This meant that the bell (the new stimulus) would evoke a conditioned response from the dogs without the presence of the initial stimulus, since the dogs anticipated the arrival of food.[6]

At a procedural level, human contingency learning experiments are very similar to the classical conditioning testing methods.[2] Stimuli consisting of cues and outcomes are paired and the decisions of the participants in response to the stimuli (contingency judgements) are assessed.[2]

Origins of operant conditioning

B.F. Skinner at Harvard circa 1950

Human contingency learning also has strong similarities with operant conditioning.[1] As mentioned, this method of learning involves the reinforcement or punishment of a certain behaviour. Once a behaviour reliably leads to a certain consequence, the individual being tested forms an association between the behaviour and the consequence. For example, if a behaviour produces a positive consequence, the individual or organism will learn from this and continue the behaviour, as the action is perceived as being rewarded.[7] This theory was developed by B.F. Skinner and explored in his 1938 book "The Behavior of Organisms: An Experimental Analysis".

Skinner's research built on Thorndike's law of effect, which states that a behaviour followed by pleasant consequences tends to be repeated. The contrary is also true: if a behaviour is followed by unpleasant consequences, it is unlikely to continue.[7]

Concepts and theories behind human contingency learning

As with any theory, an introduction to the fundamental concepts and frameworks underlying the overall cognitive process is necessary. These theories are, however, still being tested, as the methods used to evaluate their hypotheses remain inconclusive and subject to review.[2]

Associative theories

Pathway strengthening (Rescorla-Wagner model)

One of the main cognitive theories inherent in human contingency learning is pathway strengthening, which is based on the Rescorla-Wagner model. It has been proposed as the mechanism underlying the gradual learning of tendencies to respond to certain inputs.[4] Pathway strengthening attributes performance to the strengthening of pathways linking cue representations with outcome representations.[8] The Rescorla-Wagner model is a model of classical conditioning in which learning is attributed to associations between conditioned and unconditioned stimuli.[9] Its main focus is that conditional stimuli can trigger or signal the unconditional stimuli.[10]

Stronger pathways allow for more efficient and automatic responses.[11] When participants perform fast-paced sequence-learning tasks, pathway strengthening accounts for the gradual speeding of their responses.[12]

Associative models assume that knowledge of a cue-outcome relationship is represented by an associative bond and that this bond should function in the same way regardless of the precise semantics of the test question.[5] Such a relationship can be illustrated by the experiment with Pavlov's dogs.

The strength of the conditioned response induced by a conditional stimulus depends on how strong the association is between the representations of the conditional and unconditional stimuli.[2] This relationship can be expressed with the following learning rule:[2]

$\Delta V_X = \alpha_X \beta (\lambda - \Sigma V)$

In this formula, the change in the associative strength of conditional stimulus X on a given trial (ΔV_X) depends on both the associative strength the cue has acquired previously and the summed associative strengths of all stimuli present on that trial (ΣV).[2] The term λ represents the highest associative strength that a certain unconditional stimulus can support.[10] When an unconditional stimulus is present on a trial, λ is given a positive value (often 1); when the unconditional stimulus is absent, λ takes the value 0.[10] The α and β terms in the formula are constants representing the speed of learning supported by a given conditional and unconditional stimulus.[10] Although this model has primarily been applied to classical conditioning, according to Dickinson et al. (1984), the Rescorla-Wagner model also applies to human contingency learning.[2]
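As a concrete illustration of the rule above, here is a minimal Python sketch of the Rescorla-Wagner update applied to a toy cue-outcome sequence. The learning-rate values and the trial sequence are arbitrary assumptions made for the example, not parameters taken from the cited studies.

```python
# Minimal sketch of the Rescorla-Wagner learning rule:
#   delta V_X = alpha_X * beta * (lambda - sum of V over all cues present)
# Parameter values and the trial sequence are illustrative assumptions only.

def rescorla_wagner(trials, alpha, beta=0.5, lam_present=1.0, lam_absent=0.0):
    """Return the associative strengths V after processing the trials.

    trials: list of (cues_present, outcome_present) pairs.
    alpha:  per-cue salience (learning-rate) values, e.g. {"A": 0.3}.
    """
    V = {cue: 0.0 for cue in alpha}
    for cues, outcome in trials:
        lam = lam_present if outcome else lam_absent
        summed_v = sum(V[c] for c in cues)      # combined strength of the cues shown
        error = lam - summed_v                  # prediction error on this trial
        for c in cues:                          # only cues present on the trial are updated
            V[c] += alpha[c] * beta * error
    return V

# Example: cue A is always followed by the outcome, cue B never is.
trials = [({"A"}, True), ({"B"}, False)] * 20
print(rescorla_wagner(trials, alpha={"A": 0.3, "B": 0.3}))
# V["A"] approaches lambda (1.0) over trials, while V["B"] stays at 0.
```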

The problem with this, however, is the model's inherent assumptions; as De Houwer and Beckers state,

One only needs to assume that associations are formed between the representations of cues and outcomes, that the strength of these associations is updated according to the Rescorla–Wagner learning rule, and that judgements about the contingency between a target cue and an outcome are a reflection of the strength of the association that links the representations of that cue and outcome.

— Jan De Houwer & Tom Beckers (2002) A review of recent developments in research and theories on human contingency learning, The Quarterly Journal of Experimental Psychology: Section B, 55:4, 289-310

Therefore, the Rescorla-Wagner model's relevance to human contingency learning is considered problematic because the model "underestimate[s] the active role that observers play when encoding and retrieving knowledge about contingencies".[2]

Predictiveness

Predictiveness is the theory that learning depends on how much attention the individual pays to a stimulus: the more attention applied to a certain cue, the faster the conditioning.[13] The theory was developed by Nicholas Mackintosh, a British psychologist. If the effect of predictiveness on associability is assumed to be positive, then the amount of attention an individual applies to a cue reflects that cue's reliability as a predictor of the outcome relative to other cues.[13] It is an assessment of the extent to which an organism can reliably predict an outcome based on the predictive power of a cue.[13] According to the Mackintosh model, when individuals learn from cues that acquired predictive force in earlier stages of learning, the learning rate tends to increase for both animals and humans.[13]

The extent of stimulus processing can also be driven by the interaction of relative and absolute predictiveness.[14] Absolute-predictiveness mechanisms are "expected to dominate when the entire compound of cues is informative".[13] On the other hand, a "relative-predictiveness mechanism should dominate when simultaneously presented cues differ in their predictiveness".[13] These two accounts of predictiveness were put forward by different psychologists, with absolute predictiveness deriving from Pearce & Hall and relative predictiveness being part of Mackintosh's theory.[13]
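A rough sketch of how this idea could be expressed in code, under simplifying assumptions: in a Mackintosh-style rule, attention (associability) to a cue increases when that cue predicts the outcome better than the other cues presented with it, and decreases otherwise. The step size, bounds, and function name below are illustrative choices, not values from Mackintosh's or Kattner's work.

```python
# Simplified Mackintosh-style associability update (illustrative assumption):
# a cue gains attention when it is the relatively better predictor of the
# outcome on a trial, and loses attention when it is the relatively poorer one.

def update_attention(V, alpha, cues, lam, step=0.05):
    """Adjust per-cue associability (alpha) in place, keeping values in [0.05, 1.0].

    V:     current associative strengths per cue.
    alpha: current associability (attention) per cue.
    cues:  the cues presented on this trial.
    lam:   the outcome value on this trial (e.g. 1.0 if present, 0.0 if absent).
    """
    for c in cues:
        own_error = abs(lam - V[c])                                 # how well c alone predicts
        other_error = abs(lam - sum(V[o] for o in cues if o != c))  # how well the other cues predict
        if own_error < other_error:
            alpha[c] = min(1.0, alpha[c] + step)   # better predictor: attention rises
        else:
            alpha[c] = max(0.05, alpha[c] - step)  # poorer predictor: attention falls
```

In a full simulation, this update would be interleaved with the associative-strength update from the Rescorla-Wagner sketch above.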

Application

Human contingency learning has been studied under different types of models or paradigms. Some paradigms involve participants being asked to assess the associative relations between stimuli when presented with a combination of stimuli.[1] In humans in particular, many different studies have been undertaken, such as judgements of the relationship between correlated stimuli and judgements of predictive relationships between stimuli and responses, while measuring the response-time and accuracy gains that may differ between each stimulus-response pair.[1]

Some applications of human contingency learning are summarised in the sub-headings below.

Generalisation decrement

Generalisation decrement is a type of learning that falls under the umbrella of associative learning.[15] It is a concept whereby both animals and humans base learning in current circumstances on past events, provided the conditions of those events are similar to the present one.[16] The importance of applying associative theories to human learning arises from the human tendency to generalise.[5] Generalisation occurs when an association between a stimulus and a response is extended to a stimulus that is similar to the original one.[15] To expand on this further, when a learned association is formed with a stimulus A, the strength of that association can be distributed across a number of elements that make up A. When a different stimulus B is introduced, if it contains some of the same elements as A, the degree to which B inherits stimulus A's associative strength depends on how many elements the two stimuli share.[5] The assumption of elements is made because stimuli can be viewed as "compounds composed of constituent elements (i.e., representational features)".[17]

Pearce's model

Generalisation decrement can be represented by Pearce's configural model.[15] The similarity of two stimuli is given by the following equation:[18]

$S = \frac{n_C}{n_C + n_A} \times \frac{n_C}{n_C + n_B}$

In the above formula, n_C quantifies the number of elements that both stimuli have in common, with n_A and n_B representing the number of elements that are exclusive to each stimulus.[18] If one stimulus is paired with an unconditional stimulus, the strength of responding to the other stimulus is positively related to the number of elements it shares with the originally conditioned stimulus.[15] Conversely, responding is inversely related to the number of elements that are exclusive to each of the conditional stimuli.[15]

Expanding on the model further, if elements are added to the stimuli, the response increases as the quantity of common elements is increased.[18] Conversely, if elements are removed, the response will decrease due to a reduction in the quantity of common elements in the stimuli.[18]
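As a worked illustration of the similarity formula above, the short Python sketch below represents two stimuli as sets of elements, computes a Pearce-style similarity score, and scales a trained response by it. The element sets and the assumed trained strength are invented for the example.

```python
# Pearce-style similarity between two stimuli represented as sets of elements:
#   S = (n_common / (n_common + n_unique_A)) * (n_common / (n_common + n_unique_B))
# The stimulus element sets below are invented for illustration.

def pearce_similarity(stim_a, stim_b):
    n_common = len(stim_a & stim_b)   # elements shared by both stimuli
    n_only_a = len(stim_a - stim_b)   # elements exclusive to the first stimulus
    n_only_b = len(stim_b - stim_a)   # elements exclusive to the second stimulus
    if n_common == 0:
        return 0.0
    return (n_common / (n_common + n_only_a)) * (n_common / (n_common + n_only_b))

trained = {"red", "square", "large"}  # stimulus originally paired with the outcome
test = {"red", "square", "small"}     # test stimulus sharing two of its three elements

trained_strength = 1.0                # assume asymptotic responding to the trained stimulus
generalised_response = trained_strength * pearce_similarity(trained, test)
print(round(generalised_response, 2))  # 0.44: responding drops as shared elements drop
```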

References

  1. ^ a b c d e f Schmidt, James R. (2012), "Human Contingency Learning", Encyclopedia of the Sciences of Learning, Springer US, pp. 1455–1456, doi:10.1007/978-1-4419-1428-6_646, ISBN 9781441914279
  2. ^ a b c d e f g h i j k l m n De Houwer, Jan; Beckers, Tom (2002). "A Review of Recent Developments in Research and Theories on Human Contingency Learning". The Quarterly Journal of Experimental Psychology Section B. 55 (4b): 289–310. doi:10.1080/02724990244000034. ISSN 0272-4995. PMID 12350283. S2CID 16165406.
  3. ^ a b McLeod, Saul (2018). "Classical Conditioning | Simply Psychology". www.simplypsychology.org. Retrieved 2019-05-22.
  4. ^ a b Sternberg, Daniel A.; McClelland, James L. (2011-12-22). "Two Mechanisms of Human Contingency Learning". Psychological Science. 23 (1): 59–68. doi:10.1177/0956797611429577. ISSN 0956-7976. PMID 22198929. S2CID 16297513.
  5. ^ a b c d Shanks, David R. (2007). "Associationism and cognition: Human contingency learning at 25". Quarterly Journal of Experimental Psychology. 60 (3): 291–309. doi:10.1080/17470210601000581. ISSN 1747-0218. PMID 17366302. S2CID 31848828.
  6. ^ a b c McLeod, Saul (2018). "Pavlov's Dogs Study and Pavlovian Conditioning Explained | Simply Psychology". www.simplypsychology.org. Retrieved 2019-05-22.
  7. ^ a b McLeod, Saul (2018). "Edward Thorndike - Law of Effect | Simply Psychology". www.simplypsychology.org. Retrieved 2019-05-22.
  8. ^ Rescorla, Robert A. (1971). "Variation in the effectiveness of reinforcement and nonreinforcement following prior inhibitory conditioning". Learning and Motivation. 2 (2): 113–123. doi:10.1016/0023-9690(71)90002-6. ISSN 0023-9690.
  9. ^ Abbott, Bruce B. (2016). "The Rescorla-Wagner Model of Classical Conditioning". users.ipfw.edu. Archived from the original on 2016-09-23. Retrieved 2019-05-22.
  10. ^ a b c d e Chance, Paul (2008). Learning and behavior : active learning edition (international ed.). pp. 85–89. ISBN 978-1111834944. OCLC 952304376.
  11. ^ Cohen, Jonathan D.; Dunbar, Kevin; McClelland, James L. (1990). "On the control of automatic processes: A parallel distributed processing account of the Stroop effect". Psychological Review. 97 (3): 332–361. doi:10.1037/0033-295x.97.3.332. ISSN 0033-295X. PMID 2200075.
  12. ^ Cleeremans, Axel; McClelland, James L. (1991). "Learning the structure of event sequences". Journal of Experimental Psychology: General. 120 (3): 235–253. doi:10.1037/0096-3445.120.3.235. ISSN 0096-3445. PMID 1836490.
  13. ^ a b c d e f g Kattner, Florian (2014-11-26). "Transfer of absolute and relative predictiveness in human contingency learning". Learning & Behavior. 43 (1): 32–43. doi:10.3758/s13420-014-0159-5. ISSN 1543-4494. PMID 25425296.
  14. ^ Le Pelley, M. E. (2004). "The Role of Associative History in Models of Associative Learning: A Selective Review and a Hybrid Model". The Quarterly Journal of Experimental Psychology Section B. 57 (3b): 193–243. doi:10.1080/02724990344000141. ISSN 0272-4995. PMID 15204108.
  15. ^ a b c d e Wheeler, Daniel S.; Amundson, Jeffrey C.; Miller, Ralph R. (2006). "Generalization Decrement in Human Contingency Learning". Quarterly Journal of Experimental Psychology. 59 (7): 1212–1223. doi:10.1080/17470210600576342. ISSN 1747-0218. PMID 16769621. S2CID 45067927.
  16. ^ Mercado, Eduardo; Orduna, Itzel; Myers, Catherine E.; Gluck, Mark A. (2001). "Generalization of auditory classification abilities in bats, humans, and neural networks". doi:10.1037/e537102012-609.
  17. ^ Estes, William (2014-06-20). Estes, William (ed.). Handbook of Learning and Cognitive Processes (Volume 2). doi:10.4324/9781315770437. ISBN 9781315770437.
  18. ^ a b c d Pearce, John M. (1987). "A model for stimulus generalization in Pavlovian conditioning". Psychological Review. 94 (1): 61–73. doi:10.1037/0033-295x.94.1.61. ISSN 0033-295X. PMID 3823305.