Talk:Operant conditioning

Thorndike

I have added a refutation of the Thorndike extension article and cited Chiesa. This whole article is problematic in its treatment of reinforcement theory, which is not very "clean" in its presentation.

Moreover, the digression into the neurochemistry of reinforcement is something Skinner rejected from 1938 onward, when he dismissed physiological explanations as appeals to a "conceptual nervous system (CNS)".


--Florkle 06:23, 17 May 2007 (UTC)

I don't think it's accurate to relate Thorndike to Operant Conditioning. Skinner's operant was "discovered" by Skinner alone. Thorndike used different terms and explanatory systems. This is very important. Lots of people examined learning in humans and animals before Skinner. None of that was "operant conditioning" because it relied on mediating structures ('expectations', 'drives', etc.). The explanatory system is as important as the actual data (perhaps even more so).

Operants were also quantified in the operant chamber - Skinner's invention - which Thorndike did not use.

Moreover it implies that Skinner's position is just another learning theory, and it is not. This is an attempt to rewrite the dead theories of Thorndike as "operant" theories, which have become popular and scientifically validated. Thorndike was important in his own small way. Put his theories on his own page, or change the name of the page to "instrumental learning". Operant = Skinner != Thorndike.

(-Florkle!)

—The preceding unsigned comment was added by Florkle (talkcontribs) 07:25, 16 May 2007 (UTC).

(Psst. Normally new comments are added to the bottom of the discussion page--makes it easier to find them.) digfarenough (talk) 13:54, 17 May 2007 (UTC)

Regarding article merging

Looking at the articles in question, I think that the Reinforcement article should not be merged into Operant Conditioning. The Reinforcement article has a good level of detail that lets it stand as an article on its own. Add to that the proposal to merge Schedules of Reinforcement into Reinforcement, and the amount of redundant content would bog down the entire article. I think that elements of the Schedule of Reinforcement article can be successfully merged into Reinforcement. But Operant Conditioning already gives enough of an overview of reinforcement not to warrant Reinforcement being merged into it. That would detract from the broader focus of the Operant Conditioning article, which should be more about the modification of behavior (operant procedures) than about the details of the tool used to modify behavior (reinforcement). Lunar Spectrum | Talk 01:11, 14 March 2007 (UTC)

Potential section about biological basis of operant conditioning

In the section "Factors that alter the effectiveness of consequences" I included the mention of how certain factors are the result of biology. For example, I mentioned that the principles of Immediacy and Contingency are the result of the statistical probability of dopamine to modify the appropriate synapses. However, the necessity of an entire section devoted to the biological basis of operant procedures is becoming clear. I used the dopamine reference only to support the section about "Factors that alter the effectiveness of consequences," but already more biological references have been added to that section. They are good references and should be kept, but they should be moved to their own section because they do not contribute anything to the subject of the section they are currently in.

I think that the biological section should be the second section, placed right after the "Reinforcement, punishment, and extinction" section. It would be a good way to structure the article to first have exposition on reinforcement, punishment, and extinction procedures, then have a three-part section immediately following it to explain the neurophysiological effects of reinforcing stimulation, aversive stimulation, and extinction. An alternative to this might be to simply add such a discussion to each of the existing corresponding articles on reinforcement, punishment, and extinction. --Lunar Spectrum | Talk 00:31, 29 September 2006 (UTC)


Simplification

What is it that needs simplifying? Operant conditioning is difficult to understand and does not lend itself to simple explanations.

I don't think it's as complicated as phrases like "is the modification of behavior (the actions of animals) brought about by the consequences that follow upon the occurrence of the behavior" would make it seem. Elf | Talk 04:31, 13 September 2005 (UTC)

I don't see what needs simplifying. Maybe it's because of my Psychology background that this page seems crystal clear to me.

Negative Reinforcement / Negative Punishment

I believe the article has transposed the definitions for Negative Reinforcement and Negative Punishment. Negative Reinforcement is the removal (negative) of a reinforcing stimulus (such as a child's toy) to discourage a behavior. Negative Punishment is the removal (negative) of a punishing stimulus (such as a loud noise) to encourage a behavior.

I haven't edited the article because I may be missing something. — Preceding unsigned comment added by 155.76.223.253 (talkcontribs)

No, because punishment always discourages a behaviour, and reinforcement always encourages a behaviour. Taking away a toy removes a pleasant stimulus, thus discouraging the behaviour (which makes it punishment). Taking away a loud noise removes an unpleasant stimulus, thus encouraging the behaviour (which makes it reinforcement).
Even intuitively, it doesn't make sense to punish a child (for example) by removing something unpleasant - "Since you didn't clean up, you don't have to do your homework tomorrow!" =) — Preceding unsigned comment added by 129.128.232.250 (talkcontribs)
Right; I'm going to rephrase it--reread the definitions of terms, because they're not used in exactly the same way that most people use them individually in regular conversation; as 129.128.232.250 said:
  • Reinforcement is something that causes a behavior to increase in frequency
  • Punishment is something that causes a behavior to decrease in frequency
  • Positive is simply adding something (note that adding something unpleasant is still adding something; people tend to think of "positive" as meaning "something nice", but in behavior science, that's not what it means)
  • Negative is simply removing something (whether pleasant or unpleasant)
For example, then, "negative punishment" is the REMOVAL of SOMETHING to cause a behavior to DECREASE.
Elf | Talk 05:52, 15 February 2006 (UTC)
  • ~ChocoboFreak~ To put it in the simplest way I can, it isn't "Since you didn't clean up, you don't have to do your homework tomorrow!". Think of a rat in a box. You want it to learn to press a button frequently. One way of doing it is to make sure that the rat continuously receives mild electric shocks (the unpleasant stimulus) unless it presses the button. If the rat presses the button, an unpleasant stimulus is taken away (so it is more likely to want to press the button again). To use your homework analogy, it's like letting the child put off their homework until the next day if they do something good.

According to operant conditioning, if you want a child to clean their room, you could punish them for having it dirty (Positive Punishment: adding something which is not good for them, say, smacking them; Negative Punishment: taking something away that they like, say, a toy). That's what most parents think about when they think about changing a child's behaviour. You could, however, reward them for cleaning it when they clean it (Positive Reinforcement: giving them something to make them more likely to repeat the behaviour, say, giving them money; Negative Reinforcement: taking away something bad when they do something good: in my example of the rat, it's taking away the electric shocks). ~ChocoboFreak~
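To summarize the four-way distinction worked out in this thread, here is a minimal sketch in Python (the function and its argument names are invented for illustration; they are not terminology from the article itself):

```python
# Toy classifier for consequence types: "positive"/"negative" describe whether
# a stimulus is added or removed; reinforcement/punishment describe whether
# the behavior subsequently increases or decreases in frequency.

def classify_consequence(stimulus_change, behavior_change):
    """stimulus_change: 'added' or 'removed'; behavior_change: 'increases' or 'decreases'."""
    sign = "positive" if stimulus_change == "added" else "negative"
    effect = "reinforcement" if behavior_change == "increases" else "punishment"
    return f"{sign} {effect}"

print(classify_consequence("removed", "increases"))  # shocks stop when the rat presses: negative reinforcement
print(classify_consequence("removed", "decreases"))  # toy taken away, behavior drops: negative punishment
print(classify_consequence("added", "increases"))    # money for cleaning the room: positive reinforcement
print(classify_consequence("added", "decreases"))    # smacking: positive punishment
```

Note that the classification depends only on whether a stimulus was added or removed and on the observed change in behavior, not on whether the stimulus seems "nice", which is exactly the point made above.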

Suggestions for additions

  • Mention the primary and antithetical approach to psychology: cognition. Cognition and behaviorism are mutually incompatible. Although the operant conditioning approach works well for some contexts (e.g. animal training), the cognitive approach accomplishes the same, but using a different mechanism.

  • Might be worth mentioning the failed application of Operant conditioning to human language learning, and Chomsky's critique (it's already in the behaviorism article).
  • Add new section on Animal Training. There is already an entry animal training, but note here the technical issues: give examples of 'positive reinforcement' and 'positive punishment' training techniques. Also note that although 'modern' animal trainers consider themselves to use OC, they often rely on techniques that are not strictly operant conditioning in the original Skinnerian sense, e.g. bridging.

Santaduck 03:13, 20 January 2006 (UTC)

Cat or rat?

At first it said the person worked w/ cats but then it said rats!! Which one is it?

~ChocoboFreak~ Skinner worked with rats, Thorndike worked with cats. It appears to have been fixed. Somebody probably just got the two people mixed up.

Consequences

The consequences link doesn't really make sense. 128.213.28.129 20:43, 15 February 2006 (UTC)

What? You don't think a parlour game is a crucial piece of operant conditioning? (Removed link.) Elf | Talk 23:38, 15 February 2006 (UTC)

Not clear on prey drive activity not being a reward

New section includes this paragraph:

"In dog training, the use of the prey drive, particularly in training working dogs, detection dogs, etc., the stimulation of these fixed action patterns, relative to the dog's predatory instincts, are the key to producing very difficult yet consistent behaviors, and in most cases, do not involve operant, classical, or any other kind of conditioning."

So I don't understand what the point is--that allowing the dog to indulge prey drive when they do something correct is NOT a positive reinforcement? It seems to me like it is. Dog does the weave poles really fast, they get the tug toy. Dog doesn't go as fast, dog doesn't get to play tug. How is that not a positive reinforcer? Elf | Talk 00:55, 24 February 2006 (UTC)

~ChocoboFreak: It seems like it is a positive reinforcer to me as well. "This is because the prey drive, once started, follows an inevitable sequence: the search, the eye-stalk, the chase, the grab-bite, the kill-bite". According to this, it seems close to being classical conditioning (in a way).

The section on prey drive is inconsistent with the rest of the article. Not everyone agrees that tracking or working dogs have to be rewarded every time; this is more the author's bias than fact, especially without seeing any citations. It is stated that prey drive is an example of an exception to operant conditioning. This is conjecture, as again no sources are cited. Giving the toy or throwing the ball is an addition of something the animal wants - therefore it is positive reinforcement. Even though this is not a food reward, it is a conditioned reinforcer. If the animal does something correctly, it is given this reinforcement. We really don't care why the animal wants the reward. The fact that it works for the reward makes it operant conditioning.


  • 08/28/2006 I believe that the author of the "prey drive" section may be misunderstanding an aspect of the limitations placed on the effectiveness of a reinforcer and is seeing it as a refutation of operant procedures. Within operant conditioning, there are indeed a number of factors that can reduce how effective a reinforcer can be. The factor the author seems to reference is Satiation. Obviously, if the dog's reward is a big meal, this will drastically reduce the effectiveness of reinforcement using treats because their hunger is already satiated. That is why trainers will use a variety of different reinforcers, such as treats, toys, and praise, in order to train their animal. But this does not, as the author claims, constitute a "drawback" of operant conditioning. It's just how reinforcement works. It would not be favorable for evolution to produce a species that can always be reinforced by the same thing to no end. If food were ALWAYS a reinforcer, we'd spend our whole lives at the dinner table and never be able to stop. That is why we have various biological mechanisms that regulate the effectiveness of a reinforcer depending on our bodies' needs. Behaviorists call these "Establishing Operations" and the article would probably benefit from their mention. --Lunar Spectrum


OK, going by a suggestion on the new contributors' question page, I'm going to lay out what I think should be done with this section. The whole "drawbacks and limitations" section needs to be redone. Obviously, Behavior Analysis tends to draw a lot of ire, and so the popular insistence on such a section, no matter how badly done, is very strong. However, the opening paragraph of the "drawbacks" section illustrates this problem nicely. A Nobel laureate is cited as stating that operant conditioning doesn't take into account "fixed" reflexes, yet in the very same paragraph we have an explanation (though incomplete) of how operant conditioning isn't supposed to deal with reflexes to begin with: the form of a reflex is, as mentioned, biologically fixed, whereas operant behavior is defined as behavior whose form is modifiable by consequences. This demonstrates something that B. F. Skinner himself noted: a person's criticism of Behavior Analysis is inversely proportional to how much they actually understand it (a phenomenon that also holds true for other scientific models, like Evolution by Natural Selection). I intend to keep that criticism by the Nobel laureate in the article, but expand the paragraph to explain Skinner's rationale for not including reflexes as a form of operant behavior.

Also, the entire "prey drive" portion needs to be removed. In its place would be a listing of factors that alter the effectiveness of consequences, factors such as what I previously mentioned about "satiation." It could look like this:

  1. Satiation: The effectiveness of a consequence will be reduced if the individual's "appetite" for that source of stimulation has been satisfied. Conversely, the effectiveness of a consequence will increase as the individual becomes deprived of that stimulus. If someone is not hungry, food will not be an effective reinforcer for behavior.
  2. Contingency: If a consequence does not contingently (reliably, or consistently) follow the target response, its effectiveness upon the response is reduced. But if a consequence follows the response reliably, its effectiveness is increased. If someone has a habit of getting to work late but is only occasionally reprimanded for their lateness, the reprimand will not be a very effective punishment.
  3. Immediacy: How immediately a consequence is felt after a response determines its effectiveness. If someone's license plate is caught by a traffic camera for speeding and they receive a speeding ticket in the mail a week later, this consequence will not be very effective against speeding. But if someone is speeding and is caught in the act by an officer who pulls them over, then their speeding behavior is more likely to be affected.
  4. Size: This is a "cost-benefit" determinant of whether a consequence will be effective. If the size, or amount, of the consequence is large enough to be worth the effort, the consequence will be more effective upon the behavior. An unusually large lottery jackpot, for example, might be enough to get someone to buy a one-dollar lottery ticket (or even to buy multiple tickets). But if a lottery jackpot is small, the same person might not feel it worth the effort to drive out and find a place to buy a ticket. In this example, it's also useful to note that "effort" is a punishing consequence. How these opposing expected consequences (reinforcing and punishing) balance out will determine whether the behavior is performed or not.
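To make the interplay of these four factors concrete, here is a toy numeric sketch (the multiplicative combination, the 0-to-1 scales, and the function name are my own assumptions for illustration, not an established formula from the behavior-analytic literature):

```python
# Hypothetical toy model: each factor scales effectiveness in [0, 1].
# Satiation works inversely (factor 1), and "size" is traded off against
# the effort of obtaining the consequence (the cost-benefit point in factor 4).

def consequence_effectiveness(satiation, contingency, immediacy, size, effort=0.0):
    """All arguments lie in [0, 1]; returns a rough effectiveness score."""
    deprivation = 1.0 - satiation        # satiated -> weak, deprived -> strong
    net_size = max(size - effort, 0.0)   # large enough to be worth the effort?
    return deprivation * contingency * immediacy * net_size

# Occasional, delayed reprimands for lateness (low contingency, low immediacy):
print(consequence_effectiveness(satiation=0.2, contingency=0.3, immediacy=0.2, size=0.6))
# Being pulled over in the act (reliable and immediate consequence):
print(consequence_effectiveness(satiation=0.2, contingency=0.9, immediacy=0.9, size=0.6))
```

The multiplicative form encodes the claim that any single weak factor (say, a consequence delivered a week late) can undercut an otherwise substantial consequence.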

I will wait approximately a week (maybe more) for further feedback about my intended alterations. Afterwards, I will see how much of what I have included above I will implement. Lunar Spectrum 05:17, 30 August 2006 (UTC)


Nearly a month has passed and there is no comment about my suggestion. I think I will simply add what I have outlined above in a new section and deal with the prey-drive section some other time. --Lunar Spectrum | Talk 02:05, 26 September 2006 (UTC)

Negative reinforcement and punishment

For what it's worth, note in passing that Karen Pryor's Don't Shoot the Dog! defines negative reinforcement and punishment differently. To Pryor, the main difference is timing. A negative reinforcement is something disagreeable that the subject can immediately stop by changing his behavior. A punishment is something that happens later, which the subject cannot immediately stop by changing his behavior. If Auntie frowns when I put my feet on the coffee table, and stops frowning when I take them off, that is what Pryor calls a negative reinforcement. If I get a bad grade on my report card that reflects all the work I haven't done in class this year, that is what Pryor calls a punishment. Pryor notes that even though punishment is everyone's favorite method of untraining unwanted behavior, it rarely works, because the subject usually has difficulty connecting the punishment with the behavior; often, the subject learns to evade punishment instead.

The behaviorist psychologist H. J. Eysenck talks in similar terms in his book Psychology Is About People, Chapter 3. He insists on talking about positive and negative reinforcement instead of reward and punishment, despite the clumsiness of his preferred terms, because with rewards and punishments the timing may make it difficult for the subject to connect the result with the behavior.

Extinction, other suggestions

I'm not too sure what becomes ineffective when extinction occurs. I assume it's the reward (the pellet)... but then it seems like the behavior became extinct. Regardless, I'm confused, and this paragraph ought to be clarified.

Extinction is a related term that occurs when a behavior (response) that had previously been reinforced is no longer effective. In the Skinner box experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, and then pushing the lever again and never receiving a food pellet again. Eventually the rat would cease pushing the lever.

I would also explain in the intro that Operant Conditioning is not absolute - it doesn't ensure that the subject will always perform a task (as, I gather, using the prey drive does). That little factoid came out of the blue in that section.
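As a toy illustration of the extinction procedure the quoted paragraph above describes, here is a small simulation (the learning-rate numbers and the simple increment/decrement rule are invented for illustration, not a model from the article):

```python
import random

# Toy Skinner box: response probability rises while lever presses produce
# pellets (first 100 trials), then decays once reinforcement stops (extinction).
random.seed(0)
p_press = 0.1                        # initial probability of pressing the lever
for trial in range(200):
    reinforced = trial < 100         # pellets are delivered only in the first phase
    if random.random() < p_press:    # the rat presses the lever on this trial
        if reinforced:
            p_press = min(1.0, p_press + 0.05)  # reinforcement strengthens responding
        else:
            p_press = max(0.0, p_press - 0.05)  # non-reinforcement weakens it
    if trial % 50 == 0:
        print(f"trial {trial}: p(press) = {p_press:.2f}")
```

In these terms, what "becomes ineffective" is the response: pressing the lever no longer produces the pellet, and responding eventually ceases.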

Merges

Useful info in both articles... Schedule of reinforcement should not be an article. Reinforcement probably shouldn't be either - both should redirect here. —The preceding unsigned comment was added by Thuglas (talkcontribs) 05:21, 2 March 2007 (UTC).

If we do merge both articles, the current article may be too big to read. See article size for more information.--Janarius 14:53, 2 March 2007 (UTC)

I was kinda thinking that, so I put a second link to merge Schedules of reinforcement into Reinforcement. Perhaps a little thing on extrinsic and intrinsic reinforcement and secondary/primary reinforcement could be added. thuglasT|C 17:37, 2 March 2007 (UTC)

I was thinking the same thing, but we probably need more opinions to decide on that matter. About primary/secondary reinforcement, is it also called primary or unconditioned reinforcer or is it something else?--Janarius 16:28, 3 March 2007 (UTC)

Yeah, I think that would work. Primary means food or something; secondary means money. The differences between extrinsic/intrinsic and primary/secondary are very little, but for some reason they remain separate in my mind.

I figure if no one complains in a week or so we should go ahead and be WP:bold. I've posted the link on WP psych. I don't think anyone would disagree with this idea.


As I mentioned at the top of the page (I wasn't clear on whether the convention is to put more recent talk page content at the top or at the bottom), I think it would be good to merge Schedules of reinforcement into Reinforcement, but not Reinforcement into Operant conditioning. As someone else mentioned before, Reinforcement merged into OC could be too large, and Reinforcement has more than enough content to merit its own article and already stands on its own. The focus of the Operant Conditioning article should be on operant procedures, which use consequences to modify behavior. Reinforcement is only one of a group of different kinds of consequences and is in no way the "be-all end-all" of Operant conditioning; to merge them would seriously disrupt the balance of the OC article in that respect.
As for primary and secondary reinforcers, the equivalent terms for these are unconditioned and conditioned reinforcers, not intrinsic or extrinsic. An extrinsically reinforced response would be a behavior that is reinforced with externally delivered consequences (which CAN include food or other primary reinforcers, as well as secondary reinforcers), while an intrinsically reinforced response is a behavior that is rewarding in and of itself without the need for delivering a reinforcer, which is called automatic reinforcement (i.e. the performance of the behavior itself is also the reinforcer, as with a "runner's high" or reading for pleasure). Lunar Spectrum | Talk 00:34, 22 March 2007 (UTC)

I know that secondary/primary reinforcers are not synonymous with intrinsic or extrinsic. I think extrinsic, intrinsic, secondary, and primary reinforcement would all fit into the article. (I haven't looked at it in a while; I just don't like being misunderstood.) thuglasT|C 15:13, 7 August 2007 (UTC)

merging with Reinforcement

Having a lot of material on reinforcement in the operant conditioning article makes it too large. Reinforcement deserves a separate article from operant conditioning. Rather than a merge from Reinforcement, I suggest that appropriate sections be merged into Reinforcement. The two articles Schedule of reinforcement and Reinforcement can be merged together. Kpmiyapuram 12:17, 10 April 2007 (UTC)

I completely agree and I think the older comments tend to agree along those lines as well. Nobody has really commented on it for a long while though, so I think I'm going to remove the merge tags from the operant conditioning article. I'll leave the merge tags referring to reinforcement and schedules articles though in case anyone wants to go ahead with it. Lunar Spectrum | Talk 02:49, 11 April 2007 (UTC)

Biological correlates of operant conditioning

This section currently appears to have material that fits "biological correlates of classical conditioning" and not those of operant conditioning. Kpmiyapuram 13:51, 11 April 2007 (UTC)

If you mean the first paragraph of that section, that's just a case of a contributor using "conditioned stimulus" where "conditioned reinforcer" might be more accurate. I've had brief discussions on the matter with that contributor, who comes from the position that the distinction between classical and operant conditioning isn't as clear-cut as previously thought (e.g. he mentioned how unconditioned stimuli tend to also function as primary reinforcers or primary punishers). Although I come from the perspective of maintaining terminological consistency, I thought the point was a fair one, so I left some of the language in the article as it was. As for the content of the second paragraph in that section, that's all about neuromodulators like dopamine and acetylcholine that correlate with the modification of the synapse (behavior) upon the delivery of a consequence, very much the domain of operant conditioning. Lunar Spectrum | Talk 06:15, 12 April 2007 (UTC)
I disagree about neuromodulators. That's bio-/neuro-psychology, not operant conditioning. It shows the current cognitive bias that is trying to move away from the study of behavior, which is what made operant conditioning such a powerful paradigm to begin with. --florkle 23:53, 23 May 2007 (UTC)
A purely behaviorist/conditioning approach was long ago shown to be flawed as a complete explanation for experimental findings in animals, and even within the realm of operant conditioning, its biological substrate is surely an important topic for discussion. To be sure, it is worth including references from authors with a more historical perspective who were vehement supporters of a behaviorist approach, but this article should also emphasize the modern understanding of operant conditioning, which, from what I've seen, is increasingly couched in terms of neural substrates and even, indeed, cognitive approaches. digfarenough (talk) 19:22, 24 May 2007 (UTC)

Extinction

There is some material on Extinction (psychology) in a separate article, but I see that the current article on operant conditioning discusses it at more length. Perhaps the information could be reorganized or merged. Kpmiyapuram 14:18, 24 April 2007 (UTC)

Yes, I think that a lot of the information in the extinction section would serve the article on Extinction (psychology) well. I think the reason I wrote it in the operant conditioning article was because it addresses the variable nature of operant behavior, but some reorganizing is definitely in order. Information about extinction bursts would go well in the extinction article. Information about extinction-induced variability can fit in both Extinction (psychology) and Shaping (psychology). Lunar Spectrum | Talk 03:44, 7 May 2007 (UTC)
After some brief research, I think the section on extinction-induced variability would be better if it were changed to "Operant variability" and it could be a good place to add other information about behavioral variability across various situations, not just during extinction. Will start moving some stuff around and see how it comes out. Lunar Spectrum | Talk 04:44, 7 May 2007 (UTC)

new sections

Why are the sections "verbal behavior" and "four term contingency" at the beginning of the article? The latter seems unneeded and the former seems like it should go much later, if at all. And why do we have this paragraph arguing that Skinner's work wasn't based on Thorndike's? Is this information relevant to discussing what operant conditioning is? If anything, I think that should be moved to a separate history section. I'm also surprised reinforcement learning isn't linked in this article, but I'll toss that into the "see also" section now... digfarenough (talk) 13:53, 17 May 2007 (UTC)

It was there arguing the reverse. --florkle 23:51, 23 May 2007 (UTC)

I also think the new additions disrupt the flow of the article. They certainly might have their place somewhere in it, but right now it seems a bit random. And it also seems that the biological section was moved from the 3rd section to, apparently, the very last??? To my thinking, the biology section should be near the beginning, since despite being the most heavily disparaged area of psychology, operant conditioning is more solidly grounded in biology than anything else in the field. So I think having that biological basis close to the top is important for the credibility of the subject matter. I think an appropriate structure for the article would be 1. history, 2. basics, 3. biological underpinnings, 4. various other special topics.

That is not Skinner's rationale & it's not behaviorist. Skinner rejected biological justifications - see functional relationship arguments, such as Chiesa's. --florkle 23:51, 23 May 2007 (UTC)

Additionally, I think a special section on verbal behavior would have to clearly explain how an understanding of verbal operants extends from operant conditioning, which it presently does not accomplish. It can be done (I'd have to look over some of my old notes and google for some sources), but as an advanced topic it should go somewhere towards the end. Theoretical extensions of operant conditioning, like Skinner's Verbal Behavior, should not greatly detract from the focus of this particular article: namely, operant conditioning procedures, which are factual experimental findings. And it's certainly not a "theory" of operant conditioning... no more than a physicist would call the laws of kinematics a "theory" of kinematics.

Verbal behavior is the extension of operant theory to humans. Why should we care about pigeons and rats if it doesn't generalize? --florkle 23:51, 23 May 2007 (UTC)

And having checked on the article for Verbal Behavior, I'm now concerned about NPOV issues regarding the user who made the recent section changes in the Operant conditioning article. In the talk page for Verbal Behavior he recently states that he has "nuked all references to Chomsky's" review. Now, I may think that Chomsky's review is completely flawed. But for historical reasons, his review is appropriate subject matter for that article. It would be like having a biography of Abraham Lincoln without mentioning John Wilkes Booth. Anyway, I'm restoring the biological section to its original place in the article and moving some other stuff down to the bottom until it can be worked out. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)

Chomsky has been restored & extended. See article. --florkle 23:51, 23 May 2007 (UTC)

It's a complete myth that Skinner rejected biology's role in behavior. It's true that Skinner was opposed to giving explanatory status to unknown mediating constructs. For example, Chomsky coming along and saying "environment can't explain verbal behavior, therefore I will invent an imaginary Language Acquisition Device and claim it exists somewhere in the brain." That is the kind of hypothetical mediationism that Skinner was against, when people pull mediating constructs out of nowhere. There's a recent article explaining Skinner's regard for biology's role in behavior in The Behavior Analyst. Even more recent is a good 2007 article outlining current research about the relationship between biology and the three-term contingency [1]. The simple fact of the matter is that neurology is the hardware of organic "learning machines." To deny that stimuli and responses are transmitted along neurons and modified at the synaptic level would be ridiculous. Consider how over a hundred years ago Darwin had an entirely environmental account of evolution (natural selection). He had no biological mechanism to explain how variation occurred and how traits were passed on. He only knew that it happened, and he had strong evidence for it. Then with the discovery of DNA, Darwin's model of evolutionary change was vindicated because DNA behaves in exactly the way that Darwin's model predicted. Skinner's behavior analysis is much the same way. His model of learning is being justified by biological findings, and biology will ultimately be what redeems behavior analysis as a "hard" science separate from psychology. Furthermore, it's very important to note that Skinner is not the be-all end-all of behavior analysis. To treat it as such is to group it with all the other dead models populating psychology texts. A living and breathing science has the ability to expand and further clarify its subject of study. Lunar Spectrum | Talk 19:44, 26 May 2007 (UTC)


"Role of cognition" article

There's been an interesting addition to the article in the form of a "further reading" section. It's an article that purports that cognition is a mediating influence on behavior under classical and operant conditioning procedures. Of course, the idea that cognition plays a role as a mediator of behavior goes against the radical behaviorist position that cognition is itself a form of behavior subject to the same laws as overt behavior, no more and no less. The authors go on to build a case (one that I don't consider convincing) using past research to support their assertion. For example, they claim that if behavior is affected by consequences, then it must be "goal-oriented" and that "expectancies" must be involved and that, therefore, this means cognition governs behavior. This is a clear example of invoking unseen causal agents. They also cite research on rat maze running whose results they interpret to mean that rats form "cognitive maps" instead of learned responses, such as the case in which a rat has learned to run a maze, then during a new trial when a path is blocked the rat uses a parallel path as an alternative, even though the rat has not learned to use that alternative parallel path. I think this does not exclude, to my satisfaction, the influence of the rat's past acquired history of navigational repertoires upon the behavior seen in the experiment. Another area the authors cite is a 1974 review by William Brewer which investigates the effects of informed consequences upon human behavior. These are cases in which neutral stimuli have acquired reinforcing or punishing functions upon a subject's behavior without any conditioning taking place. All of the Brewer (1974) examples, as far as I can see, can easily be explained by stimulus equivalence in which new stimulus functions can emerge through membership in an equivalence relation, which is a thoroughly behavior analytic area of research. Understandably, Brewer (1974) couldn't have known about stimulus equivalence as a behavioral explanation for the results he was seeing... but I think more should be expected of the present authors. It goes on to cite Rescorla (1988) which, for all intents and purposes, seems to be based upon a complete misrepresentation of the behavioral account of contingency. He claims that behaviorists view the degree of stimulus control exerted by a CS as determined by the number of CS>US pairings (which is not true of behaviorists) and goes on to state that he has "discovered" that the true relationship is the predictive value of the CS (which is what behaviorists already consider to be true). He states that behaviorists are therefore wrong (according to his understanding of behaviorism) and that there must therefore be some kind of "goal-directed" cognition going on to account for it.

I could really go on and on... and maybe I'm making a mountain out of a molehill, but I think this reference really doesn't belong here. I guess I could remove it without much fuss, but considering the level of misunderstanding of behavior analysis among cognitivists/constructivists, I could easily see how simply removing it might elicit the reaction that I was removing fair criticism of behavior analysis. Maybe if we left the reference in the article, it could instead be a blueprint for elements of conditioning that could be further addressed in the body of the article itself? At the least it would be nice for others to review the reference themselves before having it removed. What do you think? Lunar Spectrum | Talk 04:13, 1 June 2007 (UTC)
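For readers trying to follow the contingency/predictive-value point above: the standard formalization of "predictive value" in the conditioning literature is the Rescorla-Wagner model (Rescorla & Wagner, 1972). Its appearance here is my addition for illustration; it is not cited in the discussion itself:

```python
# Rescorla-Wagner rule: associative strength V changes in proportion to the
# prediction error (lam - V), not to the raw count of CS-US pairings. A CS
# whose outcome is already fully predicted (V near lam) gains little further
# strength no matter how often it is paired.

def rescorla_wagner_update(V, lam, alpha=0.3):
    """V: current associative strength; lam: asymptote set by the US; alpha: learning rate."""
    return V + alpha * (lam - V)

V = 0.0
for pairing in range(10):
    V = rescorla_wagner_update(V, lam=1.0)
    print(f"pairing {pairing + 1}: V = {V:.3f}")   # approaches lam asymptotically
```

The rule makes the disputed point concrete: what drives learning in this model is the discrepancy between prediction and outcome, not pairing frequency per se.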

I think the reference should stay. Like you said, it could be a blueprint for other aspects of conditioning. I'm simply saying that because my psych professors strongly emphasize the role of cognition in conditioning and learning.--Janarius 13:46, 1 June 2007 (UTC)
It gets a little tricky here. We can distinguish "cognition vs. operant conditioning" as well as "cognition vs. all-behavior-is-stimulus-response-learning". The cognitive map study with the three paths you mentioned was an experiment by Tolman, but he had a much simpler experiment that showed that rat behavior is not simply stimulus-response learning. One group of rats was allowed to run a maze with food reward at the end, and the number of "errors" they made (wrong turns on the way to the food) was recorded each day. Another group ran the maze but no food reward was given for a few days. When a food reward was finally introduced, the number of errors the rats made dropped right away. A stimulus-response explanation of the task would require that food reward information propagates backward through the maze, whereas the cognitive map hypothesis suggests that the rats formed a map in their head, so that as soon as the reward was available, spatial information they had already learned could be used to guide behavior. So I think we should keep in mind that even if you think cognition has nothing to do with operant conditioning, there is much evidence that cognition is involved in actual behavior, which implies that stimulus-response learning is not the sole basis for all behavior. (See also Packard and McGaugh 1996, an inactivation study that showed a clear double dissociation between the two ways of guiding behavior). Summary: I'm neither opposing nor supporting that reference; just saying that this is an article on operant conditioning, not all animal behavior. digfarenough (talk) 14:26, 1 June 2007 (UTC)
Well, the Tolman experiment you describe has the same problems I mentioned above about the effects of previous experience upon maze-running behavior. In the example you mentioned, the experiment assumes that the food reward is the only reinforcer experienced by the rat. The navigational responses of an actual rat in that kind of situation are going to be reinforced for proficiency by naturally occurring consequences if it is allowed to wander according to other motivations, like relieving stimulus deprivation. By comparison, the cognitive mapping model starts to look a lot more like a gray mystery box with the caption, "Something we don't understand happens here, then we see this result in performance. Let's give this mystery box a name and say that it explains the behavior." Also, regarding the Packard & McGaugh reference, the results look interesting, but I have to wonder if it has more to do with the inactivation of learning and cue-sensitivity instead of "place learning" vs. "response learning." From what I understand, current thinking on navigational behavior has been moving away from the cognitive maps model and is moving back in the direction of stimulus discrimination. I'm aware of a relatively recent review of research that also leans in that direction. Of course, that doesn't change the fact that cognitive explanations of behavior are the prevailing view in psychology, and should not be ignored as being part of the debate. It just doesn't sit right with me for it to sit there as if it didn't have the flaws I mentioned before. Then again, we're just talking about a "further reading" section. I originally thought about adding citations for further reading that would balance out the section, but refutation by behaviorists specifically addressing the issues I've outlined has been difficult to find. Anything else I could put there would only serve to stray from the topic of operant conditioning, so I'm not inclined to do that. I'm resigned to having the article reference stay since at least I've had a chance to discuss my concerns about it here in the talk page. Lunar Spectrum | Talk 07:39, 2 June 2007 (UTC)
I disagree about the Tolman experiment, but it might take a while to explain. My arguments would be heavily based on those given in the first chapter of Eichenbaum and Cohen's book. Briefly, the non-food-reinforced rats didn't decrease their escape latency until food was introduced, so knowledge of food was the only changed factor. "Cognitive map" isn't used a lot these days (that I've seen), as it is loaded by the work of O'Keefe and Nadel's famed book from the '70s, but the idea of cognition being important in navigation and behavior is increasingly common from what I've seen. (Give me a few months and maybe the third chapter in my thesis can convince you!:)). But I don't think cognition is a black box. I could give you a list of references that model this form of navigation as a graph search, either in hippocampus or (more correctly in my opinion) in neocortex. If anything, I think your approach to stimulus-response learning might be vague. If you take a particular model of operant conditioning, such as reinforcement learning, it gives a specific framework in which these ideas can be considered, and it becomes clear quickly that the Tolman results can't be explained (in that framework) as stimulus-response learning. Of course, Tolman was a long time ago and there are many other results since then. Anyhow, I must admit I'm more interested in discussing these ideas themselves than in deciding whether or not any given reference should go in the article. :) digfarenough (talk) 15:13, 2 June 2007 (UTC)
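A minimal sketch of the reinforcement-learning framework digfarenough mentions may make the "propagates backward" point concrete (the corridor maze, parameters, and update rule below are invented for illustration; this is generic tabular temporal-difference learning, not code from any cited study):

```python
# Tabular TD(0) value learning on a 5-state corridor with reward only at the
# far end. Value information propagates backward roughly one state per
# episode, so the start state takes several episodes to reflect a newly
# introduced reward, unlike the immediate improvement Tolman reported.

N, alpha, gamma = 5, 0.5, 0.9
V = [0.0] * N                          # learned value of each state
for episode in range(10):
    for s in range(N):                 # walk the corridor from start to end
        reward = 1.0 if s == N - 1 else 0.0        # food only at the last state
        next_value = V[s + 1] if s < N - 1 else 0.0
        V[s] += alpha * (reward + gamma * next_value - V[s])
    print(f"episode {episode}: V[start] = {V[0]:.3f}")  # start state lags the goal
```

Under these (toy) assumptions, V[start] stays at zero for the first few episodes after the reward appears, which is the behavior a pure stimulus-response account would predict.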

"Defensive" POV

Hi - I kind of think parts of the article sound very defensive, and somebody is getting rather uptight about the Skinner/Thorndike debate. I think credit is less important than making sure the point of the article is clear and explains what the current understanding of operant conditioning IS, rather than making the article all messy about who made up what and so on. If I wanted to know who came up with what, I don't think I'd come to Wikipedia to get that info.

Yeah, I agree that the Thorndike section still sounds defensive. I think I will try to drop the parts where it sounds like the author is trying to refute a relationship between Skinner and Thorndike. Besides that, is there something else in the article you're saying sounds defensive? Lunar Spectrum | Talk 02:22, 29 June 2007 (UTC)