Talk:Artificial neural network

From Wikipedia, the free encyclopedia
This article is of interest to the following WikiProjects:

WikiProject Cognitive science (Rated C-class, High-importance)
This article is within the scope of WikiProject Cognitive science, a collaborative effort to improve the coverage of Cognitive science on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
C: This article has been rated as C-Class on the project's quality scale.
High: This article has been rated as High-importance on the project's importance scale.

WikiProject Robotics (Rated B-class, High-importance)
Artificial neural network is within the scope of WikiProject Robotics, which aims to build a comprehensive and detailed guide to Robotics on Wikipedia. If you would like to participate, you can choose to edit this article, or visit the project page (Talk), where you can join the project and see a list of open tasks.
B: This article has been rated as B-Class on the project's quality scale.
High: This article has been rated as High-importance on the project's importance scale.

WikiProject Statistics (Rated C-class, Low-importance)
This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page or join the discussion.
C: This article has been rated as C-Class on the quality scale.
Low: This article has been rated as Low-importance on the importance scale.

Abject Failure

This article fails. Wikipedia is supposed to be an encyclopedia, not a graduate math text comprehensible only by math geeks. More plain text for normal people is sorely needed. I could not make heads or tails of this article, and I hold two degrees in computer science. —Preceding unsigned comment added by (talk) 11:27, 8 May 2010 (UTC)

Feel free to fix it. The problem partly stems from the fact that there is no concrete, agreed-upon definition of what an ANN is. It seems to me that it was a fancy term that researchers used for a few decades before it went out of fashion. The lack of coherence in the article is partly a reflection of this. User A1 (talk) 12:27, 8 May 2010 (UTC)

Merge suggestion

Consensus is to not merge. NN, BNN and ANN are three separate entities. Consensus is to keep three separate articles and slim each down to a more specific version by removing NN from ANN and ANN from BNN etc.

Done - Weblink suggestion: Free bilingual PDF manuscript (200 pages)

I am currently at the RoboCup 2009 competition in Graz, where I found the site because, unlike the official RoboCup site ;), it presents recent news and pictures about RoboCup.

What I found there might be something for this wiki page: a neural networks PDF manuscript is presented that seems to be extended often, is free, contains a whole lot of illustrations and (this is special) is available in both English and German. I also noticed that its German version is linked from the German Wikipedia. I want to start a discussion about whether it should be added as a web link in this article. If there is no protest, I will try to add it in the next few days. (talk) 07:42, 4 July 2009 (UTC)

Looks like a good resource. I would prefer to link to the PDF directly; however, the author has stated they do not wish this to be done. User A1 (talk) 09:05, 4 July 2009 (UTC)
They say they don't wish this to be done because of the extensions they make, which even include filename changes (talk) 10:54, 4 July 2009 (UTC)
As an aside, in my opinion, it would be better if the author made it cc-by-sa-nc, rather than the somewhat vague licensing terms given. User A1 (talk) 09:08, 4 July 2009 (UTC)
Yeah, does someone want to mail him and explain that? Not everyone is aware of such licenses (talk) 10:54, 4 July 2009 (UTC)
Another small thing, just to let you know: if I place a link, I will just copy and translate that of ... (talk) 10:56, 4 July 2009 (UTC)
Placed the link as a rough translation of the one from the German Wikipedia. Has anyone mailed the authors concerning the license issue? RoadBear (talk) 08:45, 7 July 2009 (UTC)

Very Complicated

Does anyone else feel like this page is incomprehensible? Paskari (talk) 16:38, 13 January 2009 (UTC)

Yeah, reading the article one doesn't know what all of this stuff has to do with neurons (I mean, the article apparently only talks about functions). —Preceding unsigned comment added by (talk) 11:32, 4 March 2009 (UTC)

Against Merging

I prefer leaving "Neural Network" as it is, because the content under the heading "Neural Network" gives a basic understanding of biological neural networks and differs greatly from artificial neural networks and their understanding.

I agree. Neural network must talk about the generic term and Biological NN. Pepe Ochoa (talk) 22:17, 26 March 2009 (UTC)

The main discussion in neural network is about artificial neural networks, so they should be merged, with a discussion of natural neural networks in the introduction. Bpavel88 (talk) 19:03, 1 May 2009 (UTC)

I would agree that substantial differences lie between the two types, and that there is specific terminology used for the artificial types that would not be appropriate for the non-artificial page (talk) 03:49, 7 June 2010 (UTC)

Types of Neural Networks

I think this page should have 2-3 paragraphs tops for all the types of neural networks. Then we can split the types up into a new page, making it more readable. Oldag07 (talk) 17:21, 20 August 2009 (UTC)

It's a good idea. Right now "Feedforward neural network" has only a 3-sentence description, while less well-known types have much longer ones... julekmen (talk) 12:13, 23 October 2009 (UTC)

Broken citation

I came to this page to find out about the computational power of neural networks. There was a claim that a particular neural network (not described) has universal Turing power, but the link and DOI in the citation both seem to point to a non-existent paper. (talk) 04:17, 15 October 2009 (UTC)

I've fixed it. Thanks for pointing out the error. User A1 (talk) 07:48, 15 October 2009 (UTC)

Remarks by Dewdney (1997)

The remarks by Dewdney are really those of a sour physicist missing the point. For difficult problems you first want to see an existence proof that a universal function approximator can do (part of) the job. Once that is the case, you go hunt for the concise or 'real' solution. The Dewdney comment is very surprising, because it came about six years after the invention of the convolutional neural MLP by Yann LeCun, still unbeaten in handwritten character recognition after twenty years (better than 99.3 percent on the NIST benchmark). If the citation to Dewdney remains, balance requires that (more) success stories be presented more clearly in this article. — Preceding unsigned comment added by (talk) 15:48, 3 October 2011 (UTC)

Dewdney's criticism is indeed outdated. One should add something about the spectacular recent successes since 2009: Between 2009 and 2012, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international competitions in pattern recognition and machine learning[1]. For example, the bi-directional and multi-dimensional Long short-term memory (LSTM)[2][3] by Alex Graves et al. won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages to be learned. Recent deep learning methods for feedforward networks alternate convolutional layers[4] and max-pooling layers[5], topped by several pure classification layers. Fast GPU-based implementations of this approach by Dan Ciresan and colleagues at IDSIA have won several pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition[6], the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge[7], and others. Their neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[8] on important benchmarks such as traffic sign recognition (IJCNN 2012), or the famous MNIST handwritten digits problem of Yann LeCun at NYU. Deep, highly nonlinear neural architectures similar to the 1980 Neocognitron by Kunihiko Fukushima[9] and the "standard architecture of vision"[10] can also be pre-trained by unsupervised methods[11][12] of Geoff Hinton's lab at the University of Toronto. Deeper Learning (talk) 22:23, 13 December 2012 (UTC)
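The alternation of convolutional and max-pooling layers mentioned above can be illustrated with a minimal sketch of the pooling step alone (pure Python; the 2x2 window and the input values are hypothetical, not taken from any of the cited systems):

```python
# Minimal sketch of 2x2 max-pooling, the downsampling step used between
# convolutional layers in the deep networks described above.
# (Illustrative only; window size and feature-map values are made up.)
def max_pool_2x2(feature_map):
    """Downsample a 2D feature map by taking the max of each 2x2 block."""
    return [
        [max(feature_map[i][j], feature_map[i][j + 1],
             feature_map[i + 1][j], feature_map[i + 1][j + 1])
         for j in range(0, len(feature_map[0]), 2)]
        for i in range(0, len(feature_map), 2)
    ]

# A 4x4 feature map pools down to 2x2, keeping the strongest
# activation in each block.
fm = [[1, 2, 0, 1],
      [3, 4, 1, 0],
      [0, 0, 5, 6],
      [1, 2, 7, 8]]
print(max_pool_2x2(fm))  # [[4, 1], [2, 8]]
```

Pooling halves each spatial dimension while keeping the strongest local activation, which is what makes the deep feedforward stacks described above progressively more position-invariant.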


The entire article is absolutely terrible; there are so many good facts, but the organization is atrocious. A team needs to come in, clean up the article, and word it well; it's quite a shame given how developed it has become. If anyone wants this article to at least reach a B-class rating on the quality scale (which is extremely important due to the article's importance on Wikipedia), we really need to clean it up. It's incomprehensible, and as someone pointed out above, it just talks about the functions of an artificial neural network rather than how it is modelled on biological neural networks, which is the principal purpose of this article: to explain how the two are related, along with the history and applications of the system. Even worse, there are NO citations in the first few sections, and citations are quite scarce elsewhere. There is an excessive number of subsections, which are themselves mere paragraphs.

Final verdict: This article needs to be re-written!

Thanks, Rifasj123 (talk) 22:47, 19 June 2012 (UTC)

I agree with the above opinion. A lot of the statements in the article just come across as complete nonsense. The lead section is just crammed with terminology and hardly summarizes the article. Take the first sentence in the body of the article:

This almost seems as if it has deliberately been written to confuse the reader. Again, going down to Models,

This just doesn't make any sense! Nobody can get anywhere reading this article; it's just babbling and jargon glued together with mumbo-jumbo. JoshuSasori (talk) 03:50, 14 September 2012 (UTC)

I've done a bit of work on cleaning up the article & will now see what response this gets. If there are no problems then I will continue cleaning up and removing the babbling and nonsense. JoshuSasori (talk) 03:36, 22 September 2012 (UTC)
I think part of the problem is that a good portion of the editors are grad students/postdocs procrastinating from reading academic papers that sound exactly like this. We have to keep trying I guess. SamuelRiv (talk) 17:40, 28 February 2013 (UTC)
  • Support a total rewrite per Rifasj123 above. This article is just laden with errors and original research. Just zap it. And merge in Neural Network in the process. History2007 (talk) 23:53, 19 March 2013 (UTC)
Please discuss merging with Neural network over at Talk:Neural network#Proposed merge with Artificial neural network. QVVERTYVS (hm?) 16:40, 4 August 2013 (UTC)

Proposed merge with Deep learning

"Deep learning" is little more than a fad term for the current generation of neural nets, and this page describes neural net technology almost exclusively. The page neural network could do with an update from the more recent and better-written material on this page. QVVERTYVS (hm?) 11:12, 4 August 2013 (UTC)

I am against the merger. I disagree that deep learning is merely a fad: there are fundamental differences between distributed-representation implementations (e.g. deep belief networks and deep autoencoders), and they all step further from the term neural network than simply being artificial. On the basis that deep learning is just another neural network term, we'd end up merging everything to do with machine learning into one page. However, I agree the related articles need work and balance. p.r.newman (talk) 13:54, 20 August 2013 (UTC)
Oppose – I basically agree with Mr. Newman. Deep learning is a rather specific conception in the context of ANNs. Obviously, a short and concise section on the topic should be a welcome part to ANN. Kind regards, (talk) 22:47, 30 August 2013 (UTC)
I oppose the proposed merger. The term describes a theory that is more general than any particular implementation, such as an ANN; to wit, a big chunk of the current article describes how DL might be implemented in wet(brain)ware, which presumably can't be tucked into ANN since the brain ain't "A" :-) Jshrager (talk) 03:44, 9 September 2013 (UTC)
I am against the merger. Even if several traits are shared between "classical" neural networks and deep learning's networks they are sufficiently different to deserve their own page. Also, merging would create a single massive article regarding all neural-net-like things. However, I agree the articles could be better organized. Efrenchavez (talk) 02:59, 15 September 2013 (UTC)
Against - big time. This is the correct term, and is as separate from neural networks as it is from deep learning.
Deep learning is one method of ANN programming, and so a sub-topic of ANN, which covers all aspects of programming, hardware and abstract thought on the matter. Chaosdruid (talk) 20:22, 15 September 2013 (UTC)
Comment: The deep learning article basically includes a claim in its own lead section that implies it is a content fork. Chaosdruid aptly points out above that deep learning is a sub-topic of artificial neural networks. But, that contradicts the quote included in the lead section of the deep learning article. There is no (clear) explanation anywhere in the article on how deep learning is related to neural networks, so the readers are left to figure it out on their own. If they take the lead section's word for it, they will go away with the belief that deep learning is not a sub-topic of neural networks, which is what the lead strongly implies. See Talk:Deep learning#"Deep learning" synonymous with "neural networks"?. The Transhumanist 02:00, 25 September 2013 (UTC)
I don't think there's any clear-cut definition of "deep learning" out there, but all the DL research that I've seen revolves around techniques that would usually be considered neural nets; the remark in deep learning's lead that it's not necessarily about NNs is, I think, OR. (And "neural nets", in computer science, is also a vague term that nowadays means learning with multiple layers and backprop.) QVVERTYVS (hm?) 16:39, 25 September 2013 (UTC)
Withdrawn. QVVERTYVS (hm?) 22:55, 22 October 2013 (UTC)

Rename and scope

There is a major problem with this article: it only covers the use in computer science. There are biological neural networks that are artificially created. See here for an example: Implanted neurons, grown in the lab, take charge of brain circuitry.

Also, in computer science, the term neural network is very well established. Major universities use NN instead of ANN as the name of subjects. Here is an example: It should be renamed to neural network (computer).

My views of merge with other articles can be found on the talk page of neural network. Science.philosophy.arts (talk) 01:45, 20 September 2013 (UTC)

Perhaps neural network (computer science) or neural network (machine learning) is more appropriate then? But I agree; the neural network articles are currently a mess and don't have clearly defined scopes. I've been trying to move content from neural network to this article and remove all non-CS-related materials to get a clearer picture, but at some points my efforts stalled. QVVERTYVS (hm?) 12:26, 20 September 2013 (UTC)
We should make the title as short as possible. Science.philosophy.arts (talk) 15:03, 20 September 2013 (UTC)

Last section should be deleted

While looking at the article, I realized that the "Recent improvements" and the "Successes in pattern recognition contests since 2009" sections are very similar. For instance, a quote from the former section:

Such neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[21] on benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem of Yann LeCun and colleagues at NYU.

And the latter:

Their neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[21] on important benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem of Yann LeCun at NYU.

Wow. Since the former section is better integrated into the article and the latter section seems to be only something tacked on at the end, beginning with the slightly NPOVy phrase "[the] neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international competitions", I would strongly recommend that the latter section be deleted and its content merged into the former section (this process seems to have been halfway carried out already). Comments? APerson (talk!) 02:20, 21 December 2013 (UTC)

Backpropagation didn't solve the exclusive or problem

"Also key later advances was the backpropagation algorithm which effectively solved the exclusive-or problem (Werbos 1975).[6]"

The backpropagation algorithm doesn't solve the XOR problem; it allows efficient training of neural networks. It's just that a multi-layer neural network can solve the XOR problem while a single neuron/perceptron can't.

[13] (talk) 13:07, 15 April 2014 (UTC)Taylor
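To illustrate the commenter's point, here is a minimal sketch (hand-chosen illustrative weights and a step activation, not code from the cited sources) of a two-layer network that computes XOR, which no single perceptron can, since XOR is not linearly separable:

```python
# Minimal sketch: a two-layer network with fixed, hand-chosen weights
# computes XOR. A single perceptron cannot. All weights and thresholds
# here are illustrative.
def step(x):
    return 1 if x >= 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: OR(x1, x2)
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: AND(x1, x2)
    return step(h_or - h_and - 0.5)  # output: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # prints the XOR truth table
```

Backpropagation's role is only to find such weights automatically for multi-layer networks; the representational power comes from the hidden layer, not from the training algorithm.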

Machining application of artificial neural network

Artificial neural networks have various applications in production and manufacturing[14], being capable of machine learning[15] and pattern recognition[16]. Various machining[17] processes require prediction of results on the basis of the input data or quality[18] characteristics[19] provided in the machining process, and similarly back-tracking of the required quality characteristics for a given result or desired output characteristics. --Rahulpratapsingh06 (talk) 12:39, 5 May 2014 (UTC)

Relationship between quality characteristics and output

The relationship between various quality characteristics and outputs can be learned by an artificial neural network, on the basis of algorithms run over the data provided, which is machine learning or pattern recognition. --Rahulpratapsingh06 (talk) 12:29, 5 May 2014 (UTC)
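As a toy illustration of learning such a relationship from data (a single linear unit trained by stochastic gradient descent; the data, learning rate, and the underlying relationship y = 2x + 1 are all made up, not taken from any machining study):

```python
# Sketch: a single linear unit learns the mapping between a quality
# characteristic x and an output y from examples, by gradient descent
# on squared error. (Hypothetical data: y = 2x + 1.)
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    for x, y in data:
        err = (w * x + b) - y  # prediction error on this example
        w -= lr * err * x      # gradient step for the weight
        b -= lr * err          # gradient step for the bias

print(round(w, 3), round(b, 3))  # converges close to 2.0 and 1.0
```

The same loop, with more units and nonlinear activations, is the basic scheme by which a network learns a quality-characteristics-to-output mapping; real machining applications would use multi-layer networks and measured process data.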

Types of artificial neural networks

I removed this part: "Some may be as simple as a one-neuron layer with an input and an output, and others can mimic complex systems such as dANN, which can mimic chromosomal DNA through sizes at the cellular level, into artificial organisms and simulate reproduction, mutation and population sizes.[20]" because dANN is not popular. What do you think? --Vinchaud20 (talk) 10:05, 19 May 2014 (UTC)

Absolutely right. This seems to be a plug for dANN, a rather minor project. QVVERTYVS (hm?) 09:14, 20 May 2014 (UTC)

Also, "Artificial neural networks can be autonomous and learn by input from outside "teachers" or even self-teaching from written-in rules." should be removed, because it is a reformulation of the learning process, and here we speak about the types of neural network, not the learning process. --Vinchaud20 (talk) 10:12, 19 May 2014 (UTC)

Used this article on my website. — Preceding unsigned comment added by Neomahakala108 (talkcontribs) 08:28, 22 June 2014 (UTC)


I want to know about fluidization

Recent improvements and successes since 2009

The "Recent improvements" and "Successes in pattern recognition contests since 2009" sections are nearly identical. I think the "since 2009" section is obsolete.

LuxMaryn (talk) 13:26, 26 November 2014 (UTC)

Agreed. Feel free to merge the two. QVVERTYVS (hm?) 14:24, 26 November 2014 (UTC)