Talk:Recurrent neural network
Bidirectional associative memory
The orphaned article "Bidirectional associative memory" (BAM) references this article, claiming that BAM is a kind of RNN. If this is correct, then this article should mention BAM and reference the BAM article. Likewise if it is appropriate to mention "Content-addressable memory" in this context, this should be done (however the article about that is biased towards computer hardware technology and not machine learning). 18.104.22.168 (talk) 16:06, 17 July 2008 (UTC)
Better now. Remove template?
I think the article is in much better shape now than it was a couple of months ago, although it still needs to be polished. But I guess one could take out this template now:
|This article needs attention from an expert in Computer science. (November 2008)|
"RNN can use their internal memory to process arbitrary sequences of inputs."
Some types can, but the typical RNN has nodes with binary threshold outputs, which makes it a finite state machine. This article needs clarification of which types are Turing-complete. — Preceding unsigned comment added by Mister Mormon (talk • contribs) 13:22, 18 December 2010 (UTC)
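The finite-state point above can be sketched concretely. This is a minimal illustration (the latch task and all names are my own, not from the article): a recurrent net of binary threshold units has at most 2^n distinct hidden states, so it can only realize a finite state machine. Here a single threshold unit acts as a latch, a two-state automaton.

```python
def threshold(z):
    # Binary threshold activation: output is 0 or 1, so the hidden
    # state space is finite (2**n_units states at most).
    return 1.0 if z > 0 else 0.0

def run(bits):
    """Run a one-unit threshold RNN over a bit sequence.

    The recurrent update s' = threshold(s + x - 0.5) latches to 1
    once any input bit is 1 -- exactly a 2-state finite automaton.
    """
    s = 0.0
    states = []
    for x in bits:
        s = threshold(s + x - 0.5)
        states.append(s)
    return states
```

For example, `run([0, 0, 1, 0])` yields `[0.0, 0.0, 1.0, 1.0]`: the unit flips once and stays there, and no input sequence can drive it through more than two distinct states.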
- I don't understand what's disputed in that statement. An FSM can process arbitrary input sequences. And no RNN can be Turing complete, because they don't have unbounded memory (as far as any type I've heard of). Dicklyon (talk) 17:04, 18 December 2010 (UTC)
- Isn't 'arbitrarily long' part of 'arbitrary'? Without unbounded memory, some types of processing are impossible. I know of at least one Turing-complete RNN: http://lipas.uwasa.fi/stes/step96/step96/hyotyniemi1/ Mister Mormon (talk) 20:09, 18 December 2010 (UTC)
- True, but that sentence can be interpreted more strongly; I still suggest a change. As for the paper, no learning algorithm is presented, so it isn't useful regardless of its power. Anyway, can't RNNs have unbounded memory if weights and node outputs are rational numbers? There are several papers where they can hypercompute if numbers are real. — Preceding unsigned comment added by Mister Mormon (talk • contribs) 12:03, 23 December 2010 (UTC)
- Well, Hava Siegelmann got a Science paper out of showing that a recurrent neural network with sigmoidal units and exact reals, initialised with uncomputable values in the weights or units, can compute uncomputable functions. And it turns out that by following this line of research she was able to close some long-open conjectures in circuit theory. Barak (talk) 17:34, 27 December 2010 (UTC)
Yeah, thanks. It's super-Turing complete. Seriously, are all published RNNs either finite or uncomputable? Where in the literature is the middle ground, with rational/integer weights and no thresholds? I would be surprised if there were none to be found, since sub-symbolic AI has been in use for 30 years. Mister Mormon (talk) 17:58, 28 December 2010 (UTC)
Hey, this paper on a Turing-complete net could be helpful: http://www.math.rutgers.edu/~sontag/FTP_DIR/aml-turing.pdf Mister Mormon (talk) 02:29, 10 September 2011 (UTC)
About the picture: it seems to me that there are supposed to be multiple connections from the context layer forward to the hidden layer, not just one-to-one connections, although the save-state connections from the hidden layer back to the context layer are indeed one-to-one.
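The asymmetry described above can be sketched in a few lines. This is a minimal Elman-style network (layer sizes and variable names are my own, chosen for illustration): the context-to-hidden connections form a full weight matrix, while the hidden-to-context save-state connections are a plain one-to-one copy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4

W_ih = rng.standard_normal((n_hidden, n_in))      # input -> hidden: full matrix
W_ch = rng.standard_normal((n_hidden, n_hidden))  # context -> hidden: full matrix
                                                  # (every context unit feeds
                                                  #  every hidden unit)

def step(x, context):
    hidden = np.tanh(W_ih @ x + W_ch @ context)
    new_context = hidden.copy()  # hidden -> context: one-to-one state copy,
                                 # no weights, no mixing
    return hidden, new_context

context = np.zeros(n_hidden)
for t in range(5):
    x = rng.standard_normal(n_in)
    hidden, context = step(x, context)
```

So in a diagram the forward arrows from context to hidden should be dense (a full bipartite connection), while the backward save-state arrows are a single identity link per unit.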
QUOTE:In particular, RNNs cannot be easily trained for large numbers of neuron units nor for large numbers of inputs units. Successful training has been mostly in time series problems with few inputs.
The current (2013) state of the art in speech recognition does use RNNs, and speech requires a lot of input. Check this: "Speech Recognition with Deep Recurrent Neural Networks", Alex Graves, Abdel-rahman Mohamed and Geoffrey Hinton. 22.214.171.124 (talk) 11:42, 5 August 2013 (UTC)
- Elman, Jeffrey L. (1990). "Finding Structure in Time". Cognitive Science 14 (2): 179–211.