Talk:Recurrent neural network

Bidirectional associative memory[edit]

The orphaned article "Bidirectional associative memory" (BAM) references this article, claiming that BAM is a kind of RNN. If this is correct, then this article should mention BAM and reference the BAM article. Likewise if it is appropriate to mention "Content-addressable memory" in this context, this should be done (however the article about that is biased towards computer hardware technology and not machine learning). 195.60.183.2 (talk) 16:06, 17 July 2008 (UTC)

I added a few words and a ref. Dicklyon (talk) 06:04, 26 June 2010 (UTC)

Better now. Remove template?[edit]

I think the article is in much better shape now than it was a couple of months ago, although it still needs to be polished. But I guess one could take out this template now:

Epsiloner (talk) 15:41, 7 December 2010 (UTC)

Disputed Statement[edit]

"RNN can use their internal memory to process arbitrary sequences of inputs."

Some types can, but the typical RNN has nodes with binary threshold outputs, which makes it a finite state machine. This article needs clarification of which types are Turing-complete. — Preceding unsigned comment added by Mister Mormon (talkcontribs) 13:22, 18 December 2010 (UTC)

I don't understand what's disputed in that statement. An FSM can process arbitrary input sequences. And no RNN can be Turing complete, because they don't have unbounded memory (at least, no type I've heard of does). Dicklyon (talk) 17:04, 18 December 2010 (UTC)
Isn't 'arbitrarily long' part of 'arbitrary'? Without unbounded memory, some types of processing are impossible. I know of at least one Turing-complete RNN: http://lipas.uwasa.fi/stes/step96/step96/hyotyniemi1/ Mister Mormon (talk) 20:09, 18 December 2010 (UTC)
There's no claim that all types of processing are possible, is there? Dicklyon (talk) 00:33, 19 December 2010 (UTC)
I'm having a hard time understanding or believing that paper about a finite RNN being Turing complete. Dicklyon (talk) 00:36, 19 December 2010 (UTC)
True, but that sentence can be interpreted more strongly; I still suggest a change. As for the paper, no learning algorithm is presented, so it isn't useful regardless of its power. Anyway, can't RNNs have unbounded memory if the weights and node outputs are rational numbers? There are several papers showing they can hypercompute if the numbers are real. — Preceding unsigned comment added by Mister Mormon (talkcontribs) 12:03, 23 December 2010 (UTC)
Encoding things via unbounded precision would be a very different model, hardly relevant here. Go ahead and make improvements if you see a way. Dicklyon (talk) 19:21, 23 December 2010 (UTC)
Hardly relevant to 'recurrent neural network'? —Preceding unsigned comment added by 71.163.181.66 (talk) 22:26, 23 December 2010 (UTC)
Well, Hava Siegelmann got a Science paper out of showing that a recurrent neural network with sigmoidal units and exact reals, initialised with uncomputable values in the weights or units, can compute uncomputable functions. And it turns out that by following this line of research she was able to close some long-open conjectures in circuit theory. Barak (talk) 17:34, 27 December 2010 (UTC)
Barak, thanks for that update. I'm not sure what it means, but not Turing complete, anyway. Dicklyon (talk) 20:57, 27 December 2010 (UTC)
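
To make the finite-state point at the top of this thread concrete, here is a minimal sketch (Python; the single unit and hand-picked weights are invented purely for illustration, not taken from any source) of a recurrent unit with a binary threshold output: its one bit of state makes it exactly a two-state finite state machine, yet it can still consume input sequences of arbitrary length.

    # Minimal illustrative sketch: a single recurrent unit with a binary
    # threshold output.  Its one bit of state remembers whether a 1 has ever
    # appeared in the input, so the network is exactly a two-state finite
    # state machine (bounded memory), yet it processes arbitrarily long inputs.
    W_REC, W_IN, BIAS = 1.0, 1.0, -0.5   # hand-picked weights, purely illustrative

    def step(state, x):
        return 1 if W_REC * state + W_IN * x + BIAS > 0 else 0

    def run(bits):
        state = 0
        for x in bits:
            state = step(state, x)
        return state

    print(run([0, 0, 1, 0]))  # -> 1 (a 1 was seen at some point)
    print(run([0, 0, 0]))     # -> 0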

Yeah, thanks. It's super-Turing complete. Seriously, are all published RNNs either finite or uncomputable? Where's the middle ground in the literature, with rational/integer weights and no thresholds? I would be surprised if there were none to be found, since sub-symbolic AI has been in use for 30 years. Mister Mormon (talk) 17:58, 28 December 2010 (UTC)

Hey, this paper on a Turing-complete net could be helpful: http://www.math.rutgers.edu/~sontag/FTP_DIR/aml-turing.pdf Mister Mormon (talk) 02:29, 10 September 2011 (UTC)
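
For readers of this thread, here is a rough sketch of the flavour of the rational-weight construction in that Sontag paper (not the paper's actual network; the encoding and helper names below are invented for illustration): a binary stack is packed into a single exact rational value, which is how rational-valued state can give a net unbounded memory.

    from fractions import Fraction

    # Illustrative sketch only: a binary stack encoded in one rational-valued
    # "activation" in [0, 1).  Each stack bit b is stored as a base-4 digit
    # 2*b + 1, so push/pop are affine updates and reading the top bit is a
    # threshold test -- the kind of operation a rational-state net can do.

    def push(q, bit):
        return q / 4 + Fraction(2 * bit + 1, 4)   # affine update, stays in [0, 1)

    def top(q):
        return 1 if q >= Fraction(1, 2) else 0    # threshold test reads the top bit

    def pop(q):
        return 4 * q - (2 * top(q) + 1)           # another affine update

    q = Fraction(0)
    for b in [1, 0, 1]:        # push 1, then 0, then 1
        q = push(q, b)
    print(top(q))              # -> 1 (last pushed bit)
    q = pop(q)
    print(top(q))              # -> 0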

Elman network[edit]

About the picture: it seems to me that there are supposed to be multiple connections from the context layer forward to the hidden layer, not just one-to-one connections, although the save-state connections from the hidden layer to the context layer are indeed one-to-one. [1] [2]
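
A minimal sketch of the connectivity being described, assuming a standard Elman layout (the layer sizes and random weights below are invented for illustration, not taken from Elman 1990): the context layer feeds forward into the hidden layer through a full weight matrix, while the hidden-to-context "save state" link is a one-to-one copy.

    import numpy as np

    # Sketch of Elman-style connectivity (illustrative sizes and weights):
    # context -> hidden uses a full weight matrix, hidden -> context is a copy.
    rng = np.random.default_rng(0)
    n_in, n_hidden = 3, 5

    W_xh = rng.standard_normal((n_hidden, n_in))      # input   -> hidden (fully connected)
    W_ch = rng.standard_normal((n_hidden, n_hidden))  # context -> hidden (fully connected)
    b_h = np.zeros(n_hidden)

    def elman_step(x, context):
        hidden = np.tanh(W_xh @ x + W_ch @ context + b_h)
        new_context = hidden.copy()   # one-to-one "save state" copy, no weights
        return hidden, new_context

    context = np.zeros(n_hidden)
    for x in rng.standard_normal((4, n_in)):          # a short input sequence
        hidden, context = elman_step(x, context)
    print(hidden.shape)   # (5,)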

Not true[edit]

QUOTE: In particular, RNNs cannot be easily trained for large numbers of neuron units nor for large numbers of input units. Successful training has been mostly in time series problems with few inputs.

The current (2013) state of the art in speech recognition does use RNNs, and speech requires a lot of inputs. Check this: "Speech Recognition with Deep Recurrent Neural Networks", Alex Graves, Abdel-rahman Mohamed and Geoffrey Hinton. 50.100.193.20 (talk) 11:42, 5 August 2013 (UTC)

  1. ^ Elman, Jeffrey (1990). "Finding Structure in Time". Cognitive Science. 
  2. ^ Elman, Jeffrey (1990). "Finding Structure in Time". Cognitive Science. p. 5.