Differentiable neural computer

Figure: A differentiable neural computer being trained to store and recall dense binary numbers, showing performance on a reference task during training. Upper left: the input (red) and target (blue), as 5-bit words and a 1-bit interrupt signal. Upper right: the model's output.

A differentiable neural computer (DNC) is a recurrent artificial neural network architecture with an autoassociative memory. The model was published in 2016 by Alex Graves et al. of DeepMind.[1]


So far, DNCs have been demonstrated only on relatively simple tasks, ones that could have been solved with conventional computer programming decades ago. But DNCs do not need to be programmed for each problem they are applied to; they can instead be trained. Their attention mechanisms allow complex data structures, such as graphs, to be fed in sequentially and recalled during later use. Furthermore, DNCs can learn some aspects of symbolic reasoning and apply them to working memory. Some experts see promise that they can be trained to perform complex, structured tasks[1][2] and to address big-data applications that require some form of reasoning, such as generating video commentaries or semantic text analysis.[3][4]

A DNC can be trained to navigate a variety of rapid transit systems and then apply what it has learned, for example, to get around on the London Underground. A neural network without memory would typically have to learn each transit system from scratch. On graph traversal and sequence-processing tasks with supervised learning, DNCs performed better than alternatives such as long short-term memory (LSTM) networks or the neural Turing machine.[5] On a block puzzle problem inspired by SHRDLU, a DNC trained with reinforcement learning and curriculum learning learned to make plans, and performed better than a traditional recurrent neural network.[5]


DNC system diagram.

DNC networks were introduced as an extension of the neural Turing machine (NTM), adding memory attention mechanisms that control where memory is stored, and temporal attention that records the order of events. This structure allows DNCs to be more robust and abstract than an NTM, while still handling tasks with longer-term dependencies than some of their predecessors, such as the LSTM network. The memory, which is simply a matrix, can be allocated dynamically and accessed indefinitely. The DNC is differentiable end to end: each subcomponent of the model is differentiable, therefore so is the whole model. This makes it possible to optimize DNCs efficiently using gradient descent. The model learns how to store and retrieve information in whatever way best serves the task at hand.[3][6][7]
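The key ingredient that keeps everything differentiable is that reads and writes are "soft": instead of indexing one memory row, the model takes a weighted sum over all rows, with weights produced by an attention mechanism. The following is a minimal numpy sketch (not from the paper; the shapes and variable names are illustrative) of a content-based read, which is one of the DNC's addressing modes:

```python
import numpy as np

def cosine_similarity(memory, key):
    # Similarity D(k, M[i,.]) between the key and every row i of memory.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return memory @ key / norms

def content_read(memory, key, strength):
    # Soft attention over rows: every location contributes a little,
    # so the read is differentiable with respect to all of its inputs.
    scores = strength * cosine_similarity(memory, key)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax read weighting
    return weights @ memory               # weighted sum of memory rows

memory = np.random.randn(16, 8)              # N = 16 locations, width W = 8
key = memory[3] + 0.1 * np.random.randn(8)   # noisy cue for row 3
print(content_read(memory, key, strength=10.0))  # approximately row 3
```

A larger key strength sharpens the softmax toward a single location; a smaller one blends many locations, which is what lets gradient descent discover useful addressing behaviour.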

The DNC model is similar to the Von Neumann architecture, and because its memory is resizable, it is Turing complete.[8] Differentiable neural computers were inspired by the mammalian hippocampus.[5]

Traditional DNC

DNC, as originally published[1]

Independent variables
  Input vector: $x_t \in \mathbb{R}^X$
  Target vector: $z_t \in \mathbb{R}^Y$
  Controller input matrix: $\chi_t = [x_t; r_{t-1}^1; \ldots; r_{t-1}^R]$

Deep (layered) LSTM, for layer $l$
  Input gate vector: $i_t^l = \sigma(W_i^l\,[\chi_t; h_{t-1}^l; h_t^{l-1}] + b_i^l)$
  Output gate vector: $o_t^l = \sigma(W_o^l\,[\chi_t; h_{t-1}^l; h_t^{l-1}] + b_o^l)$
  Forget gate vector: $f_t^l = \sigma(W_f^l\,[\chi_t; h_{t-1}^l; h_t^{l-1}] + b_f^l)$
  State gate vector: $s_t^l = f_t^l \circ s_{t-1}^l + i_t^l \circ \tanh(W_s^l\,[\chi_t; h_{t-1}^l; h_t^{l-1}] + b_s^l)$, $s_0^l = 0$
  Hidden gate vector: $h_t^l = o_t^l \circ \tanh(s_t^l)$, $h_0^l = 0$

DNC output vector: $y_t = W_y\,[h_t^1; \ldots; h_t^L] + W_r\,[r_t^1; \ldots; r_t^R]$

Read & write heads
  Interface parameters: $\xi_t = W_{\xi}\,[h_t^1; \ldots; h_t^L] = [k_t^{r,1}; \ldots; k_t^{r,R}; \hat{\beta}_t^{r,1}; \ldots; \hat{\beta}_t^{r,R}; k_t^w; \hat{\beta}_t^w; \hat{e}_t; v_t; \hat{f}_t^1; \ldots; \hat{f}_t^R; \hat{g}_t^a; \hat{g}_t^w; \hat{\pi}_t^1; \ldots; \hat{\pi}_t^R]$

Read heads, for head $i = 1, \ldots, R$
  Read keys: $k_t^{r,i} \in \mathbb{R}^W$
  Read strengths: $\beta_t^{r,i} = \text{oneplus}(\hat{\beta}_t^{r,i}) \in [1, \infty)$
  Free gates: $f_t^i = \sigma(\hat{f}_t^i) \in [0, 1]$
  Read modes: $\pi_t^i = \text{softmax}(\hat{\pi}_t^i) \in \mathcal{S}_3$

Write head
  Write key: $k_t^w \in \mathbb{R}^W$
  Write strength: $\beta_t^w = \text{oneplus}(\hat{\beta}_t^w) \in [1, \infty)$
  Erase vector: $e_t = \sigma(\hat{e}_t) \in [0, 1]^W$
  Write vector: $v_t \in \mathbb{R}^W$
  Allocation gate: $g_t^a = \sigma(\hat{g}_t^a) \in [0, 1]$
  Write gate: $g_t^w = \sigma(\hat{g}_t^w) \in [0, 1]$

Memory
  Memory matrix: $M_t = M_{t-1} \circ (E - w_t^w e_t^{\top}) + w_t^w v_t^{\top} \in \mathbb{R}^{N \times W}$, $M_0 = 0$, where $E$ is the $N \times W$ matrix of ones
  Usage vector: $u_t = (u_{t-1} + w_{t-1}^w - u_{t-1} \circ w_{t-1}^w) \circ \psi_t$, $u_0 = 0$
  Precedence weighting: $p_t = \left(1 - \textstyle\sum_i w_t^w[i]\right) p_{t-1} + w_t^w$, $p_0 = 0$
  Temporal link matrix: $L_t[i,j] = (1 - w_t^w[i] - w_t^w[j])\,L_{t-1}[i,j] + w_t^w[i]\,p_{t-1}[j]$, with $L_0 = 0$ and $L_t[i,i] = 0$
  Write weighting: $w_t^w = g_t^w\,[\,g_t^a a_t + (1 - g_t^a)\,c_t^w\,]$
  Read weighting: $w_t^{r,i} = \pi_t^i[1]\,b_t^i + \pi_t^i[2]\,c_t^{r,i} + \pi_t^i[3]\,f_t^i$
  Read vectors: $r_t^i = M_t^{\top} w_t^{r,i}$

Addressing
  Content-based addressing: $\mathcal{C}(M, k, \beta)[i] = \exp\{\beta\,\mathcal{D}(k, M[i,\cdot])\} \big/ \sum_j \exp\{\beta\,\mathcal{D}(k, M[j,\cdot])\}$, for lookup key $k$ and key strength $\beta$
  Free list: $\phi_t$, the indices of the memory locations sorted in ascending order of usage $u_t$
  Allocation weighting: $a_t[\phi_t[j]] = (1 - u_t[\phi_t[j]]) \prod_{l=1}^{j-1} u_t[\phi_t[l]]$
  Write content weighting: $c_t^w = \mathcal{C}(M_{t-1}, k_t^w, \beta_t^w)$
  Read content weighting: $c_t^{r,i} = \mathcal{C}(M_t, k_t^{r,i}, \beta_t^{r,i})$
  Forward weighting: $f_t^i = L_t\,w_{t-1}^{r,i}$
  Backward weighting: $b_t^i = L_t^{\top} w_{t-1}^{r,i}$
  Memory retention vector: $\psi_t = \prod_{i=1}^{R} \left(1 - f_t^i\,w_{t-1}^{r,i}\right)$

Definitions
  Weight matrix $W$, bias vector $b$
  Zeros matrix $0$, ones matrix $E$, identity matrix $I$
  Element-wise multiplication: $\circ$
  Cosine similarity: $\mathcal{D}(u, v) = \dfrac{u \cdot v}{\lVert u \rVert\,\lVert v \rVert}$
  Sigmoid function: $\sigma(x) = \dfrac{1}{1 + e^{-x}}$
  Oneplus function: $\text{oneplus}(x) = 1 + \log(1 + e^x)$
  Softmax function: $\text{softmax}(x)_j = \dfrac{e^{x_j}}{\sum_{k=1}^{K} e^{x_k}}$ for $j = 1, \ldots, K$
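To make the allocation and write equations above concrete, here is a minimal numpy sketch of one write step (not from the paper; shapes and variable names are illustrative): the allocation weighting $a_t$ computed from the usage vector, followed by the erase-then-add memory update $M_t = M_{t-1} \circ (E - w_t^w e_t^{\top}) + w_t^w v_t^{\top}$.

```python
import numpy as np

def allocation_weighting(usage):
    # a_t from the table above: the least-used ("freest") locations
    # receive the most allocation weight.
    phi = np.argsort(usage)                     # ascending order of usage
    a = np.zeros_like(usage)
    shifted = np.concatenate(([1.0], np.cumprod(usage[phi])[:-1]))
    a[phi] = (1.0 - usage[phi]) * shifted       # (1-u[phi_j]) * prod_{l<j} u[phi_l]
    return a

def write_step(M, w, erase, add):
    # M_t = M_{t-1} o (E - w e^T) + w v^T: erase first, then add.
    return M * (1.0 - np.outer(w, erase)) + np.outer(w, add)

M = np.zeros((4, 3))                            # N = 4 locations, W = 3
usage = np.array([0.9, 0.1, 0.8, 0.4])
w = allocation_weighting(usage)                 # concentrates on location 1
M = write_step(M, w, erase=np.ones(3), add=np.array([1.0, 2.0, 3.0]))
print(w)
print(M)
```

Because both functions are built from products and sums of their inputs, gradients flow through the allocation and write path just as they do through the controller, which is what permits the whole system to be trained end to end.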


Refinements to the model have been published since the original paper's release. Sparse memory addressing reduces time and space complexity by a factor of thousands; it can be achieved with an approximate nearest-neighbour algorithm, such as locality-sensitive hashing, or a randomized k-d tree like the Fast Library for Approximate Nearest Neighbors (FLANN) from UBC.[9] Adding adaptive computation time (ACT) decouples computation time from data time, exploiting the fact that a problem's length and its difficulty are not always the same.[10] Training with synthetic gradients performs considerably better than backpropagation through time (BPTT).[11] Robustness can be further improved by using layer normalization and bypass dropout as regularization.[12] A sketch of the sparse-addressing idea follows.
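The sketch below illustrates the sparse-read idea only, under stated assumptions: it attends to the $k$ most similar memory rows rather than all $N$, and it finds the candidates by brute force, where a real implementation would use an approximate nearest-neighbour index (such as LSH or randomized k-d trees, as in the cited work) to avoid touching every row.

```python
import numpy as np

def sparse_content_read(memory, key, strength, k=4):
    # Approximate the dense softmax read by attending only to the k
    # most similar rows. Candidate selection is brute force here; an
    # ANN index would supply the candidates in sublinear time, giving
    # the reported thousand-fold savings for large memories.
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    top = np.argpartition(-sims, k)[:k]         # k best candidate rows
    scores = strength * sims[top]
    w = np.exp(scores - scores.max())
    w /= w.sum()                                # softmax over k rows only
    return w @ memory[top]

memory = np.random.randn(1024, 16)              # N = 1024 locations, W = 16
key = memory[42]                                # cue for row 42
print(sparse_content_read(memory, key, strength=5.0))
```

Gradients then flow only through the selected rows, so both the forward and backward passes scale with $k$ instead of $N$.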


References

  1. Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago (2016-10-12). "Hybrid computing using a neural network with dynamic external memory". Nature. 538 (7626): 471–476. Bibcode:2016Natur.538..471G. doi:10.1038/nature20101. ISSN 1476-4687. PMID 27732574.
  2. "Differentiable neural computers | DeepMind". DeepMind. Retrieved 2016-10-19.
  3. Burgess, Matt. "DeepMind's AI learned to ride the London Underground using human-like reason and memory". WIRED UK. Retrieved 2016-10-19.
  4. Jaeger, Herbert (2016-10-12). "Artificial intelligence: Deep neural reasoning". Nature. 538 (7626): 467–468. Bibcode:2016Natur.538..467J. doi:10.1038/nature19477. ISSN 1476-4687. PMID 27732576.
  5. James, Mike. "DeepMind's Differentiable Neural Network Thinks Deeply". www.i-programmer.info. Retrieved 2016-10-20.
  6. "DeepMind AI 'Learns' to Navigate London Tube". PCMAG. Retrieved 2016-10-19.
  7. Mannes, John. "DeepMind's differentiable neural computer helps you navigate the subway with its memory". TechCrunch. Retrieved 2016-10-19.
  8. "RNN Symposium 2016: Alex Graves - Differentiable Neural Computer".
  9. Rae, Jack W.; Hunt, Jonathan J.; Harley, Tim; Danihelka, Ivo; Senior, Andrew; Wayne, Greg; Graves, Alex; Lillicrap, Timothy P. (2016). "Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes". arXiv:1610.09027 [cs.LG].
  10. Graves, Alex (2016). "Adaptive Computation Time for Recurrent Neural Networks". arXiv:1603.08983 [cs.NE].
  11. Jaderberg, Max; Czarnecki, Wojciech Marian; Osindero, Simon; Vinyals, Oriol; Graves, Alex; Silver, David; Kavukcuoglu, Koray (2016). "Decoupled Neural Interfaces using Synthetic Gradients". arXiv:1608.05343 [cs.LG].
  12. Franke, Jörg; Niehues, Jan; Waibel, Alex (2018). "Robust and Scalable Differentiable Neural Computer for Question Answering". arXiv:1807.02658 [cs.CL].

External links