
Talk:Autoencoder

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by RutiWinkler (talk | contribs) at 23:55, 3 December 2021 (→‎Autoencoders variational equation). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

WikiProject Robotics: This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. The article has been rated Start-class on Wikipedia's content assessment scale and Mid-importance on the project's importance scale.
WikiProject Computer science: This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. The article has been rated Start-class on Wikipedia's content assessment scale but has not yet received a rating on the project's importance scale.

Training section

Much of the training section does not seem to be directly related to auto-encoders in particular, but rather to neural networks in general. No? BrokenSegue 09:17, 29 August 2011 (UTC)

Clarification of "An output layer, where each neuron has the same meaning as in the input layer"

I don't understand what "has the same meaning as in the input layer" means in the output layer definition in the article. Can someone explain, or clarify this in the article, please? Many thanks, p.r.newman (talk) 09:53, 10 October 2012 (UTC)

Answer: The outputs are the same as the inputs, i.e. y_i = x_i. The autoencoder tries to learn the identity function. Although it might seem that, with at least as many hidden units as input (and output) units, the learned weights would reduce to the trivial identity, in practice this does not happen (probably because the weights are initialized to small values). Sparse autoencoders, in which only a limited number of hidden units can be active at once, avoid this problem even in theory. 216.169.216.1 (talk) 16:47, 17 September 2013 (UTC) Dave Rimshnick
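To make the point above concrete, here is a minimal NumPy sketch (my own illustration, not from the article) of a one-hidden-layer autoencoder trained with targets y_i = x_i; the layer sizes, learning rate, and toy data are illustrative assumptions. Even with as many hidden units as inputs, training from small random weights does not simply recover the identity mapping:

    # Minimal one-hidden-layer autoencoder (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden = 8, 8                # as many hidden units as inputs
    X = rng.normal(size=(100, n_in))     # toy data; the target is X itself

    # Small initial weights -- the point above: training from small weights
    # does not simply converge to the trivial identity mapping.
    W1 = 0.01 * rng.normal(size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = 0.01 * rng.normal(size=(n_hidden, n_in)); b2 = np.zeros(n_in)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    lr = 0.1
    for _ in range(2000):
        Z = sigmoid(X @ W1 + b1)         # encoder
        X_hat = Z @ W2 + b2              # linear decoder
        err = X_hat - X                  # gradient of 0.5 * ||X_hat - X||^2

        # Backpropagation, plain batch gradient descent.
        gW2 = Z.T @ err / len(X);  gb2 = err.mean(axis=0)
        dZ = (err @ W2.T) * Z * (1.0 - Z)
        gW1 = X.T @ dZ / len(X);   gb1 = dZ.mean(axis=0)

        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    X_rec = sigmoid(X @ W1 + b1) @ W2 + b2
    print("reconstruction MSE:", np.mean((X_rec - X) ** 2))

The reconstruction error drops far below its initial value, yet the learned weight matrices are typically not (and, with the sigmoid in the path, cannot literally be) an identity/inverse pair.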

Where is the structure section taken from?

I was wondering if there are any book sources that could be added as references, where a similar approach to describing the autoencoder is taken. Also, what are the W and b terms? It is not very clear what roles W and b play in the encoding and decoding process.
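For what it is worth, in the usual single-layer formulation (which I assume the article's Structure section follows), W is a weight matrix and b a bias vector, one pair per layer:

    \begin{aligned}
    z  &= \sigma(Wx + b)     && \text{(encoder, with parameters } W, b\text{)} \\
    x' &= \sigma'(W'z + b')  && \text{(decoder, with its own parameters } W', b'\text{)}
    \end{aligned}

That is, W and b are the learned parameters of the affine map applied before each nonlinearity.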

Hi there, for anyone struggling to find the correct scientific citation for the autoencoder and for where the argmin formulation can be found: the source you're looking for is "Threaded Ensembles of Supervised and Unsupervised Neural Networks for Stream Learning", and anyone who (unlike me) cares enough could add that citation to the article. glhf — Preceding unsigned comment added by 2003:EB:6724:3F08:B8F2:3F33:F768:C858 (talk) 16:46, 9 November 2019 (UTC)

Split proposed

I think it would make sense to split out the "variational autoencoder" section, given that they are generative models and their purpose differs significantly from classic autoencoders. Thoughts? Skjn (talk) 15:48, 19 May 2020 (UTC)

I feel like the current content in that section is already too WP:TEXTBOOK and if anything should be trimmed or gutted, rather than expanded into its own article. Rolf H Nelson (talk) 04:54, 21 May 2020 (UTC)
I second this proposal. The sheer number of variational-autoencoder-based methods developed in the past two years is immense. Definitely worth an independent article. — Preceding unsigned comment added by Parthzoozoo (talk | contribs) 17:22, 18 June 2020 (UTC)
Also agree with the proposal - they really are a quite different concept, as asserted in the text. — Preceding unsigned comment added by 193.129.26.79 (talk) 15:18, 17 August 2020 (UTC)
I agree, they use variational inference, which is very different from standard autoencoders. — Preceding unsigned comment added by 62.226.49.10 (talk) 22:34, 30 August 2020 (UTC)
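For anyone weighing the split, the objectives really do differ: a plain autoencoder minimizes a reconstruction error, while a VAE maximizes the evidence lower bound (ELBO); sketched in the standard notation, with encoder q_\phi and decoder p_\theta:

    \mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\left[ \log p_\theta(x \mid z) \right] - D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)

The KL term pulls the latent code toward the prior p(z), which is what makes the model generative.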


Autoencoders variational equation

The equation that yields the parametrization of the autoencoder and its conjugate decoder appears to be wrong. The minimum extends over all x in X and over all sampled parametrizations of phi and psi, and the "arg" that realizes the minimum yields the optimized phi and psi parametrization. RutiWinkler (talk) 14:37, 3 December 2021 (UTC)
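If I read the comment correctly, the intended form is something like the following sketch (my notation, with \phi the encoder and \psi the decoder, not necessarily the article's):

    \phi^*, \psi^* = \underset{\phi,\, \psi}{\operatorname{arg\,min}} \; \sum_{x \in X} \left\lVert x - (\psi \circ \phi)(x) \right\rVert^2

i.e. the objective aggregates over every x in X, and the argmin over the parametrizations of \phi and \psi yields the optimized pair.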