Word Embedding
Development of the word embedding technique
The word embedding technique began to develop around 2000. Bengio et al. provided, in a series of papers, "neural probabilistic language models" that reduce the high dimensionality of word representations in context by "learning a distributed representation for words" (Bengio et al., 2003).[1] Roweis and Saul published in Science how to use locally linear embedding (LLE) to discover representations of high-dimensional data structures.[2] The area developed gradually and really took off after 2010, partly because important advances had been made since then in the quality of the vectors and the training speed of the models. Many branches and many research groups work on word embedding; probably the most famous group is led by Tomáš Mikolov at Google. In 2013 it released word2vec, a toolkit that can learn word vectors and solve analogies between words embedded in the vector space, as the sketch below shows. Most new word embedding techniques rely on a neural network architecture instead of more traditional n-gram models and unsupervised learning.[3]
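To illustrate the analogy arithmetic mentioned above, here is a minimal sketch using the gensim Python library rather than the original word2vec C toolkit (an assumption made for illustration; gensim reimplements the word2vec model and can download Google's pretrained News vectors):

```python
# Minimal word2vec analogy sketch using gensim (assumed stand-in for
# the original word2vec C toolkit).
import gensim.downloader as api

# Download and load Google's pretrained News vectors (a large download).
vectors = api.load("word2vec-google-news-300")

# The classic analogy: king - man + woman should land near queen.
# most_similar() adds the "positive" vectors, subtracts the "negative"
# ones, and returns the nearest words by cosine similarity.
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # expected to be close to [('queen', ...)]
```

The analogy works because word2vec training tends to place words so that a consistent semantic relation (here, gender) corresponds to a roughly constant offset between vectors.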
Notes
[edit]- ^ "A Neural Probabilistic Language Model".
{{cite journal}}
: Cite journal requires|journal=
(help) - ^ "Nonlinear Dimensionality Reduction by Locally Linear Embedding".
{{cite journal}}
: Cite journal requires|journal=
(help) - ^ "A Scalable Hierarchical Distributed Language Model".
{{cite journal}}
: Cite journal requires|journal=
(help)