Text-to-image model

From Wikipedia, the free encyclopedia

A text-to-image example from DALL-E 2 created using the prompt "a medieval painting of a man sitting at a computer editing a Wikipedia article"

A text-to-image model is a machine learning model which takes as input a natural language description and produces an image matching that description. Such models began to be developed in the mid-2010s, as a result of advances in deep neural networks. In 2022, the output of state-of-the-art text-to-image models, such as OpenAI's DALL-E 2, Google Brain's Imagen, and Stability AI's Stable Diffusion, began to approach the quality of real photographs and human-drawn art.

Text-to-image models generally combine a language model, which transforms the input text into a latent representation, and a generative image model, which produces an image conditioned on that representation. The most effective models have generally been trained on massive amounts of image and text data scraped from the web.[1]

An image conditioned on the prompt "an astronaut riding a horse, by Hiroshige", generated by Stable Diffusion, a large-scale text-to-image model released in 2022.
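The overall interface can be illustrated with a short sketch. The example below assumes the Hugging Face diffusers library and the publicly released Stable Diffusion v1.4 checkpoint; it is an illustration of the prompt-to-image pipeline described above rather than a description of any one system's internals.

```python
# Minimal sketch (assumes the "diffusers" and "torch" packages and the
# publicly released "CompVis/stable-diffusion-v1-4" checkpoint): a prompt
# goes in, an image comes out.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # Stable Diffusion checkpoint released in 2022
    torch_dtype=torch.float16,         # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")

# Internally, the pipeline encodes the prompt with a text model and
# conditions the diffusion-based image generator on that representation.
image = pipe("an astronaut riding a horse, by Hiroshige").images[0]
image.save("astronaut.png")
```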

History

Before the rise of deep learning, some attempts were made to build text-to-image models, but these were effectively limited to creating collages by arranging existing component images, such as clip art drawn from a database.[2][3]

The more tractable inverse problem, image captioning, saw a number of successful deep learning approaches prior to the first text-to-image models.[4]

The first modern text-to-image model, alignDRAW, was introduced in 2015 by researchers from the University of Toronto. AlignDRAW extended the previously introduced DRAW architecture (which used a recurrent variational autoencoder with an attention mechanism) to be conditioned on text sequences.[4] Images generated by alignDRAW were blurry and not photorealistic, but the model was able to generalize to objects not represented in the training data (such as a red school bus) and appropriately handled novel prompts such as "a stop sign is flying in blue skies", showing that it was not merely "memorizing" data from the training set.[4][5]

In 2016, Reed, Akata, Yan et al. became the first to use generative adversarial networks for the text-to-image task.[5][6] With models trained on narrow, domain-specific datasets, they were able to generate "visually plausible" images of birds and flowers from text captions like "an all black bird with a distinct thick, rounded bill". A model trained on the more diverse COCO dataset produced images which were "from a distance... encouraging", but which lacked coherence in their details.[5]

One of the first text-to-image models to capture widespread public attention was OpenAI's DALL-E, announced in January 2021.[7] A successor capable of generating more complex and realistic images, DALL-E 2, was unveiled in April 2022.[8]

Architecture and training

Text-to-image models have been built using a variety of architectures. The text encoding step may be performed with a recurrent neural network such as a long short-term memory (LSTM) network, though transformer models have since become a more popular option. For the image generation step, conditional generative adversarial networks have been commonly used, with diffusion models also becoming a popular option in recent years. Rather than directly training a model to output a high-resolution image conditioned on a text embedding, a popular technique is to train a model to generate low-resolution images and use one or more auxiliary deep learning models to upscale them, filling in finer details.
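The conditioning idea can be sketched schematically. The PyTorch snippet below is purely illustrative and does not correspond to any specific published model: a text embedding is concatenated with a noise vector and decoded into a low-resolution image, which a separate super-resolution model would then upscale.

```python
# Schematic sketch (PyTorch, illustrative only) of a text-conditioned
# GAN-style generator producing a low-resolution image.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, text_dim=256, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Tanh(),                                            # pixel values in [-1, 1]
        )

    def forward(self, text_embedding, noise):
        # Conditioning: the image depends on the text via concatenation.
        return self.net(torch.cat([text_embedding, noise], dim=1))

generator = ConditionalGenerator()
images = generator(torch.randn(4, 256), torch.randn(4, 100))  # shape (4, 3, 64, 64)
```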

Text-to-image models are trained on large datasets of (text, image) pairs, often scraped from the web. With their 2022 Imagen model, Google Brain reported positive results from using a large language model trained separately on a text-only corpus (with its weights subsequently frozen), a departure from what had until then been the standard approach.[9]
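The frozen-encoder idea can be sketched as follows, assuming the Hugging Face transformers package; "t5-small" stands in for the much larger T5 encoder reported for Imagen, and the prompt string is a made-up placeholder.

```python
# Sketch (assumes "transformers"): a text-only pretrained language model
# embeds prompts with its weights frozen, so only the image generator
# would be trained against these embeddings.
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-small")
text_encoder = T5EncoderModel.from_pretrained("t5-small")

# Freeze the text encoder: its parameters receive no gradient updates.
text_encoder.requires_grad_(False)
text_encoder.eval()

tokens = tokenizer(["a corgi playing a flute"], return_tensors="pt")
text_embeddings = text_encoder(**tokens).last_hidden_state  # (1, seq_len, d_model)

# During training, only the image model's parameters would be passed to the
# optimizer; text_embeddings serve purely as conditioning input.
```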

Datasets

Examples of images and captions from three public datasets which are commonly used to train text-to-image models.

Training a text-to-image model requires a dataset of images paired with text captions. One dataset commonly used for this purpose is COCO (Common Objects in Context). Released by Microsoft in 2014, COCO consists of around 123,000 images depicting a diversity of objects, with five captions per image, generated by human annotators. Oxford 102 Flowers and CUB-200 Birds are smaller datasets of around 10,000 images each, restricted to flowers and birds, respectively. It is considered less difficult to train a high-quality text-to-image model with these datasets, because of their narrow range of subject matter.[6]
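The structure of such a dataset can be seen in a short sketch, assuming the torchvision package, a local copy of the COCO 2014 captions split, and the pycocotools package; the file paths below are placeholders.

```python
# Sketch of iterating over (image, captions) pairs, the training signal
# for text-to-image models. Paths are placeholders for a local COCO copy.
from torchvision import datasets, transforms

coco = datasets.CocoCaptions(
    root="coco/train2014",                                # directory of images
    annFile="coco/annotations/captions_train2014.json",   # human-written captions
    transform=transforms.ToTensor(),
)

image, captions = coco[0]          # one image tensor and its five captions
print(image.shape, captions[:2])
```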

References

  1. ^ Vincent, James (May 24, 2022). "All these images were generated by Google's latest text-to-image AI". The Verge. Vox Media. Retrieved May 28, 2022.
  2. ^ Agnese, Jorge; Herrera, Jonathan; Tao, Haicheng; Zhu, Xingquan (October 2019). "A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis" (PDF).
  3. ^ Zhu, Xiaojin; Goldberg, Andrew B.; Eldawy, Mohamed; Dyer, Charles R.; Strock, Bradley (2007). "A text-to-picture synthesis system for augmenting communication" (PDF). AAAI. 7: 1590–1595.
  4. ^ a b c Mansimov, Elman; Parisotto, Emilio; Lei Ba, Jimmy; Salakhutdinov, Ruslan (November 2015). "Generating Images from Captions with Attention". ICLR.
  5. ^ a b c Reed, Scott; Akata, Zeynep; Logeswaran, Lajanugen; Schiele, Bernt; Lee, Honglak (June 2016). "Generative Adversarial Text to Image Synthesis" (PDF). International conference on machine learning.
  6. ^ a b Frolov, Stanislav; Hinz, Tobias; Raue, Federico; Hees, Jörn; Dengel, Andreas (December 2021). "Adversarial text-to-image synthesis: A review". Neural Networks. 144: 187–209.
  7. ^ Coldewey, Devin (5 January 2021). "OpenAI's DALL-E creates plausible images of literally anything you ask it to". TechCrunch.
  8. ^ Coldewey, Devin (6 April 2022). "OpenAI's new DALL-E model draws anything — but bigger, better and faster than before". TechCrunch.
  9. ^ Saharia, Chitwan (23 May 2022). "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding".