File:X-Y plot of algorithmically-generated AI art of European-style castle in Japan demonstrating DDIM diffusion steps.png

This is a file from the Wikimedia Commons
From Wikipedia, the free encyclopedia

Original file (2,560 × 1,734 pixels, file size: 7.11 MB, MIME type: image/png)

Summary

Description

An X/Y plot of algorithmically-generated AI artworks depicting a European-style castle in Japan, created using the Stable Diffusion V1-5 AI diffusion model. This plot serves to demonstrate the U-Net denoising process, using the DDIM sampling method. Diffusion models algorithmically generate images by repeatedly removing Gaussian noise, step by step, and then decoding the denoised output into pixel space. Shown here is a smaller subset of the steps within a 40-step generation process.
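A minimal sketch of the same denoise-then-decode idea, assuming the Hugging Face diffusers library and its DDIMScheduler rather than the web UI front-end actually used (see Procedure/Methodology below); the model ID and parameter values are taken from this page, while everything else is illustrative:

    # Sketch only: 40 DDIM steps progressively remove Gaussian noise from the
    # latents, and the VAE then decodes the final latents into pixel space.
    import torch
    from diffusers import StableDiffusionPipeline, DDIMScheduler

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)  # DDIM sampler

    # Note: the (term:1.3) attention-weighting syntax is specific to the web UI;
    # diffusers treats it as literal prompt text.
    image = pipe(
        "a (european castle:1.3) in japan. by Albert Bierstadt, ray traced, octane render, 8k",
        num_inference_steps=40,   # full 40-step generation
        guidance_scale=7.0,       # CFG scale 7
        width=512,
        height=768,
    ).images[0]
    image.save("castle_40_steps.png")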

Procedure/Methodology

These images were generated using an NVIDIA RTX 4090. Since Ada Lovelace GPUs (compute capability 8.9, which requires CUDA 11.8) are not yet fully supported by the PyTorch dependency libraries currently used by Stable Diffusion, I used a custom build of xformers, along with PyTorch cu116 and cuDNN v8.6, as a temporary workaround. The front-end used for the entire generation process was the Stable Diffusion web UI created by AUTOMATIC1111.
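The exact commands used are not recorded here, but a quick sanity check along these lines (illustrative only, not the actual procedure) shows whether the installed PyTorch build, cuDNN version and xformers build match the workaround described above:

    import torch

    major, minor = torch.cuda.get_device_capability(0)   # Ada Lovelace GPUs report (8, 9)
    print(f"compute capability: {major}.{minor}")
    print(f"torch {torch.__version__}, built against CUDA {torch.version.cuda}")   # e.g. 11.6 for the cu116 build
    print(f"cuDNN build: {torch.backends.cudnn.version()}")                        # e.g. 8600 for v8.6

    try:
        import xformers
        print(f"xformers {xformers.__version__} available")
    except ImportError:
        print("xformers not installed; memory-efficient attention unavailable")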

A batch of 512×768 images was generated with txt2img using the following prompt and settings:

Prompt: a (european castle:1.3) in japan. by Albert Bierstadt, ray traced, octane render, 8k

Negative prompt: None

Settings: Sampler: DDIM, CFG scale: 7, Size: 512x768

During the generation of this batch, the X/Y plot was produced using the "X/Y plot" txt2img script with the following settings (a rough stand-alone equivalent is sketched after this list):

  • X-axis: Steps: 1, 2, 3, 5, 8, 10, 15, 20, 30, 40
  • Y-axis: None
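
A rough stand-alone equivalent of that script, again assuming the diffusers library rather than the web UI; the fixed seed value here is hypothetical, and any seed works as long as it is identical across columns:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionPipeline, DDIMScheduler

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

    prompt = "a (european castle:1.3) in japan. by Albert Bierstadt, ray traced, octane render, 8k"
    steps_axis = [1, 2, 3, 5, 8, 10, 15, 20, 30, 40]   # X-axis values listed above
    frames = []
    for steps in steps_axis:
        # Re-seeding per column keeps the starting latents identical, so the only
        # variable is the number of DDIM denoising steps.
        generator = torch.Generator("cuda").manual_seed(1234)  # hypothetical seed
        frames.append(pipe(prompt, num_inference_steps=steps, guidance_scale=7.0,
                           width=512, height=768, generator=generator).images[0])

    # Paste the ten 512x768 frames into a 5-by-2 grid (2560 x 1536 px, before labels).
    grid = Image.new("RGB", (512 * 5, 768 * 2))
    for i, frame in enumerate(frames):
        grid.paste(frame, ((i % 5) * 512, (i // 5) * 768))
    grid.save("xy_plot_ddim_steps.png")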
Date: 1 November 2022
Source: Own work
Author: Benlisquare
Permission
(Reusing this file)
Output images

As the creator of the output images, I release this image under the licence displayed within the template below.

Stable Diffusion AI model

The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the license, the authors of the AI model claim no rights over any image outputs it generates.

Addendum on datasets used to train AI neural networks
Artworks generated by Stable Diffusion are created algorithmically by the AI diffusion model's neural network, which has learned from various datasets; the algorithm does not reuse preexisting images from the dataset to create the new image. Generated artworks therefore cannot be considered derivative works of components of the original dataset, nor can a coincidental resemblance to a particular artist's drawing style fall foul of de minimis. While an artist can claim copyright over individual works, they cannot claim copyright over mere resemblance to a drawing or painting style. In simpler terms, Vincent van Gogh can claim copyright to The Starry Night; however, he cannot claim copyright to a picture of a T-34 tank painted by someone else in a brushstroke style similar to that of The Starry Night.
Other versions
  • Using DDIM sampling method
  • Using Euler ancestral sampling method

Licensing

I, the copyright holder of this work, hereby publish it under the following licenses:
This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.
You are free:
  • to share – to copy, distribute and transmit the work
  • to remix – to adapt the work
Under the following conditions:
  • attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License.
You may select the license of your choice.

File history


Date/Time | Dimensions | User | Comment
current: 22:55, 31 October 2022 | 2,560 × 1,734 (7.11 MB) | Benlisquare | rearrange images into a 5-by-2 to optimise space
22:48, 31 October 2022 | 5,120 × 867 (6.63 MB) | Benlisquare | {{Information}} file description (duplicates the Summary above)