Disco Diffusion

Generate art images from text

Disco Diffusion is a model for generating artistic, compelling and beautiful images from just text inputs.

Artistic images

Perfect for landscapes

4 models available

Up to 1280x1280

Image input option


How to work with it

1. Type your idea

The neural network creates images based on your text. This text is called a prompt. The more precise your prompt is, the better the results will be.

2. Add an image

You can add an image from which the generation will start; it makes the results more predictable. You can either add your own photo or quickly create a reference in the Magic Diffusion node. You can switch this off in the settings.

3. Change steps

Steps define how detailed and how close to the prompt the results will be. The more steps, the better the final image, but the longer it takes to create one image.

4. Launch

Press the green button to launch the AI magic process. You can download the results immediately or upscale them to achieve a higher resolution.
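If you prefer the open-source Disco Diffusion notebook, the four steps above map onto a handful of settings. This is only an illustrative sketch: the names below follow the public DD v5 notebook (text_prompts, init_image, steps, width_height, seed), and the hosted node may label or handle them differently.

```python
# Illustrative Disco Diffusion-style settings mirroring the four steps above.
# Names follow the public DD v5 notebook; values are examples, not defaults of this node.
settings = {
    # 1. Type your idea: the text prompt that guides generation
    "text_prompts": {0: ["a misty mountain lake at sunrise, watercolor"]},
    # 2. Add an image: optional start image (None = start from pure noise)
    "init_image": "my_reference.png",   # or None
    "skip_steps": 50,                   # commonly around half of `steps` when an init image is used
    # 3. Change steps: more steps = more detail, longer generation time
    "steps": 100,
    # Output size in pixels (multiples of 64)
    "width_height": [1024, 640],
    # 4. Launch: a fixed seed makes the run reproducible
    "seed": 1234567,
}
```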

Input: Text
Output: Image
Main task: Create images, create art
Neural network: Disco Diffusion
Release date: 29.10.2021
License: MIT

Parameters

1. Model

A choice of 4 models. Each model has a unique style. ImageNet, the default option, can generate art in all styles. With the other models the final images will not differ much from ImageNet unless, along with selecting the model, you add the corresponding style name to the prompt ('pixel art', 'watercolor' or 'sci-fi').
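As a sketch of the tip above: when you pick a non-default model, append its style keyword to the prompt so the style actually shows up. The model names and mapping below are illustrative assumptions, not the node's internal identifiers.

```python
# Hypothetical mapping from a selected model to the style keyword mentioned above.
STYLE_KEYWORDS = {
    "ImageNet": "",            # default model, no extra keyword needed
    "Pixel Art": "pixel art",
    "Watercolor": "watercolor",
    "Sci-Fi": "sci-fi",
}

def build_prompt(idea: str, model: str) -> str:
    """Append the model's style keyword to the user's idea, if any."""
    keyword = STYLE_KEYWORDS.get(model, "")
    return f"{idea}, {keyword}" if keyword else idea

print(build_prompt("a castle above the clouds", "Watercolor"))
# -> "a castle above the clouds, watercolor"
```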

2. Steps

A choice of 6 options, from x5 to x250. It defines how detailed and how close to the prompt your image will be. The higher the number of steps, the longer it takes to generate one image. Good results can be achieved in 100-250 steps. We recommend starting with 100 steps, as it offers the best balance between time spent and quality.

3. Width and height

Range: from 64 to 1024 (in steps of 64 pixels). It defines the width (x) and height (y) of the final image. We recommend a horizontal orientation for landscapes and environments.
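Because the slider moves in 64-pixel steps, any target size has to be rounded to a multiple of 64 inside the 64-1024 range. A minimal sketch of that rounding (the helper name is ours, not the node's):

```python
def snap_to_grid(value: int, step: int = 64, lo: int = 64, hi: int = 1024) -> int:
    """Round a requested dimension to the nearest multiple of `step` within [lo, hi]."""
    snapped = round(value / step) * step
    return max(lo, min(hi, snapped))

# A 1280x720 request becomes 1024x704: width is clamped to the maximum,
# height is rounded down to the nearest 64-pixel step.
print(snap_to_grid(1280), snap_to_grid(720))  # -> 1024 704
```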

4. Seed

Range: from 0 to 4294967296. The generation starts from a particular image (or noise). The same seed number will give you similar results each time, whereas changing it to another number may dramatically change the final output. We recommend using different seed numbers while you're experimenting, and keeping the seed fixed as soon as you find the perfect result.
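The seed's effect is easiest to see in a generic diffusion setup: the same seed produces the same starting noise, so a run is repeatable. A minimal PyTorch sketch of that idea (not this node's code), assuming generation starts from Gaussian noise:

```python
import torch

def starting_noise(seed: int, width: int = 512, height: int = 512) -> torch.Tensor:
    """Same seed -> identical starting noise -> similar final images."""
    generator = torch.Generator().manual_seed(seed)
    return torch.randn(1, 3, height, width, generator=generator)

a = starting_noise(42)
b = starting_noise(42)
c = starting_noise(43)
print(torch.equal(a, b))  # True: same seed, same starting point
print(torch.equal(a, c))  # False: a new seed changes the starting point
```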

5. Height and width symmetry

Checkboxes. They enable symmetry mode for height and width, respectively. Symmetry splits the image in half and mirrors either the vertical or the horizontal part of it.
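A small NumPy sketch of what the width-symmetry checkbox effectively does to an image array: keep the left half and mirror it onto the right (height symmetry is the same idea along the vertical axis). This illustrates the visual effect only, not the node's implementation.

```python
import numpy as np

def mirror_width(image: np.ndarray) -> np.ndarray:
    """Keep the left half of an (H, W, C) image and mirror it onto the right half."""
    half = image.shape[1] // 2
    left = image[:, :half]
    return np.concatenate([left, left[:, ::-1]], axis=1)

def mirror_height(image: np.ndarray) -> np.ndarray:
    """Keep the top half of an (H, W, C) image and mirror it onto the bottom half."""
    half = image.shape[0] // 2
    top = image[:half]
    return np.concatenate([top, top[::-1]], axis=0)

img = np.random.rand(512, 512, 3)
sym = mirror_width(img)
# The right half is an exact mirror of the left half:
assert np.allclose(sym[:, :256], sym[:, 256:][:, ::-1])
```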

6. Use start image

A checkbox. If the box is checked, the neural network will use the init_image as a reference for composition and colors; the generation will start from this image. You can add your own image or create one in other Generate nodes.

7. Start image influence

Range: from 0 to 80. It defines how close the final image will be to the init_image. The higher the number, the more similar the generated artwork will be to the init_image. We recommend keeping it between 10 and 40 if you use a start image.
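One common way diffusion tools implement this kind of influence slider is to skip the first portion of the diffusion steps so that more of the start image survives. The mapping below is purely a hedged assumption about how a 0-80 influence value could translate into skipped steps; it is not this node's documented behaviour.

```python
def skipped_steps(influence: int, total_steps: int = 100) -> int:
    """Hypothetical mapping: influence 0-80 -> how many of the first
    diffusion steps to skip, so the start image shapes the result more."""
    influence = max(0, min(80, influence))
    return int(total_steps * influence / 100)

for influence in (0, 10, 40, 80):
    print(influence, "->", skipped_steps(influence), "of 100 steps skipped")
```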

8. Secondary model

A checkbox. Turning it on makes everything look more "photorealistic" and 3D, and gives a voluminous effect. Turning the secondary model off gives a more abstract, artistic and flat look, and can result in extra-warm lighting and vibrant colors.

9. Text influence

Range: from 0 to 10000 (in steps of 10). It defines how strongly your prompt will influence the final image. Higher is generally better, but if it is too strong it will overshoot and distort the image. We recommend keeping it between 5000 and 8000.

10. TV scale

Range: from 0 to 10000 (in steps of 10). It controls the 'smoothness' of the final output: the higher the number, the more denoised the final image will be. The difference in images is not significant, so we advise not changing it unless you're an experienced DD user.

11. Range scale

Range: from 0 to 10000 (in steps of 10). It defines color contrast. A lower range_scale increases contrast; very low numbers create a reduced color palette, resulting in more vibrant or poster-like images. A higher range_scale reduces contrast, for more muted images. The difference in images is not significant, so we advise not changing it unless you're an experienced DD user.

12. Sat scale

Range: from 0 to 10000 (in steps of 10). It controls color saturation and makes images more saturated. The difference in images is not significant, so we advise not changing it unless you're an experienced DD user.

13. Init scale

Range: from 0 to 10000 (in steps of 10). It controls the influence of the start image and boosts the vibrancy of warm colors. The difference in images is not significant, so we advise not changing it unless you're an experienced DD user.
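Under the hood, the scale parameters above (text influence, TV, range, sat and init scale) act as weights on separate guidance losses that steer each denoising step, which is why the last four barely matter unless pushed to extremes. A hedged, simplified sketch of that weighted sum (names chosen to echo the open-source Disco Diffusion notebook; the node's internals may differ):

```python
import torch

def total_guidance_loss(clip_loss, tv_loss, range_loss, sat_loss, init_loss,
                        clip_guidance_scale=7000, tv_scale=0, range_scale=150,
                        sat_scale=0, init_scale=1000):
    """Each scale simply weights its loss term; the gradient of this sum
    nudges the image at every diffusion step."""
    return (clip_loss * clip_guidance_scale   # "Text influence": match the prompt
            + tv_loss * tv_scale              # "TV scale": smoothness / denoising
            + range_loss * range_scale        # "Range scale": keep pixel values in range (contrast)
            + sat_loss * sat_scale            # "Sat scale": penalise over-saturation
            + init_loss * init_scale)         # "Init scale": stay close to the start image

# Example with dummy scalar losses
loss = total_guidance_loss(torch.tensor(0.5), torch.tensor(0.1),
                           torch.tensor(0.05), torch.tensor(0.02), torch.tensor(0.3))
print(float(loss))
```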

14. Trans per

Range: from 0 to 100. It increases color vibrancy. The results differ only at extreme numbers, so we recommend not using it unless you're an experienced DD user.

15. Latent mode

A checkbox. It enables the latent diffusion model, which generates photorealistic images very fast, but only if you choose a smaller size (256x256). We recommend not using it unless you're an experienced DD user.

16. Scale uncond (CFG)

Range: from 0 to 100. Works only with Latent mode on. It defines how closely the image will follow your prompt, and it also affects image contrast. The higher the number, the less detailed, more blurry and more vibrant the image will be. We recommend keeping it in the range of 7 to 15.
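CFG here refers to classifier-free guidance, where the sampler blends a prompt-conditioned prediction with an unconditional one; the scale sets the strength of that blend, which is why very high values over-sharpen colors and blur detail. A minimal sketch of the standard formula (not this node's code):

```python
import torch

def classifier_free_guidance(eps_uncond: torch.Tensor,
                             eps_cond: torch.Tensor,
                             cfg_scale: float = 7.5) -> torch.Tensor:
    """Standard CFG: push the prediction away from the unconditional output
    and toward the prompt-conditioned one, scaled by `cfg_scale`."""
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)

# cfg_scale = 1 reproduces the plain conditional prediction; larger values
# follow the prompt more aggressively (hence the 7-15 range recommended above).
```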

17. Eta

Range: from 0 to 1. Works only with Latent mode on. It impacts image depth and color range. We recommend keeping it around 0.8 so that the picture is more balanced.
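In latent mode, eta is the DDIM sampler's stochasticity knob: 0 gives fully deterministic sampling, 1 reintroduces DDPM-like randomness at each step. A hedged sketch of the standard noise term it controls (from the DDIM paper, not this node's code):

```python
import math

def ddim_sigma(eta: float, alpha_bar_t: float, alpha_bar_prev: float) -> float:
    """Per-step noise level in DDIM sampling: eta=0 -> deterministic,
    eta=1 -> as noisy as ancestral (DDPM-style) sampling."""
    return (eta
            * math.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar_t))
            * math.sqrt(1 - alpha_bar_t / alpha_bar_prev))

print(ddim_sigma(0.0, 0.5, 0.8))  # 0.0: no extra noise injected per step
print(ddim_sigma(0.8, 0.5, 0.8))  # > 0: some randomness at each step
```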

18. plms

A checkbox. Works only with Latent mode on. It defines which sampler the AI will use to generate the image. There is almost no difference in the output image depending on which sampler you use, so we recommend changing it only if you're an experienced user.
