


Stable Diffusion

AI generated images


Stable Diffusion is a machine-learning text-to-image model developed by Stability AI, in collaboration with EleutherAI and LAION, that generates digital images from natural-language descriptions. The model can also be used for other tasks, such as image-to-image translation guided by a text prompt.

Stable Diffusion was trained on a subset of the LAION-Aesthetics V2 dataset, using 256 Nvidia A100 GPUs.
Unlike models such as DALL-E, Stable Diffusion makes its source code publicly available, and it can run on most consumer hardware equipped with a decent GPU.


Stable Diffusion 2.0 is now available: try the online version 2 demo (txt2img). Version 1 is still available: the demo and Inpainting pages let you test the basic Stable Diffusion interface. Create your image with a click!


The full version is available in the member section: the txt2img tool, using the standard 1.4 model (hash: 7460a6fa).


Artwork from the Community

week 39: Community Gallery
week 40: Porsche electric modern villa
week 41: Stablediffusion artwork week 41
week 42: landscape TOKYO city Fujiyama stablediffusion
week 43: massive imperial militarist capital spaceship scifi
week 44: futuristic lighthouse epic composition landscape
week 45: sea shore painting by john foster
week 46: wolf isometric sephirothart


Gallery

Categories: Art, Architecture, Fun, Interior Design, Animals, Seascape, Landscape, Fantasy, Miscellaneous, Animation, Sci-fi, Cats.



Resources for Stable Diffusion: artists, studios, txt2img tools, prompt libraries, wiki, and demos (external links).


A new era for Art


Stable Diffusion is based on an image-generation technique called latent diffusion models (LDMs). Unlike other popular image-synthesis methods such as generative adversarial networks (GANs) and the auto-regressive technique used by DALL-E, LDMs generate images by iteratively "de-noising" data in a latent representation space, then decoding the representation into a full image. LDMs were developed by the Machine Vision and Learning research group at the Ludwig Maximilian University of Munich and described in a paper presented at the recent IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Earlier this year, InfoQ covered Google's Imagen model, another diffusion-based image-generation AI.
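The iterative de-noising loop at the heart of diffusion models can be sketched in a few lines. The following is a minimal, self-contained illustration of DDPM-style ancestral sampling, not the actual Stable Diffusion implementation: in the real model, the noise predictor is a large text-conditioned U-Net and the decoder is a VAE, for which the dummy lambdas below are mere stand-ins.

```python
import numpy as np

def denoise_latent(eps_model, z_T, betas):
    """DDPM-style ancestral sampling sketch: starting from pure noise
    z_T, iteratively subtract the predicted noise, stepping from T
    down to 1, to recover a clean latent."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    z = z_T
    for t in range(len(betas) - 1, -1, -1):
        eps = eps_model(z, t)                       # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        z = (z - coef * eps) / np.sqrt(alphas[t])   # posterior mean
        if t > 0:                                   # re-inject noise except at the final step
            z = z + np.sqrt(betas[t]) * np.random.randn(*z.shape)
    return z

# Toy stand-ins (hypothetical): Stable Diffusion uses a text-conditioned
# U-Net as eps_model and a VAE decoder in place of dummy_decode.
rng = np.random.default_rng(0)
dummy_eps_model = lambda z, t: 0.1 * z
dummy_decode = lambda z: np.tanh(z)                 # latent -> image space

betas = np.linspace(1e-4, 0.02, 50)                 # noise schedule
z_T = rng.standard_normal((4, 8, 8))                # latent is much smaller than the image
image = dummy_decode(denoise_latent(dummy_eps_model, z_T, betas))
print(image.shape)
```

The key point of the latent approach is visible in the shapes: the expensive iterative loop runs entirely on a small latent tensor, and only a single decode produces the full-resolution image.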

The Stable Diffusion model supports several operations. Like DALL-E, it can be given a text description of a desired image and generate a high-quality image that matches that description. It can also generate a realistic-looking image from a simple sketch plus a textual description of the desired image. Meta AI recently released a model called Make-A-Scene that has similar image-to-image capabilities.

Many users of Stable Diffusion have publicly posted examples of generated images; Katherine Crowson, lead developer at Stability AI, has shared many images on Twitter. Some commenters are troubled by the impact that AI-based image synthesis will have on artists and the art world. The same week that Stable Diffusion was released, an AI-generated artwork won first prize in an art competition at the Colorado State Fair.

