


Stable Diffusion

AI generated images

Stable Diffusion is a machine learning model developed by Stability AI in collaboration with EleutherAI and LAION. This powerful text-to-image model generates digital images from natural language descriptions, making it a revolutionary tool for digital art and visual storytelling. Unlike other models such as DALL-E, Stable Diffusion makes its source code publicly available, which makes it far more accessible to artists, designers, and developers alike.

Using a subset of the LAION-Aesthetics V2 dataset and the computing power of 256 Nvidia A100 GPUs, Stable Diffusion has been trained to produce images that capture the essence of natural language prompts. This model is not only capable of generating original digital art pieces but can also be used for other tasks, such as image-to-image translation guided by a text prompt.

Check out some examples of the images it has generated on our prompt gallery and image gallery.


Free online tools: online demo version 2.1 (txt2img)   -   Version 1.5 still available here: demo 1.5

Online webui (txt2img and img2img)

  Modify image with Inpainting   -   Upscaling   -   Prompt generator

  Full version available in member section: txt2img tool (custom 1.5 model with epi_noiseoffset LoRA) - 200 free images/day   -   Stable Diffusion for mobile devices




Feb 2023
February 2023 community artwork gallery
Jan 2023
Perfect crystal flower wedding ring brilliant - January 2023 community artwork gallery
Dec 2022
portrait Van Gogh, impressionism Monet Style - Masterpiece - December 2022 community artwork gallery
Nov 2022
mmorpg game map ocean - November 2022 community artwork gallery
week 46
wolf isometric sephirothart
week 45
sea shore painting by john foster
week 44
futuristic lighthouse epic composition landscape
week 43
massive imperial militarist capital spaceship scifi
week 42
landscape TOKYO city Fujiyama stablediffusion
week 41
artwork week 41
week 40
Porsche electric modern villa
week 39
Community Gallery


Art
Art Gallery
Architecture
Architecture artwork Gallery
Fun
Fun Gallery
Interior Design
Parisian style interior of living-room
Animals
Animals Gallery
Seascape
Seascape artwork Gallery
Landscape
Landscape Gallery
Fantasy
Fantasy Gallery
Misc
Miscellaneous Gallery
Animation
Animated Gallery
Sci-fi
Sci-fi Gallery
Cats
Cats Gallery




A new era for Art


Stable Diffusion is based on an image generation technique called latent diffusion models (LDMs). Unlike other popular image synthesis methods such as generative adversarial networks (GANs) and the auto-regressive technique used by DALL-E, LDMs generate images by iteratively "de-noising" data in a latent representation space, then decoding the representation into a full image. LDMs were developed by the Machine Vision and Learning research group at the Ludwig Maximilian University of Munich and described in a paper presented at the recent IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR). Earlier this year, InfoQ covered Google's Imagen model, another diffusion-based image generation AI.
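To make that de-noising loop concrete, here is a minimal sketch of latent diffusion sampling. It is illustrative only: the `unet`, `scheduler`, and `vae` objects are assumed to be pre-trained components (for example, as packaged by the Hugging Face diffusers library), and details such as classifier-free guidance are omitted.

```python
# Illustrative sketch of latent diffusion sampling, assuming pre-trained
# `unet`, `scheduler`, and `vae` components and a pre-computed prompt embedding.
import torch

@torch.no_grad()
def generate(prompt_embedding, unet, scheduler, vae, steps=50):
    # Start from pure Gaussian noise in the compact latent space, not pixel space.
    latents = torch.randn(1, 4, 64, 64)
    scheduler.set_timesteps(steps)
    for t in scheduler.timesteps:
        # The U-Net predicts the noise present in the latents, conditioned on the prompt.
        noise_pred = unet(latents, t, encoder_hidden_states=prompt_embedding).sample
        # The scheduler removes a portion of that predicted noise (one de-noising step).
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    # Only at the very end is the latent representation decoded into a full image.
    return vae.decode(latents / 0.18215).sample
```

Working in the smaller latent space rather than directly on pixels is what makes this approach much cheaper to run than pixel-space diffusion models.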

The Stable Diffusion model can support several operations. Like DALL-E, it can be given a text description of a desired image and generate a high-quality image that matches that description. It can also generate a realistic-looking image from a simple sketch plus a textual description of the desired image. Meta AI recently released a model called Make-A-Scene that has similar image-to-image capabilities.
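As a hedged example of both operations, the snippet below uses the Hugging Face diffusers library (not this site's own tooling); the checkpoint name, prompts, and file names are placeholders.

```python
# Sketch of text-to-image and image-to-image generation with diffusers.
# The model id, prompts, and file paths are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
from PIL import Image

model_id = "runwayml/stable-diffusion-v1-5"  # example Stable Diffusion checkpoint

# Text-to-image: generate an image from a natural-language prompt.
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
image = txt2img("a futuristic lighthouse, epic composition, landscape").images[0]
image.save("lighthouse.png")

# Image-to-image: refine a rough sketch, guided by a text prompt.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
sketch = Image.open("sketch.png").convert("RGB").resize((512, 512))
result = img2img(prompt="detailed seaside painting", image=sketch, strength=0.75).images[0]
result.save("seaside.png")
```

The `strength` parameter controls how far the model is allowed to deviate from the input sketch: lower values stay closer to the original, higher values lean more on the text prompt.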

Many users have publicly posted examples of generated images; Katherine Crowson, lead developer at Stability AI, has shared many images on Twitter. Some commenters are troubled by the impact that AI-based image synthesis will have on artists and the art world. The same week that Stable Diffusion was released, an AI-generated artwork won first prize in an art competition at the Colorado State Fair.

