Exploring and experimenting with StabilityAI's diffusion models, namely Stable Diffusion. Huge shoutout to @entmike, who created a web wrapper that uses Runpod GPUs to easily create Stable Diffusion artworks.
Just as when experimenting with Disco Diffusion, it was incredibly interesting to learn more and more about how the AI interprets the input, both the prompt and all the other parameters, and to get better at formulating what I want the AI to do. Instead of ~20 minutes per generated image on an A100 GPU, it now takes about 10 seconds on an RTX 3090; incredible work by StabilityAI.
You may notice the number of iterations noted for each image; to get the results I had in mind, I had to fiddle around a lot with the generation parameters as well as the prompt formulation. "Nth iteration" means this was the nth image generated from the same conceptual idea.
↝ Tools used
Stable Diffusion, Photoshop