I fine-tuned OpenAI's 512x512 diffusion model on 30,000 images from the AIDA architecture database on the Harvard Dataverse, with mixed results. For higher-resolution results, I first upscaled all 30k training images to 512x512 with Topaz Gigapixel AI. Training ran for 1.5 million iterations, at which point the model appeared to overfit: the upscaling artifacts became more visible while quality no longer improved.
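I judged the stopping point by eye (artifacts showing up without quality gains). As a toy illustration of how that call could be automated, here is a sketch of a plateau check on a series of held-out loss values; the function name, window size, and tolerance are all made up for this example, not values from my actual run:

```python
def training_has_plateaued(losses, window=5, tol=1e-3):
    """Return True when the mean loss over the last `window` checkpoints
    has stopped improving by more than `tol` compared to the window
    before it. Purely illustrative; the thresholds are arbitrary."""
    if len(losses) < 2 * window:
        return False  # not enough history to compare two windows yet
    prev = sum(losses[-2 * window:-window]) / window
    recent = sum(losses[-window:]) / window
    return prev - recent < tol

# A run that is still improving steadily: keep training
improving = [1.0 - 0.05 * i for i in range(20)]
# A run that has flattened out: time to stop
flat = [1.0 - 0.05 * i for i in range(10)] + [0.55] * 10
```

In practice, diffusion training losses are noisy, so a check like this would need generous windows (and ideally a perceptual metric such as FID on samples rather than raw loss).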
If you want to give my fine-tuned model a whirl, head to the DiscoStream Colab notebook and select Architecture_Diffusion_1-5m →
↝ Tools used
Python, Topaz Gigapixel AI, Disco Diffusion