5 Key Differences Between GANs and Diffusion Models: What You Need to Know Now (AI Painting Creation Intro Course 6)
Explore how diffusion models surpass GANs in image quality and diversity, with insights into training and inference. Discover AI's new favorite artist.
Welcome to the "AI Painting Creation Intro Course" Series
In the last lecture, we met the old artist, the GAN, and discussed how diffusion models compensate for the GAN's shortcomings in detail rendering, stylistic diversity, and general editing capability.
If GAN is the old artist, then diffusion models are undoubtedly the new, sought-after artists of today.
Models like DALL-E, Imagen, and Stable Diffusion all work their magic using diffusion models.
If you've used Midjourney, you might have noticed that as the progress bar moves, the image goes from blurry to clear.
You probably guessed it—this is likely the work of diffusion models!
I say "likely" because Midjourney hasn't disclosed its underlying algorithm. We will cover this in a dedicated session later, so stay tuned.
So, how do diffusion models work?
What is their optimization goal?
How do they compare with GANs?
In this session, we'll explore these basic questions as we start our journey into diffusion models.
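To preview the core idea behind that blurry-to-clear effect, here is a minimal sketch of a DDPM-style sampling loop in PyTorch. Everything in it is illustrative: `model` stands in for any trained noise-prediction network, the linear beta schedule is a simplified textbook choice, and none of this is Midjourney's (undisclosed) algorithm.

```python
import torch

def sample(model, shape, timesteps=1000):
    """Start from pure Gaussian noise and denoise step by step (DDPM-style)."""
    # Linear beta schedule, as in the original DDPM paper; simplified here.
    betas = torch.linspace(1e-4, 0.02, timesteps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # t = T: pure noise (the "blurry" start)
    for t in reversed(range(timesteps)):
        # `model` is a hypothetical trained network that predicts the
        # noise present in x at step t.
        eps = model(x, torch.tensor([t]))
        # Remove a fraction of the predicted noise (the DDPM mean update).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            # Re-inject a little fresh noise at every step except the last.
            x = x + torch.sqrt(betas[t]) * torch.randn(shape)
    return x  # t = 0: the clean image
```

Each pass through the loop removes a bit of predicted noise, which is exactly why the preview sharpens as the progress bar advances; training runs the process the other way, adding noise to real images and teaching the network to predict it.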