From d09995baa87c27472bd7bbc5c1d4ccf05cb02f8a Mon Sep 17 00:00:00 2001
From: nunzip
Date: Thu, 14 Mar 2019 19:44:05 +0000
Subject: Fix section 1

---
 report/paper.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/report/paper.md b/report/paper.md
index f22ad3b..9aaa278 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -16,7 +16,7 @@ Training a shallow GAN with no convolutional layers poses problems such as mode
 \end{center}
 \end{figure}
 
-Mode collapse is achieved with our naive *vanilla GAN* (Appendix-\ref{fig:vanilla_gan}) implementation after 200,000 batches. The generated images observed during a mode collapse can be seen on figure \ref{fig:mode_collapse}. The output of the generator only represents few of the labels originally fed. When mode collapse is reached loss function of the generator stops improving as shown in figure \ref{fig:vanilla_loss}. We observe, the discriminator loss tends to zero as the discriminator learns to assume and classify the fake 1s, while the generator is stuck producing 1 and hence not able to improve.
+Some of the main challenges faced when training a GAN are: **mode collapse**, **low quality** of the generated images, and a **mismatch** between the generator and discriminator losses. Mode collapse occurs in our naive *vanilla GAN* implementation (Appendix \ref{fig:vanilla_gan}) after 200,000 batches. The generated images observed during mode collapse can be seen in figure \ref{fig:mode_collapse}. The output of the generator represents only a few of the labels originally fed to it. When mode collapse is reached, the loss function of the generator stops improving, as shown in figure \ref{fig:vanilla_loss}. We observe that the discriminator loss tends to zero as the discriminator learns to recognise and classify the fake 1s, while the generator is stuck producing 1s and is hence unable to improve.
 
 A significant improvement to this vanilla architecture is Deep Convolutional Generative Adversarial Networks (DCGAN).
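For reference, below is a minimal sketch of the kind of fully connected *vanilla GAN* the patched paragraph describes, assuming a Keras/TensorFlow implementation on MNIST. The layer sizes, optimiser, and batch size are illustrative guesses, not the report's actual configuration. Logging the two losses during training is one way to observe the signature described above: the discriminator loss tending to zero while the generator loss plateaus.

```python
# Illustrative sketch only: a shallow dense-layer ("vanilla") GAN on MNIST.
# Architecture and hyperparameters are hypothetical, not the report's exact setup.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100

# Generator: fully connected, no convolutional layers.
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(784, activation="tanh"),
])

# Discriminator: also fully connected.
discriminator = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model used to train the generator; the discriminator is frozen
# here (the trainable flag is captured when each model is compiled).
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = (x_train.reshape(-1, 784).astype("float32") - 127.5) / 127.5

batch_size = 128
for step in range(200_000):
    # Train the discriminator on one real and one generated batch.
    real = x_train[np.random.randint(0, len(x_train), batch_size)]
    noise = np.random.normal(size=(batch_size, latent_dim))
    fake = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))

    # Train the generator to fool the discriminator.
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))

    # A discriminator loss tending to zero alongside a flat generator loss
    # is the loss signature of mode collapse described in the paragraph above.
    if step % 1000 == 0:
        print(step, (d_loss_real + d_loss_fake) / 2, g_loss)
```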