From acb5c19dcda7955b44112ecc9cc0794babc7cf4c Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov
Date: Thu, 14 Mar 2019 23:45:27 +0000
Subject: Push stuff around

---
 report/paper.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/report/paper.md b/report/paper.md
index a028093..b4e812e 100644
--- a/report/paper.md
+++ b/report/paper.md
@@ -18,7 +18,7 @@ Training a shallow GAN with no convolutional layers poses problems such as mode
 
 Some of the main challenges faced when training a GAN are: **mode collapse**, **low quality** of images, and a **mismatch** between generator and discriminator loss. Mode collapse occurs in our naive *vanilla GAN* (Appendix-\ref{fig:vanilla_gan}) implementation after 200,000 batches. The generated images observed during mode collapse can be seen in figure \ref{fig:mode_collapse}. The output of the generator represents only a few of the labels originally fed to it. When mode collapse is reached, the generator's loss stops improving, as shown in figure \ref{fig:vanilla_loss}. We observe that the discriminator loss tends to zero as the discriminator learns to identify the fake 1s, while the generator is stuck producing 1s and is hence unable to improve.
 
-A significant improvement to this vanilla architecture is Deep Convolutional Generative Adversarial Networks (DCGAN).
+An improvement to the vanilla architecture is the Deep Convolutional Generative Adversarial Network (DCGAN).
 
 # DCGAN
 
@@ -329,8 +329,6 @@ the overall performance.
 
 \end{figure}
 
-\newpage
-
 # Bonus Questions
 
 ## Relation to PCA
-- 
cgit v1.2.3
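
For context on the DCGAN reference in the reworded sentence, below is a minimal generator sketch. It assumes TensorFlow/Keras and MNIST-sized 28x28 grayscale output; the `build_generator` helper, filter counts, and kernel sizes are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative DCGAN generator sketch (TensorFlow/Keras assumed; the
# paper does not specify its framework or exact layer sizes).
import numpy as np
from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    """Map a latent noise vector to a 28x28x1 image using strided
    transposed convolutions, the defining DCGAN design choice."""
    return models.Sequential([
        # Project and reshape the noise vector to a 7x7 feature map.
        layers.Dense(7 * 7 * 128, input_dim=latent_dim),
        layers.Reshape((7, 7, 128)),
        layers.BatchNormalization(),
        layers.ReLU(),
        # Upsample 7x7 -> 14x14 with a strided transposed convolution.
        layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        # Upsample 14x14 -> 28x28; tanh suits images scaled to [-1, 1].
        layers.Conv2DTranspose(1, kernel_size=5, strides=2, padding="same",
                               activation="tanh"),
    ])

# Usage: generate a batch of 16 fake images from random noise.
generator = build_generator()
noise = np.random.normal(size=(16, 100)).astype("float32")
fake_images = generator.predict(noise)  # shape: (16, 28, 28, 1)
```

The relevant design point is that upsampling is done by strided transposed convolutions rather than fully connected layers, the convolutional structure that the paper contrasts with the shallow, non-convolutional vanilla GAN.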